Structured output: Pydantic-AI vs Instructor
Hello everyone,
Pydantic-AI is shaping up to be a beautiful framework for agentic development, yet I feel there should be some clarification on how it handles structured output, specifically compared to the amazing Instructor library. From what I understand, the difference is as follows:
### Instructor
  • Uses OpenAI's function calling API for structured outputs
  • Enforces schema *at the API level*
  • Automatic retries for invalid responses
  • Real-time streaming validation
  • Potentially fewer retries and more reliable results (see the sketch after this list)
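To make that concrete, here's a minimal sketch of the Instructor flow as I understand it (assuming instructor >= 1.0 with the openai package; the model name and the `UserInfo` schema are just illustrative):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

# Patch the OpenAI client; completions are now validated against response_model
client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    response_model=UserInfo,
    max_retries=2,  # automatic re-asks when validation fails
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user)  # UserInfo(name='John Doe', age=30)
```

Because the schema is passed along as a function/tool definition and Instructor re-asks the model with the validation error on failure, you typically get a valid object back on the first or second attempt.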
### Pydantic-AI
  • Uses tool calling and Pydantic models for schema validation
  • *Post-response* validation
  • More flexible, supports most LLM providers
  • Offers additional features like dependency injection and cost tracking
  • May require more retries for invalid outputs
  • Suitable for complex AI applications with broader model support (see the sketch after this list)
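And here's the rough equivalent in Pydantic-AI, as I read the docs (a sketch, not a definitive example; note that early releases named the parameter `result_type` and the result attribute `.data`, while current versions use `output_type` and `.output`):

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class UserInfo(BaseModel):
    name: str
    age: int

# The agent exposes UserInfo as a tool schema, then validates the model's
# tool call after the response comes back
agent = Agent(
    "openai:gpt-4o-mini",  # illustrative; any supported provider works
    output_type=UserInfo,
    retries=2,  # client-side retries when post-response validation fails
)

result = agent.run_sync("John Doe is 30 years old.")
print(result.output)  # UserInfo(name='John Doe', age=30)
```

Since the validation and retries happen client-side after each response, the approach is provider-agnostic, but an invalid response costs an extra round-trip.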
The main trade-off seems to be between Instructor's more reliable output structuring (especially with OpenAI) and Pydantic-AI's greater flexibility and additional features for diverse AI applications.
What do you think: is Pydantic-AI's post-response retry/validation approach sufficient compared to Instructor's, or should we wait until more LLM providers offer structured outputs in their APIs?
Cheers 💫