Using Pydantic AI

Why Pydantic AI?

Pydantic AI is a framework that makes it easy to build robust AI-powered agents with minimal fuss. Instead of just returning a text completion, Pydantic AI can handle function calling (tools) and structured outputs. This is particularly useful when your agent needs to:

- Parse user requests
- Call external APIs (like weather services or databases)
- Return structured JSON results, automatically validated with Pydantic (see the sketch below)
- Provide reflection/self-correction if the AI's initial output doesn't match the schema
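
For example, returning structured JSON is mostly a matter of declaring a Pydantic model as the agent's result type. The snippet below is a minimal sketch, assuming the codeligence_pydantic_llm_model helper used later on this page; the CityInfo model and the describe_city function are hypothetical names for illustration, and result_type/retries follow the same Pydantic AI API version used in the examples below.

from pydantic import BaseModel
from pydantic_ai import Agent

from utils.pydanticai_utils import codeligence_pydantic_llm_model

class CityInfo(BaseModel):
    """Schema the model's answer is validated against."""
    city: str
    country: str
    population: int

city_agent = Agent(
    model=codeligence_pydantic_llm_model("gpt-4o"),
    result_type=CityInfo,  # Output is parsed and validated as CityInfo
    retries=2,             # On validation failure, the model is asked to correct itself
)

async def describe_city(question: str) -> CityInfo:
    result = await city_agent.run(question)
    return result.data     # A validated CityInfo instance, not raw text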

Getting Started with Pydantic AI

Below is a simple “Hello World” snippet demonstrating usage inside your agent’s run function.

from typing import Optional, List
from pydantic_ai import Agent

from utils.pydanticai_utils import codeligence_pydantic_llm_model, convert_messages_to_pydantic_messages
from utils.codeligence_utils import Message, AgentInfo, codeligence_report_task_output, codeligence_report_status, ButtonResponse

async def run(message: str, history: Optional[List[Message]] = None, button: Optional[ButtonResponse] = None):
    # 1. Create an agent with a system prompt
    hello_agent = Agent(
        model=codeligence_pydantic_llm_model("gpt-4o"),
        system_prompt="Be concise. Answer in one sentence only."
    )

    # 2. (Optional) Send a status to the user
    codeligence_report_status("Asking the model about 'Hello World'")

    # 3. Execute the agent
    result = await hello_agent.run(
        "Where does 'hello world' come from?",
        message_history=convert_messages_to_pydantic_messages(history)
    )

    # 4. Return the final data to the user
    codeligence_report_task_output("Answer", result.data)


def get_config():
    return AgentInfo(enabled=True, name="HelloWorld Pydantic Agent")

Adding Tools

Tools let the model call Python functions automatically. Suppose you want your agent to fetch weather data from an API:

import os
import httpx
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

from utils.codeligence_utils import codeligence_report_task_output, codeligence_report_status

@dataclass
class WeatherDependencies:
    http_client: httpx.AsyncClient
    api_key: str

async def run(message: str, history=None, button=None):
    # Step 1: Create dependencies
    deps = WeatherDependencies(
        http_client=httpx.AsyncClient(),
        api_key=os.getenv("OPENWEATHER_API_KEY", "")
    )

    # Step 2: Create the agent
    agent = Agent(
        model=codeligence_pydantic_llm_model("gpt-4o"),
        system_prompt="You are a helpful weather bot. Use the weather tool to fetch data.",
        deps_type=WeatherDependencies,   # The agent can inject these into your tools
        result_type=str                  # The agent's final result should be text
    )

    # Step 3: Define a tool
    @agent.tool
    async def get_weather(ctx: RunContext[WeatherDependencies], city: str) -> dict:
        """Fetch current weather for a city using OpenWeatherMap."""
        resp = await ctx.deps.http_client.get(
            "https://api.openweathermap.org/data/2.5/weather",
            params={"q": city, "units": "metric", "appid": ctx.deps.api_key},
        )
        return resp.json()

    # Step 4: Run it and make sure the HTTP client is closed afterwards
    codeligence_report_status("Fetching weather...")
    try:
        result = await agent.run(message, deps=deps)
        codeligence_report_task_output("Weather", result.data)
    finally:
        # Step 5: Clean up, even if the run raised an exception
        await deps.http_client.aclose()

Based on the user's input ("What is the temperature in Berlin?"), the LLM decides on its own when to call get_weather(city=…).
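
Tools that don't need injected dependencies can skip the RunContext parameter: Pydantic AI also provides a tool_plain decorator for plain functions. The sketch below adds a hypothetical celsius_to_fahrenheit helper to the weather agent above, registered inside run next to get_weather; it is an illustration, not part of the example code.

    @agent.tool_plain
    def celsius_to_fahrenheit(celsius: float) -> float:
        """Convert a temperature from Celsius to Fahrenheit."""
        return celsius * 9 / 5 + 32

With both tools registered, the model may chain them on its own: for "What is the temperature in Berlin in Fahrenheit?" it can call get_weather first and then pass the value through celsius_to_fahrenheit before answering.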