In this tutorial, we demonstrate how to design a contract-first agentic decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We show how to define a strict decision model that encodes policy compliance, risk assessment, confidence calibration, and actionable next steps directly into the agent's output schema. By combining Pydantic validators with PydanticAI's retry and self-correction mechanisms, we ensure that the agent cannot produce logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the FULL CODES here.
!pip -q install -U pydantic-ai pydantic openai nest_asyncio
import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal

import nest_asyncio

# Allow nested event loops so the async agent can run inside Colab/Jupyter
nest_asyncio.apply()

from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

# Resolve the OpenAI API key from the environment, Colab secrets, or an interactive prompt
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()
We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and make sure the runtime is ready to handle async agent calls. This establishes a stable foundation for running the contract-first agent without environment-related issues. Check out the FULL CODES here.
class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)


class DecisionOutput(BaseModel):
    # Fields referenced by the cross-field validators are declared first,
    # because field_validator only sees values that have already been validated.
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    @field_validator("confidence")
    @classmethod
    def confidence_vs_risk(cls, v, data):
        risks = data.data.get("identified_risks") or []
        if any(r.severity == "high" for r in risks) and v > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return v

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, data):
        if data.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return v

    @field_validator("conditions")
    @classmethod
    def conditions_required_for_conditional_approval(cls, v, data):
        d = data.data.get("decision")
        if d == "approve_with_conditions" and (not v or len(v) < 2):
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if d == "approve" and v:
            raise ValueError("approve must not include conditions")
        return v
We define the core decision contract using strict Pydantic models that precisely describe what a valid decision looks like. We encode logical constraints such as confidence–risk alignment, compliance-driven rejection, and conditional approvals directly into the schema. This ensures that any agent output must satisfy business logic, not just syntactic structure. Check out the FULL CODES here.
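To make the contract tangible, here is a small sanity check that is not part of the original tutorial: it validates a hand-written payload against DecisionOutput, with all field values and variable names below chosen purely for illustration. Because the payload pairs an approve_with_conditions decision with only one condition, Pydantic rejects it before any agent is involved.

from pydantic import ValidationError

bad_payload = {
    "identified_risks": [
        {"risk": "Vendor data exposure", "severity": "high",
         "mitigation": "Encrypt data at rest and in transit"},
        {"risk": "Unlogged admin access", "severity": "medium",
         "mitigation": "Enable audit logging for all admin actions"},
    ],
    "compliance_passed": True,
    "decision": "approve_with_conditions",
    "confidence": 0.55,
    "rationale": "Illustrative rationale text, padded to satisfy the 80-character minimum length requirement.",
    "conditions": ["Enable audit logging"],  # only one condition: violates the contract
    "next_steps": ["Review vendor contract", "Enable audit logging", "Re-run the assessment"],
}

try:
    DecisionOutput.model_validate(bad_payload)
except ValidationError as exc:
    print(exc)  # approve_with_conditions requires at least 2 conditions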
@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision assessment agent.
You must evaluate risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)
We inject business context through a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to produce only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model reasoning. Check out the FULL CODES here.
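As a minimal sketch that is not part of the original tutorial, the typed deps can also be surfaced to the model through PydanticAI's dynamic system-prompt hook and RunContext, so the policy travels with each run rather than living only in the static prompt; the function name inject_company_policy is illustrative.

from pydantic_ai import RunContext

@agent.system_prompt
def inject_company_policy(ctx: RunContext[DecisionContext]) -> str:
    # ctx.deps is the DecisionContext passed to agent.run(..., deps=...)
    return (
        f"Company policy: {ctx.deps.company_policy}\n"
        f"Risk threshold: {ctx.deps.risk_threshold}"
    )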
@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    if len(result.identified_risks) < 2:
        raise ValueError("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ValueError("at least one medium or high risk required")
    return result


@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    policy = CURRENT_DEPS.company_policy.lower()
    text = (
        result.rationale
        + " ".join(result.next_steps)
        + " ".join(result.conditions)
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ["encryption", "audit", "logging", "access control", "key management"]):
            raise ValueError("missing concrete security controls")
    return result
We add output validators that act as governance checkpoints after the model generates a response. We force the agent to identify meaningful risks and to explicitly reference concrete security controls when claiming compliance. If these constraints are violated, we trigger automatic retries to enforce self-correction. Check out the FULL CODES here.
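For an explicit self-correction signal, a hedged sketch (not in the original tutorial) is shown below: PydanticAI exposes ModelRetry, which feeds the validator's error message back to the model and asks it to regenerate, with attempts capped by the agent's retry settings; the validator require_actionable_next_steps is a hypothetical addition.

from pydantic_ai import ModelRetry

@agent.output_validator
def require_actionable_next_steps(result: DecisionOutput) -> DecisionOutput:
    # Ask the model to try again if any next step is too vague to act on
    if any(len(step.split()) < 2 for step in result.next_steps):
        raise ModelRetry("every next step must be a concrete, multi-word action")
    return result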
async def run_decision():
    global CURRENT_DEPS
    CURRENT_DEPS = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not implemented and customer-managed keys are uncertain.
"""
    result = await agent.run(prompt, deps=CURRENT_DEPS)
    return result.output


decision = asyncio.run(run_decision())

from pprint import pprint
pprint(decision.model_dump())
We run the agent on a realistic decision request and capture the validated structured output. We demonstrate how the agent evaluates risk, policy compliance, and confidence before producing a final decision. This completes the end-to-end contract-first decision workflow in a production-style setup.
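Because the result is a validated DecisionOutput object rather than free text, downstream code can route it directly without parsing; the snippet below is an illustrative consumer, not part of the original tutorial.

if decision.decision == "reject" or not decision.compliance_passed:
    print("Escalating to the governance review queue")
elif decision.decision == "approve_with_conditions":
    print("Tracking conditions:", decision.conditions)
else:
    print("Approved with confidence", decision.confidence)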
In conclusion, we demonstrate how to move from free-form LLM outputs to governed, reliable decision systems using PydanticAI. We show that by enforcing hard contracts at the schema level, we can automatically align decisions with policy requirements, risk severity, and confidence realism without manual prompt tuning. This approach allows us to build agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can trust. Ultimately, we demonstrate that contract-first agent design enables us to deploy agentic AI as a trustworthy decision layer within production and enterprise environments.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

