
Frameworks

Veris supports any agent framework. Your agent runs unmodified inside the sandbox — Veris talks to it through whatever production interface it already exposes.

Four frameworks are natively supported, meaning Veris understands their trace format for full observability (multi-agent traces, tool-call tracking, per-agent system prompts):

Use this page when you want framework-specific gotchas or setup notes. Most of what’s here is also encoded in the agent-integration skill, so if you’re using a coding agent to integrate, you probably don’t need to read this directly.

LLM provider base URLs

The Veris LLM proxy intercepts provider domains (api.openai.com, api.anthropic.com, Azure OpenAI, etc.) via DNS, so setting the provider’s API key with veris env vars set --secret is usually enough.

Some frameworks don’t set a default base URL and crash at startup with "unsupported protocol scheme \"\"" unless one is provided. If your framework exposes a base_url option (e.g. inference-gateway/adk’s A2A_AGENT_CLIENT_BASE_URL), point it at the real provider URL — DNS interception still routes it to the proxy.

.veris/veris.yaml

agent:
  environment:
    A2A_AGENT_CLIENT_BASE_URL: https://api.openai.com/v1

Framework-specific notes

Works out of the box with an HTTP channel. Expose your agent as an HTTP server (FastAPI, Flask, Express, etc.) wrapping a Runner.run call; Veris drives it through the same interface.
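As a sketch of that pattern, the following stdlib-only wrapper exposes an agent over HTTP. The request/response shape and the run_agent stub are assumptions for illustration — substitute your framework's actual entry point (e.g. a Runner.run call) and whatever JSON schema your agent already uses in production:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_agent(message: str) -> str:
    # Placeholder for your framework's entry point (e.g. Runner.run).
    return f"echo: {message}"


class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and hand the message to the agent.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = run_agent(body.get("message", ""))

        # Return the agent's reply as JSON.
        payload = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


# To serve inside the sandbox:
# HTTPServer(("0.0.0.0", 8000), AgentHandler).serve_forever()
```

Veris then drives the agent through the same POST endpoint your production callers use; nothing in the agent code needs to know it is running under test.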

No framework-specific configuration required beyond what’s in Quickstart.

Using a framework not listed?

It’ll still work — your agent runs as a plain HTTP/WebSocket/email/function/CLI process. You just won’t get framework-specific trace enrichment: no multi-agent traces, tool-call tracking, or per-agent system prompts; interactions show up as raw LLM spans. Reach out if you’d like native support added for a framework you’re using.