
veris.yaml

The .veris/veris.yaml file is the main configuration file for your Veris sandbox. It declares how to run your agent, which mock services it calls, and how the actor reaches it.

Looking for complete working configs? See the cookbook for full agent repos.

Anatomy

.veris/veris.yaml
```yaml
version: "1.0"

billing-assistant-env:        # Target — its name is the backend environment name
  agent:                      # How to start your agent
    name: Billing Assistant
    entry_point: python -m app.main
    port: 8008
  services:                   # Which mock services your agent calls
    - name: calendar
      dns_aliases:
        - www.googleapis.com
  actor:                      # How the actor reaches your agent
    channels:
      - type: http
        url: http://localhost:8008
```

The three sections you’ll write most often:

  • agent — how your agent runs inside the container.
  • services — the mock services your agent calls.
  • actor — how the actor reaches your agent (HTTP, WebSocket, email, function call, or CLI). The actor can be a simulated user, an incoming webhook, a scheduled job, or another agent.

These are nested under a target environment key. The target’s name is the backend environment this config maps to. Reach for multiple targets when a monorepo defines more than one agent, or when you want to test a multi-agent system both together and independently. See Multiple Targets below or the CLI guide.

Full schema

.veris/veris.yaml
```yaml
version: "1.0"                            # Optional version

my-agent-env:                             # Target name (= backend env name)
  dockerfile: .veris/Dockerfile.sandbox   # Optional per-target override

  agent:                                  # How your agent runs
    name: My Agent                        # Display name (optional)
    code_path: /agent                     # Code location (default: /app)
    entry_point: python -m app.main       # Startup command
    port: 8008                            # Listen port (default: 8008)
    environment:                          # Runtime env vars
      LLM_PROVIDER: ${LLM_PROVIDER}
      # DATABASE_URL is auto-injected when `postgres` is in services —
      # you don't need to write it yourself. See reference/services/postgres.

  services:                               # Which mock services your agent calls
    - name: calendar                      # Service identifier
      dns_aliases:                        # Domains routed to mock
        - www.googleapis.com
        - calendar.google.com
      config:                             # Service-specific env vars
        SOME_KEY: some_value
      port: 8003                          # Optional port override
      description: "Custom desc"          # For generic services

  actor:                                  # How the actor reaches your agent
    channels:                             # Communication channels
      - type: http                        # http | ws | email | function | cli
        url: http://localhost:8008
        method: POST
        headers:
          Content-Type: application/json
        request:
          message_field: message
          session_field: session_id
          static_fields:
            prompt_type: default
        response:
          type: json                      # json | sse
          message_field: response
          session_field: session_id
    config:                               # Actor behavior config
      MAX_TURNS: "10"
```

Agent

The agent section tells the sandbox how to run your agent inside the container — where the code lives, how to start it, and what environment variables it needs.

```yaml
agent:
  name: Billing Assistant
  entry_point: python -m app.main
  port: 8008
  environment:
    LLM_PROVIDER: ${LLM_PROVIDER}
```

Fields

| Field | Description |
| --- | --- |
| name | Display name used in the console. |
| code_path | Directory your agent code lives in, relative to the container root. Defaults to /app. |
| entry_point | Command to start your agent. Required unless you’re using a function or cli channel. |
| port | Port your agent listens on. Defaults to 8008. |
| environment | Environment variables passed to the agent process (key/value map). |

Common entry points

| Stack | Example entry_point |
| --- | --- |
| Python (module) | python -m app.main |
| Python (uvicorn) | uvicorn app.main:app --host 0.0.0.0 --port 8008 |
| Python (shorthand) | app.main:app — auto-expanded to the uvicorn form above, using agent.port |
| Node.js | node dist/server.js |
| Node.js (npm) | npm start |
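To picture how the shorthand expands, here is a minimal sketch. This is illustrative only, not Veris's actual expansion logic, and the heuristic for detecting the shorthand form is our own assumption:

```python
def expand_entry_point(entry_point: str, port: int = 8008) -> str:
    """Sketch: expand the `module:app` shorthand into the uvicorn
    command shown above. Illustrative, not Veris's implementation."""
    # Assumption: a shorthand is a single colon-separated token with no spaces
    if ":" in entry_point and " " not in entry_point:
        module, app = entry_point.split(":", 1)
        return f"uvicorn {module}:{app} --host 0.0.0.0 --port {port}"
    return entry_point  # already a full command; pass through unchanged

# expand_entry_point("app.main:app")
# -> "uvicorn app.main:app --host 0.0.0.0 --port 8008"
```

Note how `agent.port` feeds the `--port` flag, which is why the shorthand and a hand-written uvicorn command behave the same.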

Services

The services array declares the mock services your agent calls. Each entry names a mock from the service catalog and lists the domains it should intercept.

```yaml
services:
  - name: salesforce
    dns_aliases:
      - myorg.my.salesforce.com
      - login.salesforce.com
```

At container startup, the listed domains are routed to the local mock — your agent calls myorg.my.salesforce.com the same way it does in production, and the mock responds instead.

Fields

| Field | Description |
| --- | --- |
| name | Service identifier from the catalog (e.g. calendar, salesforce, stripe). Required. |
| dns_aliases | Production domains your agent calls. The sandbox routes them to the local mock at startup. |
| config | Service-specific environment variables passed to the mock (e.g. STRIPE_TEST_MODE). See each service’s entry in the catalog for accepted keys. |
| port | Override the mock’s listen port. Rarely needed. |
| description | Free-text description used by the generic mock so the LLM knows what API it’s standing in for. |

Actor

The actor section configures how the actor reaches your agent. At minimum, you declare a transport in channels. Add init if your agent needs a setup step before the first turn, and config to tune actor behavior like turn limits.

```yaml
actor:
  channels:
    - type: http
      url: http://localhost:8008
  config:
    MAX_TURNS: "10"
```

Init (pre-connection setup)

Use init when your agent requires a setup step before messaging — like creating a conversation, obtaining a session token, or authenticating.

```yaml
actor:
  init:
    type: http
    method: POST
    url: http://localhost:8080/api/v1/conversations
    body:
      banker_id: "B001"
      branch_id: "BR01"
    response:
      message_field: content   # Capture initial message from response
```

Each top-level field in init’s response becomes a {field_name} variable you can use in your channel’s URL, headers, or static_fields. For example, if init returns {"conversation_id": "abc123"}, a channel URL can include /conversations/{conversation_id}/messages.
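Putting the two together, assuming init’s response is {"conversation_id": "abc123"}, a config like this threads the captured value into the channel URL (the endpoint paths here are illustrative):

```yaml
actor:
  init:
    type: http
    method: POST
    url: http://localhost:8080/api/v1/conversations
  channels:
    - type: http
      # {conversation_id} is filled in from init's JSON response
      url: http://localhost:8080/api/v1/conversations/{conversation_id}/messages
      method: POST
      request:
        message_field: message
```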

Channel types

Pick one per channel entry; most agents declare exactly one.

| Type | Use when |
| --- | --- |
| http | Agent exposes a chat REST API (JSON or SSE streaming responses). |
| ws | Agent communicates over WebSocket. |
| email | Agent communicates asynchronously via email. |
| function | Python agent invoked as a function in production (LangGraph Platform deploy, Lambda-style handler, importable library). Python-only today. |
| cli | Agent’s production entry point is a command-line program (including Python CLIs like python -m app.main). |
| voice (beta) | Agent handles phone/voice interactions. |
| browser-use (beta) | Agent has a web UI users interact with via a browser. |

HTTP

```yaml
actor:
  channels:
    - type: http
      url: http://localhost:8008
      method: POST
      headers:
        Content-Type: application/json
        Authorization: Bearer test-token
      request:
        message_field: message      # JSON field for user message
        session_field: session_id   # Session tracking field
        static_fields:              # Extra fields per request
          prompt_type: default
      response:
        type: json                  # json or sse
        message_field: response     # Field containing agent reply
        session_field: session_id
```

message_field and session_field target top-level JSON keys. If your agent speaks A2A or another nested JSON-RPC protocol, see the A2A adapter pattern under Google ADK.
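Concretely, each actor turn produces a flat JSON body built from those fields. The sketch below shows the mapping under the assumption that static_fields are merged into every request; it is illustrative, not Veris's own code:

```python
def build_request_body(message, session_id, *, message_field="message",
                       session_field="session_id", static_fields=None):
    """Sketch of how an http channel's `request` block could map onto
    the JSON body the actor POSTs. Illustrative only."""
    body = dict(static_fields or {})   # extra fields sent on every request
    body[message_field] = message      # the actor's message for this turn
    body[session_field] = session_id   # lets the agent correlate turns
    return body

body = build_request_body("Hi, I need a refund", "sess-1",
                          static_fields={"prompt_type": "default"})
# -> {"prompt_type": "default", "message": "Hi, I need a refund",
#     "session_id": "sess-1"}
```

On the response side, the actor reads the reply from the top-level key named by response.message_field in the same flat-JSON fashion.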

Server-sent events (response.type: sse) — for agents that stream responses via SSE:

```yaml
actor:
  channels:
    - type: http
      url: http://localhost:8008/chat
      method: POST
      request:
        message_field: message
      response:
        type: sse
        chunk_event: chunk      # SSE event name for chunks
        chunk_field: chunk      # Field in chunk event data
        done_event: end_turn    # SSE event signaling completion
```

For mixed SSE streams where only some frames contain user-visible text, constrain which chunks get shown to the actor:

```yaml
actor:
  channels:
    - type: http
      url: http://localhost:8008/chat
      method: POST
      request:
        message_field: message
      response:
        type: sse
        chunk_event: message
        chunk_filter_field: type
        chunk_filter_equals: delta
        chunk_field: delta
        done_data: "[DONE]"
```

In this example, Veris only appends SSE frames where data.type == "delta" and stops when it sees a raw data: [DONE] sentinel. Use chunk_event: message when the agent sends unnamed/default SSE frames — common in streams that put all frame types on the default SSE event and distinguish them with a JSON type field.
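The filtering rule can be sketched as follows. This is our own illustration of the logic described above, not Veris's implementation; the frames are hypothetical `data:` payloads from such a stream:

```python
import json

def visible_text(frames, chunk_field="delta", filter_field="type",
                 filter_equals="delta", done_data="[DONE]"):
    """Sketch of the chunk-filter rule: keep frames whose filter field
    matches, stop at the raw done sentinel. Illustrative only."""
    out = []
    for data in frames:            # each `data:` payload, in stream order
        if data == done_data:      # raw sentinel, not JSON: stop reading
            break
        obj = json.loads(data)
        if obj.get(filter_field) == filter_equals:
            out.append(obj[chunk_field])
    return "".join(out)

stream = [
    '{"type": "tool_call", "name": "lookup"}',  # skipped: type != "delta"
    '{"type": "delta", "delta": "Hello"}',
    '{"type": "delta", "delta": " world"}',
    "[DONE]",
]
# visible_text(stream) -> "Hello world"
```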

WebSocket

```yaml
actor:
  channels:
    - type: ws
      url: ws://localhost:8008/ws
      request:
        message_field: message
        session_field: session_id
      response:
        message_field: response
```

Email

```yaml
actor:
  channels:
    - type: email
      email_address: agent@ea.veris.ai
```

Function

For Python agents invoked as a function rather than as a server or CLI — LangGraph Platform deploys, Lambda-style handlers, importable library agents, orchestrator-spawned sub-agents. Veris imports your existing per-turn function and calls it once per actor turn; what it returns goes back as the agent’s reply.

The function channel is Python-only today. Agents in other languages should use http, ws, cli, or email.

```yaml
actor:
  channels:
    - type: function
      callable: app.agent:run_turn
```

callable is the dotted import path to your agent’s existing per-turn function (module:function). Whatever your agent already does for a turn (LLM calls, tool routing, state updates) stays in that function:

app/agent.py
```python
def run_turn(message: str) -> str:
    return f"You said: {message}"
```

Any importable module under agent.code_path works. State you keep in the module (globals, loaded models, open connections) persists across turns within a simulation — useful for agents with heavy startup (model loading, connection pools).
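A minimal illustration of that persistence, using hypothetical agent code (the counter is ours, not a Veris API):

```python
# app/agent.py — hypothetical example; Veris calls run_turn once per actor turn
_turn_count = 0  # module-level state survives between calls in one simulation

def run_turn(message: str) -> str:
    global _turn_count
    _turn_count += 1
    return f"[turn {_turn_count}] You said: {message}"

# Called twice within the same simulation, the counter carries over:
# run_turn("hi")    -> "[turn 1] You said: hi"
# run_turn("again") -> "[turn 2] You said: again"
```

The same mechanism is what lets a model loaded at import time, or a connection pool opened at module scope, be reused on every turn.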

Function vs CLI for Python agents. Both channels work for Python; pick by production shape. If your agent runs as a long-lived process and callers import-and-call it (LangGraph, warm Lambda, a library), use function — the process stays alive across turns and in-memory state persists. If users invoke it fresh per run (python -m app.main ...), use cli — each turn spawns a new process, matching production.

Structured input/output — if your function already takes or returns a Pydantic BaseModel or dataclass, Veris picks up the type hint and auto-(de)serializes. The actor reads the JSON schema and composes schema-valid payloads from natural-language prompts.

app/handlers.py
```python
from pydantic import BaseModel


class RefundRequest(BaseModel):
    order_id: str
    reason: str


class RefundResult(BaseModel):
    refunded: bool
    amount_cents: int


async def process_refund(req: RefundRequest) -> RefundResult:
    """Issue a refund for the given order."""
    ...
```

Omit agent.entry_point and agent.port when using a function channel — the container skips agent startup, and your callable is the agent.

CLI

For agents whose production entry point is a CLI (cron-driven binaries, shell-invoked programs, single-shot commands). Each actor turn spawns a fresh subprocess, passes the message as an argument or via stdin, and reads stdout as the reply.

```yaml
actor:
  channels:
    - type: cli
      command: ["openclaw", "agent", "--local", "--agent", "brandpulse",
                "--deliver", "--reply-channel", "slack", "--reply-to", "C_SANDBOX"]
      message_arg: "--message"
      timeout_seconds: 600
```

Pick one of message_arg, message_stdin, or message_positional to control where the actor’s message lands (or omit all three if the command doesn’t take the message at all). timeout_seconds defaults to 600.
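For example, a CLI that reads the message from stdin might be wired up like this. The field names come from the list above, but the command and the boolean value shape for message_stdin are our assumptions:

```yaml
actor:
  channels:
    - type: cli
      command: ["python", "-m", "app.main", "--format", "text"]
      message_stdin: true      # actor's message is piped to the process's stdin
      timeout_seconds: 120
```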

Fresh subprocess per turn, so there’s no persistent state on the Veris side. If the CLI supports sessions (e.g. openclaw agent --session-id ...), wire it in via your own arg list.

Omit agent.entry_point and agent.port when using a CLI channel — the container skips agent startup; each actor turn invokes the CLI directly.

Voice

For agents that handle phone/voice interactions:

```yaml
actor:
  channels:
    - type: voice
```

Voice channels are in beta. If you’re working on phone or voice agents, reach out about enterprise support.

Browser-use

For agents with a web UI that users interact with via a browser:

```yaml
actor:
  channels:
    - type: browser-use
      url: http://localhost:3000
```

Browser-use channels are in beta. If you’re working on web-UI agents, reach out about enterprise support.

Actor config

Values in actor.config are exported as environment variables for the actor service. Use the canonical uppercase names below.

| Variable | Default | Description |
| --- | --- | --- |
| MAX_TURNS | 10 | Maximum actor turns before the simulation is forced to stop |

Environment variables

Variable expansion

Values in agent.environment, services[].config, and actor.config support shell-style ${VAR} expansion. The variable is resolved from the container’s environment at config-load time:

```yaml
agent:
  environment:
    LLM_PROVIDER: ${LLM_PROVIDER}     # From container env
    DB_NAME: myapp_${SIMULATION_ID}   # Dynamic per simulation
```

Never put secrets in veris.yaml. Use veris env vars set --secret for API keys and other sensitive values. Variables set with veris env vars set take precedence over values defined here.

Injected into the agent’s runtime

Beyond the values you set in veris.yaml, Veris injects these into your agent’s environment at startup:

| Variable | Description |
| --- | --- |
| SIMULATION_ID | Unique per run (e.g. sim_abc123). Use it for log correlation, DB isolation, or trace tagging. |
| PORT | Value of agent.port. Present so frameworks that read $PORT (Express, Flask, etc.) just work. |
| SSL_CERT_FILE, REQUESTS_CA_BUNDLE, NODE_EXTRA_CA_CERTS | Path to the sandbox’s CA bundle. Python (httpx, requests, urllib) and Node (https) automatically trust mock service TLS. Set manually only if your HTTP client reads certs from a non-standard location. |
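For instance, an agent might derive a per-run database name from SIMULATION_ID to keep simulations isolated. A sketch, with a helper name of our own invention:

```python
import os

def simulation_db_name(base: str = "myapp") -> str:
    """Derive a per-simulation database name from the injected
    SIMULATION_ID. Illustrative helper, not part of any Veris SDK."""
    sim_id = os.environ.get("SIMULATION_ID", "local")  # fallback for dev runs
    return f"{base}_{sim_id}"

# With SIMULATION_ID=sim_abc123 in the environment:
# simulation_db_name() -> "myapp_sim_abc123"
```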

Multiple Targets

A single veris.yaml can define multiple targets. Each target is a top-level key with its own services, actor, and agent blocks:

.veris/veris.yaml
```yaml
version: "1.0"

billing-assistant-env:
  services:
    - name: postgres
  actor:
    channels:
      - type: http
        url: http://localhost:8008/chat
  agent:
    name: Billing Assistant
    entry_point: uv run app
    port: 8008

customer-support-env:
  dockerfile: .veris/Dockerfile.support   # Per-target Dockerfile override
  services:
    - name: zendesk-support
      dns_aliases:
        - help.example.com
  actor:
    channels:
      - type: email
        email_address: support@ea.veris.ai
  agent:
    name: Customer Support
    entry_point: uv run support
    port: 8009
```

Target-level fields

| Field | Default | Description |
| --- | --- | --- |
| dockerfile | .veris/Dockerfile.sandbox | Per-target Dockerfile path (relative to project root) |
| services | | Mock services for this target |
| actor | | Actor config for this target |
| agent | | Agent config for this target |

Each target is independent — it has its own services, actor channels, agent entry point, and optionally its own Dockerfile.

See Multiple Targets (CLI) for how to create, select, and manage targets from the command line.