Agent Settings
Learn how to configure your agents
Agents are the core building blocks of our framework, designed to be both powerful individual workers and collaborative team members in multi-agent systems. Through the framework's agent graph architecture, agents can seamlessly delegate tasks, share context, and work together using structured data components.
Creating an Agent
Every agent needs an `id`, a `name`, and a clear `prompt` that define its behavior:
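A minimal sketch, assuming the framework exposes an `agent()` builder (the import path below is a placeholder; use the one from your installation):

```typescript
import { agent } from "@your-framework/agents-sdk"; // placeholder import path

// The three required fields: a stable id, a display name, and a system prompt.
const supportAgent = agent({
  id: "customer-support",
  name: "Customer Support Agent",
  prompt:
    "You are a friendly customer support agent. Answer questions about " +
    "orders and returns, and escalate anything you cannot resolve.",
});
```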
Agent Options
The framework supports rich agent configuration. Here are the options you can configure:
| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | string | Yes | Stable agent identifier used for consistency and persistence |
| `name` | string | Yes | Human-readable name for the agent |
| `prompt` | string | Yes | Detailed behavior guidelines and system prompt for the agent |
| `description` | string | No | Brief description of the agent's purpose and capabilities |
| `models` | object | No | AI model settings with separate settings for `base`, `structuredOutput`, and `summarizer` models. If not specified, the agent inherits model settings from its agent graph, which may in turn inherit from the project settings. |
| `models.base` | object | No | Primary model for conversational text generation and reasoning |
| `models.structuredOutput` | object | No | Model used for structured JSON output only (falls back to `base` if not configured) |
| `models.summarizer` | object | No | Model used for summaries and status updates (falls back to `base` if not configured) |
| `canUse` | object | No | MCP tools that the agent can use. See MCP Servers for details |
| `dataComponents` | array | No | Structured output components for rich, interactive responses. See Data Components for details |
| `artifactComponents` | array | No | Components for handling tool or agent outputs. See Artifact Components for details |
| `canTransferTo` | function | No | Function returning an array of agents this agent can transfer to. See Transfer Relationships for details |
| `canDelegateTo` | function | No | Function returning an array of agents this agent can delegate to. See Delegation Relationships for details |
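To illustrate how these options combine, here is a rough sketch of a more fully configured agent; the `agent()` builder, the referenced tools, components, and peer agents are all placeholders assumed to be defined elsewhere in your project:

```typescript
import { agent } from "@your-framework/agents-sdk"; // placeholder import path
// ordersMcpServer, orderSummaryComponent, invoiceArtifactComponent,
// refundsAgent, and shippingAgent are assumed to be defined elsewhere.

const orderRouter = agent({
  id: "order-router",
  name: "Order Router",
  description: "Routes order-related questions to the right specialist.",
  prompt: "Classify incoming order questions and hand them to a specialist agent.",
  models: {
    base: { model: "anthropic/claude-sonnet-4-20250514" },
    summarizer: { model: "anthropic/claude-3-5-haiku-20241022" },
  },
  canUse: { ordersMcpServer },                    // MCP tools this agent may call
  dataComponents: [orderSummaryComponent],        // structured output components
  artifactComponents: [invoiceArtifactComponent], // components for tool/agent outputs
  canTransferTo: () => [refundsAgent],            // agents this agent can transfer to
  canDelegateTo: () => [shippingAgent],           // agents this agent can delegate to
});
```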
Model Settings
The `models` object allows you to configure different models for different tasks, each with their own provider options:
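For example, a sketch of a per-task model configuration; each entry is assumed to take a `model` ID (and, optionally, `providerOptions` as described below):

```typescript
const researchAgent = agent({
  id: "research-agent",
  name: "Research Agent",
  prompt: "Research topics thoroughly and cite your sources.",
  models: {
    // Primary model for conversation and reasoning.
    base: { model: "anthropic/claude-sonnet-4-20250514" },
    // Cheaper model reserved for structured JSON output.
    structuredOutput: { model: "openai/gpt-4.1-mini-2025-04-14" },
    // Fast model for summaries and status updates.
    summarizer: { model: "google/gemini-2.5-flash-lite" },
  },
});
```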
Model Types
- `base`: Primary model used for conversational text generation and reasoning
- `structuredOutput`: Model used for structured JSON output only (falls back to `base` if not configured and nothing to inherit)
- `summarizer`: Model used for summaries and status updates (falls back to `base` if not configured and nothing to inherit)
Supported Providers
The framework currently supports models from:
- Anthropic: `anthropic/claude-sonnet-4-20250514`, `anthropic/claude-3-5-haiku-20241022`, etc.
- OpenAI: `openai/gpt-5-2025-08-07`, `openai/gpt-4.1-mini-2025-04-14`, `openai/gpt-4.1-nano-2025-04-14`, etc.
- Google: `google/gemini-2.5-pro`, `google/gemini-2.5-flash`, `google/gemini-2.5-flash-lite`, etc.
Provider Options
All models support `providerOptions` to customize their behavior. These include both generic parameters that work across all providers and provider-specific features like reasoning.
Generic Parameters
These parameters work with all supported providers and go directly in `providerOptions`:
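A sketch of generic parameters, assuming the usual sampling knobs such as `temperature` and `maxTokens` (verify the exact names against your framework version):

```typescript
const conciseAgent = agent({
  id: "concise-agent",
  name: "Concise Agent",
  prompt: "Answer briefly and precisely.",
  models: {
    base: {
      model: "openai/gpt-4.1-mini-2025-04-14",
      providerOptions: {
        // Generic parameters (assumed names) sit directly in providerOptions.
        temperature: 0.2, // lower values give more deterministic output
        maxTokens: 1024,  // cap the response length
      },
    },
  },
});
```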
Provider-Specific Features
Advanced features like reasoning require provider-specific configuration wrapped in the provider name:
OpenAI Reasoning
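A sketch of OpenAI reasoning configuration, nested under the `openai` key; the `reasoningEffort` option name follows the Vercel AI SDK convention and should be treated as an assumption:

```typescript
const reasoningAgent = agent({
  id: "reasoning-agent",
  name: "Reasoning Agent",
  prompt: "Work through problems step by step before answering.",
  models: {
    base: {
      model: "openai/gpt-5-2025-08-07",
      providerOptions: {
        openai: {
          reasoningEffort: "high", // assumed option; typically "low" | "medium" | "high"
        },
      },
    },
  },
});
```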
Anthropic Thinking
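A sketch of Anthropic extended thinking with a token budget, nested under the `anthropic` key (the `thinking` shape mirrors Anthropic's API and is an assumption here):

```typescript
const thinkingAgent = agent({
  id: "thinking-agent",
  name: "Thinking Agent",
  prompt: "Think carefully before responding to complex questions.",
  models: {
    base: {
      model: "anthropic/claude-sonnet-4-20250514",
      providerOptions: {
        anthropic: {
          // Extended thinking with an explicit token budget (assumed shape).
          thinking: { type: "enabled", budgetTokens: 10000 },
        },
      },
    },
  },
});
```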
Google Gemini Thinking
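A sketch of Gemini thinking configuration under the `google` key (the `thinkingConfig` shape mirrors Google's API and is assumed to be passed through unchanged):

```typescript
const geminiAgent = agent({
  id: "gemini-agent",
  name: "Gemini Agent",
  prompt: "Reason about the request before answering.",
  models: {
    base: {
      model: "google/gemini-2.5-flash",
      providerOptions: {
        google: {
          // Token budget for thinking; includeThoughts surfaces the reasoning (assumed shape).
          thinkingConfig: { thinkingBudget: 2048, includeThoughts: true },
        },
      },
    },
  },
});
```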
Model Providers
Built-in Providers
The framework supports these providers directly:
- Anthropic: `anthropic/claude-sonnet-4-20250514`, `anthropic/claude-3-5-haiku-20241022`, etc.
- OpenAI: `openai/gpt-5-2025-08-07`, `openai/gpt-4.1-mini-2025-04-14`, `openai/gpt-4.1-nano-2025-04-14`, etc.
- Google: `google/gemini-2.5-pro`, `google/gemini-2.5-flash`, `google/gemini-2.5-flash-lite`, etc.
Accessing Other Models
For models not directly supported, use these proxy providers:
- OpenRouter: Access any model via the `openrouter/model-id` format (e.g., `openrouter/anthropic/claude-sonnet-4`, `openrouter/meta-llama/llama-3.1-405b`)
- Vercel AI SDK Gateway: Access models through your gateway via the `gateway/model-id` format (e.g., `gateway/anthropic/claude-sonnet-4`)
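For example, a sketch pointing an agent's base model at an OpenRouter-hosted model; the configuration shape is unchanged, only the model ID prefix differs:

```typescript
const llamaAgent = agent({
  id: "llama-agent",
  name: "Llama Agent",
  prompt: "Answer general questions.",
  models: {
    // Proxy providers use the same configuration; only the model ID format changes.
    base: { model: "openrouter/meta-llama/llama-3.1-405b" },
  },
});
```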
Required API Keys
You need the appropriate API key for your chosen provider:
- `ANTHROPIC_API_KEY` for Anthropic models
- `OPENAI_API_KEY` for OpenAI models
- `GOOGLE_GENERATIVE_AI_API_KEY` for Google models
- `OPENROUTER_API_KEY` for OpenRouter models
- `AI_GATEWAY_API_KEY` for Vercel AI SDK Gateway models
Inheritance
If no `models` settings are specified, the agent inherits the model settings from its agent graph, which may in turn inherit from the project settings.
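A sketch of that cascade, assuming a hypothetical `agentGraph()` builder that accepts graph-level `models` settings:

```typescript
// triageAgent and refundsAgent are assumed to be defined elsewhere.
const supportGraph = agentGraph({
  id: "support-graph",
  defaultAgent: triageAgent,
  agents: () => [triageAgent, refundsAgent],
  // Graph-level defaults: any agent without its own `models` settings uses these.
  models: {
    base: { model: "anthropic/claude-sonnet-4-20250514" },
  },
});
```

Here an agent that defines no `models` of its own picks up the graph's base model, while an agent that sets its own `models` overrides the inherited defaults.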
Graph Prompt Integration
Agents automatically receive any graph-level prompt configuration in addition to their individual prompt:
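A sketch of a graph-level prompt, again assuming a hypothetical `agentGraph()` builder with a `graphPrompt` field:

```typescript
const supportGraph = agentGraph({
  id: "support-graph",
  defaultAgent: triageAgent,
  agents: () => [triageAgent, refundsAgent],
  // Shared context appended to every agent's individual prompt.
  graphPrompt:
    "You are part of the Acme support team. Always be polite, never reveal " +
    "internal system details, and hand off to a human when asked.",
});
```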
The `graphPrompt` is injected into each agent's system prompt, providing consistent context and behavior guidelines across all agents in the graph.
Note: `openai/gpt-5-2025-08-07` and `openai/gpt-4.1-mini-2025-04-14` require a verified OpenAI organization. If your organization is not yet verified, these models will not be available.