TypeScript SDK

Agent Settings

Learn how to configure your agents

Agents are the core building blocks of our framework, designed to be both powerful individual workers and collaborative team members in multi-agent systems. Through the framework's agent graph architecture, each agent can seamlessly delegate tasks, share context, and work together using structured data components.

Creating an Agent

Every agent needs an id, a name, and a clear prompt that defines its behavior:

import { agent } from "@inkeep/agents-sdk";

const supportAgent = agent({
  id: "customer-support",
  name: "Customer Support Agent",
  prompt: `You are a customer support specialist. Always be helpful, professional, and empathetic.`,
});

Agent Options

The framework supports rich agent configuration. Here are the available options:

const myAgent = agent({
  // Required
  id: "my-agent-id", // Stable agent identifier
  name: "My Agent", // Human-readable agent name provided to other agents which have a relationship with this agent
  prompt: "Detailed behavior guidelines", // System prompt. Not given to other agents which have a relationship with this agent

  // Optional - Agent Description
  description: "Agent description", // Brief description of the agent's purpose provided to other agents which have a relationship with this agent

  // Optional - Tools Integration
  canUse: () => [searchTool, mcpTool],

  // Optional - Agent Relationships (for multi-agent systems)
  canTransferTo: () => [agent1], // Agents this can hand off to
  canDelegateTo: () => [agent2], // Agents this can delegate to

  // Optional - AI Model Settings (Vercel AI SDK v5)
  models: {
    base: {
      model: "anthropic/claude-sonnet-4-20250514",
      providerOptions: {
        temperature: 0.7,
        maxOutputTokens: 2048,   // Maximum tokens to generate
        maxDuration: 30,          // Timeout in seconds
        topP: 0.95,               // Nucleus sampling
        topK: 40,                 // Top-k sampling
        frequencyPenalty: 0.0,    // Reduce repetition
        presencePenalty: 0.0,     // Encourage new topics
        stopSequences: ["\n\n"], // Stop generation at sequences
        seed: 12345,              // For deterministic output
      },
    },
    structuredOutput: {
      model: "openai/gpt-4.1-mini-2025-04-14", // For structured JSON output
      providerOptions: {
        temperature: 0.1,
        maxOutputTokens: 1024,
        experimental_reasoning: true,  // Enable reasoning mode (if supported)
      },
    },
    summarizer: {
      model: "openai/gpt-4.1-nano-2025-04-14", // For summaries
      providerOptions: {
        temperature: 0.5,
        maxOutputTokens: 1000,
      },
    },
  },

  // Optional - Data Components (Structured Outputs)
  dataComponents: [
    {
      id: "customer-info",
      name: "CustomerInfo",
      description: "Customer information display component",
      props: {
        type: "object",
        properties: {
          name: { type: "string", description: "Customer name" },
          email: { type: "string", description: "Customer email" },
          issue: { type: "string", description: "Customer issue description" },
        },
        required: ["name", "email", "issue"],
      },
    },
  ],

  // Optional - Artifact Components (Structured Outputs from tools or agents)
  artifactComponents: [
    {
      id: "customer-info",
      name: "CustomerInfo",
      description: "Customer information display component",
      summaryProps: {
        type: "object",
        properties: {
          name: { type: "string", description: "Customer name" },
        },
        required: ["name"],
      },
      fullProps: {
        type: "object",
        properties: {
          customer_info: {
            type: "string",
            description: "Customer information",
          },
        },
        required: ["customer_info"],
      },
    },
  ],
});
  • id (string, required): Stable agent identifier used for consistency and persistence
  • name (string, required): Human-readable name for the agent
  • prompt (string, required): Detailed behavior guidelines and system prompt for the agent
  • description (string, optional): Brief description of the agent's purpose and capabilities
  • models (object, optional): AI model settings with separate settings for base, structuredOutput, and summarizer models. If no models settings are specified, the agent inherits the models settings from its agent graph, which may inherit from the project settings
  • models.base (object, optional): Primary model for conversational text generation and reasoning
  • models.structuredOutput (object, optional): Model used for structured JSON output only (falls back to base if not configured)
  • models.summarizer (object, optional): Model used for summaries and status updates (falls back to base if not configured)
  • canUse (function, optional): Function returning array of MCP tools the agent can use. See MCP Servers for details
  • dataComponents (array, optional): Structured output components for rich, interactive responses. See Data Components for details
  • artifactComponents (array, optional): Components for handling tool or agent outputs. See Artifact Components for details
  • canTransferTo (function, optional): Function returning array of agents this agent can transfer to. See Transfer Relationships for details
  • canDelegateTo (function, optional): Function returning array of agents this agent can delegate to. See Delegation Relationships for details
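
For example, a support agent can be wired to a billing specialist through these relationship options. This is a minimal sketch using the same agent() factory shown above; the ids, names, and prompts are illustrative:

```typescript
import { agent } from "@inkeep/agents-sdk";

const billingAgent = agent({
  id: "billing-specialist",
  name: "Billing Specialist",
  prompt: "You resolve billing and invoicing questions.",
});

const frontlineAgent = agent({
  id: "frontline-support",
  name: "Frontline Support Agent",
  prompt: "You are the first point of contact. Route billing questions to the billing specialist.",
  canTransferTo: () => [billingAgent], // hand the conversation off entirely
  canDelegateTo: () => [billingAgent], // or delegate a sub-task and keep control of the conversation
});
```

The relationships are functions returning arrays, which allows agents to reference each other without circular-import problems.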

Model Settings

The models object allows you to configure different models for different tasks, each with its own provider options:

models: {
  base: {
    model: "anthropic/claude-sonnet-4-20250514", // Primary model for text generation
    providerOptions: {
      temperature: 0.7,
      maxOutputTokens: 2048  // AI SDK v5 uses maxOutputTokens
    }
  },
  structuredOutput: {
    model: "openai/gpt-4.1-mini-2025-04-14", // For structured JSON output only
    providerOptions: {
      temperature: 0.1,
      maxOutputTokens: 1024,
      experimental_reasoning: true  // Enable reasoning for better structured outputs
    }
  },
  summarizer: {
    model: "anthropic/claude-3-5-haiku-20241022", // For summaries and status updates
    providerOptions: {
      temperature: 0.5,
      maxOutputTokens: 1000
    }
  }
}

Model Types

  • base: Primary model used for conversational text generation and reasoning
  • structuredOutput: Model used for structured JSON output only (falls back to base if not configured and nothing to inherit)
  • summarizer: Model used for summaries and status updates (falls back to base if not configured and nothing to inherit)
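
The fallback rule can be pictured with a small helper. This is a hypothetical illustration, not part of the SDK; the interfaces only mirror the shape of the models object shown above:

```typescript
// Shapes mirroring the models configuration above (illustrative only).
interface ModelConfig {
  model: string;
  providerOptions?: Record<string, unknown>;
}

interface ModelSettings {
  base?: ModelConfig;
  structuredOutput?: ModelConfig;
  summarizer?: ModelConfig;
}

// structuredOutput and summarizer fall back to base when not configured.
function resolveModel(
  settings: ModelSettings,
  kind: "base" | "structuredOutput" | "summarizer"
): ModelConfig | undefined {
  return settings[kind] ?? settings.base;
}

const settings: ModelSettings = {
  base: { model: "anthropic/claude-sonnet-4-20250514" },
  structuredOutput: { model: "openai/gpt-4.1-mini-2025-04-14" },
};

resolveModel(settings, "structuredOutput"); // explicitly configured model
resolveModel(settings, "summarizer");       // not configured, falls back to base
```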

Supported Providers

The framework currently supports models from:

  • Anthropic: anthropic/claude-sonnet-4-20250514, anthropic/claude-3-5-haiku-20241022, etc.
  • OpenAI: openai/gpt-5-2025-08-07, openai/gpt-4.1-mini-2025-04-14, openai/gpt-4.1-nano-2025-04-14, etc.
  • Google: google/gemini-2.5-pro, google/gemini-2.5-flash, google/gemini-2.5-flash-lite, etc.

Provider Options

All models support providerOptions to customize their behavior. These include both generic parameters that work across all providers and provider-specific features like reasoning.

Generic Parameters

These parameters work with all supported providers and go directly in providerOptions:

models: {
  base: {
    model: "anthropic/claude-sonnet-4-20250514",
    providerOptions: {
      maxOutputTokens: 4096,        // Maximum tokens to generate (AI SDK v5)
      temperature: 0.7,             // Controls randomness (0.0-1.0)
      topP: 0.95,                   // Nucleus sampling (0.0-1.0)
      topK: 40,                     // Top-k sampling (integer)
      frequencyPenalty: 0.0,        // Reduce repetition (-2.0 to 2.0)
      presencePenalty: 0.0,         // Encourage new topics (-2.0 to 2.0)
      stopSequences: ["\n\n"],     // Stop generation at sequences
      seed: 12345,                  // For deterministic output
      maxDuration: 30,              // Timeout in seconds (not milliseconds)
      maxRetries: 2,                // Maximum retry attempts
    }
  }
}

Provider-Specific Features

Advanced features like reasoning require provider-specific configuration wrapped in the provider name:

OpenAI Reasoning

models: {
  base: {
    model: "openai/o3-mini",
    providerOptions: {
      maxOutputTokens: 4096,
      temperature: 0.7,
      openai: {
        reasoningEffort: 'medium'  // 'low' | 'medium' | 'high'
      }
    }
  }
}

Anthropic Thinking

models: {
  base: {
    model: "anthropic/claude-3-7-sonnet-20250219",
    providerOptions: {
      maxOutputTokens: 4096,
      temperature: 0.7,
      anthropic: {
        thinking: { 
          type: 'enabled', 
          budgetTokens: 8000  // Tokens allocated for reasoning
        }
      }
    }
  }
}

Google Gemini Thinking

models: {
  base: {
    model: "google/gemini-2.5-flash",
    providerOptions: {
      maxOutputTokens: 4096,
      temperature: 0.7,
      google: {
        thinkingConfig: {
          thinkingBudget: 8192,     // 0 disables thinking
          includeThoughts: true     // Return thought summary
        }
      }
    }
  }
}

Model Providers

Built-in Providers

The framework supports these providers directly:

  • Anthropic: anthropic/claude-sonnet-4-20250514, anthropic/claude-3-5-haiku-20241022, etc.
  • OpenAI: openai/gpt-5-2025-08-07, openai/gpt-4.1-mini-2025-04-14, openai/gpt-4.1-nano-2025-04-14, etc.
  • Google: google/gemini-2.5-pro, google/gemini-2.5-flash, google/gemini-2.5-flash-lite, etc.

Accessing Other Models

For models not directly supported, use these proxy providers:

  • OpenRouter: Access any model via openrouter/model-id format (e.g., openrouter/anthropic/claude-sonnet-4, openrouter/meta-llama/llama-3.1-405b)
  • Vercel AI SDK Gateway: Access models through your gateway via gateway/model-id format (e.g., gateway/anthropic/claude-sonnet-4)
models: {
  base: {
    model: "openrouter/anthropic/claude-sonnet-4",
    providerOptions: {
      temperature: 0.7,
      maxOutputTokens: 2048
    }
  },
  structuredOutput: {
    model: "gateway/openai/gpt-4.1-mini",
    providerOptions: {
      maxOutputTokens: 1024
    }
  }
}

Required API Keys

You need the appropriate API key for your chosen provider:

  • ANTHROPIC_API_KEY for Anthropic models
  • OPENAI_API_KEY for OpenAI models
  • GOOGLE_GENERATIVE_AI_API_KEY for Google models
  • OPENROUTER_API_KEY for OpenRouter models
  • AI_GATEWAY_API_KEY for Vercel AI SDK Gateway models
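
For example, for the Anthropic models used throughout this page, export the key before starting your service. The value shown is a placeholder:

```shell
# Placeholder value; substitute your real key (or load it from a .env file).
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```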

Inheritance

If no models settings are specified, the agent will inherit the models settings from its agent graph, which may inherit from the project settings.

Graph Prompt Integration

Agents automatically receive any graph-level prompt configuration in addition to their individual prompt:

// Graph-level prompt that gets added to all agents
const graph = agentGraph({
  id: "support-graph",
  graphPrompt: `You work for Acme Corp. Always be professional and helpful. 
Follow company policies and escalate complex issues appropriately.`,
  agents: () => [supportAgent, escalationAgent],
});

The graphPrompt is injected into each agent's system prompt, providing consistent context and behavior guidelines across all agents in the graph.

Note
openai/gpt-5-2025-08-07 and openai/gpt-4.1-mini-2025-04-14 require a verified OpenAI organization. If your organization is not yet verified, these models will not be available.