# API Key Management URL: /api-keys Secure API key management for agent authentication

*** title: API Key Management description: Secure API key management for agent authentication ---------------------------------------------------------------

# API Key Management

The Inkeep Agent Framework provides a comprehensive API key management system for authenticating access to agents. API keys are securely hashed and stored, with support for expiration and revocation.

## Overview

API keys provide a secure way to authenticate programmatic access to your agents. Each API key is:

* **Securely hashed** using the scrypt algorithm before storage
* **Scoped to a specific tenant and agent**
* **Revocable** at any time
* **Expirable** with optional expiration dates

## Usage

### Creating an API Key

```typescript
import { createApiKey } from '@inkeep/agents-core';

const result = await createApiKey({
  tenantId: 'your-tenant-id',
  agentId: 'your-agent-id',
  expiresAt: '2025-12-31T23:59:59Z', // Optional expiration
});

// IMPORTANT: Show this key to the user only once!
console.log('Your API Key:', result.key); // Example: sk_live_abc123def456...

// The API key record (without the actual key)
console.log('Key Details:', result.apiKey);
```

# Core concepts URL: /concepts Learn about the key building blocks of Inkeep - Agents, Sub Agents, tools, data components, and more.

*** title: Core concepts sidebarTitle: Concepts description: Learn about the key building blocks of Inkeep - Agents, Sub Agents, tools, data components, and more. icon: "LuBoxes" ---------------

## Agents

In Inkeep, an **Agent** is the top-level entity you can interface with via conversational experiences (chat) or trigger programmatically (via API). Under the hood, an Agent is made up of one or more **Sub Agents** that work together to respond to a user or complete a task.

## Tools

When you send a message to an Agent, it is first received by a **Default Sub Agent** that decides what to do next. In a simple Agent, there may be only one Sub Agent with a few tools available to it.

**Tools** are actions that a Sub Agent can take, like looking up information or performing a task on apps and APIs. In Inkeep, tools can be added to Sub Agents as:

* **MCP Servers**: A common way to connect to external services and APIs. Many SaaS providers offer out-of-the-box MCP Servers, but you can also create your own and register them with their associated **Credentials** on Inkeep for Agents to use.
* **Function Tools**: Custom JavaScript functions that Agents can execute directly without the need for standing up an MCP server.

Typically, you want a Sub Agent to handle narrow, well-defined tasks. As a general rule of thumb, keep each Sub Agent to 5-7 related tools at a time.

## Sub Agent relationships

When your scenario gets complex, it can be useful to break up your logic into multiple Sub Agents that are specialized in specific parts of your task or workflow. This is often referred to as a "Multi-agent" system.

A Sub Agent can be configured to:

* **Transfer** control of the chat to another Sub Agent. When a transfer happens, the receiving Sub Agent becomes the primary driver of the thread and can respond to the user directly.
* **Delegate** a subtask to another ('child') Sub Agent and wait for its response before proceeding with the next step. A child Sub Agent *cannot* respond directly to a user.

## Sub Agent 'turn'

When it's a Sub Agent's turn, it can choose to:

1. Send an update message to the user
2. Call a tool to collect information or take an action
3. Transfer or delegate to another Sub Agent

An Agent's execution stays in this loop until one of the Sub Agents chooses to respond to the user with a final result. Sub Agents in Inkeep are designed to respond to the user as a single, cohesive unit by default.

## Chatting with an Agent

You can talk to an Inkeep Agent in a few ways, including:

* **UI Chat Components**: Drop-in React components for chat UIs with built-in streaming and rich UI customization. See [`agents-ui`](/talk-to-your-agents/react/chat-button).
* **As an MCP server**: Use your Inkeep Agent as if it were an MCP Server. This allows you to connect it to any MCP client, like Claude, ChatGPT, and other Agents. See [MCP server](/talk-to-your-agents/mcp-server).
* **Via API (Vercel format)**: An API that streams responses over server-sent events (SSE). Use it from any language/runtime, including Vercel's `useChat` and AI Elements primitives for custom UIs. See [API (Vercel format)](/talk-to-your-agents/api).
* **Via API (A2A format)**: An API that follows the Agent-to-Agent ('A2A') JSON-RPC protocol. Great for combining Inkeep with other Agent frameworks that support the A2A format. See [A2A protocol](/talk-to-your-agents/a2a).

At a glance:

* **UI Chat Components**: Drop-in chat components for React apps with streaming and rich UI.
* **API (Vercel format)**: POST /api/chat, SSE (text/event-stream), x-vercel-ai-data-stream: v2.
* **API (A2A format)**: JSON-RPC messages at /agents/a2a with blocking and streaming modes.
* **MCP server**: HTTP JSON-RPC endpoint at /v1/mcp with session header management.

## Authentication & API Keys

You can authenticate with your Agent using:

* **API Keys**: Securely hashed keys that are scoped to specific Agents
* **Development Mode**: No API key required, perfect for local development and testing
* **Bypass Secrets**: For internal services and infrastructure that need direct access

API keys are the recommended approach for production use, providing secure, scoped access to your Agents.

## Agent replies with Structured Data

Sometimes, you want your Agent to reply not in plain text but with specific types of well-defined information, often called 'Structured Outputs' (JSON). With Inkeep, there are a few ways to do this:

* **Data Components**: Structured Outputs that Sub Agents can output in their messages so they can render rich, interactive UIs (lists, buttons, forms, etc.) or convey structured information.
* **Artifacts**: A Sub Agent can save information from a **tool call result** as an artifact in order to make it available to others. For example, a Sub Agent that did a web search can save the contents of a webpage it looked at as an artifact. Once saved, a Sub Agent can cite or reference artifacts in its response, and other Sub Agents or users can fetch the full artifacts if they'd like.
* **Status Updates**: Real-time progress updates that can be plain text or Structured Outputs, used to keep users informed about what the Sub Agent is doing during longer operations.

## Passing context to Sub Agents

Beyond using Tools to fetch information, Sub Agents also receive information via:

* **Headers**: In the API request to an Agent, the calling application can include headers for a Sub Agent. Learn more [here](/typescript-sdk/headers).
* **Context Fetchers**: Can be configured for an Agent so that at the beginning of a conversation, an API call is automatically made to an external service to get information that is then made available to any Sub Agent.
For example, your Headers may include a `user-id`, which can be used to auto-fetch information from a CRM about the user for any Sub Agent to use. Headers and fetched context can then be referenced explicitly as `{{variables}}` in Sub Agent prompts. Learn more [here](/typescript-sdk/headers).

## Ways to build

Quick reference to the key docs for building with the Visual Builder or the TypeScript SDK.

With the Visual Builder:

* Configure and manage MCP servers for your Sub Agents.
* Create and manage Agents visually.
* Build rich UI elements Sub Agents can render in conversations.
* Define structured outputs generated by tools or Sub Agents.
* Show progress updates during longer operations.
* Manage secrets and auth for MCP servers.
* Organize agents, MCP Servers, and other entities in Projects.

With the TypeScript SDK:

* Configure Sub Agents with prompts, tools, and data components.
* Add tools as MCP servers.
* Create custom JavaScript functions that run in secure sandboxes.
* Define how Sub Agents transfer and delegate tasks.
* Build custom UI elements Sub Agents can render.
* Create structured outputs from tools or Sub Agents.
* Provide real-time progress updates.
* Dynamically fetch and cache external context.
* Store and retrieve credentials for MCP tools.

The Visual Builder and TypeScript SDK work seamlessly together—define your Sub Agents in code, push them to the Visual Builder, and iterate visually.

## Projects

You can organize your related MCP Servers, Credentials, Agents, and more into **Projects**. A Project is generally used to represent a set of related scenarios. For example, you may create one Project for your support team that has all the MCP servers and Agents related to customer support.

## CLI: Push and pull

The Inkeep CLI bridges your TypeScript SDK project and the Visual Builder. Run the following from your project folder (the one that contains your `inkeep.config.ts` and an `index.ts` file that exports a project).

* **Push (code → Builder)**: Sync locally defined agents, Sub Agents, tools, and settings from your SDK project into the Visual Builder.

  ```bash
  inkeep push
  ```

* **Pull (Builder → code)**: Fetch your project from the Visual Builder back into your SDK project. By default, the CLI uses an LLM to help update your local TypeScript files to reflect Builder changes.

  ```bash
  inkeep pull
  ```

Push and pull operate at the project level (not individual agents). Define agents in your project and push/pull the whole project. See the [CLI Reference](/typescript-sdk/cli-reference) for full command details.

## Deployment

Once you've built your Agents, you can:

* Self-host your Agents using Docker for full control and flexibility.
* Deploy your Agents to Vercel for easy serverless hosting.

## Architecture

The Inkeep Agent framework is composed of several key services and libraries that work together:

* **agents-manage-api**: A REST API that handles configuration of Agents, Sub Agents, MCP Servers, Credentials, and Projects.
* **agents-manage-ui**: Visual Builder web interface for creating and managing Agents. Writes to the `agents-manage-api`.
* **agents-sdk**: TypeScript SDK (`@inkeep/agents-sdk`) for declaratively defining Agents and custom tools in code. Writes to `agents-manage-api`.
* **agents-cli**: Includes various handy utilities, including `inkeep push` and `inkeep pull`, which sync your TypeScript SDK code with the Visual Builder.
* **agents-run-api**: The Runtime API that exposes Agents as APIs and executes Agent conversations. Keeps conversation state and emits OTEL traces.
* **agents-ui**: A UI component library of chat interfaces for embedding rich, dynamic Agent conversational experiences in web apps. # No-Code Agent Builder + Agents SDK URL: /overview Inkeep is a platform for building Agent-driven AI Chat Assistants and AI Workflows. *** title: No-Code Agent Builder + Agents SDK sidebarTitle: Overview icon: "LuBookOpen" description: Inkeep is a platform for building Agent-driven AI Chat Assistants and AI Workflows. ------------------------------------------------------------------------------------------------ With Inkeep, you can build AI Agents with a **No-Code Visual Builder** and **TypeScript SDK**. Agents can be edited in either with **full 2-way sync**, so technical and non-technical teams can create and manage their Agents in one platform. ## Two ways to build ### No-Code Visual Builder A no-code canvas so any team can create and own the Agents they care about. No-Code Agent Builder demo ### TypeScript Agents SDK A code-first framework so engineering teams can build with the tools they expect. ```typescript import { agent, subAgent } from "@inkeep/agents-sdk"; const helloAgent = subAgent({ id: "hello-agent", name: "Hello Agent", description: "Says hello", prompt: 'Only reply with the word "hello", but you may do it in different variations like h3110, h3110w0rld, h3110w0rld! etc...', }); export const basicAgent = agent({ id: "basic-agent", name: "Basic Agent", description: "A basic agent that just says hello", defaultSubAgent: helloAgent, subAgents: () => [helloAgent], }); ``` The **Visual Builder and TypeScript SDK are fully interoperable**: your technical and non-technical teams can edit and manage Agents in either format and switch or collaborate with others at any time. ## Use cases Inkeep Agents can operate as **Agentic Chat Assistants**, for example: * a customer experience agent for help centers, technical docs, or in-app experiences * an internal copilot to assist your support, sales, marketing, ops, and other teams Agents can also be used for **Agentic Workflow Automation** like: * Creating and updating knowledge bases, documentation, and blogs * Updating CRMs, triaging helpdesk tickets, and tackling repetitive tasks ## Platform Overview **Inkeep Open Source** includes: * A Visual Builder & TypeScript SDK with 2-way sync * Multi-agent architecture to support teams of agents * MCP Tools with credentials management * A UI component library for dynamic chat experiences * Triggering Agents via MCP, A2A, & Vercel SDK APIs * Observability via a Traces UI & OpenTelemetry * Easy deployment to Vercel and using Docker Interested in a managed platform? Sign up for the [Inkeep Cloud waitlist](https://inkeep.com/cloud-waitlist) or learn about [Inkeep Enterprise](https://inkeep.com/enterprise). ## Our Approach Inkeep is designed to be extensible and open: you can use the LLM provider of your choice, use Agents via open protocols, and with a [fair-code](/community/license) license and great devex, easily deploy and self-host Agents in your own infra. [Follow us](https://docs.inkeep.com/community/inkeep-community) to stay up to date, get help, and share feedback. ## Next Steps Get started with the Visual Builder and TypeScript SDK in under 5 minutes. Learn about the key concepts of building Agents with Inkeep. # Troubleshooting Guide URL: /troubleshooting Learn how to diagnose and resolve issues when something breaks in your Inkeep agent system. 
*** title: Troubleshooting Guide sidebarTitle: Troubleshooting description: Learn how to diagnose and resolve issues when something breaks in your Inkeep agent system. icon: LuWrench keywords: troubleshooting, debugging, errors, timeline, signoz, widget implementation -------------------------------------------------------------------------------------

## Overview

This guide provides a structured methodology for debugging problems across different components of your agent system.

## Step 1: Check the Timeline

The timeline is your first stop for understanding what happened during a conversation or agent execution. Navigate to the **Traces** section to view in-depth details for each conversation. Within each conversation, whenever something goes wrong during agent execution, you'll find a clickable **error card**.

### What to Look For

* **Execution flow**: Review the sequence of agent actions and tool calls
* **Timing**: Check for delays or bottlenecks in the execution
* **Agent transitions**: Verify that transfers and delegations happened as expected
* **Tool usage**: Confirm that tools were called correctly and returned expected results
* **Error cards**: Look for red error indicators in the timeline and click to view detailed error information

### Error Cards in the Timeline

Clicking an error card reveals:

* **Error type**: The specific category of error (e.g., "Agent Generation Error")
* **Exception stacktrace**: The complete stack trace showing exactly where the error occurred in the code

This detailed error information helps you pinpoint exactly what went wrong and where in your agent's execution chain.

### Copy Trace for Debugging

The `Copy Trace` button in the timeline view allows you to export the entire conversation trace as JSON. This is particularly useful for offline analysis and debugging complex flows.

Copy Trace button in the timeline view for exporting conversation traces

#### What's Included in the Trace Export

When you click `Copy Trace`, the system exports a JSON object containing:

```json
{
  "metadata": {
    "conversationId": "unique-conversation-id",
    "traceId": "distributed-trace-id",
    "agentId": "agent-identifier",
    "agentName": "Agent Name",
    "exportedAt": "2025-10-14T12:00:00.000Z"
  },
  "timing": {
    "startTime": "2025-10-14T11:59:00.000Z",
    "endTime": "2025-10-14T12:00:00.000Z",
    "durationMs": 60000
  },
  "timeline": [
    // Array of all activities with complete details:
    // - Agent messages and responses
    // - Tool calls and results
    // - Agent transfers
    // - Artifact information
    // - Execution context
  ]
}
```

#### How to Use Copy Trace

1. Navigate to the **Traces** section in the management UI
2. Open the conversation you want to debug
3. Click the **Copy Trace** button at the top of the timeline
4. The complete trace JSON is copied to your clipboard
5. Paste it into your preferred tool for analysis

This exported trace contains all the activities shown in the timeline, making it easy to share complete execution context with team members or support.

## Step 2: Check SigNoz

SigNoz provides distributed tracing and observability for your agent system, offering deeper insights when the built-in timeline isn't sufficient.

### Accessing SigNoz from the Timeline

You can easily access SigNoz directly from the timeline view. In the **Traces** section, click on any activity in the conversation timeline to view its details. Within the activity details, you'll find a **"View in SigNoz"** button that takes you directly to the corresponding span in SigNoz for deeper analysis.
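If you'd rather script your analysis than click through the UI or SigNoz, the `Copy Trace` export from Step 1 can also be inspected programmatically. The sketch below assumes only the fields shown in the export example above (`metadata`, `timing`, `timeline`); `trace.json` is a hypothetical file you paste the export into, and the individual `timeline` entries are left untyped because their exact shape isn't documented here.

```typescript
import { readFileSync } from "node:fs";

// Field names follow the Copy Trace export shown above; `timeline` entries are
// intentionally untyped because their exact structure isn't specified here.
interface TraceExport {
  metadata: {
    conversationId: string;
    traceId: string;
    agentId: string;
    agentName: string;
    exportedAt: string;
  };
  timing: { startTime: string; endTime: string; durationMs: number };
  timeline: unknown[];
}

// Load the JSON you copied from the timeline and saved locally (hypothetical path).
const trace: TraceExport = JSON.parse(readFileSync("trace.json", "utf8"));

console.log(`Agent: ${trace.metadata.agentName} (${trace.metadata.agentId})`);
console.log(`Conversation: ${trace.metadata.conversationId}`);
console.log(`Duration: ${(trace.timing.durationMs / 1000).toFixed(1)}s`);
console.log(`Timeline activities: ${trace.timeline.length}`);
```

Run it with any TypeScript runner (for example `tsx`, which is not part of the Inkeep setup) to get a quick summary before digging into individual activities.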
### What SigNoz Shows

* **Distributed traces**: End-to-end request flows across services
* **Performance metrics**: Response times, throughput, and error rates

### Key Metrics to Monitor

* **Agent response times**: How long each agent takes to process requests
* **Tool execution times**: Performance of MCP servers and external APIs
* **Error rates**: Frequency and types of failures

## Agent Stopped Unexpectedly

### StopWhen Limits Reached

If your agent stops mid-conversation, it may have hit a configured stopWhen limit:

* **Transfer limit reached**: Check `transferCountIs` on your Agent or Project - agent stops after this many transfers between Sub Agents
* **Step limit reached**: Check `stepCountIs` on your Sub Agent or Project - execution stops after this many tool calls + LLM responses

**How to diagnose:**

* Check the timeline for the last activity before stopping
* Look for messages indicating limits were reached
* Review your stopWhen configuration in Agent/Project settings

**How to fix:**

* Increase the limits if a legitimate use case requires more steps/transfers
* Optimize your agent flow to use fewer transfers
* Investigate if the agent is stuck in a loop (limits working as intended)

See [Configuring StopWhen](/typescript-sdk/agent-settings#configuring-stopwhen) for more details.

## Common Configuration Issues

### General Configuration Issues

* **Missing environment variables**: Ensure all required env vars are set
* **Incorrect API endpoints**: Verify you're using the right URLs
* **Network connectivity**: Check firewall and proxy settings
* **Version mismatches**: Ensure all packages are compatible

### MCP Server Connection Issues

* **MCP not able to connect**:
  * Check that the MCP server is running and accessible
* **401 Unauthorized errors**:
  * Verify that credentials are properly configured and valid
* **Connection timeouts**:
  * Ensure network connectivity and firewall settings allow connections

### AI Provider Configuration Problems

* **AI Provider key not defined or invalid**:
  * Ensure you have one of these environment variables set: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`
  * Verify the API key is valid and has sufficient credits
  * Check that the key hasn't expired or been revoked
* **GPT-5 access issues**:
  * Individual users cannot access GPT-5 as it requires organization verification
  * Use GPT-4 or other available models instead
  * Contact OpenAI support if you need GPT-5 access for your organization

### Credit and Rate Limiting Issues

* **Running out of credits**:
  * Monitor your OpenAI usage and billing
  * Set up usage alerts to prevent unexpected charges
* **Rate limiting by AI providers**:
  * Especially common with high-frequency operations like summarizers
  * Monitor your API usage patterns and adjust accordingly

### Context Fetcher Issues

* **Context fetcher timeouts**:
  * Check that external services are responding within expected timeframes

# Manage API URL: /api-reference

*** title: Manage API full: true ***

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

REST API for the management of the Inkeep Agent Framework. It exposes CRUD endpoints for Projects, SubAgents, Agent Relations, Agents, SubAgent Tool Relations, Artifact Components, Data Components, Context Configurations, Credentials, External Agents, Function Tools, Functions, Tools, and API Keys, along with full-agent and full-project operations and OAuth endpoints for MCP tools.

# Join & Follow URL: /community/inkeep-community

*** title: Join & Follow icon: "LuUsers" ---------------

To get help, share ideas, and provide feedback, join our [Slack community](https://inkeep-ai.slack.com/ssb/redirect). We'd love to hear about what you're building.
You can also find us on: Updates, tips, and shout‑outs for builders. Star the repo, open issues, and contribute. Practical demos, tutorials, and deep dives. Company updates, launches, and hiring news. Feel free to tag us as `@inkeep_ai` on 𝕏 or `@Inkeep` on LinkedIn with a video of what you're building — we like to highlight neat Agent use cases from the community where possible. Also feel free to submit a PR to our [template library](https://github.com/inkeep/agents-cookbook/tree/main/template-projects). # License URL: /community/license License for the Inkeep Agent Framework *** title: License description: License for the Inkeep Agent Framework icon: "LuFileText" ------------------ The Inkeep Agent Framework is licensed under the **Elastic License 2.0** ([ELv2](https://www.elastic.co/licensing/elastic-license)) subject to **Inkeep's Supplemental Terms** ([SUPPLEMENTAL\_TERMS.md](https://github.com/inkeep/agents/blob/main/SUPPLEMENTAL_TERMS.md)). This is a [fair-code](https://faircode.io/), source-available license that allows broad usage while protecting against certain competitive uses. # Credentials URL: /get-started/credentials Learn how to securely store and manage credentials for your agents using Nango, environment variables, or Keychain. *** title: Credentials description: Learn how to securely store and manage credentials for your agents using Nango, environment variables, or Keychain. icon: "LuKey" ------------- ## Overview There are multiple ways to manage credentials for your agents, each with different security and convenience trade-offs: 1. **Nango Cloud**: Managed credential store with web interface 2. **Nango Local**: Self-hosted credential store using Docker 3. **Environment Variables**: Simple SDK-based approach using environment variables 4. **Keychain**: Secure local storage using your operating system's credential manager ## Option 1: Nango Cloud setup ### Step 1: Create a Nango account Sign up [here](https://app.nango.dev/signin) ### Step 2: Save your Nango secret key After creating your Nango account, navigate to Environment Settings in your dashboard and copy the secret key. ### Step 3: Configure your root `.env` file ``` NANGO_SECRET_KEY=your_nango_secret_key ``` ### Step 4: Verify Setup 1. Restart your Inkeep agents: ```bash pnpm dev ``` 2. Create a new credential in the [Visual Builder](/visual-builder/credentials) ## Option 2: Nango local setup ### Step 1: Clone the optional services repository ```bash git clone https://github.com/inkeep/agents-optional-local-dev cd agents-optional-local-dev ``` ### Step 2: Start Nango Services Inside the `agents-optional-local-dev` repository, run the following command: ```bash docker-compose --profile nango up -d ``` ### Step 3: Configure Environment Variables In your **root project directory** (not inside the `agents-optional-local-dev` repository), update your `.env` file: ```bash NANGO_SECRET_KEY=your_nango_secret_key NANGO_HOST=http://localhost:3050 PUBLIC_NANGO_CONNECT_BASE_URL=http://localhost:3051 ``` To get your Nango secret key: 1. Open Nango at `http://localhost:3050` 2. Navigate to Environment Settings and copy the secret key ### Step 4: Verify Setup 1. Restart your Inkeep agents: ```bash pnpm dev ``` 2. 
Create a new credential in the [Visual Builder](/visual-builder/credentials)

## Option 3: Environment variables

When using the SDK, you can create credentials that reference environment variables using the [Memory Store](/typescript-sdk/tools/credentials#memory-store). Follow the Basic Setup Example [here](/typescript-sdk/tools/credentials#basic-setup-example) to see how to set this up.

## Option 4: Keychain (Bearer authentication)

You can also store bearer tokens in your operating system's Keychain by choosing Keychain Store as the credential store when creating a credential in the [Visual Builder](/visual-builder/credentials). More information about Keychain Store can be found [here](/typescript-sdk/tools/credentials#keychain-store).

# Push / Pull URL: /get-started/push-pull Push and pull your agents to and from the Visual Builder

*** title: Push / Pull description: Push and pull your agents to and from the Visual Builder icon: "LuArrowLeftRight" ------------------------

## Push code to visual

With Inkeep, you can define your agents in code, push them to the Visual Builder, and continue developing with the intuitive drag-and-drop interface. You can switch back to code any time. Let's walk through the process.

### Step 1: Install the Inkeep CLI

```bash
pnpm install -g @inkeep/agents-cli
```

### Step 2: Download a template project

Navigate to the `src/projects` directory.

```bash
cd src/projects
```

Add the docs assistant agent using `inkeep add`.

```bash
inkeep add --project docs-assistant
```

Find the downloaded code in `src/projects/docs-assistant`. `inkeep add` imports a template project from our [cookbook library](https://github.com/inkeep/agents-cookbook/tree/main/template-projects) into your `src/projects` directory.

### Step 3: Push code to visual

Navigate to your docs assistant project.

```bash
cd docs-assistant
```

Use `inkeep push` to push the code to the Visual Builder.

```bash
inkeep push
```

### Step 4: Chat with your agent

Refresh [http://localhost:3000](http://localhost:3000) and switch to the **Docs Assistant** project (in the bottom left). Under **Agents**, click on the Docs Assistant agent and press **Try it**. Ask a question about Inkeep.

Chat with your agent

## Run `inkeep pull`

Make some changes, such as editing a prompt, and let's walk through `inkeep pull`.

### Requirements

The `inkeep pull` command partly relies on AI to sync your TypeScript files with the state of your Visual Builder, so **at least one** of the environment variables below needs to be defined:

```txt
.env
# Choose one:
ANTHROPIC_API_KEY=your_api_key_here
# or
OPENAI_API_KEY=your_api_key_here
# or
GOOGLE_API_KEY=your_api_key_here
```

The CLI prioritizes Anthropic → OpenAI → Google. Here are the models used:

| Provider  | Model(s)                              | Where to Get API Key                                 |
| --------- | ------------------------------------- | ---------------------------------------------------- |
| Anthropic | Claude Sonnet 4.5 (extended thinking) | [Anthropic Console](https://console.anthropic.com/)  |
| OpenAI    | GPT-4.1                               | [OpenAI Platform](https://platform.openai.com/)      |
| Google    | Gemini models (with thinking mode)    | [Google AI Studio](https://ai.google.dev/)           |

### Step 1: Make an edit visually

Make an edit to the docs assistant agent in the UI, such as changing the prompt of the agent.

### Step 2: Pull code from visual

Navigate to your docs assistant project.

```bash
cd src/projects/docs-assistant
```

Use `inkeep pull` to pull the code from the Visual Builder to your local project.
```bash inkeep pull ``` ### Step 3: Verify the code was pulled Check the `src/projects/docs-assistant/agents/docs-assistant.ts` file for the updated prompt. ```bash cat src/projects/docs-assistant/agents/docs-assistant.ts ``` ## Next steps Next, we recommend setting up observability to see live traces of your agent. See [Traces](/get-started/traces) to get started. Set up SigNoz to enable live debugging capabilities for your agents # Quick Start URL: /get-started/quick-start Start developing your agents with the Inkeep Agent framework *** title: Quick Start description: Start developing your agents with the Inkeep Agent framework icon: "LuRocket" ---------------- ## Launch your first agent ### Prerequisites Before getting started, ensure you have the following installed on your system: * [Node.js](https://nodejs.org/en/download/) version 22 or higher * [pnpm](https://pnpm.io/installation) version 10 or higher You can verify your installations by running: ```bash node --version pnpm --version ``` ### Step 1: Create a new agents project First, create a new agents project. ```bash npx @inkeep/create-agents my-agent-directory ``` Navigate to the new agents project. ```bash cd my-agent-directory ``` ### Step 2: Launch the dev environment ```bash pnpm dev ``` The Visual Builder automatically opens at [http://localhost:3000](http://localhost:3000). ### Step 3: Chat with your agent Navigate to the Event Planner agent at [http://localhost:3000](http://localhost:3000), click **Try it**, and ask about fun activities at a location of your choice. Chat with your agent ### Step 4: Install Inkeep MCP (optional) To help with development, add the `Inkeep Agents MCP` to your preferred IDE or MCP client. It has tools to help "vibe code" agents with the Inkeep SDK.
* Add to Cursor
* Install in VS Code
Or manually install with any MCP client: ```json { "mcpServers": { "inkeep-agents": { "url": "https://agents.inkeep.com/mcp" } } } ``` ### Next steps Next, we recommend learning about `inkeep push` and `inkeep pull` so you can go from `SDK -> Visual Builder` and `Visual Builder -> SDK`. See the [Push / Pull](/get-started/push-pull) guide for a quick example. Use our cookbook to learn about push and pull. # Live Debugger, Traces, and OTEL Telemetry URL: /get-started/traces Set up SigNoz to enable full observability with traces and live debugging capabilities for your agents. *** title: Live Debugger, Traces, and OTEL Telemetry sidebarTitle: Traces description: Set up SigNoz to enable full observability with traces and live debugging capabilities for your agents. icon: "LuActivity" keywords: SigNoz, traces, live debugger, observability, OpenTelemetry, monitoring, distributed tracing, debugging, copy trace, export trace, JSON export -------------------------------------------------------------------------------------------------------------------------------------------------------- ## Overview The Inkeep Agent Framework provides powerful **traces** and **live debugging** capabilities powered by SigNoz. Setting up SigNoz gives you: * **Real-time trace visualization** - See exactly how your agents execute step-by-step * **Live debugging** - Debug agent conversations as they happen * **Export traces as JSON** - Copy complete traces for offline analysis and debugging * **Full observability** - Complete OpenTelemetry instrumentation for monitoring * **Performance insights** - Identify bottlenecks and optimize agent performance Live traces interface showing real-time agent execution ## Setup Options You can set up SigNoz in two ways: 1. **Cloud Setup**: Use SigNoz Cloud 2. **Local Setup**: Run SigNoz locally using Docker ## Option 1: SigNoz Cloud Setup ### Step 1: Create a SigNoz Cloud Project 1. Sign up at [SigNoz](https://signoz.io/teams/) 2. Create a new project or use an existing one ### Step 2: Save Your SigNoz Credentials You'll need to collect three pieces of information from your SigNoz dashboard: 1. **API Key**: * Navigate to Settings → Workspace Settings → API Keys → New Key * Choose any role (Admin, Editor, or Viewer) - Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring * Copy the generated API key 2. **Ingestion Key**: * Navigate to Settings → Workspace Settings → Ingestion * Set the expiration field to "No Expiry" to prevent the key from expiring * Copy the ingestion key 3. **SigNoz URL**: * Copy the URL from your browser's address bar * It will look like: `https://.signoz.cloud` ### Step 3: Configure Your Root `.env` File ```bash # SigNoz SIGNOZ_URL=https://.signoz.cloud SIGNOZ_API_KEY= OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces OTEL_EXPORTER_OTLP_TRACES_HEADERS="signoz-ingestion-key=" ``` ### Step 4: Verify Cloud Setup 1. Restart your development environment: ```bash pnpm dev ``` 2. Generate some traces by interacting with your agents 3. 
Open your SigNoz cloud dashboard and navigate to "Traces" to see your agent traces ## Option 2: Local SigNoz Setup ### Prerequisites * Docker installed on your machine ### Step 1: Clone the Optional Services Repository Clone the Inkeep optional local development services repository: ```bash git clone https://github.com/inkeep/agents-optional-local-dev cd agents-optional-local-dev ``` ### Step 2: Start SigNoz Services Run the following command to start SigNoz and related services: ```bash docker-compose --profile signoz up -d ``` This will start: * SigNoz frontend (accessible at `http://localhost:3080`) * SigNoz query service * SigNoz OTEL collector * ClickHouse database When you visit `http://localhost:3080`, you can sign up with your desired credentials. ### Step 3: Configure Environment Variables In your **root project directory** (e.g., `my-agent-directory`), update your `.env` file: ```bash # SigNoz Configuration SIGNOZ_URL=http://localhost:3080 SIGNOZ_API_KEY=your-signoz-api-key # IMPORTANT: Comment out the OTEL Configuration # OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces # OTEL_EXPORTER_OTLP_TRACES_HEADERS="signoz-ingestion-key=" ``` To get your SigNoz API key: 1. Open SigNoz at `http://localhost:3080` 2. Navigate to Settings → Account Settings → API Keys → New Key 3. Create a new API key or copy an existing one. * Choose any role (Admin, Editor, or Viewer) - Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring ### Step 4: Verify Setup 1. Restart your Inkeep agents: ```bash pnpm dev ``` 2. Make some requests to your agents to generate traces 3. Open SigNoz at `http://localhost:3080` and navigate to the "Traces" section to see your agent traces ## Viewing Traces and Using the Live Debugger Once SigNoz is set up, you can access traces and live debugging in two ways: ### 1. Visual Builder Traces Interface If you're using the Visual Builder: 1. Open your agent project in the Visual Builder 2. Navigate to the **Traces** section 3. You'll see real-time traces of your agent executions 4. Click on any trace to see detailed execution flow and timing The traces overview shows conversation metrics and recent activity: Traces overview dashboard showing conversation metrics and recent activity Click on any conversation to see detailed execution flow: Detailed conversation trace showing step-by-step execution and timing <> ### Copy Trace for Debugging The `Copy Trace` button in the timeline view allows you to export the entire conversation trace as JSON. This is particularly useful for offline analysis and debugging complex flows. Copy Trace button in the timeline view for exporting conversation traces #### What's Included in the Trace Export When you click `Copy Trace`, the system exports a JSON object containing: ```json { "metadata": { "conversationId": "unique-conversation-id", "traceId": "distributed-trace-id", "agentId": "agent-identifier", "agentName": "Agent Name", "exportedAt": "2025-10-14T12:00:00.000Z" }, "timing": { "startTime": "2025-10-14T11:59:00.000Z", "endTime": "2025-10-14T12:00:00.000Z", "durationMs": 60000 }, "timeline": [ // Array of all activities with complete details: // - Agent messages and responses // - Tool calls and results // - Agent transfers // - Artifact information // - Execution context ] } ``` #### How to Use Copy Trace 1. Navigate to the **Traces** section in the management UI 2. Open the conversation you want to debug 3. 
Click the **Copy Trace** button at the top of the timeline
4. The complete trace JSON is copied to your clipboard
5. Paste it into your preferred tool for analysis

This exported trace contains all the activities shown in the timeline, making it easy to share complete execution context with team members or support.

### 2. SigNoz Dashboard

For detailed analysis and further debugging:

1. Open your SigNoz dashboard (cloud or local)
2. Navigate to **Traces** to see all agent executions
3. Use filters to find specific conversations or agents
4. Click on traces to see:
   * Step-by-step execution details
   * Performance metrics
   * Error information
   * Agent-to-agent communication flows

For more detailed information on using traces, see the [SigNoz Usage guide](/typescript-sdk/signoz-usage).

## Additional Observability and Evals

👉 For additional observability or a dedicated Evals platform, you can connect to any OTEL-based provider. For example, check out the [Langfuse Usage guide](/typescript-sdk/langfuse-usage) for end-to-end instructions.

# Deploy to AWS EC2 URL: /self-hosting/aws-ec2 Deploy to AWS EC2 with Docker Compose

*** title: Deploy to AWS EC2 sidebarTitle: AWS EC2 description: Deploy to AWS EC2 with Docker Compose icon: "LuServerCog" -------------------

## Create a VM Instance

* Go to the [EC2 console](https://console.aws.amazon.com/ec2/v2/home).
* Launch an instance.
* Select an Amazon Machine Image (AMI).
* Recommended size is at least `t2.large` (2 vCPU, 8 GiB Memory).
* Click "Edit" in the "Network settings" section. Set up inbound security group rules for (TCP, 3000, 0.0.0.0/0), (TCP, 3002-3003, 0.0.0.0/0), (TCP, 3050-3051, 0.0.0.0/0), and (TCP, 3080, 0.0.0.0/0). These are the ports exposed by the Inkeep services.
* Auto-assign a public IP.
* Increase the size of storage to 30 GiB.

## Install Docker Compose

1. SSH into the EC2 instance
2. Install packages

```bash
sudo dnf update
sudo dnf install -y git
sudo dnf install -y docker
```

```bash
sudo mkdir -p /usr/libexec/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m) -o /usr/libexec/docker/cli-plugins/docker-compose
sudo chmod +x /usr/libexec/docker/cli-plugins/docker-compose
```

## Deploy SigNoz and Nango

Clone this repo, which includes Docker files with SigNoz and Nango:

```bash
git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services
cd inkeep-external-services
```

Run this command to autogenerate a `.env` file:

```bash
cp .env.example .env && \
encryption_key=$(openssl rand -base64 32) && \
tmp_file=$(mktemp) && \
sed "s||$encryption_key|" .env > "$tmp_file" && \
mv "$tmp_file" .env && \
echo "Docker environment file created with auto-generated encryption key"
```

Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited.

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.

```bash
NANGO_ENCRYPTION_KEY=

# Replace these with your server's address in production!
NANGO_SERVER_URL=http://:3050
NANGO_PUBLIC_CONNECT_URL=http://:3051

# Modify these in production environments!
NANGO_DASHBOARD_USERNAME=admin@example.com
NANGO_DASHBOARD_PASSWORD=adminADMIN!@12
```

Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger:

```bash
docker compose up -d
```

This may take up to 5 minutes to start.
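Before moving on, you can optionally confirm the stack is reachable from your own machine. The sketch below is only an illustration: it assumes the default ports from the compose setup above (3080 for SigNoz, 3050/3051 for Nango), uses Node's built-in `fetch`, and `HOST` is a placeholder you replace with your instance's public IP or domain. Any HTTP response means the port is reachable; on the instance itself, `docker compose ps` gives the same answer.

```typescript
// Quick reachability check for the services started above.
// HOST is a placeholder -- replace it with your instance's public IP or domain.
const HOST = "YOUR_INSTANCE_IP";

const services: Array<[name: string, url: string]> = [
  ["SigNoz UI", `http://${HOST}:3080`],
  ["Nango dashboard", `http://${HOST}:3050`],
  ["Nango Connect", `http://${HOST}:3051`],
];

for (const [name, url] of services) {
  try {
    // Any HTTP response, whatever the status, means the port answered.
    const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
    console.log(`${name}: reachable (HTTP ${res.status})`);
  } catch (err) {
    console.log(`${name}: not reachable yet (${(err as Error).message})`);
  }
}
```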
### Retrieve your SigNoz and Nango API Keys To get your SigNoz API key `SIGNOZ_API_KEY`: * Open SigNoz in a browser at `http://:3080` * Navigate to Settings → Account Settings → API Keys → New Key * Choose a role, Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring To get your Nango secret key `NANGO_SECRET_KEY`: * Open Nango in a browser at `http://:3050` * Nango auto-creates two environments, Prod and Dev. Select the one you will use. * Navigate to Environment Settings to find the secret key ## Deploy the Inkeep Agent Framework From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework ```bash mkdir inkeep && cd inkeep wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example ``` Generate a `.env` file from the example: ```bash cp .env.docker.example .env ``` Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file. ```bash # Change to "production" if deploying to production ENVIRONMENT=production # AI Provider Keys (you need at least one) ANTHROPIC_API_KEY= OPENAI_API_KEY= GOOGLE_GENERATIVE_AI_API_KEY= # Nango NANGO_SECRET_KEY= # SigNoz SIGNOZ_API_KEY= # Uncomment and set each of these with (openssl rand -hex 32) INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= INKEEP_AGENTS_RUN_API_BYPASS_SECRET= INKEEP_AGENTS_JWT_SIGNING_SECRET= # Uncomment and set these for the Manage UI at http://:3000 PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002 PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003 PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET= PUBLIC_NANGO_SERVER_URL=http://:3050 PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051 PUBLIC_SIGNOZ_URL=http://:3080 # Uncomment and set these to access Manage UI at http://:3000 INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12 ``` Run with Docker: ```bash docker compose up -d ``` Then open `http://:3000` in a browser! # Build a Custom Docker Image URL: /self-hosting/docker-build How to build your own Docker images *** title: Build a Custom Docker Image sidebarTitle: Custom Image description: How to build your own Docker images icon: "LuWrench" ---------------- If you created a project from the quick start, the template includes a set of Dockerfiles and `docker-compose.yml` files. To build and run locally: ``` docker compose build docker compose up -d ``` # Deploy using Docker (Local Development) URL: /self-hosting/docker-local undefined *** title: Deploy using Docker (Local Development) sidebarTitle: Local Development icon: "LuLaptop" ---------------- ## Install Docker * [Install Docker Desktop](https://www.docker.com/products/docker-desktop/) ## Deploy SigNoz and Nango For full functionality, the **Inkeep Agent Framework** requires [**SigNoz**](https://signoz.io/) and [**Nango**](https://www.nango.dev/). You can sign up for a cloud hosted account with them directly, or you can self host them. Follow these instructions to self-host both SigNoz and Nango. 
Clone this repo, which includes docker files with SigNoz and Nango: ```bash git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services cd inkeep-external-services ``` Run this command to autogenerate a `.env` file: ```bash cp .env.example .env && \ encryption_key=$(openssl rand -base64 32) && \ tmp_file=$(mktemp) && \ sed "s||$encryption_key|" .env > "$tmp_file" && \ mv "$tmp_file" .env && \ echo "Docker environment file created with auto-generated encryption key" ``` Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited. Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger: ```bash docker compose up -d ``` This may take up to 5 minutes to start. ### Retrieve your SigNoz and Nango API Keys To get your SigNoz API key `SIGNOZ_API_KEY`: * Open SigNoz in a browser at `http://localhost:3080` * Navigate to Settings → Account Settings → API Keys → New Key * Choose a role, Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring To get your Nango secret key `NANGO_SECRET_KEY`: * Open Nango in a browser at `http://localhost:3050` * Nango autocreates two environments Prod and Dev, select the one you will use * Navigate to Environment Settings to find the secret key ## Deploy the Inkeep Agent Framework From the root directory, create a new project directory for the docker compose setup for Inkeep Agent Framework ```bash mkdir inkeep && cd inkeep wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example ``` Generate a `.env` file from the example: ```bash cp .env.docker.example .env ``` Here's an overview of the important environment variables when deploying. Make sure to replace all of these in the `.env` file. ```bash ENVIRONMENT=development # AI Provider Keys (you need at least one) ANTHROPIC_API_KEY= OPENAI_API_KEY= GOOGLE_GENERATIVE_AI_API_KEY= # Nango NANGO_SECRET_KEY= # SigNoz SIGNOZ_API_KEY= # Default username and password for Manage UI (http://localhost:3000) # INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com # INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12 ``` Run with docker: ```bash docker compose up -d ``` Then open [http://localhost:3000](http://localhost:3000) in a browser! * Manage UI ([http://localhost:3000](http://localhost:3000)) * Manage API Docs ([http://localhost:3002/docs](http://localhost:3002/docs)) * Run API Docs ([http://localhost:3003/docs](http://localhost:3003/docs)) * Nango Dashboard ([http://localhost:3050](http://localhost:3050)) * SigNoz Dashboard ([http://localhost:3080](http://localhost:3080)) # Deploy to GCP Cloud Run URL: /self-hosting/gcp-cloud-run Deploy to GCP Cloud Run with Docker Containers *** title: Deploy to GCP Cloud Run sidebarTitle: GCP Cloud Run description: Deploy to GCP Cloud Run with Docker Containers icon: "LuCloudLightning" ------------------------ ## Coming soon # Deploy to GCP Compute Engine URL: /self-hosting/gcp-compute-engine Deploy to GCP Compute Engine with Docker Compose *** title: Deploy to GCP Compute Engine sidebarTitle: GCP Compute Engine description: Deploy to GCP Compute Engine with Docker Compose icon: "LuCloudSun" ------------------ ## Create a VM Instance * Go to [Compute Engine](https://console.cloud.google.com/compute/instances) in your GCP project. * Create an instance; a recommended size is at least `e2-standard-2` (2 vCPU, 1 core, 8 GB memory). 
* Use Debian GNU/Linux 12 (bookworm) * Increase the size of the boot disk to 30 GB. * Allow HTTP traffic. * Allow ingress traffic from source IPv4 ranges `0.0.0.0/0` to TCP ports: `3000, 3002, 3003, 3050, 3051, 3080`. These are the ports exposed by the Inkeep services. * Retrieve an external IP address (if applicable, set up a static IP or set up a load balancer). ## Install Docker Compose 1. [SSH into the VM](https://cloud.google.com/compute/docs/connect/standard-ssh) 2. [Set up Docker's apt repository](https://docs.docker.com/engine/install/debian/#install-using-the-repository) ```bash # Add Docker's official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to Apt sources: echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update ``` 3. Install the Docker packages ```bash sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin ``` 4. Grant permissions ``` sudo usermod -aG docker $USER newgrp docker ``` ## Deploy SigNoz and Nango Clone this repo, which includes Docker files with SigNoz and Nango: ```bash git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services cd inkeep-external-services ``` Run this command to autogenerate a `.env` file: ```bash cp .env.example .env && \ encryption_key=$(openssl rand -base64 32) && \ tmp_file=$(mktemp) && \ sed "s||$encryption_key|" .env > "$tmp_file" && \ mv "$tmp_file" .env && \ echo "Docker environment file created with auto-generated encryption key" ``` Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited. Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file. ```bash NANGO_ENCRYPTION_KEY= # Replace these with your in production! NANGO_SERVER_URL=http://:3050 NANGO_PUBLIC_CONNECT_URL=http://:3051 # Modify these in production environments! NANGO_DASHBOARD_USERNAME=admin@example.com NANGO_DASHBOARD_PASSWORD=adminADMIN!@12 ``` Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger: ```bash docker compose up -d ``` This may take up to 5 minutes to start. ### Retrieve your SigNoz and Nango API Keys To get your SigNoz API key `SIGNOZ_API_KEY`: * Open SigNoz in a browser at `http://:3080` * Navigate to Settings → Account Settings → API Keys → New Key * Choose a role, Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring To get your Nango secret key `NANGO_SECRET_KEY`: * Open Nango in a browser at `http://:3050` * Nango auto-creates two environments, Prod and Dev. Select the one you will use. 
* Navigate to Environment Settings to find the secret key ## Deploy the Inkeep Agent Framework From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework ```bash mkdir inkeep && cd inkeep wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example ``` Generate a `.env` file from the example: ```bash cp .env.docker.example .env ``` Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file. ```bash # Change to "production" if deploying to production ENVIRONMENT=production # AI Provider Keys (you need at least one) ANTHROPIC_API_KEY= OPENAI_API_KEY= GOOGLE_GENERATIVE_AI_API_KEY= # Nango NANGO_SECRET_KEY= # SigNoz SIGNOZ_API_KEY= # Uncomment and set each of these with (openssl rand -hex 32) INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= INKEEP_AGENTS_RUN_API_BYPASS_SECRET= INKEEP_AGENTS_JWT_SIGNING_SECRET= # Uncomment and set these for the Manage UI at http://:3000 PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002 PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003 PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET= PUBLIC_NANGO_SERVER_URL=http://:3050 PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051 PUBLIC_SIGNOZ_URL=http://:3080 # Uncomment and set these to access Manage UI at http://:3000 INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12 ``` Run with Docker: ```bash docker compose up -d ``` Then open `http://:3000` in a browser! # Deploy to Hetzner URL: /self-hosting/hetzner Deploy to Hetzner with Docker Compose *** title: Deploy to Hetzner sidebarTitle: Hetzner description: Deploy to Hetzner with Docker Compose icon: "LuServerCog" ------------------- ## Create a server * Create a server, recommended size is at least CPX32 (4 VCPUS, 8 GB RAM, >30 GB Storage) * Select Ubuntu 24.04 Image * Create an inbound firewall rule to allow TCP ports: 3000, 3002, 3003, 3050, 3051, and 3080. These are the ports exposed by the Inkeep services. ## Install Docker Compose 1. SSH into the server as root 2. [Set up Docker's apt repository](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) ```bash # Add Docker's official GPG key: sudo apt-get update sudo apt-get install ca-certificates curl sudo install -m 0755 -d /etc/apt/keyrings sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc sudo chmod a+r /etc/apt/keyrings/docker.asc # Add the repository to Apt sources: echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update ``` 3. 
Install the Docker packages ```bash sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin ``` ## Deploy SigNoz and Nango Clone this repo, which includes Docker files with SigNoz and Nango: ```bash git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services cd inkeep-external-services ``` Run this command to autogenerate a `.env` file: ```bash cp .env.example .env && \ encryption_key=$(openssl rand -base64 32) && \ tmp_file=$(mktemp) && \ sed "s||$encryption_key|" .env > "$tmp_file" && \ mv "$tmp_file" .env && \ echo "Docker environment file created with auto-generated encryption key" ``` Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited. Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file. ```bash NANGO_ENCRYPTION_KEY= # Replace these with your in production! NANGO_SERVER_URL=http://:3050 NANGO_PUBLIC_CONNECT_URL=http://:3051 # Modify these in production environments! NANGO_DASHBOARD_USERNAME=admin@example.com NANGO_DASHBOARD_PASSWORD=adminADMIN!@12 ``` Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger: ```bash docker compose up -d ``` This may take up to 5 minutes to start. ### Retrieve your SigNoz and Nango API Keys To get your SigNoz API key `SIGNOZ_API_KEY`: * Open SigNoz in a browser at `http://:3080` * Navigate to Settings → Account Settings → API Keys → New Key * Choose a role, Viewer is sufficient for observability * Set the expiration field to "No Expiry" to prevent the key from expiring To get your Nango secret key `NANGO_SECRET_KEY`: * Open Nango in a browser at `http://:3050` * Nango auto-creates two environments, Prod and Dev. Select the one you will use. * Navigate to Environment Settings to find the secret key ## Deploy the Inkeep Agent Framework From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework ```bash mkdir inkeep && cd inkeep wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example ``` Generate a `.env` file from the example: ```bash cp .env.docker.example .env ``` Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file. ```bash # Change to "production" if deploying to production ENVIRONMENT=production # AI Provider Keys (you need at least one) ANTHROPIC_API_KEY= OPENAI_API_KEY= GOOGLE_GENERATIVE_AI_API_KEY= # Nango NANGO_SECRET_KEY= # SigNoz SIGNOZ_API_KEY= # Uncomment and set each of these with (openssl rand -hex 32) INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= INKEEP_AGENTS_RUN_API_BYPASS_SECRET= INKEEP_AGENTS_JWT_SIGNING_SECRET= # Uncomment and set these for the Manage UI at http://:3000 PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002 PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003 PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET= PUBLIC_NANGO_SERVER_URL=http://:3050 PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051 PUBLIC_SIGNOZ_URL=http://:3080 # Uncomment and set these to access Manage UI at http://:3000 INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12 ``` Run with Docker: ```bash docker compose up -d ``` Then open `http://:3000` in a browser! 
# Deploy to Vercel URL: /self-hosting/vercel Deploy the Inkeep Agent Framework to Vercel *** title: Deploy to Vercel description: Deploy the Inkeep Agent Framework to Vercel icon: "brand/VercelIcon" ------------------------ ## Deploy to Vercel ### Step 1: Create a GitHub repository for your project If you do not have an Inkeep project already, [follow these steps](/get-started/quick-start) to create one. Then push your project to a repository on GitHub. ### Step 2: Create a Turso Database Create a Turso database on [**Vercel Marketplace**](https://vercel.com/marketplace/tursocloud) or directly at [**Turso Cloud**](https://app.turso.tech/). ### Step 3: Save your Turso Database URL and Auth Token Create a token for your database, and then save the URL and token created. ``` TURSO_DATABASE_URL= TURSO_AUTH_TOKEN= ``` ### Step 4: Create a Vercel account Sign up for a Vercel account [here](https://vercel.com/signup). ### Step 5: Create a Vercel project for Manage API ![Vercel New Project - Manage API](/images/vercel-new-project-manage-api-hono.png) The Framework Preset should be "Hono" and the Root Directory should be `apps/manage-api`. Required environment variables for Manage API: ``` ENVIRONMENT=production INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= TURSO_DATABASE_URL= TURSO_AUTH_TOKEN= NANGO_SECRET_KEY= NANGO_SERVER_URL=https://api.nango.dev ``` | Environment Variable | Value | | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `ENVIRONMENT` | `production` | | `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save this value for `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` in Step 7. | | `TURSO_DATABASE_URL` | Turso database URL you saved from Step 3 | | `TURSO_AUTH_TOKEN` | Turso auth token you saved from Step 3 | | `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. | | `NANGO_SERVER_URL` | `https://api.nango.dev` | ### Step 6: Create a Vercel project for Run API ![Vercel New Project - Run API](/images/vercel-new-project-run-api-hono.png) The Framework Preset should be "Hono" and the Root Directory should be `apps/run-api`. Required environment variables for Run API: ``` ENVIRONMENT=production ANTHROPIC_API_KEY= OPENAI_API_KEY= GOOGLE_GENERATIVE_AI_API_KEY= INKEEP_AGENTS_RUN_API_BYPASS_SECRET= TURSO_DATABASE_URL= TURSO_AUTH_TOKEN= OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces OTEL_EXPORTER_OTLP_TRACES_HEADERS=signoz-ingestion-key= NANGO_SECRET_KEY= NANGO_SERVER_URL=https://api.nango.dev INKEEP_AGENTS_JWT_SIGNING_SECRET= ``` | Environment Variable | Value | | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `ENVIRONMENT` | `production` | | `ANTHROPIC_API_KEY` | Your Anthropic API key | | `OPENAI_API_KEY` | Your OpenAI API key | | `GOOGLE_GENERATIVE_AI_API_KEY` | Your Google Gemini API key | | `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save this value for `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` in Step 7. 
| | `TURSO_DATABASE_URL` | Turso database URL you saved from Step 3 | | `TURSO_AUTH_TOKEN` | Turso auth token you saved from Step 3 | | `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. | | `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | `https://ingest.us.signoz.cloud:443/v1/traces` | | `OTEL_EXPORTER_OTLP_TRACES_HEADERS` | `signoz-ingestion-key=`. Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your ingestion key. Note: Local SigNoz setup won't work with Vercel deployments. | | `NANGO_SERVER_URL` | `https://api.nango.dev` | | `INKEEP_AGENTS_JWT_SIGNING_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save this value for `INKEEP_AGENTS_JWT_SIGNING_SECRET` in Step 7. | ### Step 7: Create a Vercel project for Manage UI ![Vercel New Project - Manage UI](/images/vercel-new-project-manage-ui-nextjs.png) The Framework Preset should be "Next.js" and the Root Directory should be `apps/manage-ui`. Required environment variables for Manage UI: ``` ENVIRONMENT=production PUBLIC_INKEEP_AGENTS_RUN_API_URL= PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET= PUBLIC_INKEEP_AGENTS_MANAGE_API_URL= INKEEP_AGENTS_MANAGE_API_URL= INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= PUBLIC_SIGNOZ_URL=https://.signoz.cloud SIGNOZ_API_KEY= PUBLIC_NANGO_SERVER_URL=https://api.nango.dev PUBLIC_NANGO_CONNECT_BASE_URL=https://connect.nango.dev NANGO_SECRET_KEY= ``` | Environment Variable | Value | | -------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `ENVIRONMENT` | `production` | | `PUBLIC_INKEEP_AGENTS_RUN_API_URL` | Your Vercel deployment URL for Run API | | `PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Your generated Run API bypass secret from Step 6 | | `PUBLIC_INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API (skip if same as `INKEEP_AGENTS_MANAGE_API_URL`) | | `INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API | | `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Your generated Manage API bypass secret from Step 5 | | `PUBLIC_SIGNOZ_URL` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz URL. Note: Local SigNoz setup won't work with Vercel deployments. | | `SIGNOZ_API_KEY` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz API key. Note: Local SigNoz setup won't work with Vercel deployments. | | `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. | | `PUBLIC_NANGO_SERVER_URL` | `https://api.nango.dev` | | `PUBLIC_NANGO_CONNECT_BASE_URL` | `https://connect.nango.dev` | ### Step 8: Enable Vercel Authentication To prevent anyone from being able to access the UI, we recommend enabling Vercel authentication for all deployments: **Settings > Deployment Protection > Vercel Authentication > All Deployments**. 
### Step 9: Create a Vercel project for your MCP server (optional) ![Vercel New Project - MCP Server](/images/vercel-new-project-mcp.png) The Framework Preset should be "Next.js" and the Root Directory should be `apps/mcp`. For more information on how to add MCP servers to your project, see [Create MCP Servers](/typescript-sdk/cli-reference#inkeep-add). ## Push your Agent ### Step 1: Configure your root .env file ``` INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET= INKEEP_AGENTS_RUN_API_BYPASS_SECRET= ``` ### Step 2: Create a cloud configuration file Create a new configuration file named `inkeep-cloud.config.ts` in your project's `src` directory, alongside your existing configuration file. ```typescript import { defineConfig } from "@inkeep/agents-cli/config"; const config = defineConfig({ tenantId: "default", agentsManageApi: { url: "https://", apiKey: process.env.INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET, }, agentsRunApi: { url: "https://", apiKey: process.env.INKEEP_AGENTS_RUN_API_BYPASS_SECRET, }, }); export default config; ``` ### Step 3: Push your Agent ```bash cd /src/ inkeep push --config ../inkeep-cloud.config.ts ``` ## Pull your Agent ```bash cd /src inkeep pull --config inkeep-cloud.config.ts ``` ## Function Tools with Vercel Sandbox When deploying to serverless environments like Vercel, you can configure [function tools](/typescript-sdk/tools/function-tools) to execute in [Vercel Sandbox](https://vercel.com/docs/vercel-sandbox) MicroVMs instead of your Agent's runtime service. This is **required** for serverless platforms since child process spawning is restricted. ### Why Use Vercel Sandbox? **When to use each provider:** * **Native** - Use for traditional cloud deployments (VMs, Docker, Kubernetes), self-hosted servers, or local development * **Vercel Sandbox** - Required for serverless platforms (Vercel, AWS Lambda, etc.) or if you'd like to isolate tool executions ### Setting Up Vercel Sandbox #### Step 1: Get Vercel Credentials You'll need three credentials from your Vercel account: 1. **Vercel Token** - Create an access token at [vercel.com/account/tokens](https://vercel.com/account/tokens) 2. **Team ID** - Find in your team settings at [vercel.com/teams](https://vercel.com/teams) 3. **Project ID** - Find in your Vercel project settings #### Step 2: Configure Sandbox in Your Application Update your Run API to use Vercel Sandbox. In the `apps/run-api/src` folder, create a `sandbox.ts` file: ```typescript sandbox.ts const isProduction = process.env.ENVIRONMENT === "production"; export const sandboxConfig = isProduction ? { provider: "vercel", runtime: "node22", // or 'typescript' timeout: 60000, // 60 second timeout vcpus: 4, // Allocate 4 vCPUs teamId: process.env.SANDBOX_VERCEL_TEAM_ID!, projectId: process.env.SANDBOX_VERCEL_PROJECT_ID!, token: process.env.SANDBOX_VERCEL_TOKEN!, } : { provider: "native", runtime: "node22", timeout: 30000, vcpus: 2, }; ``` Import it into your `index.ts` file: ```typescript index.ts import { sandboxConfig } from "./sandbox.ts"; // ... const app: Hono = createExecutionApp({ // ... sandboxConfig, // NEW }); ``` #### Step 3: Add Environment Variables to Run API Add these [environment variables in your Vercel project](https://vercel.com/docs/environment-variables/managing-environment-variables#declare-an-environment-variable) to your **Run API** app: ```bash SANDBOX_VERCEL_TOKEN=your_vercel_access_token SANDBOX_VERCEL_TEAM_ID=team_xxxxxxxxxx SANDBOX_VERCEL_PROJECT_ID=prj_xxxxxxxxxx ``` "Failed to refresh OIDC token" error:
* This occurs when you're not in a Vercel environment or you don't provide a Vercel access token
* Solution: Use a Vercel access token from [vercel.com/account/tokens](https://vercel.com/account/tokens)

Function execution timeouts:

* Increase the `timeout` value in your sandbox configuration
* Consider allocating more `vcpus` for resource-intensive functions
* Check the Vercel Sandbox limits for your plan

Dependency installation failures:

* Ensure dependencies are compatible with Node.js 22
* Check that package versions are specified correctly
* Verify network access to the npm registry

High costs:

* Reduce the `vcpus` allocation if functions don't need maximum resources
* Optimize function code to execute faster
* Consider caching results when possible

Best practices when using Vercel Sandbox:

1. Use environment variables – never hardcode credentials
2. Start with fewer vCPUs – scale up only if needed
3. Set reasonable timeouts – prevent runaway executions
4. Monitor usage – track sandbox execution metrics in the Vercel dashboard
5. Test thoroughly – verify functions work in the sandbox environment before deploying
6. Choose the right provider – use `native` for VMs, Docker, and Kubernetes; use Vercel Sandbox only for serverless platforms

For more information on function tools, see [Function Tools](/typescript-sdk/tools/function-tools).
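As a quick reference, a function tool is defined with the `functionTool` helper from `@inkeep/agents-sdk` (the same shape shown in the Agents & Sub Agents guide). When the Vercel Sandbox provider is configured as above, the tool's `execute` body runs inside a sandbox MicroVM instead of the Run API process. A minimal sketch, with an illustrative tool attached to a hypothetical Sub Agent:

```typescript
import { subAgent, functionTool } from "@inkeep/agents-sdk";

// Illustrative function tool: returns the current server time.
// With the Vercel Sandbox provider configured, this execute() body
// runs inside a sandbox MicroVM rather than in the Run API process.
const currentTimeTool = functionTool({
  name: "get-current-time",
  description: "Get the current time",
  execute: async () => ({ time: new Date().toISOString() }),
});

// Hypothetical Sub Agent that can call the tool above.
const utilitySubAgent = subAgent({
  id: "utility-agent",
  name: "Utility Sub Agent",
  prompt: "Use the get-current-time tool when asked about the current time.",
  canUse: () => [currentTimeTool],
});
```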

# Talk to your agent via A2A (JSON-RPC) URL: /talk-to-your-agents/a2a Use the Agent2Agent JSON-RPC protocol to send messages to your agent and receive results, with optional streaming. *** title: Talk to your agent via A2A (JSON-RPC) sidebarTitle: Agent2Agent API description: Use the Agent2Agent JSON-RPC protocol to send messages to your agent and receive results, with optional streaming. icon: "LuNetwork" ----------------- The A2A (Agent-to-Agent) endpoint lets third-party agents, agent platforms, or agent workspaces interact with your Inkeep Agent using a standard agent protocol. Here are some example platforms that you can add Inkeep Agents to: | Platform | Description | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | | **[Google Gemini Enterprise](https://cloud.google.com/gemini-enterprise/faq)** | Bring‑your‑own A2A agents into an enterprise agentspace and orchestrate alongside Google/vendor agents. | | **[Microsoft Copilot Studio / Azure AI Foundry](https://microsoftlearning.github.io/mslearn-ai-agents/Instructions/06-multi-remote-agents-with-a2a.html)** | Copilots can invoke external A2A agents as peer services in multi‑agent flows. | | **[Salesforce Agentforce](https://architect.salesforce.com/fundamentals/agentic-patterns)** | Add third‑party A2A agents (e.g., via AgentExchange) and compose them in CRM workflows. | | **[SAP Joule](https://learning.sap.com/courses/boosting-ai-driven-business-transformation-with-joule-agents/enabling-interoperability-for-ai-agents)** | Federate non‑SAP A2A agents into SAP’s business agent fabric. | | **[ServiceNow AI Agent Fabric](https://www.servicenow.com/community/now-assist-articles/introducing-ai-agent-fabric-enable-mcp-and-a2a-for-your-agentic/ta-p/3373907)** | Discover and call external A2A agents within IT/business automations. | | **[Atlassian Rovo](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/)** | Configure Rovo to call external A2A agents for cross‑tool tasks. | | **[Workday AI (Agent Gateway / ASOR)](https://investor.workday.com/2025-06-03-Workday-Announces-New-AI-Agent-Partner-Network-and-Agent-Gateway-to-Power-the-Next-Generation-of-Human-and-Digital-Workforces)** | Register external customer/partner A2A agents alongside Workday agents. | <> ## Authentication Choose the authentication method: **Create an API Key:** 1. Open the Visual Builder Dashboard 2. Go to your Project → **API Keys** 3. Click **Create**, select the target agent 4. Copy the API key (it will be shown once) and store it securely **Request Header:** ```http Authorization: Bearer ``` When running the API server locally with `pnpm dev`, authentication is automatically bypassed. You can use headers in the request instead: **Request Headers:** ```http x-inkeep-tenant-id: x-inkeep-project-id: x-inkeep-agent-id: ``` This mode is for development only. Never use in production as it bypasses all security checks. See [Authentication → Run API](/api-reference/authentication/run-api) for more details. 
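To make the two authentication modes concrete, here's how the corresponding request headers might be assembled in TypeScript. This is a sketch: `INKEEP_API_KEY` and the placeholder IDs are illustrative names, not values defined by the framework.

```typescript
// Production: authenticate with an agent-scoped API key.
const productionHeaders: Record<string, string> = {
  "Content-Type": "application/json",
  // Hypothetical env var holding the API key created in the Visual Builder.
  Authorization: `Bearer ${process.env.INKEEP_API_KEY}`,
};

// Local development only (`pnpm dev`): auth is bypassed, so identify the
// target tenant/project/agent explicitly. Never use these in production.
const developmentHeaders: Record<string, string> = {
  "Content-Type": "application/json",
  "x-inkeep-tenant-id": "your-tenant-id",
  "x-inkeep-project-id": "your-project-id",
  "x-inkeep-agent-id": "your-agent-id",
};
```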
## Endpoints * **Agent card discovery (agent or sub-agent-level):** `GET /agents/.well-known/agent.json` * **A2A protocol (agent or sub-agent-level):** `POST /agents/a2a` Notes: * If you supply `x-inkeep-sub-agent-id` in headers, requests target that specific Sub Agent. This is supported in development (or when using the bypass secret). With production API keys, requests always use the Agent's default Sub Agent. ## Message Send (Blocking) * **Path:** `POST /agents/a2a` * **Headers:** Per Authentication section (Standard API Key recommended) * **Body (JSON-RPC 2.0):** ```json { "jsonrpc": "2.0", "method": "message/send", "id": 1, "params": { "message": { "messageId": "msg-123", "role": "user", "parts": [ { "kind": "text", "text": "Hello!" } ], "contextId": "conv-123", "kind": "message" }, "configuration": { "acceptedOutputModes": ["text", "text/plain"], "blocking": true } } } ``` * **Success Response (Message):** ```json { "jsonrpc": "2.0", "result": { "kind": "message", "messageId": "auto-generated", "role": "agent", "parts": [ { "kind": "text", "text": "..." } ], "taskId": "task-...", "contextId": "conv-123" }, "id": 1 } ``` * **Transfer Case:** if the agent decides to transfer, the response contains a `task` with a transfer artifact: ```json { "jsonrpc": "2.0", "result": { "kind": "task", "id": "task-...", "contextId": "conv-123", "status": { "state": "completed", "timestamp": "..." }, "artifacts": [ { "artifactId": "...", "parts": [ { "kind": "data", "data": { "type": "transfer", "targetSubAgentId": "other-agent-id" } }, { "kind": "text", "text": "Transfer reason text" } ] } ] }, "id": 1 } ``` ## Message Send (Non-blocking) Set `configuration.blocking` to `false`. The server immediately returns a `task`, and you can poll or stream updates. * **Request:** same as above, with `"blocking": false` * **Response (Task):** ```json { "jsonrpc": "2.0", "result": { "kind": "task", "id": "task-...", "contextId": "conv-123", "status": { "state": "completed", "timestamp": "..." }, "artifacts": [ { "artifactId": "...", "parts": [ { "kind": "text", "text": "..." } ] } ] }, "id": 1 } ``` ## Streaming Messages (SSE) * **Path:** `POST /agents/a2a` * **Headers:** include `Accept: text/event-stream` plus Authentication headers * **Body (JSON-RPC 2.0):** same as blocking, but method `"message/stream"` ```json { "jsonrpc": "2.0", "method": "message/stream", "id": 2, "params": { "message": { "messageId": "msg-456", "role": "user", "parts": [ { "kind": "text", "text": "Stream please" } ], "contextId": "conv-123", "kind": "message" }, "configuration": { "acceptedOutputModes": ["text", "text/plain"], "blocking": false } } } ``` * **SSE Events (each line is an SSE `data:` payload containing JSON-RPC):** ```text : keep-alive data: {"jsonrpc":"2.0","result":{"kind":"task","id":"task-...","contextId":"conv-123","status":{"state":"working","timestamp":"..."},"artifacts":[]},"id":2} data: {"jsonrpc":"2.0","result":{"kind":"message","messageId":"...","role":"agent","parts":[{"kind":"text","text":"..."}],"taskId":"task-...","contextId":"conv-123"},"id":2} data: {"jsonrpc":"2.0","result":{"kind":"task","id":"task-...","contextId":"conv-123","status":{"state":"completed","timestamp":"..."},"artifacts":[{"artifactId":"...","parts":[{"kind":"text","text":"..."}]}]},"id":2} ``` * **Transfer (streaming):** if a transfer is triggered, an SSE event with a JSON-RPC `result` of transfer details is sent, then the stream ends.
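As a worked example, the blocking `message/send` request above could be issued from TypeScript roughly as follows. This is a sketch assuming a locally running Run API at `http://localhost:3003` and an API key in a hypothetical `INKEEP_API_KEY` environment variable; adjust both for your deployment.

```typescript
// Minimal sketch of a blocking A2A message/send call (JSON-RPC 2.0 over HTTP).
async function sendA2AMessage(text: string) {
  const response = await fetch("http://localhost:3003/agents/a2a", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.INKEEP_API_KEY}`, // hypothetical env var
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "message/send",
      id: 1,
      params: {
        message: {
          messageId: `msg-${Date.now()}`,
          role: "user",
          parts: [{ kind: "text", text }],
          contextId: "conv-123",
          kind: "message",
        },
        configuration: {
          acceptedOutputModes: ["text", "text/plain"],
          blocking: true,
        },
      },
    }),
  });

  const { result } = await response.json();

  // A plain reply comes back as a `message`; a transfer comes back as a `task`
  // whose artifacts carry the transfer details (see "Transfer Case" above).
  if (result.kind === "message") {
    return result.parts
      .filter((part: { kind: string }) => part.kind === "text")
      .map((part: { text?: string }) => part.text)
      .join("");
  }
  return result;
}

// Usage:
// sendA2AMessage("Hello!").then(console.log);
```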
## Task APIs * **Get task:** `POST /agents/a2a` with method `"tasks/get"` and params `{ "id": "task-..." }` → returns the `task` * **Cancel task:** `POST /agents/a2a` with method `"tasks/cancel"` and params `{ "id": "task-..." }` → returns `{ "success": true }` * **Resubscribe (SSE, mock):** `POST /agents/a2a` with method `"tasks/resubscribe"` and params `{ "taskId": "task-..." }` → SSE with a task event Currently, `tasks/get` and `tasks/cancel` return stubbed responses, and `tasks/resubscribe` returns a mock SSE event. For live progress updates, use `message/stream`. ## Agent Card Discovery * **Agent-level:** `GET /agents/.well-known/agent.json` (uses Agent's default Sub Agent) * **Sub Agent-level (dev/bypass only):** Provide `x-inkeep-sub-agent-id` in headers to target a specific Sub Agent for discovery ## Notes & Behavior * **contextId resolution:** The server first tries `task.context.conversationId` (derived from the request), then `params.message.metadata.conversationId`. Final fallback is `'default'`. * **Artifacts in responses:** Message/Task responses may include `artifacts[0].parts` as the agent's output parts. * **Errors (JSON-RPC):** Standard JSON-RPC error codes: `-32600`, `-32601`, `-32602`, `-32603`, `-32700`, plus A2A-specific `-3200x` codes. ## Development Notes * **Base URL (local):** `http://localhost:3003` * **Route Mounting:** A2A routes are mounted under `/agents`; use `/agents/a2a` for RPC and `/agents/.well-known/agent.json` for discovery * **Streaming support:** Requires agent capabilities `streaming: true` in the agent card # How to call your AI Agent using the Chat API URL: /talk-to-your-agents/chat-api Learn the details of the Vercel AI SDK data stream protocol that powers the `/chat` API endpoint. *** title: How to call your AI Agent using the Chat API sidebarTitle: Chat API description: Learn the details of the Vercel AI SDK data stream protocol that powers the `/chat` API endpoint. icon: LuMessagesSquare keywords: API, Vercel AI SDK, streaming, SSE, data stream, agents, chat ----------------------------------------------------------------------- ## Overview This guide shows how to call your agent directly over HTTP and stream responses using the Vercel AI SDK data stream format. It covers the exact endpoint, headers, request body, and the event stream response you should expect. If you are building a React UI, consider our prebuilt components under [React UI Components](/talk-to-your-agents/react/chat-button) or the [Vercel AI Elements](/talk-to-your-agents/vercel-ai-sdk/ai-elements) headless primitives. This page is for the low-level streaming API. ## Endpoint * **Path (mounted by the Run API):** `/api/chat` * **Method:** `POST` * **Protocol:** Server-Sent Events (SSE) encoded JSON, using Vercel AI SDK data-stream v2 * **Content-Type (response):** `text/event-stream` * **Response Header:** `x-vercel-ai-data-stream: v2` <> ## Authentication Choose the authentication method: **Create an API Key:** 1. Open the Visual Builder Dashboard 2. Go to your Project → **API Keys** 3. Click **Create**, select the target agent 4. Copy the API key (it will be shown once) and store it securely **Request Header:** ```http Authorization: Bearer ``` When running the API server locally with `pnpm dev`, authentication is automatically bypassed. You can use headers in the request instead: **Request Headers:** ```http x-inkeep-tenant-id: x-inkeep-project-id: x-inkeep-agent-id: ``` This mode is for development only. Never use in production as it bypasses all security checks.
See [Authentication → Run API](/api-reference/authentication/run-api) for more details. ## Request Body Schema ```json { "messages": [ { "role": "user", "content": "Hello!" } ], "conversationId": "optional-conversation-id" } ``` **Field Notes:** * **`messages`** — Must include at least one `user` message * **`content`** — Can be a string or an object with `parts` for multi-part content * **`conversationId`** — Optional; server generates one if omitted ### Optional Headers * **`x-emit-operations`** — Set to `true` to include detailed data operations in the response stream. Useful for debugging and monitoring agent behavior. See [Data Operations](/typescript-sdk/data-operations) for details. ### Example cURL When using an API key for auth: ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "Content-Type: application/json" \ -H "x-emit-operations: true" \ -d '{ "messages": [ { "role": "user", "content": "What can you do?" } ], "conversationId": "chat-1234" }' ``` ## Response: Vercel AI SDK Data Stream (v2) The response is an SSE stream of JSON events compatible with the Vercel AI SDK UI message stream. The server sets `x-vercel-ai-data-stream: v2`. ### Event Types #### Text Streaming Events * **`text-start`** — Indicates a new text segment is starting * **`text-delta`** — Carries the text content delta for the current segment * **`text-end`** — Marks the end of the current text segment #### Data Events * **`data-component`** — Structured UI data emitted by the agent (for rich UIs) * **`data-artifact`** — Artifact data emitted by tools/agents (documents, files, saved results) * **`data-operation`** — Low-level operational events (agent lifecycle, completion, errors) * **`data-summary`** — AI-generated status updates with user-friendly labels and contextual details ### Example Stream (abbreviated) ```text : keep-alive data: {"type":"text-start","id":"1726247200-abc123"} data: {"type":"text-delta","id":"1726247200-abc123","delta":"Hello! I can help with..."} data: {"type":"data-summary","data":{"label":"Searching documentation","details":{"summary":"Looking for relevant information","progress":"25%"}}} data: {"type":"text-delta","id":"1726247200-abc123","delta":" analyzing your query..."} data: {"type":"text-end","id":"1726247200-abc123"} data: {"type":"data-operation","data":{"type":"completion","ctx":{"agent":"search-agent","iteration":1}}} data: {"type":"data-component","id":"1726247200-abc123-0","data":{"type":"customer-info","name":"Ada","email":"ada@example.com"}} data: {"type":"data-artifact","data":{"artifact_id":"art_abc","task_id":"task_xyz","summary":{"title":"Search Results"}}} ``` ### Data Event Details #### `data-operation` Events Low-level operational events with technical context. Common types include: * **`agent_initializing`** — The agent runtime is starting * **`agent_ready`** — Agent is ready and processing * **`completion`** — The agent completed the task (includes agent ID and iteration count) * **`error`** — Error information (also emitted as a top-level `error` event) ```json {"type":"data-operation","data":{"type":"completion","ctx":{"agent":"search-agent","iteration":1}}} ``` #### `data-summary` Events AI-generated status updates designed for end-user consumption. 
Structure: * **`label`** — User-friendly description (required) * **`details`** — Optional structured/unstructured context data ```json {"type":"data-summary","data":{"label":"Processing search results","details":{"summary":"Found 12 relevant documents","itemsProcessed":12,"status":"analyzing"}}} ``` #### `data-artifact` Events Saved documents, files, or structured results from tool executions: ```json {"type":"data-artifact","data":{"artifact_id":"art_123","task_id":"task_456","summary":{"title":"Weather Report","type":"document"}}} ``` #### `data-component` Events Structured UI data for rich interface components: ```json {"type":"data-component","id":"comp_123","data":{"type":"chart","title":"Sales Data","chartData":[1,2,3]}} ``` ### Text Streaming Behavior * For each text segment, the server emits `text-start` → `text-delta` → `text-end` * The server avoids splitting content word-by-word; a segment is usually a coherent chunk * Operational events are queued during active text emission and flushed shortly after to preserve ordering and readability ## Error Responses ### Streamed Errors Errors are now delivered as `data-operation` events with unified structure: ```json { "type": "data-operation", "data": { "type": "error", "message": "Error description", "agent": "agent-id", "severity": "error", "code": "optional-error-code", "timestamp": 1640995200000 } } ``` ### Non-Streaming Errors Validation failures and other errors return JSON with an appropriate HTTP status code. ## HTTP Status Codes * **`200`** — Stream opened successfully * **`401`** — Missing/invalid authentication * **`404`** — Agent not found * **`400`** — Invalid request body/context * **`500`** — Internal server error ## Development Notes * **Default local base URL:** `http://localhost:3003` * **Endpoint mounting in the server:** * `/api/chat` → Vercel data stream (this page) * `/v1/mcp` → MCP JSON-RPC endpoint To test quickly without a UI, use `curl -N` or a tool that supports Server-Sent Events. # MCP Server URL: /talk-to-your-agents/mcp-server Learn how to use the MCP server to talk to your agents *** title: MCP Server description: Learn how to use the MCP server to talk to your agents icon: "LuServer" ---------------- The MCP server allows you to talk to your agents through the Model Context Protocol. <> ## Authentication Choose the authentication method: **Create an API Key:** 1. Open the Visual Builder Dashboard 2. Go to your Project → **API Keys** 3. Click **Create**, select the target agent 4. Copy the API key (it will be shown once) and store it securely **Request Header:** ```http Authorization: Bearer ``` When running the API server locally with `pnpm dev`, authentication is automatically bypassed. You can use headers in the request instead: **Request Headers:** ```http x-inkeep-tenant-id: x-inkeep-project-id: x-inkeep-agent-id: ``` This mode is for development only. Never use in production as it bypasses all security checks. See [Authentication → Run API](/api-reference/authentication/run-api) for more details. ## MCP Server Implementation The MCP server is implemented in the `@inkeep/agents-run-api` library and provides a standard interface for agent communication. ## Available Tools The MCP server exposes one core tool: ### send-query-to-agent Sends a query to your agent's default sub-agent. 
This tool: * **Name**: `send-query-to-agent` * **Description**: Dynamically generated based on your agent's name and description * **Parameters**: * `query` (string): The query to send to the agent * **Returns**: The agent's response as text content **Example usage in Cursor:** When the MCP server is configured, Cursor will automatically discover this tool and you can use it by asking questions. The tool will route your query to the appropriate sub-agent in your agent and return the response. The tool handles: * Message creation and conversation management * Sub-agent execution with your configured tools and capabilities * Context resolution if your agent has context configuration * Error handling and response formatting ## Using with Cursor ### Quick Install (Inkeep Hosted Docs MCP) Install the Inkeep Agents documentation MCP server with one click: ### Manual Configuration Add the following configuration to your Cursor MCP settings. #### Example When using an API key for auth: ```json { "AgentName": { "type": "mcp", "url": "http:///v1/mcp", "headers": { "Authorization": "Bearer " } } } ``` ## Session Management (Required by MCP HTTP Transport) * Initialize a session by sending an `initialize` JSON-RPC request to `/v1/mcp`. * The server will respond and set `Mcp-Session-Id` in response headers. * For all subsequent JSON-RPC requests in that session, include `Mcp-Session-Id` header with the value from initialization. Session management is required by MCP’s HTTP transport. If `Mcp-Session-Id` is missing or invalid on follow-up requests, the server will return a JSON-RPC error (e.g., "Session not found"). ## Configuration Notes * **URL**: Point to your `agents-run-api` instance (default: `http://localhost:3003`) * **Headers**: Use the appropriate authentication mode per the section above * **Authorization**: Only required outside development mode # Overview URL: /talk-to-your-agents/overview Learn how to talk to your agents *** title: Overview description: Learn how to talk to your agents icon: "LuMessageSquare" ----------------------- <> You can talk to an Inkeep Agent in a few ways, including: * **UI Chat Components**: Drop-in React components for chat UIs with built-in streaming and rich UI customization. See [`agents-ui`](/talk-to-your-agents/react/chat-button). * **As an MCP server**: Use your Inkeep Agent as if it were an MCP Server. This allows you to connect it to any MCP client, like Claude, ChatGPT, and other Agents. See [MCP server](/talk-to-your-agents/mcp-server). * **Via API (Vercel format)**: An API that streams responses over server-sent events (SSE). Use from any language/runtime, including Vercel's `useChat` and AI Elements primitives for custom UIs. See [API (Vercel format)](/talk-to-your-agents/api). * **Via API (A2A format)**: An API that follows the Agent-to-Agent ('A2A') JSON-RPC protocol. Great when combining Inkeep with other Agent frameworks that support the A2A format. See [A2A protocol](/talk-to-your-agents/a2a). Drop-in chat components for React apps with streaming and rich UI. POST /api/chat, SSE (text/event-stream), x-vercel-ai-data-stream: v2. JSON-RPC messages at /agents/a2a with blocking and streaming modes. HTTP JSON-RPC endpoint at /v1/mcp with session header management.
# Sub Agent Relationships URL: /typescript-sdk/agent-relationships Learn how to add Sub Agent relationships to your agent *** title: Sub Agent Relationships description: Learn how to add Sub Agent relationships to your agent icon: "LuUsers" --------------- Sub Agent relationships coordinate specialized Sub Agents for complex workflows. The framework implements them through `canDelegateTo()` and `canTransferTo()`, which let a parent Sub Agent automatically coordinate specialized Sub Agents. ## Understanding Sub Agent Relationships The framework supports two types of Sub Agent relationships: ### Transfer Relationships Transfer relationships **completely relinquish control** from one Sub Agent to another. When a Sub Agent hands off to another: * The source Sub Agent stops processing * The target Sub Agent takes full control of the conversation * Control is permanently transferred until the target Sub Agent hands back ```typescript import { subAgent, agent } from "@inkeep/agents-sdk"; // Create specialized Sub Agents first (knowledgeBaseTool and orderSystemTool are defined elsewhere) const qaSubAgent = subAgent({ id: "qa-agent", name: "QA Sub Agent", description: "Answers product and service questions", prompt: "Provide accurate information using available tools. Hand back to router if unable to help.", canUse: () => [knowledgeBaseTool], canTransferTo: () => [routerSubAgent], }); const orderSubAgent = subAgent({ id: "order-agent", name: "Order Sub Agent", description: "Handles order-related inquiries and actions", prompt: "Assist with order tracking, modifications, and management.", canUse: () => [orderSystemTool], canTransferTo: () => [routerSubAgent], }); // Create router Sub Agent that coordinates other Sub Agents const routerSubAgent = subAgent({ id: "router-agent", name: "Router Sub Agent", description: "Routes customer inquiries to specialized Sub Agents", prompt: `Analyze customer inquiries and route them appropriately: - Product questions → Hand off to QA Sub Agent - Order issues → Hand off to Order Sub Agent - Complex issues → Handle directly or escalate`, canTransferTo: () => [qaSubAgent, orderSubAgent], }); // Create the agent with router as default entry point const supportAgent = agent({ id: "customer-support-agent", defaultSubAgent: routerSubAgent, subAgents: () => [routerSubAgent, qaSubAgent, orderSubAgent], modelSettings: { model: "anthropic/claude-sonnet-4-5", structuredOutput: "openai/gpt-4.1-mini", providerOptions: { anthropic: { temperature: 0.5 }, }, }, }); ``` ### Delegation Relationships Delegation relationships are used to **pass a task** from one Sub Agent to another while maintaining oversight: * The source Sub Agent remains in control * The target Sub Agent executes a specific task * Results are returned to the source Sub Agent * The source Sub Agent continues processing ```typescript // Sub Agents for specific tasks const numberProducerA = subAgent({ id: "number-producer-a", name: "Number Producer A", description: "Produces low-range numbers (0-50)", prompt: "Generate numbers between 0 and 50. Respond with a single integer.", }); const numberProducerB = subAgent({ id: "number-producer-b", name: "Number Producer B", description: "Produces high-range numbers (50-100)", prompt: "Generate numbers between 50 and 100. Respond with a single integer.", }); // Coordinating Sub Agent that delegates tasks const mathSupervisor = subAgent({ id: "math-supervisor", name: "Math Supervisor", description: "Coordinates mathematical operations", prompt: `When given a math task: 1.
Delegate to Number Producer A for a low number 2. Delegate to Number Producer B for a high number 3. Add the results together and provide the final answer`, canDelegateTo: () => [numberProducerA, numberProducerB], }); const mathAgent = agent({ id: "math-delegation-agent", defaultSubAgent: mathSupervisor, subAgents: () => [mathSupervisor, numberProducerA, numberProducerB], modelSettings: { model: "anthropic/claude-3-5-haiku-20241022", providerOptions: { anthropic: { temperature: 0.1 }, }, }, }); ``` ## When to Use Each Relationship ### Use Transfers for Complex Tasks Use `canTransferTo` when the task is complex and the user will likely want to ask follow-up questions to the specialized Sub Agent: * **Customer support conversations** - User may have multiple related questions * **Technical troubleshooting** - Requires back-and-forth interaction * **Order management** - User might want to modify, track, or ask about multiple aspects * **Product consultations** - Users often have follow-up questions ### Use Delegation for Simple Tasks Use `canDelegateTo` when the task is simple and self-contained: * **Data retrieval** - Get a specific piece of information and return it * **Calculations** - Perform a computation and return the result * **Single API calls** - Make one external request and return the data * **Simple transformations** - Convert data from one format to another ```typescript // TRANSFER: User will likely have follow-up questions about their order const routerSubAgent = subAgent({ id: "router", prompt: "For order inquiries, transfer to order specialist", canTransferTo: () => [orderSubAgent], }); // DELEGATION: Just need a quick calculation, then continue const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to number producers, then add results together", canDelegateTo: () => [numberProducerA, numberProducerB], }); ``` ## Types of Delegation Relationships ### Sub Agent Delegation Sub agent delegation is used to delegate a task to a sub agent as seen above. ### External Agent Delegation External agent delegation is used to delegate a task to an [external agent](/typescript-sdk/external-agents). ```typescript import { myExternalAgent } from "./external-agents/external-agent-example"; const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the external agent to calculate the answer to the question", canDelegateTo: () => [myExternalAgent], }); ``` You can also specify headers to include with every request to the external agent. ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the external agent to calculate the answer to the question", canDelegateTo: () => [myExternalAgent.with({ headers: { "authorization": "my-api-key" } })], }); ``` ### Team Agent Delegation Team agent delegation is used to delegate a task to another agent in the same project. ```typescript import { myAgent } from "./agents/my-team-agent"; const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the team agent to calculate the answer to the question", canDelegateTo: () => [myAgent], }); ``` You can also specify headers to include with every request to the team agent. ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the team agent to calculate the answer to the question", canDelegateTo: () => [myAgent.with({ headers: { "authorization": "my-api-key" } })], }); ``` # Agents & Sub Agents URL: /typescript-sdk/agent-settings Learn how to customize your Agents. 
*** title: Agents & Sub Agents description: Learn how to customize your Agents. icon: "LuUser" -------------- Agents and Sub Agents are the core building blocks of the Inkeep Agent framework. An Agent is made up of one or more Sub Agents that can delegate or transfer control with each other, share context, use tools to respond to a user or complete a task. ## Creating an Agent An Agent is your top-level entity that you as a user interact with or can trigger programmatically. An Agent is made up of sub-agents, like so: ```typescript // Agent-level prompt that gets added to all Sub Agents const customerSupportAgent = agent({ id: "support-agent", prompt: `You work for Acme Corp. Always be professional and helpful. Follow company policies and escalate complex issues appropriately.`, subAgents: () => [supportAgent, escalationAgent], }); ``` **The `prompt` is automatically put into context and added into each Sub Agent's system prompt.** This provides consistent behavior and tone to all Sub Agents so they can act and respond as one cohesive unit to the end-user. ## Creating a Sub Agent Like an Agent, a Sub Agent needs an id, name, and clear prompt that define its behavior: ```typescript import { subAgent } from "@inkeep/agents-sdk"; const supportAgent = subAgent({ id: "customer-support", name: "Customer Support Agent", prompt: `You are a customer support specialist. Always be helpful, professional, and empathetic.`, }); ``` ## Configuring Models Configure `models` at either the Agent or Sub Agent level. Sub Agents inherit from their parent Agent when not explicitly set, and Agents inherit from project defaults. The `models` object allows you to configure different models for different tasks, each with their own provider options: ```typescript models: { base: { model: "anthropic/claude-sonnet-4-5", // Primary model for text generation providerOptions: { temperature: 0.7, maxOutputTokens: 2048 // AI SDK v5 uses maxOutputTokens } }, structuredOutput: { model: "openai/gpt-4.1-mini", // For structured JSON output only providerOptions: { temperature: 0.1, maxOutputTokens: 1024, experimental_reasoning: true // Enable reasoning for better structured outputs } }, summarizer: { model: "openai/gpt-4.1-nano", // For summaries and status updates providerOptions: { temperature: 0.5, maxOutputTokens: 1000 } } } ``` ### Model types * **`base`**: Primary model used for conversational text generation and reasoning * **`structuredOutput`**: Model used for structured JSON output only (falls back to base if not configured and nothing to inherit) * **`summarizer`**: Model used for summaries and status updates (falls back to base if not configured and nothing to inherit) ### Supported providers The framework supports a wide range of models from major AI providers: * **Anthropic**: For example `anthropic/claude-opus-4-1`, `anthropic/claude-sonnet-4-5`, `anthropic/claude-sonnet-4`, `anthropic/claude-haiku-4-5`, `anthropic/claude-3-5-haiku-latest`, and more * **OpenAI**: For example `openai/gpt-5`, `openai/gpt-4.1-mini`, `openai/gpt-4.1-nano`, and more * **Google**: For example `google/gemini-2.5-pro`, `google/gemini-2.5-flash`, `google/gemini-2.5-flash-lite`, and more * **Additional providers** via OpenRouter and gateway routing ### Provider options All models support `providerOptions` to customize their behavior. These include both generic parameters that work across all providers and provider-specific features like reasoning. 
#### Generic parameters These parameters work with all supported providers and go directly in `providerOptions`: ```typescript models: { base: { model: "anthropic/claude-sonnet-4-20250514", providerOptions: { maxOutputTokens: 4096, // Maximum tokens to generate (AI SDK v5) temperature: 0.7, // Controls randomness (0.0-1.0) topP: 0.95, // Nucleus sampling (0.0-1.0) topK: 40, // Top-k sampling (integer) frequencyPenalty: 0.0, // Reduce repetition (-2.0 to 2.0) presencePenalty: 0.0, // Encourage new topics (-2.0 to 2.0) stopSequences: ["\n\n"], // Stop generation at sequences seed: 12345, // For deterministic output maxDuration: 30, // Timeout in seconds (not milliseconds) maxRetries: 2, // Maximum retry attempts } } } ``` #### Provider-specific features Advanced features like reasoning require provider-specific configuration wrapped in the provider name: ##### OpenAI reasoning ```typescript models: { base: { model: "openai/o3-mini", providerOptions: { maxOutputTokens: 4096, temperature: 0.7, openai: { reasoningEffort: 'medium' // 'low' | 'medium' | 'high' } } } } ``` `openai/gpt-5`, `openai/gpt-5-mini`, and `openai/gpt-5-nano` require a verified OpenAI organization. If your organization is not yet verified, these models will not be available. ##### Anthropic thinking ```typescript models: { base: { model: "anthropic/claude-3-7-sonnet-20250219", providerOptions: { maxOutputTokens: 4096, temperature: 0.7, anthropic: { thinking: { type: 'enabled', budgetTokens: 8000 // Tokens allocated for reasoning } } } } } ``` ##### Google Gemini thinking ```typescript models: { base: { model: "google/gemini-2-5-flash", providerOptions: { maxOutputTokens: 4096, temperature: 0.7, google: { thinkingConfig: { thinkingBudget: 8192, // 0 disables thinking includeThoughts: true // Return thought summary } } } } } ``` ### Accessing other models For models not directly supported, use these proxy providers: * **OpenRouter**: Access any model via `openrouter/model-id` format (e.g., `openrouter/anthropic/claude-sonnet-4-0`, `openrouter/meta-llama/llama-3.1-405b`) * **Vercel AI SDK Gateway**: Access models through your gateway via `gateway/model-id` format (e.g., `gateway/anthropic/claude-sonnet-4-0`) ```typescript models: { base: { model: "openrouter/anthropic/claude-sonnet-4-0", providerOptions: { temperature: 0.7, maxOutputTokens: 2048 } }, structuredOutput: { model: "gateway/openai/gpt-4.1-mini", providerOptions: { maxOutputTokens: 1024 } } } ``` ### Required API keys You need the appropriate API key for your chosen provider to be defined in your environment variables: * `ANTHROPIC_API_KEY` for Anthropic models * `OPENAI_API_KEY` for OpenAI models * `GOOGLE_GENERATIVE_AI_API_KEY` for Google models * `OPENROUTER_API_KEY` for OpenRouter models * `AI_GATEWAY_API_KEY` for Vercel AI SDK Gateway models ### Default models When using the Inkeep CLI, the following defaults are applied based on your chosen provider: **Anthropic:** * `base`: `anthropic/claude-sonnet-4-5` * `structuredOutput`: `anthropic/claude-sonnet-4-5` * `summarizer`: `anthropic/claude-sonnet-4-5` **OpenAI:** * `base`: `openai/gpt-4.1` * `structuredOutput`: `openai/gpt-4.1` * `summarizer`: `openai/gpt-4.1-nano` **Google:** * `base`: `google/gemini-2.5-flash` * `structuredOutput`: `google/gemini-2.5-flash-lite` * `summarizer`: `google/gemini-2.5-flash-lite` ## Configuring StopWhen Control stopping conditions to prevent infinite loops: ```typescript // Agent level - limit transfers between Sub Agents agent({ id: "support-agent", stopWhen: { transferCountIs: 5 
// Max transfers in one conversation }, }); // Sub Agent level - limit generation steps subAgent({ id: "my-sub-agent", stopWhen: { stepCountIs: 20 // Max tool calls + LLM responses }, }); ``` **Configuration levels:** * `transferCountIs`: Project or Agent level * `stepCountIs`: Project or Sub Agent level Settings inherit from Project → Agent → Sub Agent. ## Sub Agent overview Beyond model configuration, Sub Agents define tools, structured outputs, and agent-to-agent relationships available to the Sub Agent. | Parameter | Type | Required | Description | | -------------------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `id` | string | Yes | Stable Sub Agent identifier used for consistency and persistence | | `name` | string | Yes | Human-readable name for the Sub Agent | | `prompt` | string | Yes | Detailed behavior guidelines and system prompt for the Sub Agent | | `description` | string | No | Brief description of the Sub Agent's purpose and capabilities | | `models` | object | No | Model configuration for this Sub Agent. See [Configuring Models](#configuring-models) | | `stopWhen` | object | No | Stop conditions (`stepCountIs`). See [Configuring StopWhen](#configuring-stopwhen) | | `canUse` | function | No | Returns the list of MCP/tools the Sub Agent can use. See [MCP Servers](/typescript-sdk/tools/mcp-servers) | | `dataComponents` | array | No | Structured output components for rich, interactive responses. See [Data Components](/typescript-sdk/structured-outputs/data-components) | | `artifactComponents` | array | No | Components for handling tool or Sub Agent outputs. See [Artifact Components](/typescript-sdk/structured-outputs/artifact-components) | | `canTransferTo` | function | No | Function returning array of Sub Agents this Sub Agent can transfer to. See [Transfer Relationships](/typescript-sdk/agent-relationships#transfer-relationships) | | `canDelegateTo` | function | No | Function returning array of Sub Agents this Sub Agent can delegate to. See [Delegation Relationships](/typescript-sdk/agent-relationships#delegation-relationships) | ### Tools & MCPs Enable tools for a Sub Agent to perform actions like looking up information or calling external APIs. Tools can be: * **[MCP Servers](/typescript-sdk/tools/mcp-servers)** - Connect to external services and APIs using the Model Context Protocol * **[Function Tools](/typescript-sdk/tools/function-tools)** - Custom JavaScript functions that execute directly in secure sandboxes ```typescript import { subAgent, functionTool, mcpTool } from "@inkeep/agents-sdk"; const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", canUse: () => [ functionTool({ name: "get-current-time", description: "Get the current time", execute: async () => ({ time: new Date().toISOString() }), }), mcpTool({ id: "inkeep-kb-rag", name: "Knowledge Base Search", description: "Search the company knowledge base.", serverUrl: "https://rag.inkeep.com/mcp", }), ], }); ``` ### Data components Structured output components for rich, interactive responses. See [Data Components](/typescript-sdk/structured-outputs/data-components). 
```typescript import { z } from 'zod'; const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", dataComponents: [ { id: "customer-info", name: "CustomerInfo", description: "Customer information display component", props: z.object({ name: z.string().describe("Customer name"), email: z.string().describe("Customer email"), issue: z.string().describe("Customer issue description"), }), }, ], }); ``` ### Artifact components Components for handling tool or Sub Agent outputs. See [Artifact Components](/typescript-sdk/structured-outputs/artifact-components). ```typescript import { z } from 'zod'; import { preview } from '@inkeep/agents-core'; const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", artifactComponents: [ { id: "customer-info", name: "CustomerInfo", description: "Customer information display component", props: z.object({ name: preview(z.string().describe("Customer name")), customer_info: z.string().describe("Customer information"), }), }, ], }); ``` ### Sub Agent relationships Define other Sub Agents this Sub Agent can transfer control to or delegate tasks to. ```typescript const mySubAgent = subAgent({ // ... canTransferTo: () => [subAgent1], canDelegateTo: () => [subAgent2], }); ``` As a next step, see [Sub Agent Relationships](/typescript-sdk/agent-relationships) to learn how to design transfer and delegation relationships between Sub Agents. # CLI Observability with Langfuse URL: /typescript-sdk/cli-observability Track LLM operations during pull command with Langfuse *** title: CLI Observability with Langfuse description: Track LLM operations during pull command with Langfuse ------------------------------------------------------------------- The Inkeep CLI includes built-in observability for tracking LLM operations during the `pull` command. This allows you to monitor costs, latency, and quality of AI-generated code across different LLM providers. ## Overview When you run `inkeep pull`, the CLI uses LLMs to generate TypeScript files for your agents, tools, and components. With Langfuse integration enabled, you can: * **Track token usage and costs** across Anthropic, OpenAI, and Google models * **Monitor generation latency** to identify slow operations * **View complete traces** of multi-file code generation * **Analyze placeholder optimization** impact on token savings * **Debug failed generations** with full context ## Setup ### 1. Create a Langfuse Account Sign up for a free account at [cloud.langfuse.com](https://cloud.langfuse.com) (EU region) or [us.cloud.langfuse.com](https://us.cloud.langfuse.com) (US region). ### 2. Get API Keys From your Langfuse dashboard: 1. Navigate to **Settings** → **API Keys** 2. Create a new API key pair 3. Copy both the **Secret Key** (`sk-lf-...`) and **Public Key** (`pk-lf-...`) ### 3. Configure Environment Variables Add these variables to your `.env` file in your project root: ```bash # Enable Langfuse tracing LANGFUSE_ENABLED=true # Your Langfuse credentials LANGFUSE_SECRET_KEY=sk-lf-your-secret-key-here LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key-here # Langfuse API URL (defaults to EU cloud) LANGFUSE_BASEURL=https://cloud.langfuse.com # or https://us.cloud.langfuse.com for US # Your LLM provider API keys (at least one required) ANTHROPIC_API_KEY=your-anthropic-key OPENAI_API_KEY=your-openai-key GOOGLE_API_KEY=your-google-key ``` ### 4. 
Run Pull Command

Now when you run `inkeep pull`, all LLM operations will be traced to Langfuse:

```bash
inkeep pull --project my-agent-project
```

## Viewing Traces

### In Langfuse Dashboard

1. Go to your [Langfuse dashboard](https://cloud.langfuse.com)
2. Navigate to **Traces**
3. You'll see traces for each file generation operation

### Trace Metadata

Each trace includes rich metadata:

| Field              | Description                  | Example                           |
| ------------------ | ---------------------------- | --------------------------------- |
| `fileType`         | Type of file being generated | `agent`, `tool`, `data_component` |
| `placeholderCount` | Number of placeholders used  | `5`                               |
| `promptSize`       | Size of prompt in characters | `15234`                           |
| `model`            | LLM model used               | `claude-sonnet-4-5`               |

### Example Trace Structure

```
Service: inkeep-agents-cli
├── generate-agent-file
│   ├── Model: claude-sonnet-4-5
│   ├── Tokens: 12,543 input / 3,421 output
│   ├── Duration: 8.3s
│   └── Metadata:
│       ├── fileType: agent
│       ├── placeholderCount: 12
│       └── promptSize: 25,891 chars
```

## Monitoring Strategies

### Track Costs by Provider

Compare costs across different LLM providers:

1. Filter traces by model in Langfuse
2. View cumulative costs in the **Usage** dashboard
3. Identify cost-saving opportunities

### Optimize Generation Time

Find slow generation steps:

1. Sort traces by duration
2. Check if complex agents need longer timeouts
3. Consider using faster models for simpler files

### Analyze Token Savings

Monitor placeholder optimization impact:

1. Look at `placeholderCount` metadata
2. Higher counts = more token savings
3. Useful for understanding efficiency gains

## Troubleshooting

### Traces Not Appearing

**Check if Langfuse is enabled:**

```bash
# Should output: true
echo $LANGFUSE_ENABLED
```

**Verify API keys are set:**

```bash
# Lists your LANGFUSE_* variables (note: values are printed, so avoid sharing the output)
env | grep LANGFUSE
```

**Check for errors:**

```bash
# Run with debug logging
DEBUG=* inkeep pull --project my-project
```

### Missing Metadata

If traces appear but lack metadata:

1. Ensure you're using the latest CLI version
2. Check that file type context is being passed correctly
3. Report issues on [GitHub](https://github.com/inkeep/agents/issues)

## Privacy Considerations

### What Data is Sent to Langfuse

* **Prompt content**: The full prompts sent to LLMs (includes your project data)
* **Generated code**: The TypeScript code generated by LLMs
* **Model metadata**: Model names, token counts, timings
* **File metadata**: File types, sizes, placeholder counts

### What is NOT Sent

* **Your API keys**: LLM provider keys are never sent to Langfuse
* **Other environment variables**: Only Langfuse-specific vars are used

### Self-Hosted Option

For complete control over your data, you can self-host Langfuse:

```bash
# Use your self-hosted instance
LANGFUSE_BASEURL=https://langfuse.your-domain.com
```

See [Langfuse self-hosting docs](https://langfuse.com/docs/deployment/self-host) for details.

## Best Practices

1. **Enable for development**: Keep tracing on during development to catch issues early
2. **Disable in CI/CD**: Turn off tracing for automated builds to avoid unnecessary traces (see the sketch after this list)
3. **Review weekly**: Check the Langfuse dashboard weekly to monitor costs and performance
4. **Set budgets**: Configure spending alerts in your LLM provider dashboards
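For example, a minimal CI override might look like the following. This is only a sketch: `LANGFUSE_ENABLED` is the flag documented above, while the file name and the way your pipeline loads it are illustrative assumptions — any mechanism that sets environment variables for CI jobs works.

```bash
# .env.ci (illustrative name) — keep Langfuse tracing off in automated builds
LANGFUSE_ENABLED=false
```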
## Related Documentation

* [SignOz Usage](/typescript-sdk/signoz-usage) - OpenTelemetry tracing for runtime operations
* [Langfuse Usage](/typescript-sdk/langfuse-usage) - Langfuse integration for agent runtime
* [CLI Reference](/typescript-sdk/cli-reference) - Complete CLI command reference

# CLI Reference

URL: /typescript-sdk/cli-reference

Complete reference for the Inkeep CLI commands

***

title: CLI Reference
description: Complete reference for the Inkeep CLI commands
icon: "LuTerminal"
------------------

## Overview

The Inkeep CLI is the primary tool for interacting with the Inkeep Agent Framework. It allows you to push Agent configurations and interact with your multi-agent system.

## Installation

```bash
# Install the CLI globally
npm install -g @inkeep/agents-cli

# Install the dashboard package (for visual agents orchestration)
npm install @inkeep/agents-manage-ui
```

## Global Options

All commands support the following global options:

* `--version` - Display CLI version
* `--help` - Display help for a command

## Commands

### `inkeep init`

Initialize a new Inkeep configuration file in your project.

```bash
inkeep init [path]
```

**Options:**

* `--no-interactive` - Skip interactive path selection
* `--config <path>` - Path to use as template for new configuration

**Examples:**

```bash
# Interactive initialization
inkeep init

# Initialize in specific directory
inkeep init ./my-project

# Non-interactive mode
inkeep init --no-interactive

# Use specific config as template
inkeep init --config ./template-config.ts
```

### `inkeep push`

**Primary use case:** Push a project containing Agent configurations to your server. This command deploys your entire multi-agent project, including all Agents, Sub Agents, and tools.

```bash
inkeep push
```

**Options:**

* `--project <project-id-or-path>` - Project ID or path to project directory
* `--env <environment>` - Load environment-specific credentials from `environments/<environment>.env.ts`
* `--config <path>` - Override config file path (bypasses automatic config discovery)
* `--tenant-id <tenant-id>` - Override tenant ID
* `--agents-manage-api-url <url>` - Override the management API URL from config
* `--agents-run-api-url <url>` - Override agents run API URL
* `--json` - Generate project data as JSON file instead of pushing to server

**Examples:**

```bash
# Push project from current directory
inkeep push

# Push specific project directory
inkeep push --project ./my-project

# Push with development environment credentials
inkeep push --env development

# Generate project JSON without pushing
inkeep push --json

# Override tenant ID
inkeep push --tenant-id my-tenant

# Override API URLs
inkeep push --agents-manage-api-url https://api.example.com
inkeep push --agents-run-api-url https://run.example.com

# Use specific config file
inkeep push --config ./custom-config/inkeep.config.ts
```

**Environment Credentials:**

The `--env` flag loads environment-specific credentials when pushing your project. This will look for files like `environments/development.env.ts` or `environments/production.env.ts` in your project directory and load the credential configurations defined there.
**Example environment file:**

```typescript
// environments/development.env.ts
import { CredentialStoreType } from "@inkeep/agents-core";
import { registerEnvironmentSettings } from "@inkeep/agents-sdk";

export const development = registerEnvironmentSettings({
  credentials: {
    "api-key-dev": {
      id: "api-key-dev",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "API_KEY_DEV",
      },
    },
  },
});
```

#### Project Discovery and Structure

The `inkeep push` command follows this discovery process:

1. **Config File Discovery**: Searches for `inkeep.config.ts` using this pattern:
   * Starts from current working directory
   * Traverses **upward** through parent directories until found
   * Can be overridden by providing a path to the config file with the `--config` flag

2. **Workspace Structure**: Expects this directory layout:

   ```
   workspace-root/
   ├── package.json           # Workspace package.json
   ├── tsconfig.json          # Workspace TypeScript config
   ├── inkeep.config.ts       # Inkeep configuration file
   ├── my-project/            # Individual project directory
   │   ├── index.ts           # Project entry point
   │   ├── agents/            # Agent definitions
   │   │   └── *.ts
   │   ├── tools/             # Tool definitions
   │   │   └── *.ts
   │   ├── data-components/   # Data component definitions
   │   │   └── *.ts
   │   └── environments/      # Environment-specific configs
   │       ├── development.env.ts
   │       └── production.env.ts
   └── another-project/       # Additional projects
       └── index.ts
   ```

3. **Resource Compilation**: Automatically discovers and compiles:
   * All project directories containing `index.ts`
   * All TypeScript files within each project directory
   * Categorizes files by type (agents, Sub Agents, tools, data components)
   * Resolves dependencies and relationships within each project

#### Push Behavior

When pushing, the CLI:

* Finds and loads configuration from `inkeep.config.ts` at workspace root
* Discovers all project directories containing `index.ts`
* Applies environment-specific settings if `--env` is specified
* Compiles all project resources defined in each project's `index.ts`
* Validates Sub Agent relationships and tool configurations across all projects
* Deploys all projects to the management API
* Prints deployment summary with resource counts per project

### `inkeep pull`

Pull project configuration from the server and update all TypeScript files in your local project using LLM generation.

```bash
inkeep pull
```

**Options:**

* `--project <project-id-or-path>` - Project ID or path to project directory
* `--config <path>` - Override config file path (bypasses automatic config discovery)
* `--agents-manage-api-url <url>` - Override the management API URL from config
* `--env <environment>` - Environment file to generate (development, staging, production). Defaults to development
* `--json` - Save project data as JSON file instead of updating TypeScript files
* `--debug` - Enable debug logging for LLM generation

**Directory-Aware Pull (Mirrors `inkeep push` behavior):**

The pull command automatically detects if you're in a project directory and pulls to that location:

1. **Automatic Detection**: If your current directory contains an `index.ts` file that exports a project, the command automatically uses that project's ID
2. **Current Directory Pull**: Files are pulled directly to your current directory (no subdirectory is created)
3. **Conflict Prevention**: If you specify `--project` while in a project directory, an error is shown to prevent confusion

**Pull Modes:**

| Scenario                 | Command                      | Behavior                                                      |
| ------------------------ | ---------------------------- | ------------------------------------------------------------- |
| In project directory     | `inkeep pull`                | ✅ Automatically detects project, pulls to current directory  |
| In project directory     | `inkeep pull --project <id>` | ❌ Error: Cannot specify --project when in project directory  |
| Not in project directory | `inkeep pull`                | Prompts for project ID, creates subdirectory                   |
| Not in project directory | `inkeep pull --project <id>` | Pulls specified project to subdirectory                        |

**How it Works:**

The pull command discovers and updates all TypeScript files in your project based on the latest configuration from the server:

1. **File Discovery**: Recursively finds all `.ts` files in your project (excluding `environments/` and `node_modules/`)
2. **Smart Categorization**: Categorizes files as index, agents, Sub Agents, tools, or other files
3. **Context-Aware Updates**: Updates each file with relevant context from the server:
   * **Agent files**: Updated with specific agent data
   * **Sub Agent files**: Updated with specific Sub Agent configurations
   * **Tool files**: Updated with specific tool definitions
   * **Other files**: Updated with full project context
4. **LLM Generation**: Uses AI to maintain code structure while updating with latest data

#### TypeScript Updates (Default)

By default, the pull command updates your existing TypeScript files using LLM generation:

1. **Context Preservation**: Maintains your existing code structure and patterns
2. **Selective Updates**: Only updates relevant parts based on server configuration changes
3. **File-Specific Context**: Each file type receives appropriate context (Agents get Agent data, Sub Agents get Sub Agent data, etc.)

**Examples:**

```bash
# Directory-aware pull: Automatically detects project from current directory
cd my-project   # Directory contains index.ts with project export
inkeep pull     # Pulls to current directory, no subdirectory created

# Pull when NOT in a project directory (prompts for project ID)
cd ~/projects
inkeep pull     # Prompts for project ID, creates subdirectory

# Pull specific project (only works when NOT in a project directory)
cd ~/projects
inkeep pull --project my-project   # Creates my-project/ subdirectory

# Error case: --project flag in project directory
cd my-project                      # Directory contains index.ts
inkeep pull --project my-project   # ERROR: Cannot specify --project in project directory

# Save project data as JSON file instead of updating files
inkeep pull --json

# Pull with custom API endpoint
inkeep pull --agents-manage-api-url https://api.example.com

# Enable debug logging
inkeep pull --debug

# Generate environment-specific credentials
inkeep pull --env production

# Use specific config file
inkeep pull --config ./custom-config/inkeep.config.ts
```

#### Model Configuration

The `inkeep pull` command currently uses a fixed model for LLM generation: `anthropic/claude-sonnet-4-20250514`.

#### Validation Process

The `inkeep pull` command includes a two-stage validation process to ensure generated TypeScript files accurately represent your backend configuration:

**1. Basic File Verification**

* Checks that all expected files exist (index.ts, agent files, tool files, component files)
* Verifies file naming conventions match (kebab-case)
* Ensures project export is present in index.ts
**2. Round-Trip Validation** *(New in v0.24.0)*

* Loads the generated TypeScript using the same logic as `inkeep push`
* Serializes it back to JSON format
* Compares the serialized JSON with the original backend data
* Reports any differences found

This round-trip validation ensures:

* ✅ Generated TypeScript can be successfully loaded and imported
* ✅ The serialization logic (TS → JSON) works correctly
* ✅ Generated files will work with `inkeep push` without errors
* ✅ No data loss or corruption during the pull process

**Validation Output:**

```bash
✓ Basic file verification passed
✓ Round-trip validation passed - generated TS matches backend data
```

**If validation fails:**

The CLI will display specific differences between the generated and expected data:

```bash
✗ Round-trip validation failed

❌ Round-trip validation errors:
The generated TypeScript does not serialize back to match the original backend data.
  • Value mismatch at agents.my-agent.name: "Original Name" vs "Generated Name"
  • Missing tool in generated: tool-id

⚠️ This indicates an issue with LLM generation or schema mappings.
The generated files may not work correctly with `inkeep push`.
```

**TypeScript generation fails:**

* Check your network connectivity and confirm the API endpoints are correct
* Check that your model provider credentials (if required by backend) are set up
* Try using the `--json` flag as a fallback to get the raw project data

**Validation errors during pull:**

* The generated TypeScript may have syntax errors or missing dependencies
* Check the generated file manually for obvious issues
* Try pulling as JSON first to verify the source data: `inkeep pull --json`
* If round-trip validation fails, report the issue with the error details

### `inkeep list-agents`

List all available agents for a specific project.

```bash
inkeep list-agents --project <project-id>
```

**Options:**

* `--project <project-id>` - **Required.** Project ID to list agents for
* `--tenant-id <tenant-id>` - Tenant ID
* `--agents-manage-api-url <url>` - Agents manage API URL
* `--config <path>` - Path to configuration file
* `--config-file-path <path>` - Path to configuration file (deprecated, use --config)

**Examples:**

```bash
# List agents for a specific project
inkeep list-agents --project my-project

# List agents using a specific config file
inkeep list-agents --project my-project --config ./inkeep.config.ts

# Override tenant and API URL
inkeep list-agents --project my-project --tenant-id my-tenant --agents-manage-api-url https://api.example.com
```

### `inkeep dev`

Start the Inkeep dashboard server, build for production, or export the Next.js project.

> **Note:** This command requires `@inkeep/agents-manage-ui` to be installed for visual agents orchestration.
```bash
inkeep dev
```

**Options:**

* `--port <port>` - Port to run the server on (default: 3000)
* `--host <host>` - Host to bind the server to (default: localhost)
* `--build` - Build the Dashboard UI for production (packages standalone build)
* `--export` - Export the Next.js project source files
* `--output-dir <dir>` - Output directory for build files (default: ./inkeep-dev)
* `--path` - Output the path to the Dashboard UI

**Examples:**

```bash
# Start development server
inkeep dev

# Start on custom port and host
inkeep dev --port 3001 --host 0.0.0.0

# Build for production (packages standalone build)
inkeep dev --build --output-dir ./build

# Export Next.js project source files
inkeep dev --export --output-dir ./my-dashboard

# Get dashboard path for deployment
DASHBOARD_PATH=$(inkeep dev --path)
echo "Dashboard built at: $DASHBOARD_PATH"

# Use with Vercel
vercel --cwd $(inkeep dev --path) -Q .vercel build

# Use with Docker
docker build -t inkeep-dashboard $(inkeep dev --path)

# Use with other deployment tools
rsync -av $(inkeep dev --path)/ user@server:/var/www/dashboard/
```

### `inkeep config`

Manage Inkeep configuration values.

**Subcommands:**

#### `inkeep config get [key]`

Get configuration value(s).

```bash
inkeep config get [key]
```

**Options:**

* `--config <path>` - Path to configuration file
* `--config-file-path <path>` - Path to configuration file (deprecated, use --config)

**Examples:**

```bash
# Get all config values
inkeep config get

# Get specific value
inkeep config get tenantId
```

#### `inkeep config set <key> <value>`

Set a configuration value.

```bash
inkeep config set <key> <value>
```

**Options:**

* `--config <path>` - Path to configuration file
* `--config-file-path <path>` - Path to configuration file (deprecated, use --config)

**Examples:**

```bash
inkeep config set tenantId my-tenant-id
inkeep config set apiUrl http://localhost:3002
```

#### `inkeep config list`

List all configuration values.

```bash
inkeep config list
```

**Options:**

* `--config <path>` - Path to configuration file
* `--config-file-path <path>` - Path to configuration file (deprecated, use --config)

### `inkeep add`

Pull a template project or MCP from the [Inkeep Agents Cookbook](https://github.com/inkeep/agents-cookbook).

```bash
inkeep add [options]
```

**Options:**

* `--project <template-name>` - Add a project template
* `--mcp <template-name>` - Add an MCP template
* `--target-path <path>` - Target path to add the template to
* `--config <path>` - Path to configuration file

**Examples:**

```bash
# List available templates (both project and MCP)
inkeep add

# Add a project template
inkeep add --project event-planner

# Add project template to specific path
inkeep add --project event-planner --target-path ./examples

# Add an MCP template (auto-detects apps/mcp/app directory)
inkeep add --mcp zendesk

# Add MCP template to specific path
inkeep add --mcp zendesk --target-path ./apps/mcp/app

# Using specific config file
inkeep add --project event-planner --config ./my-config.ts
```

**Behavior:**

* When adding an MCP template without `--target-path`, the CLI automatically searches for an `apps/mcp/app` directory in your project
* If no app directory is found, you'll be prompted to confirm whether to add to the current directory
* Project templates are added to the current directory or specified `--target-path`
* Model configurations are automatically injected based on available API keys in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`)

### `inkeep update`

Update the Inkeep CLI to the latest version from npm.
```bash inkeep update ``` **Options:** * `--check` - Check for updates without installing * `--force` - Force update even if already on latest version **How it Works:** The update command automatically: 1. **Detects Package Manager**: Identifies which package manager (npm, pnpm, bun, or yarn) was used to install the CLI globally 2. **Checks Version**: Compares your current version with the latest available on npm 3. **Updates CLI**: Executes the appropriate update command for your package manager 4. **Displays Changelog**: Shows a link to the changelog for release notes **Examples:** ```bash # Check if an update is available (no installation) inkeep update --check # Update to latest version (with confirmation prompt) inkeep update # Force reinstall current version inkeep update --force # Skip confirmation prompt (useful for CI/CD) inkeep update --force ``` **Output Example:** ``` 📦 Version Information: • Current version: 0.22.3 • Latest version: 0.23.0 📖 Changelog: • https://github.com/inkeep/agents/blob/main/agents-cli/CHANGELOG.md 🔍 Detected package manager: pnpm ✅ Updated to version 0.23.0 ``` **Troubleshooting:** If you encounter permission errors, try running with elevated permissions: ```bash # For npm, pnpm, yarn sudo inkeep update # For bun sudo -E bun add -g @inkeep/agents-cli@latest ``` **Package Manager Detection:** The CLI automatically detects which package manager you used by checking global package installations: * npm: Checks `npm list -g @inkeep/agents-cli` * pnpm: Checks `pnpm list -g @inkeep/agents-cli` * bun: Checks `bun pm ls -g` * yarn: Checks `yarn global list` If automatic detection fails, the CLI will prompt you to select your package manager. ## Configuration File The CLI uses a configuration file (typically `inkeep.config.ts`) to store settings: ```typescript import { defineConfig } from "@inkeep/agents-cli/config"; export default defineConfig({ tenantId: "your-tenant-id", agentsManageApiUrl: "http://localhost:3002", agentsRunApiUrl: "http://localhost:3003", }); ``` ### Configuration Priority Effective resolution order: 1. Command-line flags (highest) 2. Environment variables (override config values) 3. 
`inkeep.config.ts` values ## Environment Variables The CLI and SDK respect the following environment variables: * `INKEEP_TENANT_ID` - Tenant identifier * `INKEEP_AGENTS_MANAGE_API_URL` - Management API base URL * `INKEEP_AGENTS_RUN_API_URL` - Run API base URL * `INKEEP_ENV` - Environment name for credentials loading during `inkeep push` * `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` - Optional bearer for Manage API (advanced) * `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` - Optional bearer for Run API (advanced) ## Troubleshooting **Project Not Found:** * Projects are automatically managed based on your tenantId * `inkeep push` will create resources as needed ### Getting Help For additional help with any command: ```bash inkeep [command] --help ``` For issues or feature requests, visit: [GitHub Issues](https://github.com/inkeep/agents/issues) # Configuration Management URL: /typescript-sdk/configuration Configure your Inkeep Agent workspace with inkeep.config.ts - workspace-level settings for all agents and CLI commands *** title: Configuration Management description: Configure your Inkeep Agent workspace with inkeep.config.ts - workspace-level settings for all agents and CLI commands icon: "LuSettings" ------------------ ## Overview Inkeep Agent projects use a hierarchical configuration system that combines file-based configuration, environment variables, and command-line overrides. This flexible approach supports different deployment environments while maintaining developer-friendly defaults. ## Configuration File Format ### `inkeep.config.ts` The **workspace-level** configuration file for your Inkeep project. This file defines settings that apply to your entire workspace, including: * **Tenant and project identification** for organizing your agents * **API endpoints** for the Management and Runtime APIs * **Authentication credentials** for secure API access * **Output directories** for generated files :::tip `inkeep.config.ts` is typically placed at the **root of your workspace** (not within individual packages or subdirectories). All CLI commands in the workspace will discover and use this configuration automatically. ::: The configuration file supports both nested (recommended) and flat configuration formats: #### Nested Format (Recommended) ```typescript import { defineConfig } from '@inkeep/agents-cli/config'; import 'dotenv/config'; export default defineConfig({ // Required: Your tenant identifier tenantId: 'my-company', // Management API configuration agentsManageApi: { url: 'http://localhost:3002', apiKey: process.env.MANAGE_API_KEY, // Optional: API key for authentication }, // Runtime API configuration agentsRunApi: { url: 'http://localhost:3003', apiKey: process.env.RUN_API_KEY, // Optional: API key for authentication }, // Optional: Output directory for generated files outputDirectory: './output', }); ``` #### Flat Format (Deprecated - Legacy Only) :::warning **Deprecated:** The flat configuration format is maintained for backward compatibility only. New projects should use the nested format above. 
::: ```typescript import { defineConfig } from '@inkeep/agents-cli/config'; export default defineConfig({ // Required: Your tenant identifier tenantId: 'my-company', // API endpoints (legacy format - DEPRECATED) agentsManageApiUrl: 'http://localhost:3002', // ⚠️ Use agentsManageApi.url instead agentsRunApiUrl: 'http://localhost:3003', // ⚠️ Use agentsRunApi.url instead }); ``` ### Configuration Options #### Nested Format Options | Option | Type | Description | Default | | ------------------------ | -------- | -------------------------------------------------- | ------------------------- | | `tenantId` | `string` | **Required.** Unique identifier for your tenant | - | | `agentsManageApi` | `object` | Management API configuration | - | | `agentsManageApi.url` | `string` | Management API endpoint URL | `'http://localhost:3002'` | | `agentsManageApi.apiKey` | `string` | Optional API key for Management API authentication | - | | `agentsRunApi` | `object` | Runtime API configuration | - | | `agentsRunApi.url` | `string` | Runtime API endpoint URL | `'http://localhost:3003'` | | `agentsRunApi.apiKey` | `string` | Optional API key for Runtime API authentication | - | | `outputDirectory` | `string` | Output directory for generated files | - | | `manageUiUrl` | `string` | Optional Management UI URL | `'http://localhost:3000'` | #### Flat Format Options (Deprecated - Legacy) | Option | Type | Description | Default | | -------------------- | -------- | ----------------------------------------------------------------------------- | ------------------------- | | `tenantId` | `string` | **Required.** Unique identifier for your tenant | - | | `agentsManageApiUrl` | `string` | **⚠️ Deprecated.** Management API endpoint. Use `agentsManageApi.url` instead | `'http://localhost:3002'` | | `agentsRunApiUrl` | `string` | **⚠️ Deprecated.** Runtime API endpoint. Use `agentsRunApi.url` instead | `'http://localhost:3003'` | | `outputDirectory` | `string` | Output directory for generated files | - | | `manageUiUrl` | `string` | Optional Management UI URL | `'http://localhost:3000'` | **Note:** API keys should be provided via the config file (using environment variables) rather than using legacy `INKEEP_AGENTS_*_BYPASS_SECRET` environment variables. ## Workspace Structure The `inkeep.config.ts` file is a **workspace-level configuration** that applies to all agent definitions and CLI commands within your project. 
Here's a typical workspace structure: ``` my-agent-workspace/ ├── inkeep.config.ts # Workspace configuration (shared) ├── .env # Environment variables ├── package.json ├── agents/ # Your agent definitions │ ├── qa-agent.ts │ ├── support-agent.ts │ └── router.ts └── tools/ # Custom tools (optional) └── search-tool.ts ``` **Key points:** * **Single config per workspace**: One `inkeep.config.ts` at the workspace root * **Shared settings**: All agent files use the same tenant, API endpoints, and credentials * **CLI discovery**: Commands run from any subdirectory will find the root config * **Monorepo support**: In monorepos, place config at the root or use `--config` flag ## Config File Discovery The CLI uses a sophisticated discovery mechanism to locate your configuration: ```mermaid graph TD A[CLI Command] --> B{--config flag provided?} B -->|Yes| C[Use specified config file] B -->|No| D[Start in current directory] D --> E{inkeep.config.ts exists?} E -->|Yes| F[Use found config] E -->|No| G[Check parent directory] G --> H{At filesystem root?} H -->|No| E H -->|Yes| I[Error: Config not found] C --> J[Load and validate config] F --> J J --> K[Apply environment overrides] K --> L[Apply CLI flag overrides] L --> M[Final configuration] ``` ### Search Pattern 1. **Explicit Path**: If `--config` flag is provided, use that file directly 2. **Upward Search**: Starting from current working directory: * Look for `inkeep.config.ts` in current directory * If not found, move to parent directory * Repeat until found or reach filesystem root * Config file should be at the same level as `package.json`/`tsconfig.json` 3. **Error Handling**: If no config found, provide helpful error message ### Example Discovery ```bash # Directory structure (workspace root) /home/user/workspace/ ├── package.json # Workspace package.json ├── tsconfig.json # Workspace TypeScript config ├── inkeep.config.ts # ✅ Config file at workspace root ├── my-agents/ # Project directory │ ├── index.ts # Project entry point │ └── subfolder/ │ └── current-location/ # CLI run from here └── other-project/ # CLI searches: current-location → subfolder → my-agents → workspace → FOUND! ``` ## Configuration Priority Settings are resolved in this order (highest to lowest priority): ```mermaid graph LR A[CLI Flags] --> B[Environment Variables] B --> C[Config File Values] C --> D[Built-in Defaults] style A fill:#ff6b6b style B fill:#ffa500 style C fill:#4ecdc4 style D fill:#95e1d3 ``` ### 1. CLI Flags (Highest Priority) Command-line flags override all other settings: ```bash # Override API URL inkeep push --agents-manage-api-url https://api.production.com # Override config file location inkeep pull --config /path/to/custom.config.ts # Override environment inkeep push --env production ``` ### 2. Environment Variables Environment variables override config file values: ```bash # Set via environment export INKEEP_AGENTS_MANAGE_API_URL=https://api.staging.com export INKEEP_TENANT_ID=staging-tenant export INKEEP_ENV=staging # Now CLI commands use these values inkeep push ``` **Supported Environment Variables:** | Variable | Config Equivalent | Description | | ------------------------------ | -------------------- | --------------------------------------- | | `INKEEP_TENANT_ID` | `tenantId` | Tenant identifier | | `INKEEP_AGENTS_MANAGE_API_URL` | `agentsManageApiUrl` | Management API URL | | `INKEEP_AGENTS_RUN_API_URL` | `agentsRunApiUrl` | Runtime API URL | | `INKEEP_ENV` | - | Environment name for credential loading | ### 3. 
Config File Values Values explicitly set in your `inkeep.config.ts`: ```typescript export default defineConfig({ tenantId: 'my-tenant', agentsManageApiUrl: 'http://localhost:3002', // These values used unless overridden }); ``` ### 4. Built-in Defaults (Lowest Priority) Default values used when not specified elsewhere: ```typescript const defaults = { agentsManageApiUrl: 'http://localhost:3002', agentsRunApiUrl: 'http://localhost:3003', apiTimeout: 30000, retryAttempts: 3, }; ``` ## Advanced Configuration ### TypeScript Support The config system is fully typed, providing IntelliSense and validation: ```typescript import { defineConfig, ConfigOptions } from '@inkeep/agents-cli/config'; // Get full type safety const config: ConfigOptions = { tenantId: 'my-tenant', // ✓ Required invalidOption: true, // ✗ TypeScript error }; export default defineConfig(config); ``` ### Dynamic Configuration You can use environment-based logic in your workspace config: ```typescript // inkeep.config.ts at workspace root import { defineConfig } from '@inkeep/agents-cli/config'; const isDevelopment = process.env.NODE_ENV === 'development'; export default defineConfig({ tenantId: process.env.TENANT_ID || 'default-tenant', agentsManageApiUrl: isDevelopment ? 'http://localhost:3002' : 'https://api.production.com', apiTimeout: isDevelopment ? 60000 : 30000, }); ``` **Note**: This single config file manages all projects within the workspace. ### Multiple Configurations For workspaces requiring different configurations: ```typescript // inkeep.config.ts (main config at workspace root) export default defineConfig({ tenantId: 'production-tenant', agentsManageApiUrl: 'https://api.production.com', }); ``` ```typescript // inkeep.dev.config.ts (development config at workspace root) export default defineConfig({ tenantId: 'dev-tenant', agentsManageApiUrl: 'http://localhost:3002', }); ``` ```bash # Use development config (specify from any project directory) inkeep push --config ../inkeep.dev.config.ts # or with absolute path inkeep push --config /path/to/workspace/inkeep.dev.config.ts ``` ## Configuration Validation The CLI validates configuration at runtime: ### Required Fields ```typescript export default defineConfig({ // ✗ Error: tenantId is required }); ``` ### URL Validation ```typescript export default defineConfig({ tenantId: 'my-tenant', agentsManageApiUrl: 'invalid-url', // ✗ Error: Invalid URL format }); ``` ### Type Validation ```typescript export default defineConfig({ tenantId: 'my-tenant', apiTimeout: 'thirty seconds', // ✗ Error: Expected number }); ``` ## Debugging Configuration ### View Current Configuration ```bash # View all configuration values inkeep config get # View specific value inkeep config get tenantId # View with specific config file inkeep config get --config ./custom.config.ts ``` ### Configuration Sources The CLI shows where each setting comes from: ```bash inkeep config get tenantId # Output: my-tenant (from environment variable) inkeep config get agentsManageApiUrl # Output: http://localhost:3002 (from config file) ``` ## Best Practices ### 1. Environment-Specific Configs Use different configs for different environments: ```bash # Development inkeep.config.ts # Local development settings # Staging inkeep.staging.config.ts # Staging environment settings # Production inkeep.prod.config.ts # Production environment settings ``` ### 2. 
Secret Management Never commit secrets to config files: ```typescript // ✗ Bad: Hardcoded secrets export default defineConfig({ tenantId: 'my-tenant', apiKey: 'sk-secret-key', // Don't do this! }); // ✓ Good: Use environment variables export default defineConfig({ tenantId: 'my-tenant', // API keys handled via environment-specific credential configs }); ``` ### 3. Documentation Document your configuration options: ```typescript export default defineConfig({ // Production tenant for main application tenantId: 'acme-corp-prod', // Use staging API for development agentsManageApi: { url: process.env.NODE_ENV === 'development' ? 'https://api-staging.acme.com' : 'https://api.acme.com', apiKey: process.env.MANAGE_API_KEY, }, agentsRunApi: { url: process.env.NODE_ENV === 'development' ? 'https://run-staging.acme.com' : 'https://run.acme.com', apiKey: process.env.RUN_API_KEY, }, }); ``` ## Migration Guide ### Migrating from Flat to Nested Format If you're using the legacy flat format, here's how to migrate to the new nested format: **Before (Flat Format):** ```typescript import { defineConfig } from '@inkeep/agents-cli/config'; export default defineConfig({ tenantId: 'my-tenant', agentsManageApiUrl: 'http://localhost:3002', agentsRunApiUrl: 'http://localhost:3003', }); ``` **After (Nested Format):** ```typescript import { defineConfig } from '@inkeep/agents-cli/config'; import 'dotenv/config'; export default defineConfig({ tenantId: 'my-tenant', agentsManageApi: { url: 'http://localhost:3002', // Optional: Add API key for authentication apiKey: process.env.MANAGE_API_KEY, }, agentsRunApi: { url: 'http://localhost:3003', // Optional: Add API key for authentication apiKey: process.env.RUN_API_KEY, }, }); ``` **Benefits of the Nested Format:** * **Explicit API Key Management**: Store API keys directly in config (via environment variables) with clear, organized structure * **Better Organization**: Related configuration grouped together * **Type Safety**: Improved IntelliSense and type checking * **Future-Proof**: New API configuration options can be added without cluttering the top-level config * **Cleaner Environment**: No need for legacy `INKEEP_AGENTS_*_BYPASS_SECRET` environment variables **Backward Compatibility:** The CLI fully supports both formats. You can continue using the flat format without any changes, or migrate at your convenience. ## Troubleshooting ### Config Not Found ```bash Error: Could not find inkeep.config.ts ``` **Solutions:** * Ensure `inkeep.config.ts` exists at **workspace root** (same level as `package.json`) * CLI searches upward - make sure you're running from within the workspace * Use `--config` flag to specify absolute or relative path to config file * Check file name (must be exactly `inkeep.config.ts`) * Verify you're not running from a completely separate directory tree ### Invalid Configuration ```bash Error: Invalid configuration: tenantId is required ``` **Solutions:** * Add required `tenantId` field * Check for typos in field names * Verify TypeScript compilation ### Environment Issues ```bash Warning: INKEEP_TENANT_ID environment variable overrides config ``` **Solutions:** * Unset environment variable: `unset INKEEP_TENANT_ID` * Use `--config` to override with specific file * Check `.env` files for conflicting values The configuration system provides the flexibility to adapt your Inkeep Agent projects to different environments while maintaining consistency and type safety across your development workflow. 
# Context Fetchers URL: /typescript-sdk/context-fetchers Learn how to use context fetchers to fetch data from external sources and make it available to your agents *** title: Context Fetchers description: Learn how to use context fetchers to fetch data from external sources and make it available to your agents icon: "LuCirclePlus" -------------------- ## Overview Context fetchers allow you to embed real-time data from external APIs into your agent prompts. Instead of hardcoding information in your agent prompt, context fetchers dynamically retrieve fresh data for each conversation. ## Key Features * **Dynamic data retrieval**: Fetch real-time data from APIs. * **Dynamic Prompting**: Use dynamic data in your agent prompts * **Headers integration**: Use request-specific parameters to customize data fetching. * **Data transformation**: Transform API responses into the exact format your agent needs. ## Context Fetchers vs Tools * **Context Fetchers**: Pre-populate agent prompts with dynamic data * Run automatically before/during conversation startup * Data becomes part of the agent's system prompt * Perfect for: Personalized agent personas, dynamic agent guardrails * Example Prompt: `You are an assistant for ${userContext.toTemplate('user.name')} and you work for ${userContext.toTemplate('user.organization')}` * **Tools**: Enable agents to take actions or fetch data during conversations * Called by the agent when needed during the conversation * Agent decides when and how to use them * Example Tool Usage: Agent calls a "send\_email" tool or "search\_database" tool ## Basic Usage Let's create a simple context fetcher that retrieves user information: ```typescript import { agent, subAgent } from "@inkeep/agents-sdk"; import { contextConfig, fetchDefinition, headers } from "@inkeep/agents-core"; import { z } from "zod"; // 1. Define a schema for headers validation. All header keys are converted to lowercase. // In this example all incoming headers will be validated to make sure they include user_id and api_key. const personalAgentHeaders = headers({ schema: z.object({ user_id: z.string(), api_key: z.string(), }) }); // 2. Create the fetcher with const userFetcher = fetchDefinition({ id: "user-info", name: "User Information", trigger: "initialization", // Fetch only once when a conversation is started with the Agent. When set to "invocation", the fetch will be executed every time a request is made to the Agent. fetchConfig: { url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}`, method: "GET", headers: { Authorization: `Bearer ${personalAgentHeaders.toTemplate('api_key')}`, }, transform: "user", // Extract user from response, for example if the response is { "user": { "name": "John Doe", "email": "john.doe@example.com" } }, the transform will return the user object }, responseSchema: z.object({ user: z.object({ name: z.string(), email: z.string(), }), }), // Used to validate the result of the transformed api response. defaultValue: "Unable to fetch user information", }); // 3. Configure context const personalAgentContext = contextConfig({ headers: personalAgentHeaders, contextVariables: { user: userFetcher, }, }); // 4. Create and use the Sub Agent const personalAssistant = subAgent({ id: "personal-assistant", name: "Personal Assistant", description: "A personalized AI assistant", prompt: `Hello ${personalAgentContext.toTemplate('user.name')}! 
I'm your personal assistant.`, }); // Initialize the Agent export const myAgent = agent({ id: "personal-agent", name: "Personal Assistant Agent", defaultSubAgent: personalAssistant, subAgents: () => [personalAssistant], contextConfig: personalAgentContext, }); ``` ## Using Context Variables Context variables can be used in your agent prompts using JSONPath template syntax `{{contextVariableKey.field_name}}`. Use the context config's `toTemplate()` method for type-safe templating with autocomplete and validation. ```typescript const personalGraphContext = contextConfig({ headers: personalGraphHeaders, contextVariables: { user: userFetcher, }, }); const personalAgent = subAgent({ id: "personal-agent", name: "Personal Assistant", description: "A personalized AI assistant", prompt: `Hello ${personalGraphContext.toTemplate('user.name')}! I'm your personal assistant.`, }); ``` Context variables are resolved using [JSONPath notation](https://jsonpath.com). ## Data transformation The `transform` property on fetch definitions lets you extract exactly what you need from API responses using JSONPath notation: ```typescript // API returns: { "user": { "profile": { "displayName": "John Doe" } } } transform: "user.profile.displayName"; // Result: "John Doe" // API returns: { "items": [{ "name": "First Item" }, { "name": "Second Item" }] } transform: "items[0].name"; // Result: "First Item" ``` ## Best Practices 1. **Use Appropriate Triggers** * `initialization`: Use when data rarely changes * `invocation`: Use for frequently changing data 2. **Handle Errors Gracefully** * Always provide a `defaultValue` * Use appropriate response schemas ## Related documentation * [Headers](/typescript-sdk/headers) - Learn how to pass dynamic context to your agents via HTTP headers # Data Operations URL: /typescript-sdk/data-operations Learn about data operations emitted by agents and how to use the x-emit-operations header to control their visibility. *** title: Data Operations sidebarTitle: Data Operations description: Learn about data operations emitted by agents and how to use the x-emit-operations header to control their visibility. icon: LuActivity keywords: data operations, emit operations, debugging, agent events, x-emit-operations header, agent monitoring --------------------------------------------------------------------------------------------------------------- ## Overview Data operations are detailed, real-time events that provide visibility into what agents are doing during execution. They include agent reasoning, tool executions, transfers, delegations, and artifact creation. By default, these operations are hidden from end users to keep the interface clean, but they can be enabled for debugging and monitoring purposes. ## The x-emit-operations Header The `x-emit-operations` header controls whether data operations are included in the response stream. When set to `true`, the system will emit detailed operational events alongside the regular response content. ### Usage ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "Content-Type: application/json" \ -H "x-emit-operations: true" \ -d '{ "messages": [ { "role": "user", "content": "What can you do?" } ], "conversationId": "chat-1234" }' ``` ### CLI Usage In the CLI, you can toggle data operations using the `operations` command: ```bash # Start a chat session inkeep chat # Toggle data operations on/off > operations 🔧 Emit operations: ON Data operations will be shown during responses. 
> operations 🔧 Emit operations: OFF Data operations are hidden. ``` ## Data Operation Types ### Agent Events #### `agent_generate` Emitted when an agent generates content (text or structured data). ```json { "type": "data-operation", "data": { "type": "agent_generate", "label": "Agent search-agent generating response", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "parts": [ { "type": "text", "content": "I found 5 relevant documents..." } ], "generationType": "text_generation" } } } } ``` #### `agent_reasoning` Emitted when an agent is reasoning through a request or planning its approach. ```json { "type": "data-operation", "data": { "type": "agent_reasoning", "label": "Agent search-agent reasoning through request", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "parts": [ { "type": "text", "content": "I need to search for information about..." } ] } } } } ``` ### Tool Execution Events #### `tool_call` Emitted when an agent starts calling a tool or function. ```json { "type": "data-operation", "data": { "type": "tool_call", "label": "Tool call: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "args": { "query": "machine learning best practices", "limit": 10 }, "toolCallId": "call_abc123", "toolId": "tool_xyz789" } } } } ``` #### `tool_result` Emitted when a tool execution completes (success or failure). ```json { "type": "data-operation", "data": { "type": "tool_result", "label": "Tool result: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "result": { "documents": [ { "title": "ML Best Practices Guide", "url": "/docs/ml-guide", "relevance": 0.95 } ] }, "toolCallId": "call_abc123", "toolId": "tool_xyz789", "duration": 1250 } } } } ``` **Error Example:** ```json { "type": "data-operation", "data": { "type": "tool_result", "label": "Tool error: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "result": null, "toolCallId": "call_abc123", "toolId": "tool_xyz789", "duration": 500, "error": "API rate limit exceeded" } } } } ``` ### Agent Interaction Events #### `transfer` Emitted when control is transferred from one agent to another. ```json { "type": "data-operation", "data": { "type": "transfer", "label": "Agent transfer: search-agent → analysis-agent", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "fromSubAgent": "search-agent", "targetAgent": "analysis-agent", "reason": "Specialized analysis required", "context": { "searchResults": "...", "userQuery": "..." } } } } } ``` #### `delegation_sent` Emitted when an agent delegates a task to another agent. ```json { "type": "data-operation", "data": { "type": "delegation_sent", "label": "Task delegated: coordinator-agent → search-agent", "details": { "timestamp": 1726247200000, "agentId": "coordinator-agent", "data": { "delegationId": "deleg_xyz789", "fromSubAgent": "coordinator-agent", "targetAgent": "search-agent", "taskDescription": "Search for information about machine learning", "context": { "priority": "high", "deadline": "2024-01-15T10:00:00Z" } } } } } ``` #### `delegation_returned` Emitted when a delegated task is completed and returned. 
```json { "type": "data-operation", "data": { "type": "delegation_returned", "label": "Task completed: search-agent → coordinator-agent", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "delegationId": "deleg_xyz789", "fromSubAgent": "search-agent", "targetAgent": "coordinator-agent", "result": { "status": "completed", "documents": [...], "summary": "Found 5 relevant documents" } } } } } ``` ### Artifact Events #### `artifact_saved` Emitted when an agent creates or saves an artifact (document, chart, file, etc.). ```json { "type": "data-operation", "data": { "type": "artifact_saved", "label": "Artifact saved: chart", "details": { "timestamp": 1726247200000, "agentId": "analysis-agent", "data": { "artifactId": "art_123456", "taskId": "task_789", "toolCallId": "tool_abc123", "artifactType": "chart", "summaryData": { "title": "Sales Performance Q4 2023", "type": "bar_chart" }, "fullData": { "chartData": [...], "config": {...} }, "metadata": { "createdBy": "analysis-agent", "version": "1.0" } } } } } ``` ## System Events ### `agent_initializing` Emitted when the agent runtime is starting up. ```json { "type": "data-operation", "data": { "type": "agent_initializing", "details": { "sessionId": "session_abc123", "agentId": "graph_xyz789" } } } ``` ### `completion` Emitted when an agent completes its task. ```json { "type": "data-operation", "data": { "type": "completion", "details": { "agent": "search-agent", "iteration": 1 } } } ``` ### `error` Emitted when an error occurs during execution. ```json { "type": "data-operation", "data": { "type": "error", "message": "Tool execution failed: API rate limit exceeded", "agent": "search-agent", "severity": "error", "code": "RATE_LIMIT_EXCEEDED", "timestamp": 1726247200000 } } ``` ## Example: Complete Request with Data Operations Here's a complete example showing a request with data operations enabled: ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "Content-Type: application/json" \ -H "x-emit-operations: true" \ -d '{ "messages": [ { "role": "user", "content": "Create a sales report for Q4" } ], "conversationId": "chat-1234" }' ``` **Response Stream:** ```text data: {"type":"agent_initializing","details":{"sessionId":"session_abc123","agentId":"graph_xyz789"}} data: {"type":"data-operation","data":{"type":"agent_reasoning","label":"Agent coordinator-agent reasoning through request","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"parts":[{"type":"text","content":"I need to create a sales report for Q4. This will require gathering data and generating a chart."}]}}}} data: {"type":"data-operation","data":{"type":"tool_call","label":"Tool call: get_sales_data","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"toolName":"get_sales_data","args":{"quarter":"Q4","year":"2023"},"toolCallId":"call_abc123","toolId":"tool_xyz789"}}}} data: {"type":"data-operation","data":{"type":"tool_result","label":"Tool result: get_sales_data","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"toolName":"get_sales_data","result":{"sales":[...]},"toolCallId":"call_abc123","toolId":"tool_xyz789","duration":850}}}} data: {"type":"data-artifact","data":{ ... 
}} data: {"type":"data-operation","data":{"type":"artifact_saved","label":"Artifact saved: chart","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"artifactId":"art_123456","artifactType":"chart","summaryData":{"title":"Q4 Sales Report"}}}}} data: {"type":"text-start","id":"1726247200-abc123"} data: {"type":"text-delta","id":"1726247200-abc123","delta":"I've created a comprehensive Q4 sales report..."} data: {"type":"text-end","id":"1726247200-abc123"} data: {"type":"completion","details":{"agent":"coordinator-agent","iteration":1}} ``` This provides complete visibility into the agent's execution process, from initialization through reasoning, tool execution, artifact creation, and final response generation. # Environment Management URL: /typescript-sdk/environments Manage different deployment environments with environment-specific configurations and credential management *** title: Environment Management description: Manage different deployment environments with environment-specific configurations and credential management icon: "LuLayers" ---------------- ## Overview Environment management in Inkeep Agent projects enables you to maintain different configurations for development, staging, and production deployments. The `--env` flag system provides secure credential management and environment-specific settings without duplicating your core project configuration. ## Environment Structure ### Directory Layout ``` workspace-root/ ├── package.json # Workspace package.json ├── inkeep.config.ts # Base configuration (at workspace root) └── my-project/ # Individual project directory ├── index.ts # Project entry point ├── environments/ # Environment-specific configs │ ├── index.ts # Environment exports │ ├── development.env.ts # Development settings │ ├── staging.env.ts # Staging settings │ └── production.env.ts # Production settings └── ... 
```

### Environment File Format

```typescript
// environments/development.env.ts
import { registerEnvironmentSettings } from '@inkeep/agents-sdk';
import { CredentialStoreType } from '@inkeep/agents-core';

export const development = registerEnvironmentSettings({
  credentials: {
    "openai-dev": {
      id: "openai-dev",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "OPENAI_API_KEY_DEV",
      },
    },
    "anthropic-dev": {
      id: "anthropic-dev",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "ANTHROPIC_API_KEY_DEV",
      },
    },
  },
});
```

## How Environments Work

### Environment Selection Flow

```mermaid
graph TD
    A[inkeep push] --> B{--env flag provided?}
    B -->|Yes| C["Load environments/{env}.env.ts"]
    B -->|No| D[Use base configuration only]
    C --> E{Environment file exists?}
    E -->|Yes| F[Merge with base config]
    E -->|No| G[Error: Environment not found]
    F --> H[Apply environment credentials]
    D --> I[Deploy with base config]
    H --> I
    G --> J[Exit with error]
```

### Usage Examples

```bash
# Push with development environment
inkeep push --env development

# Push with production credentials
inkeep push --env production

# Push without environment (base config only)
inkeep push
```

## Environment Configuration

### Credential Management

Environments primarily manage credentials for different deployment stages:

```typescript
// environments/production.env.ts
import { registerEnvironmentSettings } from '@inkeep/agents-sdk';
import { CredentialStoreType } from '@inkeep/agents-core';

export const production = registerEnvironmentSettings({
  credentials: {
    // Production OpenAI credentials
    "openai-prod": {
      id: "openai-prod",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "OPENAI_API_KEY_PROD",
      },
    },
    // Production database credentials
    "database-prod": {
      id: "database-prod",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "DATABASE_URL_PROD",
      },
    },
    // External API credentials
    "external-api-prod": {
      id: "external-api-prod",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "EXTERNAL_API_SECRET_PROD",
      },
    },
  },
});
```

### Environment Index File

Export all environments from the index file:

```typescript
// environments/index.ts
export { development } from './development.env';
export { staging } from './staging.env';
export { production } from './production.env';
```

## Credential Store Types

### Memory Store

Loads credentials from environment variables at runtime:

```typescript
{
  type: CredentialStoreType.memory,
  credentialStoreId: "memory-default",
  retrievalParams: {
    key: "API_KEY_ENV_VAR",
  },
}
```

### File Store

Loads credentials from secure files:

```typescript
{
  type: CredentialStoreType.file,
  credentialStoreId: "file-store",
  retrievalParams: {
    filePath: "/secure/path/to/credentials.json",
    key: "apiKey",
  },
}
```

### External Store

Integrates with external credential management systems:

```typescript
{
  type: CredentialStoreType.external,
  credentialStoreId: "vault-store",
  retrievalParams: {
    endpoint: "https://vault.company.com",
    path: "secret/api-keys",
  },
}
```

## Environment Workflows

### Development Environment

```typescript
// environments/development.env.ts
export const development = registerEnvironmentSettings({
  credentials: {
    "openai-dev": {
      id: "openai-dev",
      type: CredentialStoreType.memory,
      credentialStoreId: "memory-default",
      retrievalParams: {
        key: "OPENAI_API_KEY_DEV", // Uses dev API key
      },
    },
  },
}); ``` ```bash # Set development environment variables export OPENAI_API_KEY_DEV=sk-dev-key... export ANTHROPIC_API_KEY_DEV=sk-ant-dev... # Deploy to development inkeep push --env development ``` ### Staging Environment ```typescript // environments/staging.env.ts export const staging = registerEnvironmentSettings({ credentials: { "openai-staging": { id: "openai-staging", type: CredentialStoreType.memory, credentialStoreId: "memory-default", retrievalParams: { key: "OPENAI_API_KEY_STAGING", }, }, }, }); ``` ### Production Environment ```typescript // environments/production.env.ts export const production = registerEnvironmentSettings({ credentials: { "openai-prod": { id: "openai-prod", type: CredentialStoreType.memory, credentialStoreId: "memory-default", retrievalParams: { key: "OPENAI_API_KEY_PROD", }, }, }, }); ``` ## Advanced Environment Features ### Environment-Specific Overrides While environments primarily manage credentials, they can include other settings: ```typescript // environments/development.env.ts export const development = registerEnvironmentSettings({ // Credential configuration credentials: { // ... credential definitions }, // Development-specific settings settings: { logLevel: 'debug', apiTimeout: 60000, // Longer timeout for debugging }, }); ``` ### Conditional Environments Create environments that adapt to runtime conditions: ```typescript // environments/dynamic.env.ts const isDevelopment = process.env.NODE_ENV === 'development'; export const dynamic = registerEnvironmentSettings({ credentials: { "api-key": { id: "api-key", type: CredentialStoreType.memory, credentialStoreId: "memory-default", retrievalParams: { key: isDevelopment ? "API_KEY_DEV" : "API_KEY_PROD", }, }, }, }); ``` ## Environment Variables Integration ### CLI Environment Variables The CLI respects these environment variables when using the `--env` flag: ```bash # Set environment name via environment variable export INKEEP_ENV=production inkeep push # Uses production environment automatically # Override via CLI (takes precedence) inkeep push --env development # Uses development instead ``` ### Credential Environment Variables Environment files reference environment variables for actual credential values: ```bash # Development export OPENAI_API_KEY_DEV=sk-dev-... export DATABASE_URL_DEV=postgresql://dev... # Production export OPENAI_API_KEY_PROD=sk-prod-... export DATABASE_URL_PROD=postgresql://prod... ``` ## Best Practices ### 1. Credential Isolation Keep credentials completely separate between environments: ```typescript // ✓ Good: Environment-specific credential IDs const development = registerEnvironmentSettings({ credentials: { "openai-dev": { /* dev credentials */ }, "database-dev": { /* dev database */ }, }, }); const production = registerEnvironmentSettings({ credentials: { "openai-prod": { /* prod credentials */ }, "database-prod": { /* prod database */ }, }, }); ``` ### 2. Secure Secret Management Never commit secrets to environment files: ```typescript // ✗ Bad: Hardcoded secrets export const production = registerEnvironmentSettings({ credentials: { "api-key": { value: "sk-secret-key", // Don't do this! }, }, }); // ✓ Good: Reference environment variables export const production = registerEnvironmentSettings({ credentials: { "api-key": { type: CredentialStoreType.memory, retrievalParams: { key: "API_KEY_PROD", // Loaded from environment }, }, }, }); ``` ### 3. 
Environment Naming

Use consistent, descriptive environment names:

```bash
# ✓ Good: Clear, standard names
environments/
├── development.env.ts
├── staging.env.ts
└── production.env.ts

# ✗ Avoid: Ambiguous names
environments/
├── dev.env.ts    # Too abbreviated
├── test.env.ts   # Confusing (test vs staging?)
└── live.env.ts   # Unclear (live vs production?)
```

### 4. Environment Documentation

Document what each environment is for:

```typescript
/**
 * Development environment configuration
 * - Uses development API keys
 * - Connects to local development services
 * - Enables debug logging
 */
export const development = registerEnvironmentSettings({
  credentials: {
    // Development-specific credentials...
  },
});
```

## Troubleshooting

### Environment Not Found

```bash
Error: Environment file 'environments/staging.env.ts' not found
```

**Solutions:**

* Check that the file exists in the `/environments/` directory
* Verify that the file name matches the `--env` parameter exactly
* Ensure the file exports an environment with the correct name

### Credential Loading Issues

```bash
Error: Environment variable 'OPENAI_API_KEY_PROD' not set
```

**Solutions:**

* Set the required environment variables before pushing
* Check the environment variable names in the credential config
* Verify the credential store configuration

### Environment Override Issues

```bash
Warning: No credentials found for environment 'production'
```

**Solutions:**

* Check that the environment file exports a credentials object
* Verify that credential IDs match agent requirements
* Test the environment file syntax with the TypeScript compiler

## CI/CD Integration

### GitHub Actions

```yaml
# .github/workflows/deploy.yml
name: Deploy Agents
on:
  push:
    branches: [main]

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
      - name: Install CLI
        run: npm install -g @inkeep/agents-cli
      - name: Deploy to Staging
        env:
          OPENAI_API_KEY_STAGING: ${{ secrets.OPENAI_API_KEY_STAGING }}
          ANTHROPIC_API_KEY_STAGING: ${{ secrets.ANTHROPIC_API_KEY_STAGING }}
        run: inkeep push --env staging

  deploy-production:
    runs-on: ubuntu-latest
    needs: deploy-staging
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
      - name: Install CLI
        run: npm install -g @inkeep/agents-cli
      - name: Deploy to Production
        env:
          OPENAI_API_KEY_PROD: ${{ secrets.OPENAI_API_KEY_PROD }}
          ANTHROPIC_API_KEY_PROD: ${{ secrets.ANTHROPIC_API_KEY_PROD }}
        run: inkeep push --env production
```

### Docker Integration

```dockerfile
# Dockerfile
FROM node:18

# Install CLI
RUN npm install -g @inkeep/agents-cli

# Copy project
COPY . /app
WORKDIR /app

# Set environment and deploy
ARG ENVIRONMENT=production
ENV INKEEP_ENV=${ENVIRONMENT}
CMD ["inkeep", "push"]
```

Environment management provides the foundation for secure, scalable deployment of your Inkeep Agent projects across different stages of your development lifecycle.

# Add External Agents to your Agent

URL: /typescript-sdk/external-agents

Learn how to configure and use external agents using the A2A protocol

***

title: Add External Agents to your Agent
sidebarTitle: External Agents
description: Learn how to configure and use external agents using the A2A protocol
icon: "LuGlobe"

---------------

External agents let you integrate agents built outside of Inkeep (using other frameworks or platforms) into your Agent. They communicate over the A2A (Agent‑to‑Agent) protocol so your Inkeep sub-agents can delegate tasks to them as if they were native. Note that Inkeep Agents are themselves available via an [A2A endpoint](/talk-to-your-agents/a2a) and can be used from other platforms.
Learn more about A2A:

* A2A overview on the Google Developers Blog: [A2A — a new era of agent interoperability](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
* A2A protocol site: [a2a.how](https://a2a.how/)

Example platforms that expose agents in the A2A format:

| Platform | Type | Description |
| -------- | ---- | ----------- |
| [LangGraph](https://docs.langchain.com/langgraph-platform/server-a2a) | Native | Built-in A2A endpoint & Agent Card for graph agents. |
| [Google Agent Development Kit (ADK)](https://google.github.io/adk-docs/a2a/) | Native | Official guide to build agents that expose/consume A2A. |
| [Microsoft Semantic Kernel](https://devblogs.microsoft.com/foundry/semantic-kernel-a2a-integration/) | Native | “SK now speaks A2A” with sample to expose compliant agents. |
| [Pydantic AI](https://ai.pydantic.dev/a2a/) | Native | Convenience method to publish a Pydantic AI agent as an A2A server. |
| [AWS Strands Agents SDK](https://strandsagents.com/latest/documentation/docs/user-guide/concepts/multi-agent/agent-to-agent/) | Native | A2A support in Strands for cross‑platform agent communication. |
| [CrewAI](https://codelabs.developers.google.com/intro-a2a-purchasing-concierge) | With Adapter | Use the A2A Python SDK to serve a CrewAI agent over A2A. |
| [LlamaIndex](https://a2aprotocol.ai/blog/a2a-samples-llama-index-file-chat-openrouter) | With Adapter | Example Workflows app exposed via A2A (agent + card). |

Any agent that exposes an A2A‑compatible HTTP endpoint can be integrated by providing its `baseUrl` plus headers/auth (static or dynamic).

## Creating an External Agent

Every external agent needs a unique identifier, name, description, base URL for A2A communication, and optional authentication configuration:

```typescript
import { externalAgent } from "@inkeep/agents-sdk";

const technicalSupportAgent = externalAgent({
  id: "technical-support-agent",
  name: "Technical Support Team",
  description: "External technical support specialists for complex issues",
  baseUrl: "https://api.example.com/agents/technical-support", // A2A endpoint
});
```

## External Agent Relationships

Agents can be configured to delegate tasks to external agents.

```typescript
import { subAgent, agent } from "@inkeep/agents-sdk";
import { myExternalAgent } from "./external-agents/exernal-agent-example";

// Define the customer support sub-agent with delegation capabilities
const supportSubAgent = subAgent({
  id: "support-agent",
  name: "Customer Support Sub-Agent",
  description: "Handles customer inquiries and escalates technical issues",
  prompt: `You are a customer support sub-agent that handles general customer inquiries.`,
  canDelegateTo: () => [myExternalAgent],
});

// Create the customer support agent with external agent capabilities
export const supportAgent = agent({
  id: "customer-support-agent",
  name: "Customer Support System",
  description: "Handles customer inquiries and escalates to technical teams when needed",
  defaultSubAgent: supportSubAgent,
  subAgents: () => [supportSubAgent],
});
```

## External Agent Options

Configure authentication by providing a [credential reference](/typescript-sdk/tools/credentials).
```typescript const myExternalAgent = externalAgent({ // Required name: "External Support Agent", // Human-readable agent name description: "External AI agent for specialized support", // Agent's purpose baseUrl: "https://api.example.com/agents/support", // A2A endpoint URL // Optional - Credential Reference credentialReference: myCredentialReference, }); ``` When delegating to an external agent, you can specify headers to include with every request to the external agent. These headers can be dynamic variables that are [resolved at runtime](/typescript-sdk/headers). ```typescript const supportSubAgent = subAgent({ id: "support-agent", name: "Customer Support Sub-Agent", description: "Handles customer inquiries and escalates technical issues", prompt: `You are a customer support sub-agent that handles general customer inquiries.`, canDelegateTo: () => [myExternalAgent.with({ headers: { Authorization: "Bearer {{headers.Authorization}}" } })], }); ``` | Parameter | Type | Required | Description | | --------------------- | ------------------- | -------- | --------------------------------------------------------------------------------------------------------------------- | | `id` | string | Yes | Stable agent identifier used for consistency and persistence | | `name` | string | Yes | Human-readable name for the external agent | | `description` | string | Yes | Brief description of the agent's purpose and capabilities | | `baseUrl` | string | Yes | The A2A endpoint URL where the external agent can be reached | | `credentialReference` | CredentialReference | No | Reference to dynamic credentials for authentication. See [Credentials](/typescript-sdk/tools/credentials) for details | # Headers URL: /typescript-sdk/headers Pass dynamic context to your Agents via HTTP headers for personalized interactions *** title: Headers sidebarTitle: Headers description: Pass dynamic context to your Agents via HTTP headers for personalized interactions icon: LuBoxes keywords: headers, context fetchers, prompt variables, personalization ---------------------------------------------------------------------- ## Overview Headers allow you to pass request-specific values (like user IDs, authentication tokens, or organization metadata) to your Agent at runtime via HTTP headers. These values are validated, cached per conversation, and made available throughout your Agent system for: * **Context Fetchers**: Dynamic data retrieval based on request values * **External Tools**: Authentication and personalization for API calls * **Agent Prompts**: Personalized responses using context variables ## Passing context via headers Include context values as HTTP headers when calling your agent API. These headers are validated against your configured schema and cached for the conversation. ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "user_id: u_123" \ -H "auth_token: t_abc" \ -H "org_name: Acme Corp" \ -H "Content-Type: application/json" \ -d '{ "messages": [ { "role": "user", "content": "What can you help me with?" } ], "conversationId": "conv-123" }' ``` Header keys are normalized to lowercase. Define them as lowercase in your schema and reference them as lowercase in templates. ## Configuring headers Define a schema for your headers and configure how it's used in your agent. You must include the headers schema in your context config. 
```typescript
import { z } from "zod";
import { agent, subAgent } from "@inkeep/agents-sdk";
import { contextConfig, fetchDefinition, headers } from '@inkeep/agents-core';

// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional()
  }),
});

// Create a context fetcher that uses header values with type-safe templating
const userFetcher = fetchDefinition({
  id: "user-info",
  name: "User Information",
  trigger: "initialization",
  fetchConfig: {
    url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}`,
    method: "GET",
    headers: {
      Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
    },
    // Extract the "user" key from the response. For example, if the response is
    // { "user": { "name": "John Doe", "email": "john.doe@example.com" } },
    // the transform returns the user object.
    transform: "user",
  },
  responseSchema: z.object({
    user: z.object({
      name: z.string(),
      email: z.string(),
    }),
  }),
  defaultValue: "Guest User"
});

// Configure context for your agent
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
  contextVariables: {
    user: userFetcher,
  },
});

// Create a Sub Agent that uses context variables
const personalAssistant = subAgent({
  id: "personal-assistant",
  name: "Personal Assistant",
  description: "Personalized AI assistant",
  prompt: `You are a helpful assistant for ${personalAgentContext.toTemplate('user.name')} from ${personalAgentHeaders.toTemplate('org_name')}.
  User ID: ${personalAgentHeaders.toTemplate('user_id')}
  Provide personalized assistance based on their context.`,
});

// Attach context to your Agent
const myAgent = agent({
  id: "personal-agent",
  name: "Personal Assistant Agent",
  defaultSubAgent: personalAssistant,
  subAgents: () => [personalAssistant],
  contextConfig: personalAgentContext,
});
```

## How it works

* **Validation**: Headers are validated against your configured schema when a request arrives
* **Caching**: Validated context is cached per conversation for reuse across multiple interactions
* **Reuse**: Subsequent requests with the same `conversationId` automatically use cached context values
* **Updates**: Provide new header values to update the context for an ongoing conversation

Context values persist across conversation turns. To update them, send new header values with the same conversation ID.

## Using headers in your agents

Header values can be used in your agent prompts and fetch definitions using JSONPath template syntax `{{headers.field_name}}`. You can use the headers schema's `toTemplate()` method for type-safe templating with autocomplete and validation.
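For illustration, here is a minimal sketch showing that the typed helper and a hand-written template are interchangeable. The tiny schema is hypothetical, and the exact string returned by `toTemplate()` is an assumption based on the template syntax described above:

```typescript
import { z } from "zod";
import { headers } from "@inkeep/agents-core";

// Hypothetical mini-schema, mirroring the examples on this page
const requestHeaders = headers({
  schema: z.object({ user_id: z.string() }),
});

// Assumption: toTemplate('user_id') resolves to the literal template "{{headers.user_id}}",
// so both values below should behave identically at runtime; the typed form is checked
// against the schema and offers autocomplete.
const typed = requestHeaders.toTemplate('user_id');
const manual = "{{headers.user_id}}";

// Either string can be embedded wherever templates are accepted, e.g. a fetchConfig URL:
const url = `https://api.example.com/users/${typed}`;
console.log(url, manual);
```

Prefer the typed form where possible: a typo in a hand-written template only surfaces at runtime, while `toTemplate()` can flag it when the code is compiled.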
### In Context Fetchers

Use header values to fetch dynamic data from external APIs:

```typescript
// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional()
  }),
});

const userDataFetcher = fetchDefinition({
  id: "user-data",
  name: "User Data",
  fetchConfig: {
    url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}/profile`,
    headers: {
      Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
      "X-Organization": personalAgentHeaders.toTemplate('org_name')
    },
    body: {
      includePreferences: true,
      userId: personalAgentHeaders.toTemplate('user_id')
    }
  },
  responseSchema: z.object({
    name: z.string(),
    preferences: z.record(z.unknown())
  })
});

// Configure context for your Agent
// You must include the headers schema and fetchers in your context config.
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
  contextVariables: {
    user: userDataFetcher,
  },
});
```

### In Agent Prompts

Reference context directly in agent prompts for personalization using the context config's template method:

```typescript
// Create context config with both headers and fetchers
const userContext = contextConfig({
  headers: requestHeaders,
  contextVariables: {
    userName: userDataFetcher,
  },
});

const assistantAgent = subAgent({
  prompt: `You are an assistant for ${userContext.toTemplate('userName')} from ${requestHeaders.toTemplate('org_name')}.

  User context:
  - ID: ${requestHeaders.toTemplate('user_id')}
  - Organization: ${requestHeaders.toTemplate('org_name')}

  Provide help tailored to their organization's needs.`
});
```

### In External Tools

Configure external agents or MCP servers with dynamic headers using the headers schema:

```typescript
// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional()
  }),
});

// Configure external agent
const externalServiceAgent = externalAgent({
  id: "external-service",
  baseUrl: "https://external.api.com",
  headers: {
    Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
    "X-User-Context": personalAgentHeaders.toTemplate('user_id'),
    "X-Org": personalAgentHeaders.toTemplate('org_name')
  }
});

// Configure context for your Agent with your headers schema.
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
});
```

## Best practices

* **Use lowercase keys**: Always define schema properties in lowercase and reference them as lowercase in templates
* **Validate early**: Test your schema configuration with sample headers before deploying
* **Cache wisely**: Remember that context persists per conversation, so design accordingly
* **Secure sensitive data**: For long-lived secrets, use the [Credentials](/typescript-sdk/tools/credentials) system instead of headers
* **Keep it minimal**: Only include context values that are actually needed by your agents

## Common use cases

### Multi-tenant applications

Pass tenant-specific configuration to customize agent behavior per customer:

```typescript
// Headers
"tenant_id: acme-corp"
"tenant_plan: enterprise"
"tenant_features: advanced-analytics,custom-branding"
```

### User authentication

Provide user identity and session information for personalized interactions:

```typescript
// Headers
"user_id: user_123"
"user_role: admin"
"session_token: sk_live_..."
``` ### API gateway integration Forward headers from your API gateway for consistent authentication: ```typescript // Headers "x-api-key: your-api-key" "x-request-id: req_abc123" "x-client-version: 2.0.0" ``` ## Troubleshooting ### Invalid headers errors If you receive a 400 error about invalid headers: 1. Verify your schema matches the headers you're sending 2. Ensure all header keys are lowercase 3. Check that required fields are present 4. Validate the data types match your schema ### Context not persisting If context values aren't available in subsequent requests: 1. Ensure you're using the same `conversationId` across requests 2. Verify headers are being sent correctly 3. Check that your context config is properly attached to the Agent ## Related documentation * [Context Fetchers](/typescript-sdk/context-fetchers) - Learn about fetching and caching external data * [External Agents](/typescript-sdk/external-agents) - Configure external agent integrations * [Credentials](/typescript-sdk/tools/credentials) - Manage secure credentials for your Agents # Using Langfuse for LLM Observability URL: /typescript-sdk/langfuse-usage Complete guide to using Langfuse for LLM observability, tracing, and analytics in the Inkeep Agent Framework *** title: Using Langfuse for LLM Observability sidebarTitle: Langfuse Usage description: Complete guide to using Langfuse for LLM observability, tracing, and analytics in the Inkeep Agent Framework keywords: Langfuse, LLM observability, tracing, OpenTelemetry, AI monitoring, token usage, model analytics icon: "brand/Langfuse" ---------------------- Langfuse is an open-source LLM engineering platform that provides specialized observability for AI applications, including token usage tracking, model performance analytics, and detailed LLM interaction tracing. ## Quick Start ### 1. Setup Langfuse Account First, create a Langfuse account and get your API keys: 1. **Sign up** at [Langfuse Cloud](https://cloud.langfuse.com) 2. **Create a new project** in your Langfuse dashboard 3. **Get your API keys** from the project settings: * Public Key: `pk-lf-xxxxxxxxxx` * Secret Key: `sk-lf-xxxxxxxxxx` ### 2. Configure Langfuse To integrate Langfuse with your Inkeep Agent Framework instrumentation, you need to modify your instrumentation file to include the Langfuse span processor. 
Replace the default setup with a custom NodeSDK configuration: Set your environment variables: ```bash LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxx LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxx LANGFUSE_BASE_URL=https://us.cloud.langfuse.com ``` Update your instrumentation file: ```typescript import { defaultSpanProcessors, defaultContextManager, defaultResource, defaultTextMapPropagator, defaultInstrumentations } from "@inkeep/agents-run-api/instrumentation"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { LangfuseSpanProcessor } from "@langfuse/otel"; export const defaultSDK = new NodeSDK({ resource: defaultResource, contextManager: defaultContextManager, textMapPropagator: defaultTextMapPropagator, spanProcessors: [...defaultSpanProcessors, new LangfuseSpanProcessor()], instrumentations: defaultInstrumentations, }); defaultSDK.start(); ``` Make sure to install the required dependencies in your run API directory: ```bash cd apps/run-api pnpm add @opentelemetry/sdk-node @langfuse/otel ``` #### What This Configuration Does * **Preserves all default instrumentation**: Uses the same resource, context manager, propagator, and instrumentations as the default setup * **Adds Langfuse span processor**: Extends the default span processors with Langfuse's processor for specialized LLM observability * **Maintains compatibility**: Your existing traces will continue to work while adding Langfuse-specific features ## Dataset setup and execution Use the [Inkeep Agent Cookbook](https://github.com/inkeep/agents-cookbook) repository which provides ready-to-use scripts for creating and running Langfuse dataset evaluations programmatically. #### 1. Clone the Agent Cookbook Repository ```bash git clone https://github.com/inkeep/agent-cookbook.git cd agent-cookbook/evals/langfuse-dataset-example ``` **Set up environment variables in a `.env` file:** ```bash # Langfuse configuration (required for both scripts) LANGFUSE_PUBLIC_KEY=your_langfuse_public_key LANGFUSE_SECRET_KEY=your_langfuse_secret_key LANGFUSE_BASE_URL=https://cloud.langfuse.com # Chat API configuration (for dataset runner) INKEEP_AGENTS_RUN_API_KEY=your_api_key INKEEP_AGENTS_RUN_API_URL=your_chat_api_base_url # Execution context (for dataset runner) INKEEP_TENANT_ID=your_tenant_id INKEEP_PROJECT_ID=your_project_id INKEEP_AGENT_ID=your_agent_id ``` #### 2. Initialize Dataset with Sample Data Run the basic Langfuse example to initialize a dataset with sample user messages: ```bash pnpm run langfuse-init-example ``` This script will: * Connect to your Langfuse project * Create a new dataset called "inkeep-weather-example-dataset" with sample dataset items #### 3. Run Dataset Items to Generate Traces Run dataset items to generate traces that can be evaluated: ```bash pnpm run langfuse-run-dataset ``` This script will: * Read items from your Langfuse dataset * Execute each item against your weather agent * Generate the data needed for evaluation ## Running LLM Evaluations in Langfuse Dashboard Langfuse provides a powerful web interface for running LLM evaluations without writing code. You can create datasets, set up evaluators, and run evaluations directly in the dashboard. ### Accessing the Evaluation Features 1. **Log into your Langfuse dashboard**: [https://cloud.langfuse.com](https://cloud.langfuse.com) 2. **Navigate to your project** where your agent traces are being collected 3. **Click "Evaluations"** in the left sidebar 4. 
**Click "Set up evaluator"** to begin creating evaluations ### Setting Up LLM-as-a-Judge Evaluators #### Set Up Default Evaluation Model Before creating evaluators, you need to configure a default LLM connection for evaluations: Langfuse LLM Connection setup showing OpenAI provider configuration with API key field and advanced settings **Setting up the LLM Connection:** 1. **Navigate to "Evaluator Library"** in your Langfuse dashboard 2. **Click "Set up"** next to "Default Evaluation Model" 3. **Configure the LLM connection**: * **LLM Adapter**: Select your preferred provider * **Provider Name**: Give it a descriptive name (e.g., "openai") * **API Key**: Enter your OpenAI API key (stored encrypted) * **Advanced Settings**: Configure base URL, model parameters if needed 4. **Click "Create connection"** to save #### Navigate to Evaluator Setup 1. **Go to "Evaluations"** → **"Running Evaluators"** 2. **Click "Set up evaluator"** button 3. **You'll see two main steps**: "1. Select Evaluator" and "2. Run Evaluator" #### Choose Your Evaluator Type You have two main options: ## Option A: Langfuse Managed Evaluators Langfuse provides a comprehensive catalog of **pre-built evaluators** **To use a managed evaluator:** 1. **Browse the evaluator list** and find one that matches your needs 2. **Click on the evaluator** to see its description and criteria 3. **Click "Use Selected Evaluator"** button #### Customizing Managed Evaluators for Dataset Runs Once you've selected a managed evaluator, you can **edit it to target your dataset runs**. This is particularly useful for evaluating agent performance against known test cases. ### Example: Customizing a Helpfulness Evaluator 1. **Select the "Helpfulness" evaluator** from the managed list 2. Under **Target** select dataset runs 3. **Configure variable mapping** * **{`{{input}}`}** → **Object**: Trace, **Object Variable**: Input * **{`{{generation}}`}** → **Object**: Trace, **Object Variable**: Output ## Option B: Create Custom Evaluator 1. **Click "+ Create Custom Evaluator"** button 2. **Fill in evaluator details**: * **Name**: Choose a descriptive name (e.g., "weather\_tool\_used") * **Description**: Explain what this evaluator measures * **Model**: Select evaluation model * **Prompt**: Configure a custom prompt ### Example: Customizing a Weather Tool Evaluator 1. **Prompt** ``` You are an expert evaluator for an AI agent system. Your task is to rate the correctness of tool usage on a scale from 0.0 to 1.0. Instructions: If the user’s question is not weather-related and the tool used is not get_weather_forecast, return 1.0. If the user’s question is not weather-related and the tool is get_weather_forecast, return 0.0. If the user’s question is weather-related, return 1.0 only if the tool used is get_weather_forecast; otherwise return 0.0. Input: User Question: {`{{input}}`} Tool Used: {`{{tool_used}}`} ``` 2. **Configure variable mapping**: * **{`{{input}}`}** → **Object**: Trace, **Object Variable**: Input * **{`{{tool_used}}`}** → **Object**: Span, **Object Name**: weather-forecaster.ai.toolCall, **Object Variable**: Metadata, **JsonPath**: $.attributes\["ai.toolCall.name"] Langfuse helpfulness evaluator setup screen showing evaluator configuration with variable mapping and trace targeting options ## Enable and Monitor 1. **Click "Enable Evaluator"** to start automatic evaluation 2. **Monitor evaluation progress** in the dashboard 3. 
**View evaluation results** as they complete # Project Structure URL: /typescript-sdk/project-structure Learn how to organize your Inkeep Agent projects for optimal development and deployment *** title: Project Structure description: Learn how to organize your Inkeep Agent projects for optimal development and deployment icon: "LuFolder" ---------------- ## Overview Inkeep Agent projects follow a standardized directory structure that enables the CLI to automatically discover and manage your Agents, Sub Agents, tools, and configurations. This convention-based approach simplifies project organization and deployment workflows. ## Standard Project Layout ``` workspace-root/ # Repository/workspace root ├── package.json # Workspace package.json ├── tsconfig.json # TypeScript configuration ├── inkeep.config.ts # Inkeep configuration file ├── my-agent-project/ # Individual project directory │ ├── index.ts # Project entry point │ ├── agents/ # Agent definitions │ │ ├── main-agent.ts │ │ └── support-agent.ts │ ├── tools/ # Tool definitions │ │ ├── search-tool.ts │ │ └── calculator-tool.ts │ ├── data-components/ # Data component definitions │ │ ├── user-profile.ts │ │ └── product-catalog.ts │ ├── external-agents/ # External agent definitions │ │ ├── exernal-agent-example.ts │ └── environments/ # Environment-specific configurations │ ├── index.ts │ ├── development.env.ts │ ├── staging.env.ts │ └── production.env.ts └── another-project/ # Additional projects can coexist ├── index.ts └── ... ``` ## Core Files ### `inkeep.config.ts` The configuration file at the workspace root that defines settings for all projects in this workspace: ```typescript // Located at workspace root, same level as package.json import { defineConfig } from '@inkeep/agents-cli/config'; export default defineConfig({ tenantId: 'my-company', agentsManageApiUrl: 'http://localhost:3002', agentsRunApiUrl: 'http://localhost:3003', }); ``` **Important**: This file lives at the workspace/repository root level, **not** inside individual project directories. ### `index.ts` The project entry point inside each project directory that exports your project definition: ```typescript // Located inside project directory (e.g., my-agent-project/index.ts) import { project } from '@inkeep/agents-sdk'; import { mainAgent } from './agents/main-agent'; import { supportAgent } from './agents/support-agent'; import { searchTool } from './tools/search-tool'; import { calculatorTool } from './tools/calculator-tool'; import { userProfile } from './data-components/user-profile'; export const myProject = project({ id: 'my-agent-project', name: 'My Agent Project', description: 'A comprehensive multi-agent system', subAgents: () => [mainAgent, supportAgent], tools: () => [searchTool, calculatorTool], dataComponents: () => [userProfile], }); ``` ## Directory Conventions ### `/agents/` Contains agent definitions. 
Each file typically exports one agent: ```typescript // agents/customer-support.ts import { agent, subAgent } from '@inkeep/agents-sdk'; const routerSubAgent = subAgent({ id: 'support-router', name: 'Support Router', prompt: 'Route customer inquiries to appropriate specialists', }); const billingSubAgent = subAgent({ id: 'billing-specialist', name: 'Billing Specialist', prompt: 'Handle billing and payment inquiries', }); export const customerSupportAgent = agent({ defaultSubAgent: routerSubAgent, subAgents: () => [routerSubAgent, billingSubAgent], }); ``` ### `/tools/` Tool definitions that can be used by Sub Agents: ```typescript // tools/database-query.ts import { tool } from '@inkeep/agents-sdk'; export const databaseQueryTool = tool({ id: 'db-query', name: 'Database Query Tool', description: 'Execute SQL queries against the database', inputSchema: { type: 'object', properties: { query: { type: 'string' }, database: { type: 'string' } } }, // Tool implementation... }); ``` ### `/data-components/` Data components for structured UI output: ```typescript // data-components/customer-data.ts import { dataComponent } from '@inkeep/agents-sdk'; import { z } from 'zod'; export const customerData = dataComponent({ id: 'customer-data', name: 'Customer Information', description: 'Customer profile and interaction history', props: z.object({ customerId: z.string().describe("Customer ID"), name: z.string().describe("Customer name"), email: z.string().describe("Customer email"), }), }); ``` ### `/external-agents/` External agent definitions: ```typescript import { externalAgent } from '@inkeep/agents-sdk'; export const exernalAgentExample = externalAgent({ id: 'exernal-agent-example', name: 'Exernal Agent Example', description: 'An example external agent', baseUrl: 'https://api.example.com/agents/support', credentialReference: myCredentialReference, }); ``` ### `/environments/` Environment-specific configurations for different deployment stages: ```typescript // environments/production.env.ts import { registerEnvironmentSettings } from '@inkeep/agents-sdk'; import { CredentialStoreType } from '@inkeep/agents-core'; export const production = registerEnvironmentSettings({ credentials: { "openai-prod": { id: "openai-prod", type: CredentialStoreType.memory, credentialStoreId: "memory-default", retrievalParams: { key: "OPENAI_API_KEY_PROD", }, }, }, }); ``` ## File Discovery Process The CLI automatically discovers files using these patterns: 1. **Config Discovery**: Searches for `inkeep.config.ts`: * Starts from current working directory * Traverses **upward** through parent directories until found * Looks at the same level as `package.json` and `tsconfig.json` * Can be overridden with `--config` flag 2. **Project Discovery**: Once config is found: * Uses the config file's directory as the workspace root * Scans for project subdirectories containing `index.ts` * Each project directory is treated as a separate agent project 3. **Resource Discovery**: Within each project directory: * Excludes `node_modules/` and `.git/` * Categorizes files by directory name and content * Processes dependencies and relationships 4. 
**File Categorization**: * **Index files**: `index.ts`, `main.ts` (project entry points) * **Agent files**: Files in `/agents/` directory * **Sub Agent files**: Files containing Sub Agent definitions * **Tool files**: Files in `/tools/` directory * **Data component files**: Files in `/data-components/` directory * **External agent files**: Files in `/external-agents/` directory * **Environment files**: Files in `/environments/` directory ## Best Practices ### Naming Conventions * Use kebab-case for file names: `customer-support-agent.ts` * Use camelCase for variable names: `customerSupportAgent` * Use descriptive IDs: `id: 'customer-support-router'` ### File Organization * **One primary export per file**: Each file should export one main resource * **Group related functionality**: Keep related Sub Agents in the same Agent file * **Separate concerns**: Keep tools, data components, and agents in separate directories * **Environment isolation**: Use separate files for different environments ### Dependencies * **Explicit imports**: Import all dependencies explicitly * **Circular dependency avoidance**: Structure imports to prevent circular references * **Type safety**: Use TypeScript for all configuration files ## Troubleshooting ### Common Issues **Config file not found:** ```bash Error: Could not find inkeep.config.ts in current directory or parent directories ``` * Ensure `inkeep.config.ts` exists at your **workspace root** (same level as `package.json`) * CLI searches upward from current directory - make sure you're in or below the workspace * Use `--config` flag to specify custom location if needed **Invalid project structure:** ```bash Warning: No agents found in project ``` * Check that you're running from within a project directory (containing `index.ts`) * Verify Agent files are in the project's `/agents/` subdirectory * Ensure exports are properly named and typed **Missing dependencies:** ```bash Error: Cannot resolve import './agents/missing-agent' ``` * Ensure all imported files exist within the project directory * Check relative file paths and extensions * Verify imports use correct paths relative to project root ### Validation Use the CLI to validate your project structure: ```bash # Validate project without pushing inkeep push --json # Check config resolution inkeep config get ``` ## Migration from Legacy Structures If migrating from older project structures: 1. **Move config to workspace root**: Ensure `inkeep.config.ts` is at same level as `package.json` 2. **Create project directories**: Organize agents into project subdirectories 3. **Create standard subdirectories**: Add `/agents/`, `/tools/`, `/data-components/` within each project 4. **Move files appropriately**: Organize existing files into correct project and subdirectories 5. **Update imports**: Fix import paths after restructuring 6. **Test compilation**: Run `inkeep push --json` to validate structure 7. **Update CI/CD**: Adjust build scripts for new workspace structure This standardized structure ensures your projects work seamlessly with the Inkeep CLI and can be easily shared, deployed, and maintained across different environments. 
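Returning to the config discovery behavior described in "File Discovery Process" above, here is a minimal sketch of the upward search for `inkeep.config.ts`. It is illustrative only and assumes a Node.js runtime; the CLI's actual implementation may differ:

```typescript
import { existsSync } from "node:fs";
import { dirname, join } from "node:path";

// Walk from a starting directory up toward the filesystem root,
// returning the first inkeep.config.ts found (or null if none exists).
function findInkeepConfig(startDir: string): string | null {
  let dir = startDir;
  while (true) {
    const candidate = join(dir, "inkeep.config.ts");
    if (existsSync(candidate)) return candidate;
    const parent = dirname(dir);
    if (parent === dir) return null; // reached the filesystem root without finding it
    dir = parent;
  }
}

// Example: resolve the workspace config from the current working directory
const configPath = findInkeepConfig(process.cwd());
console.log(configPath ?? "Could not find inkeep.config.ts in current directory or parent directories");
```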
# Push and Pull Workflows URL: /typescript-sdk/push-pull-workflows Understand the complete workflows for pushing and pulling agent projects with detailed flow diagrams *** title: Push and Pull Workflows description: Understand the complete workflows for pushing and pulling agent projects with detailed flow diagrams icon: "LuGitBranch" ------------------- ## Overview The `inkeep push` and `inkeep pull` commands implement sophisticated workflows for deploying and synchronizing agent projects. These workflows handle project discovery, configuration resolution, resource compilation, and bidirectional synchronization between local and remote environments. ## Push Workflow The push workflow deploys your local project to the Inkeep management API: ```mermaid graph TD A[inkeep push] --> B[Parse CLI Arguments] B --> C{--config provided?} C -->|Yes| D[Load specified config] C -->|No| E[Search for inkeep.config.ts] E --> F{Config found?} F -->|No| G[Error: Config not found] F -->|Yes| H[Load config file] D --> I[Apply Environment Variables] H --> I I --> J[Apply CLI Flag Overrides] J --> K{--env provided?} K -->|Yes| L[Load environment config] K -->|No| M[Use base config only] L --> N{Environment file exists?} N -->|No| O[Error: Environment not found] N -->|Yes| P[Merge environment settings] P --> Q[Project Discovery] M --> Q Q --> R[Scan project directory] R --> S[Find TypeScript files] S --> T[Categorize files by type] T --> U[Load and compile resources] U --> V{--json flag?} V -->|Yes| W[Export to JSON file] V -->|No| X[Validate project resources] X --> Y[Connect to Management API] Y --> Z[Deploy resources] Z --> AA[Print deployment summary] W --> BB[Save JSON and exit] G --> CC[Exit with error] O --> CC ``` ### Push Process Details 1. **Argument Parsing**: CLI parses command-line arguments and flags 2. **Configuration Resolution**: Loads and merges configuration from multiple sources 3. **Environment Application**: Applies environment-specific settings if specified 4. **Project Discovery**: Scans project directory for resources 5. **Resource Compilation**: Compiles TypeScript files and resolves dependencies 6. **Validation**: Validates resource configurations and relationships 7. **Deployment**: Uploads resources to management API 8. **Confirmation**: Returns deployment summary ### Configuration Resolution Flow ```mermaid graph TD A[Start Config Resolution] --> B{--config flag?} B -->|Yes| C[Load specified file] B -->|No| D[Current working directory] D --> E{inkeep.config.ts exists?} E -->|Yes| F[Load config from current dir] E -->|No| G[Check parent directory] G --> H{At filesystem root?} H -->|No| E H -->|Yes| I[Config not found error] C --> J[Base Configuration] F --> J J --> K[Apply Environment Variables] K --> L[Apply CLI Flags] L --> M{--env flag?} M -->|Yes| N[Load environments/env.env.ts from project] M -->|No| O[Final Configuration] N --> P{Environment file exists in project?} P -->|No| Q[Environment error] P -->|Yes| R[Merge environment settings] R --> O style C fill:#e1f5fe style F fill:#e1f5fe style O fill:#c8e6c9 style I fill:#ffcdd2 style Q fill:#ffcdd2 %% Notes S["📝 Config file lives at workspace root
same level as package.json"] T["📝 Projects are subdirectories
containing index.ts"] style S fill:#fff3e0 style T fill:#fff3e0 ``` ## Pull Workflow The pull workflow synchronizes local files with remote configurations: **Prerequisites:** The `inkeep pull` command uses AI (via Anthropic's Claude) to generate TypeScript files from your project configuration. You must have an `ANTHROPIC_API_KEY` environment variable set before running the pull command. ```bash export ANTHROPIC_API_KEY=your_api_key_here ``` Or add it to your `.env` file. Get your API key from the [Anthropic Console](https://console.anthropic.com/). ```mermaid graph TD A[inkeep pull] --> B[Parse CLI Arguments] B --> C[Configuration Resolution] C --> D[Connect to Management API] D --> E[Fetch project data] E --> F{--json flag?} F -->|Yes| G[Save as JSON file] F -->|No| H[Project File Discovery] H --> I[Scan local project directory] I --> J[Find TypeScript files] J --> K[Categorize files by type] K --> L[File Processing Loop] L --> M{File type?} M -->|Index| N[Full project context] M -->|Agent| O[Specific Agent data] M -->|Sub Agent| P[Specific Sub Agent data] M -->|Tool| Q[Specific tool data] M -->|Data Component| R[Specific component data] M -->|Other| S[General project context] N --> T[LLM Generation Request] O --> T P --> T Q --> T R --> T S --> T T --> U[Generate updated TypeScript] U --> V[Validate generated code] V --> W{Validation successful?} W -->|Yes| X[Write updated file] W -->|No| Y[Log generation error] X --> Z{More files?} Y --> Z Z -->|Yes| L Z -->|No| AA[Update complete] G --> BB[JSON saved] style T fill:#fff3e0 style U fill:#fff3e0 style V fill:#e8f5e8 style X fill:#e8f5e8 style Y fill:#ffebee ``` ### Pull Process Details 1. **Configuration Resolution**: Same as push workflow 2. **API Connection**: Connects to management API 3. **Data Fetching**: Retrieves complete project data from server 4. **File Discovery**: Scans local project for TypeScript files 5. **File Categorization**: Identifies file types for context-aware processing 6. **LLM Generation**: Uses AI to update files with server data 7. **Validation**: Validates generated TypeScript syntax 8. 
**File Updates**: Writes updated content to local files ### File Categorization Logic ```mermaid graph TD A[TypeScript File] --> B{Filename check} B -->|index.ts, main.ts| C[Index File] B -->|In /agents/ dir| D[Agent File] B -->|In /tools/ dir| E[Tool File] B -->|In /data-components/ dir| F[Data Component File] B -->|In /environments/ dir| G[Environment File] B -->|Contains subAgent call| H[Sub Agent File] B -->|Other| I[Generic File] C --> J[Full project context] D --> K[Relevant Agent data] E --> L[Relevant tool data] F --> M[Relevant component data] G --> N[Environment data] H --> O[Relevant Sub Agent data] I --> P[General context] style C fill:#e3f2fd style D fill:#f3e5f5 style E fill:#e8f5e8 style F fill:#fff3e0 style G fill:#fce4ec style H fill:#e1f5fe style I fill:#f5f5f5 ``` ## Resource Processing Flow Both push and pull operations process project resources systematically: ```mermaid graph TD A[Workspace Root] --> A1[Find inkeep.config.ts] A1 --> B[Discover Project Directories] B --> B1[Look for subdirs with index.ts] B1 --> C[Scan Project Files] C --> D[Exclude patterns] D --> E{File location} E -->|/node_modules/| F[Skip] E -->|/.git/| F E -->|project/environments/| G[Environment Config] E -->|project/*.ts files| H[Project Resource] H --> I[Parse TypeScript] I --> J[Extract exports] J --> K{Export type} K -->|project call| L[Project Definition] K -->|agent call| M[Agent Definition] K -->|subAgent call| N[Sub Agent Definition] K -->|tool call| O[Tool Definition] K -->|dataComponent call| P[Data Component] K -->|Other| Q[Generic Resource] L --> R[Compile Resources per Project] M --> R N --> R O --> R P --> R G --> S[Load Environment per Project] R --> T[Validate Dependencies] T --> U[Build Resource Graph] U --> V[Ready for Push/Pull] style F fill:#ffcdd2 style G fill:#fff9c4 style R fill:#c8e6c9 style V fill:#c8e6c9 %% Structure note W["📁 Workspace Structure:
config.ts at root
projects in subdirs"] style W fill:#fff3e0 ``` ## Error Handling and Recovery Both workflows include comprehensive error handling: ```mermaid graph TD A[Operation Start] --> B{Config Resolution} B -->|Success| C[Continue Processing] B -->|Error| D[Config Error Handler] C --> E{Project Discovery} E -->|Success| F[Resource Processing] E -->|Error| G[Project Error Handler] F --> H{Resource Validation} H -->|Success| I[API Operation] H -->|Error| J[Validation Error Handler] I --> K{API Request} K -->|Success| L[Operation Complete] K -->|Error| M[API Error Handler] D --> N[Show Config Help] N --> O[Suggest Solutions] O --> P[Exit with Code 1] G --> Q[Show Project Structure Help] Q --> O J --> R[Show Validation Errors] R --> S[Suggest Fixes] S --> P M --> T[Show API Error Details] T --> U[Retry Suggestions] U --> P style L fill:#c8e6c9 style P fill:#ffcdd2 style N fill:#fff3e0 style Q fill:#fff3e0 style R fill:#fff3e0 style T fill:#fff3e0 ``` ## Performance Optimizations The workflows include several performance optimizations: ### Parallel Processing ```mermaid graph TD A[File Discovery] --> B[Group by Type] B --> C[Parallel Processing] C --> D[Agent Files] C --> E[Tool Files] C --> F[Sub Agent Files] C --> G[Data Component Files] D --> H[Process Agents] E --> I[Process Tools] F --> J[Process Sub Agents] G --> K[Process Components] H --> L[Merge Results] I --> L J --> L K --> L L --> M[Dependency Resolution] M --> N[Final Resource Graph] ``` ### Caching Strategy ```mermaid graph LR A[API Request] --> B{Cache exists?} B -->|Yes| C{Cache valid?} B -->|No| D[Fetch from API] C -->|Yes| E[Return cached data] C -->|No| D D --> F[Update cache] F --> G[Return fresh data] style E fill:#c8e6c9 style G fill:#c8e6c9 ``` ## Best Practices ### 1. Pre-Push Validation Always validate your project before pushing: ```bash # Validate without pushing inkeep push --json # Check for TypeScript errors npx tsc --noEmit # Run tests npm test ``` ### 2. Environment-Specific Deployments Use environments for different deployment stages: ```bash # Development deployment inkeep push --env development # Staging deployment inkeep push --env staging # Production deployment (with validation) inkeep push --json && inkeep push --env production ``` ### 3. Pull with Backup Always backup before pulling changes: ```bash # Create backup branch git checkout -b backup-before-pull # Pull changes inkeep pull # Review changes git diff # Commit or revert as needed ``` ### 4. Monitoring Deployments Use logging and monitoring during deployments: ```bash # Enable debug logging DEBUG=1 inkeep push --env production # Monitor deployment inkeep list-graphs ``` These workflows provide a robust foundation for managing your Inkeep Agent projects across different environments and deployment scenarios. The visual diagrams help understand the complex interactions between configuration resolution, resource processing, and API communication that make the CLI both powerful and reliable. 
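As a rough companion to the "File Categorization Logic" diagram above, the decision order could be sketched as follows. This is only an illustration of the documented rules, not the CLI's actual code, and the function and type names are hypothetical:

```typescript
type FileCategory =
  | "index"
  | "agent"
  | "tool"
  | "data-component"
  | "environment"
  | "sub-agent"
  | "generic";

// Mirrors the diagram: filename first, then directory location, then a content-based check.
function categorizeFile(relativePath: string, source: string): FileCategory {
  const fileName = relativePath.split("/").pop() ?? "";
  if (fileName === "index.ts" || fileName === "main.ts") return "index";
  if (relativePath.includes("/agents/")) return "agent";
  if (relativePath.includes("/tools/")) return "tool";
  if (relativePath.includes("/data-components/")) return "data-component";
  if (relativePath.includes("/environments/")) return "environment";
  if (source.includes("subAgent(")) return "sub-agent"; // content check from the diagram
  return "generic";
}

// Example: a file under /tools/ is treated as a tool definition
console.log(categorizeFile("my-project/tools/search-tool.ts", "")); // "tool"
```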
# Using SigNoz for Observability URL: /typescript-sdk/signoz-usage Complete guide to using SigNoz for observability, monitoring, tracing in the Inkeep Agent Framework *** title: Using SigNoz for Observability sidebarTitle: SigNoz Usage description: Complete guide to using SigNoz for observability, monitoring, tracing in the Inkeep Agent Framework keywords: SigNoz, observability, monitoring, tracing, OpenTelemetry, APM, metrics icon: "brand/Signoz" -------------------- SigNoz is a full-stack observability platform that provides distributed tracing so that you can track requests across multiple agents and services. ### Quick Start Before using SigNoz, ensure it's properly set up and running. For setup instructions, see the [Quick Start](/get-started/traces) guide. ## Using SigNoz UI The **Traces** page provides detailed request tracing: #### Viewing Traces The SigNoz traces interface provides comprehensive visibility into your agent operations: SigNoz Traces Explorer showing inkeep-agents-run-api service traces with timestamp, service name, operation name, duration, HTTP method, and response status code columns The traces explorer shows: * **Timestamp**: When each span occurred * **Service Name**: The service that generated the span (e.g., `inkeep-agents-run-api`) * **Operation Name**: Specific operations like `ai.generateObject`, `tls.connect`, `ai.toolCall` * **Duration**: How long each operation took (in milliseconds) * **HTTP Method**: For HTTP operations, shows the method (POST, GET, etc.) * **Response Status Code**: HTTP status codes (200, 404, etc.) Key features of the traces view: * **Filtering Options**: Use the left sidebar to filter by duration, deployment environment, service name, and more * **Time Range Selection**: Choose from preset ranges or custom time periods * **Multiple Views**: Switch between List View, Traces, Time Series, and Table View * **Real-time Updates**: Traces refresh automatically to show new data * **Trace List**: Browse all traces with filtering options * **Trace Details**: Drill down into individual traces * **Span Timeline**: See the execution flow across agents #### Filtering Traces ``` # Filter by service service_name = "inkeep-agents-run-api" # Filter by operation operation = "agent.generate" # Filter by status status = "error" # Filter by duration duration > 1000ms # Filter by custom attributes agent.id = "customer-support-agent" ``` #### Analyzing Individual Traces When you click on a specific trace from the list, you'll see the detailed trace view with a flamegraph visualization: SigNoz Trace Details showing flamegraph visualization with span hierarchy, timing information, and detailed span attributes **Flamegraph Visualization:** * **Horizontal Bars**: Each bar represents a span (operation) in your trace * **Bar Width**: Proportional to the duration of the operation * **Color Coding**: * Blue bars: Successful operations * Red bars: Operations with errors **Key Information Displayed:** * **Total Spans**: Total number of operations in this trace (e.g., 122) * **Error Spans**: Number of spans that encountered errors (e.g., 19) * **Trace Duration**: Total time for the entire trace (e.g., 5.2 mins) * **Timestamp**: When the trace occurred * **Service**: The primary service (e.g., `inkeep-agents-run-api`) **Span Details Panel (Right Side):** * **Span Name & ID**: Operation name and unique identifier * **Timing**: Start time and duration * **Service & Kind**: Which service and span type (Server, Client, etc.) 
* **Status**: Success/error status code * **Attributes, Events & Links**: Additional span metadata **How to Use This View:** 1. **Identify Bottlenecks**: Look for the widest bars in the flamegraph - these represent the longest-running operations 2. **Find Errors**: Red bars indicate operations that failed - click on them to see error details 3. **Understand Flow**: Follow the vertical hierarchy to see how operations call each other 4. **Analyze Performance**: Use the timeline to see which operations run in parallel vs. sequentially 5. **Drill Down**: Click on any span to see detailed attributes, events, and error information # JSON Schema guide for components URL: /ui-components/json-schema-validation Guide for writing valid JSON Schemas for data components and artifact components *** title: JSON Schema guide for components sidebarTitle: JSON Schemas description: Guide for writing valid JSON Schemas for data components and artifact components icon: LuFileCheck keywords: JSON Schema, data components, artifact components, LLM compatibility ------------------------------------------------------------------------------ This guide shows you how to write valid JSON Schemas for your components. The framework validates these schemas to ensure they work properly with LLMs. ## Why validation matters LLMs need clear, structured information to understand how to use your components. The validation ensures: * All properties have descriptions (so the LLM knows what they're for) * Required fields are clearly marked * The schema structure is correct ## Data component props When creating data components, your `props` field must be a valid JSON Schema: ### ✅ Valid example ```json { "type": "object", "properties": { "title": { "type": "string", "description": "The title of the content item" }, "url": { "type": "string", "description": "The URL where this content can be accessed" }, "tags": { "type": "array", "description": "List of tags to categorize this content", "items": { "type": "string" } }, "priority": { "type": "string", "enum": ["low", "medium", "high"], "description": "Priority level for this content item" } }, "required": ["title", "url"] } ``` ### ❌ Common mistakes ```json { // ❌ Missing "type": "object" "properties": { "title": { "type": "string" // ❌ Missing description } } // ❌ Missing required array } ``` ## Artifact component schemas Artifact components use a single unified schema called `props` with `inPreview` indicators. 
### Unified props schema Fields marked with `inPreview: true` appear in summary views, while all fields are stored in the database: ```json { "type": "object", "properties": { "title": { "type": "string", "description": "Title of the artifact", "inPreview": true }, "status": { "type": "string", "enum": ["draft", "published", "archived"], "description": "Current status of the artifact", "inPreview": true }, "createdAt": { "type": "string", "format": "date-time", "description": "When the artifact was created", "inPreview": true }, "content": { "type": "string", "description": "Main content of the artifact" }, "metadata": { "type": "object", "description": "Additional metadata for the artifact", "properties": { "author": { "type": "string", "description": "Who created this artifact" }, "version": { "type": "string", "description": "Version number of the artifact" } }, "required": ["author"] }, "attachments": { "type": "array", "description": "Files attached to this artifact", "items": { "type": "object", "properties": { "filename": { "type": "string", "description": "Name of the attached file" }, "url": { "type": "string", "description": "URL to download the file" } }, "required": ["filename", "url"] } } }, "required": ["title", "status", "content"] } ``` ### Using Zod schemas with preview helpers You can also use Zod schemas with the preview helper: ```typescript import { z } from 'zod'; import { preview } from '@inkeep/agents-core/utils/schema-conversion'; const artifactSchema = z.object({ title: preview(z.string().describe("Title of the artifact")), status: preview(z.enum(["draft", "published", "archived"]).describe("Current status")), content: z.string().describe("Main content of the artifact"), metadata: z.object({ author: z.string().describe("Who created this artifact"), version: z.string().describe("Version number") }) }); ``` ## Validation rules The editor enforces these rules: 1. **Must be an object**: Top level must have `"type": "object"` 2. **Must have properties**: Include a `"properties"` object 3. **Must have required array**: Include a `"required"` array (even if empty) 4. **All properties need descriptions**: Every property must have a `"description"` field ### Quick template Need a starting point? Click the "Template" button in the editor, or use this: ```json { "type": "object", "properties": { "example_property": { "type": "string", "description": "Description of what this property represents" } }, "required": ["example_property"] } ``` ## Common patterns ### Optional vs required ```json { "type": "object", "properties": { "name": { "type": "string", "description": "User's full name" }, "email": { "type": "string", "description": "User's email address (optional)" } }, "required": ["name"] // email is optional since it's not in the required array } ``` ### Enums for fixed choices ```json { "type": "object", "properties": { "category": { "type": "string", "enum": ["work", "personal", "urgent"], "description": "Category to organize this item" } }, "required": ["category"] } ``` ### Arrays with specific item types ```json { "type": "object", "properties": { "images": { "type": "array", "description": "List of image URLs for this item", "items": { "type": "string", "format": "uri" } } }, "required": ["images"] } ``` That's it! The editor will guide you with real-time validation feedback as you type. 
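If you want to catch these issues before pasting a schema into the editor, the four rules above are easy to check programmatically. Here is a small, illustrative TypeScript sketch; it is not the framework's actual validator, and the function name is just for illustration.

```typescript
// validate-component-schema.ts: illustrative only; the framework performs its own validation.
type JsonSchema = {
  type?: string;
  properties?: Record<string, { description?: string; [key: string]: unknown }>;
  required?: unknown;
  [key: string]: unknown;
};

function validateComponentSchema(schema: JsonSchema): string[] {
  const errors: string[] = [];

  // Rule 1: the top level must be an object schema.
  if (schema.type !== 'object') {
    errors.push('Top level must have "type": "object"');
  }

  // Rule 2: a "properties" object must be present.
  if (!schema.properties || typeof schema.properties !== 'object') {
    errors.push('Schema must include a "properties" object');
  }

  // Rule 3: a "required" array must be present (even if empty).
  if (!Array.isArray(schema.required)) {
    errors.push('Schema must include a "required" array (it may be empty)');
  }

  // Rule 4: every property needs a description so the LLM knows what it is for.
  for (const [name, prop] of Object.entries(schema.properties ?? {})) {
    if (!prop.description) {
      errors.push(`Property "${name}" is missing a "description"`);
    }
  }

  return errors;
}

// The "common mistakes" schema above fails three of the four rules.
const issues = validateComponentSchema({ properties: { title: { type: 'string' } } });
console.log(issues);
```

Running it against the "common mistakes" example reports the missing `"type": "object"`, the missing `"required"` array, and the missing property description.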
# Context Fetchers URL: /visual-builder/context-fetchers Learn how to use context fetchers to fetch data from external sources and make it available to your agents *** title: Context Fetchers sidebarTitle: Context Fetchers description: Learn how to use context fetchers to fetch data from external sources and make it available to your agents icon: "LuCirclePlus" -------------------- ## Overview Context fetchers allow you to embed real-time data from external APIs into your agent prompts. Instead of hardcoding information in your agent prompt, context fetchers dynamically retrieve fresh data for each conversation. ## Key Features * **Dynamic data retrieval**: Fetch real-time data from APIs. * **Dynamic Prompting**: Use dynamic data in your agent prompts * **Headers integration**: Use request-specific parameters to customize data fetching. * **Data transformation**: Transform API responses into the exact format your agent needs. ## Context Fetchers vs Tools * **Context Fetchers**: Pre-populate agent prompts with dynamic data * Run automatically before/during conversation startup * Data becomes part of the agent's system prompt * Perfect for: Personalized agent personas, dynamic agent guardrails * Example Prompt: `You are an assistant for {{user.name}} and you work for {{user.organization}}` * **Tools**: Enable agents to take actions or fetch data during conversations * Called by the agent when needed during the conversation * Agent decides when and how to use them * Example Tool Usage: Agent calls a "send\_email" tool or "search\_database" tool ## Basic Usage 1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure. 2. On the right pane scroll down to the "Context Variables" section. 3. Add your context variables in JSON format. 4. Click on the "Save" button. ## Defining Context Variables The keys that you define in the Context Variables JSON object are used to reference fetched data in your agent prompts. Each key in the JSON should map to a fetch definition with the following properties: * **`id`** (required): Unique identifier for the fetch definition * **`name`** (optional): Human-readable name for the fetch definition * **`trigger`** (required): When to execute the fetch: * `"initialization"`: Fetch only once when a conversation is started with the agent * `"invocation"`: Fetch every time a request is made to the agent * **`fetchConfig`** (required): HTTP request configuration: * **`url`** (required): The API endpoint URL (supports template variables) * **`method`** (optional): HTTP method - `GET`, `POST`, `PUT`, `DELETE`, or `PATCH` (defaults to `GET`) * **`headers`** (optional): Object with string key-value pairs for HTTP headers * **`body`** (optional): Request body for POST/PUT/PATCH requests * **`transform`** (optional): JSONPath expression or JavaScript transform function to extract specific data from the response * **`timeout`** (optional): Request timeout in milliseconds (defaults to 10000) * **`responseSchema`** (optional): Valid JSON Schema object to validate the API response structure. 
* **`defaultValue`** (optional): Default value to use if the fetch fails or returns no data
* **`credential`** (optional): Reference to stored credentials for authentication

Here is an example of a valid Context Variables JSON object:

```json
{
  "userInfo": {
    "id": "user-info",
    "name": "User Information",
    "trigger": "initialization",
    "fetchConfig": {
      "url": "https://api.example.com/users/{{headers.user_id}}",
      "method": "GET",
      "headers": {
        "Authorization": "Bearer {{headers.api_key}}"
      },
      "transform": "user"
    },
    "responseSchema": {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "email": { "type": "string" }
      }
    },
    "defaultValue": "Unable to fetch user information"
  }
}
```

## Using Context Variables

Once you have defined your context variables, you can use them in your agent prompts.

1. Click on the agent you want to modify.
2. In the "Prompt" section, you can embed fetched data in the prompt using the key defined in the "Context Variables" section. Reference them using double curly braces `{{}}`.

Here is an example of an agent prompt using the context variable defined above:

```
You are a helpful assistant for {{userInfo.name}}.
```

## Related documentation

* [Headers](/visual-builder/headers) - Learn how to pass dynamic context to your agents via HTTP headers

# Headers

URL: /visual-builder/headers

Pass dynamic context to your agents via HTTP headers for personalized interactions

***

title: Headers
sidebarTitle: Headers
description: Pass dynamic context to your agents via HTTP headers for personalized interactions
icon: LuBoxes
keywords: headers, context fetchers, prompt variables, personalization
----------------------------------------------------------------------

## Overview

Headers allow you to pass request-specific values (like user IDs, authentication tokens, or organization metadata) to your agent at runtime via HTTP headers. These values are validated and made available throughout your agent system for:

* **Context Fetchers**: Dynamic data retrieval based on request values
* **External Tools**: Authentication and personalization for API calls
* **Agent Prompts**: Personalized responses using context variables

## Configuring Headers

1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure.
2. On the right pane, scroll down to the "Headers schema" section.
3. Enter the schema in JSON Schema format.
4. Click on the "Save" button.

Here is an example of a valid headers schema:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "userId": {
      "type": "string"
    }
  },
  "required": [
    "userId"
  ],
  "additionalProperties": {}
}
```

You can generate custom schemas using this [JSON Schema generator](https://transform.tools/json-to-json-schema).

## Sending Custom Headers

1. On your agent page, click on the **Try it** button in the top right corner.
2. Click on the "Custom headers" button in the top right corner.
3. Enter the custom headers in JSON format.
4. Click on the "Apply" button.

## Using Headers in Your Agent Prompts

1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure.
2. Either add a new agent or edit an existing agent by clicking on the agent you want to edit.
3. On the right pane, scroll down to the "Prompt" section.
4. Use the double curly braces `{{}}` to reference the headers variables.

Here is an example of a valid prompt:

```text
You are a helpful assistant for {{headers.userId}}!
``` # Project Management URL: /visual-builder/project-management Learn how to create and manage projects in the Inkeep Agent Framework *** title: Project Management description: Learn how to create and manage projects in the Inkeep Agent Framework icon: "LuFolder" ---------------- ## Overview Projects are the top-level organizational unit in the Inkeep Agent Framework. Each project contains its own agents, tools, and resources. This allows you to separate different applications or environments within a single tenant. ## Creating a Project There are two ways to create a new project: ### Using the Project Dialog When you have existing projects, you can create a new one using the "Create Project" button in the project switcher at the bottom of the sidebar. ### First Project Creation If you don't have any projects yet, you'll be automatically redirected to the project creation page when you log in. You can also access it directly at `/{tenantId}/projects/new`. ## Project Fields When creating a project, you'll need to provide: * **Project ID**: A unique identifier for your project. This must: * Start and end with lowercase alphanumeric characters * May contain hyphens in the middle * Cannot be changed after creation * Examples: `my-project`, `production-v2`, `test-env1` * **Project Name**: A friendly display name for your project (up to 100 characters) * **Description**: A brief description of what the project is for (up to 500 characters) ## Project Structure Each project can contain: * **Agents**: Collections of AI agents that work together * **API Keys**: Authentication keys for accessing your agents via API * **MCP Servers**: Model Context Protocol servers for external tools * **Data Components**: Reusable data structures for your agents * **Artifact Components**: Reusable UI components for agent outputs * **Credentials**: Secure storage for API keys and authentication tokens ## Project-Level Configuration Projects can define default configurations that cascade down to all agents and Sub Agents within the project. This provides a consistent baseline while allowing specific overrides where needed. ### Model Settings Define default models at the project level that all agents and Sub Agents will inherit: ```typescript // Project configuration { id: "my-project", name: "My AI Project", models: { base: { model: "anthropic/claude-sonnet-4-5", providerOptions: { temperature: 0.7, maxTokens: 4096 } }, structuredOutput: { model: "openai/gpt-4.1-mini", providerOptions: { temperature: 0.1, maxTokens: 2048 } }, summarizer: { model: "openai/gpt-4.1-nano", providerOptions: { temperature: 0.3, maxTokens: 1024 } } } } ``` ### StopWhen Configuration Configure default stopping conditions to prevent infinite loops: ```typescript { id: "my-project", stopWhen: { transferCountIs: 5, // Max transfers between Sub Agents per conversation stepCountIs: 20 // Max tool calls + LLM responses per Sub Agent execution } } ``` ## Configuration Inheritance The framework uses a cascading inheritance system with specific rules for different configuration types: ### Model Inheritance - Partial Cascading **Project** → **Agent** → **Sub Agent** The framework supports **partial cascading**, meaning each level can override specific model types while inheriting others. This provides maximum flexibility while maintaining sensible defaults. 
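Conceptually, each model type is resolved independently by walking down the hierarchy. The sketch below is purely illustrative (the `ModelSettings` type and `resolveModels` function are assumptions, not the framework's internals), but it captures the merge behavior described in the walkthrough that follows.

```typescript
// Illustrative sketch of per-model-type resolution; not the framework's actual implementation.
type ModelConfig = { model: string; providerOptions?: Record<string, unknown> };
type ModelSettings = { base?: ModelConfig; structuredOutput?: ModelConfig; summarizer?: ModelConfig };

function resolveModels(project: ModelSettings, agent: ModelSettings, subAgent: ModelSettings): ModelSettings {
  // Each model type cascades independently: Sub Agent, then Agent, then Project.
  // When a level overrides a model, its whole config (including providerOptions)
  // replaces the inherited one.
  const pick = (key: keyof ModelSettings): ModelConfig | undefined =>
    subAgent[key] ?? agent[key] ?? project[key];

  const base = pick('base');

  return {
    base,
    // structuredOutput and summarizer fall back to the resolved base only when
    // no level defines them at all.
    structuredOutput: pick('structuredOutput') ?? base,
    summarizer: pick('summarizer') ?? base,
  };
}
```

The sections below show the same behavior with concrete configurations.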
#### How Partial Cascading Works

Each model type (`base`, `structuredOutput`, `summarizer`) cascades independently:

```typescript
// Project Level - Sets defaults for all agents
Project: {
  models: {
    base: { model: "anthropic/claude-sonnet-4-5" },
    structuredOutput: { model: "openai/gpt-4.1-mini" },
    summarizer: { model: "openai/gpt-4.1-nano" }
  }
}

// Agent Level - Partially overrides project defaults
Agent: {
  models: {
    base: { model: "openai/gpt-4.1" } // Override: use different base model
    // structuredOutput: INHERITED from project (openai/gpt-4.1-mini)
    // summarizer: INHERITED from project (openai/gpt-4.1-nano)
  }
}

// Sub Agent Level - Can override any inherited model
Sub Agent: {
  models: {
    summarizer: { model: "anthropic/claude-haiku-4-5" } // Override: use faster summarizer
    // base: INHERITED from Agent (openai/gpt-4.1)
    // structuredOutput: INHERITED from project (openai/gpt-4.1-mini)
  }
}
```

#### Final Resolution Example

In the above example, the Sub Agent ends up with:

* **base**: `gpt-4.1` (from Agent)
* **structuredOutput**: `gpt-4.1-mini` (from project)
* **summarizer**: `claude-haiku-4-5` (from Sub Agent)

#### Provider Options Override

When a model is overridden at any level, the entire configuration (including provider options) is replaced:

```typescript
// Project defines base model with options
Project: {
  models: {
    base: {
      model: "claude-sonnet-4-0",
      providerOptions: {
        temperature: 0.7,
        maxTokens: 4096
      }
    }
  }
}

// Agent completely overrides the base model configuration
Agent: {
  models: {
    base: {
      model: "claude-sonnet-4-0", // Same model
      providerOptions: {
        temperature: 0.3, // New temperature
        maxTokens: 2048 // New maxTokens - project maxTokens is NOT inherited
      }
    }
  }
}
```

#### Fallback to Base Model

**Important**: `structuredOutput` and `summarizer` only fall back to the `base` model when there's nothing to inherit from higher levels (project/agent).

#### System Defaults

**Supported Providers**: The framework supports Anthropic, OpenAI, and Google models.
**API Keys**: You need the appropriate API key for the provider you choose to use:

* `ANTHROPIC_API_KEY` for Anthropic models
* `OPENAI_API_KEY` for OpenAI models
* `GOOGLE_GENERATIVE_AI_API_KEY` for Google models

```typescript
// No higher-level defaults - fallback to base occurs
Project: {
  // No models configured
}

Agent: {
  // No models configured
}

Sub Agent: {
  models: {
    base: {
      model: "gpt-4.1-2025-04-14",
      providerOptions: {
        temperature: 0.7,
        maxTokens: 2048
      }
    }
    // No structuredOutput or summarizer specified
  }
}
```

**Final Resolution with Fallback:**

* **base**: `gpt-4.1-2025-04-14` with specified provider options (from Sub Agent)
* **structuredOutput**: `gpt-4.1-2025-04-14` with same provider options (falls back to the Sub Agent's base)
* **summarizer**: `gpt-4.1-2025-04-14` with same provider options (falls back to the Sub Agent's base)

**Contrast with Inheritance:**

```typescript
// Project has summarizer configured - inheritance takes priority
Project: {
  models: {
    summarizer: { model: "gpt-4.1-mini" }
  }
}

Sub Agent: {
  models: {
    base: { model: "gpt-4.1" }
  }
}
```

**Final Resolution with Inheritance:**

* **base**: `gpt-4.1` (from Sub Agent)
* **summarizer**: `gpt-4.1-mini` (inherited from project - NO fallback to Sub Agent's base)
* **structuredOutput**: Falls back to Sub Agent's base `gpt-4.1` (no inheritance available, so fallback occurs)

### StopWhen Inheritance

StopWhen settings inherit from Project → Agent → Sub Agent:

* **`transferCountIs`**: Project or Agent level
* **`stepCountIs`**: Project or Sub Agent level

```typescript
// Project sets defaults
Project: {
  stopWhen: {
    transferCountIs: 5, // Max transfers per conversation
    stepCountIs: 15 // Max steps per Sub Agent
  }
}

// Agent can override transfer limit, inherits step limit
Agent: {
  stopWhen: {
    transferCountIs: 8 // Override: allow more transfers in this Agent
    // stepCountIs: 15 (inherited from project)
  }
}

// Individual Sub Agent can override its own step limit
Sub Agent: {
  stopWhen: {
    stepCountIs: 25 // This Sub Agent can take more steps
    // transferCountIs not applicable at Sub Agent level
  }
}
```

## Switching Between Projects

Use the project switcher in the bottom left of the sidebar to quickly switch between projects. The switcher shows:

* The project name (or ID if no name is set)
* The project ID
* A brief description (if provided)

## API Access

Projects can be accessed programmatically via the CRUD API:

```typescript
// List all projects
GET /tenants/{tenantId}/crud/projects

// Get a specific project
GET /tenants/{tenantId}/crud/projects/{projectId}

// Create a new project
POST /tenants/{tenantId}/crud/projects
{
  "id": "my-project",
  "name": "My Project",
  "description": "A project for my AI agents"
}

// Update a project
PATCH /tenants/{tenantId}/crud/projects/{projectId}
{
  "name": "Updated Name",
  "description": "Updated description"
}

// Delete a project
DELETE /tenants/{tenantId}/crud/projects/{projectId}
```

## Best Practices

1. **Use descriptive IDs**: Choose project IDs that clearly indicate their purpose (e.g., `customer-support`, `internal-tools`)
2. **Separate by environment**: Create different projects for development, staging, and production
3. **Document your projects**: Use the description field to explain what each project is for
4. **Organize by application**: Group related agents and tools within the same project

## Next Steps

After creating a project, you can:

* [Create your first Agent](/visual-builder/sub-agents)
* [Configure MCP servers for tools](/visual-builder/tools/mcp-servers)
* [Set up API keys for external access](/api-keys)

# Get started with the Visual Agent Builder

URL: /visual-builder/sub-agents

Create Agents with a No-Code Visual Agent Builder

***

title: Get started with the Visual Agent Builder
sidebarTitle: Agents & Sub Agents
description: Create Agents with a No-Code Visual Agent Builder
icon: "LuSpline"
----------------

## Overview

An Agent is the top-level entity you can chat with and interact with in the Visual Builder. An Agent is made up of one or more Sub Agents. The Sub Agents that make up an Agent can delegate or transfer control to each other, share context, or use tools to respond to a user or complete a task.

You can use the Visual Builder to add Sub Agents to an Agent, give Sub Agents tools, and connect Sub Agents with each other to establish their relationships.

## Creating your first Agent