# Core concepts
URL: /concepts

Learn about the key building blocks of Inkeep - Agents, Sub Agents, tools, data components, and more.

## Agents

In Inkeep, an **Agent** is the top-level entity you can interface with via conversational experiences (chat) or trigger programmatically (via API). Under the hood, an Agent is made up of one or more **Sub Agents** that work together to respond to a user or complete a task.

## Tools

When you send a message to an Agent, it is first received by a **Default Sub Agent** that decides what to do next. In a simple Agent, there may be only one Sub Agent with a few tools available to it.

**Tools** are actions that a Sub Agent can take, like looking up information or performing a task on apps and APIs. In Inkeep, tools can be added to Sub Agents as:

- **MCP Servers**: Connect to external services and APIs via the Model Context Protocol. You can:
  - **Connect to Native MCP servers** provided directly by SaaS vendors (no building required)
  - **Access Composio's platform** for 10,000+ out-of-box MCP servers for popular services (no building required)
  - **Use Gram** to convert OpenAPI specs into MCP servers
  - **Build and deploy Custom servers** for your own APIs and business logic

  Register any of these with their associated **Credentials** for your Agents to use.

- **Function Tools**: Custom JavaScript functions that Agents can execute directly, without the need to stand up an MCP server.

Typically, you want a Sub Agent to handle narrow, well-defined tasks. As a general rule of thumb, give each Sub Agent no more than 5-7 related tools at a time.

## Sub Agent relationships

When your scenario gets complex, it can be useful to break your logic up into multiple Sub Agents that specialize in specific parts of your task or workflow. This is often referred to as a "multi-agent" system. A Sub Agent can be configured to:

- **Transfer** control of the chat to another Sub Agent. When a transfer happens, the receiving Sub Agent becomes the primary driver of the thread and can respond to the user directly.
- **Delegate** a subtask to another ('child') Sub Agent and wait for its response before proceeding with the next step. A child Sub Agent *cannot* respond directly to a user.

## Sub Agent 'turn'

When it's a Sub Agent's turn, it can choose to:

1. Send an update message to the user
2. Call a tool to collect information or take an action
3. Transfer or delegate to another Sub Agent

An Agent's execution stays in this loop until one of the Sub Agents chooses to respond to the user with a final result.

## Chatting with an Agent

The Visual Builder and TypeScript SDK work seamlessly together: define your Sub Agents in code, push them to the Visual Builder, and iterate visually.

## Projects

You can organize your related MCP Servers, Credentials, Agents, and more into **Projects**. A Project is generally used to represent a set of related scenarios. For example, you may create one Project for your support team that contains all the MCP servers and Agents related to customer support.

## CLI: Push and pull

The Inkeep CLI bridges your TypeScript SDK project and the Visual Builder. Run the following commands from your project directory (the folder that contains your `inkeep.config.ts` and an `index.ts` file that exports a project).

- **Push (code → Builder)**: Sync locally defined Agents, Sub Agents, tools, and settings from your SDK project into the Visual Builder.

  ```bash
  inkeep push
  ```

- **Pull (Builder → code)**: Fetch your project from the Visual Builder back into your SDK project. By default, the CLI uses LLM assistance to update your local TypeScript files to reflect Builder changes.

  ```bash
  inkeep pull
  ```

See the [CLI Reference](/typescript-sdk/cli-reference) for full command details.
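As a concrete (hypothetical) sketch of the `index.ts` that `inkeep push` syncs, an exported project might wire Sub Agents together with transfer and delegate relationships. The helper names below (`project`, `agent`, `subAgent`, `defaultSubAgent`, `canTransferTo`, `canDelegateTo`) are assumptions modeled on `@inkeep/agents-sdk` usage elsewhere in these docs; check the SDK reference for the exact exports and signatures in your version:

```typescript
// Hypothetical sketch of an index.ts exporting a project for `inkeep push`.
// All imported helpers are assumptions based on @inkeep/agents-sdk examples.
import { project, agent, subAgent } from "@inkeep/agents-sdk";

// A 'child' Sub Agent that handles a narrow subtask via delegation.
const lookupAgent = subAgent({
  id: "lookup",
  name: "Lookup Sub Agent",
  description: "Looks up order details",
  prompt: "Look up the requested order details and return them.",
});

// The Default Sub Agent: first to receive messages, routes the conversation.
const routerAgent = subAgent({
  id: "router",
  name: "Router",
  description: "Decides what to do with each incoming message",
  prompt: "Route each request; delegate lookups, transfer complex cases.",
  canDelegateTo: () => [lookupAgent], // wait for the child's result
});

const supportAgent = subAgent({
  id: "support",
  name: "Support Sub Agent",
  description: "Handles escalated support conversations",
  prompt: "You are a helpful support agent.",
  canTransferTo: () => [routerAgent], // hand the thread back when done
});

export const myAgent = agent({
  id: "my-agent",
  name: "My Agent",
  defaultSubAgent: routerAgent,
  subAgents: () => [routerAgent, supportAgent, lookupAgent],
});

export const myProject = project({
  id: "my-project",
  name: "My Project",
  agents: () => [myAgent],
});
```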
## Deployment

Once you've built your Agents, you can deploy them to your own infrastructure or to a hosting platform such as Vercel (see [Deploy to Vercel](/deployment/vercel)).

## Architecture

The Inkeep Agent framework is composed of several key services and libraries that work together:

- **agents-manage-api**: REST API for managing the configuration of Agents, Sub Agents, MCP Servers, Credentials, and Projects.
- **agents-manage-ui**: Visual Builder web interface for creating and managing Agents. Writes to the `agents-manage-api`.
- **agents-sdk**: TypeScript SDK (`@inkeep/agents-sdk`) for declaratively defining Agents and custom tools in code. Writes to the `agents-manage-api`.
- **agents-cli**: Various handy utilities, including `inkeep push` and `inkeep pull`, which sync your TypeScript SDK code with the Visual Builder.
- **agents-run-api**: The Runtime API that exposes Agents as APIs and executes Agent conversations. Keeps conversation state and emits OTEL traces.
- **agents-ui**: A UI component library of chat interfaces for embedding rich, dynamic conversational AI experiences in web apps.

# The No-Code + Code Agent Builder
URL: /overview

Inkeep is a platform for building Agent Chat Assistants and AI Workflows. With Inkeep, you can build AI Agents with a **No-Code Visual Builder** and a **Developer SDK**. Agents can be edited in either, with **full 2-way sync**, so technical and non-technical teams can create and manage their Agents in one platform.

## Two ways to build

### No-Code Visual Builder

A drag-and-drop canvas so any team can create and own the Agents they care about.

# Pricing
URL: /pricing

Learn about Inkeep's pricing plans and features

Inkeep offers three ways to get started: **Open Source** (free forever), **Cloud** (managed deployment), and **Enterprise** (managed platform with dedicated support).
## Feature Comparison

### Building Agents

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| No-Code Visual Builder | ✓ | ✓ | ✓ |
| Agent Developer SDK (TypeScript) | ✓ | ✓ | ✓ |
| 2-way Sync: Edit in Code or UI | ✓ | ✓ | ✓ |

### Core Framework

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Take actions on any MCP Server, App, or API | ✓ | ✓ | ✓ |
| Multi-agent Architecture (Teams of Agents) | ✓ | ✓ | ✓ |
| Agent Credential and Permissions Management | ✓ | ✓ | ✓ |
| Agent Traces available in UI and OTEL | ✓ | ✓ | ✓ |
| Talk to Agents via A2A, MCP, and Vercel AI SDK formats | ✓ | ✓ | ✓ |

### Talk to Your Agents (Out of the Box)

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| With Claude, ChatGPT, and Cursor | ✓ | ✓ | ✓ |
| With Slack, Discord, and Teams integrations | — | — | ✓ |
| With Zendesk, Salesforce, and support integrations | — | — | ✓ |

### Building Agent UIs

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Agent Messages with Custom UIs (forms, cards, etc.) | ✓ | ✓ | ✓ |
| Custom UIs using Vercel AI SDK format | ✓ | ✓ | ✓ |
| Out-of-box Chat Components (React) | ✓ | ✓ | ✓ |
| Out-of-box Chat Components (JavaScript) | — | — | ✓ |
| Answers with Inline Citations | ✓ | ✓ | ✓ |

### Unified AI Search (Managed RAG)

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Real-time fetch from databases, APIs, and the web | ✓ | ✓ | ✓ |
| Public sources ingestion (docs, help center, etc.) | — | — | ✓ |
| Private sources ingestion (Notion, Confluence, etc.) | — | — | ✓ |
| Optimized Retrieval and Search (Managed RAG) | — | — | ✓ |
| Semantic Search | — | — | ✓ |

### Insights & Analytics

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| AI Reports on Knowledge Gaps | — | — | ✓ |
| AI Reports on Product Feature Gaps | — | — | ✓ |

### Authentication and Authorization

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Single Sign-on | — | — | ✓ |
| Role-Based Access Control | — | — | ✓ |
| Audit Logs | — | — | ✓ |

### Security

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| PII Removal | — | — | ✓ |
| Uptime and Support SLAs | — | — | ✓ |
| SOC 2 Type II and Pentest Reports | — | — | ✓ |
| GDPR, HIPAA, DPA, and Infosec Reviews | — | — | ✓ |

### Deployment

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Hosting Types | Self-hosted | Cloud | Cloud, Hybrid, or Self-hosted |
| Support Type | Community | Community | Dedicated Engineering Team |

### Forward Deployed Engineer Program

| Feature | Open Source | Cloud | Enterprise |
|---------|:-----------:|:-----:|:----------:|
| Dedicated Architect and AI Agents Engineer | — | — | ✓ |
| 1:1 Office Hours and Trainings | — | — | ✓ |
| Structured Pilot | — | — | ✓ |

# Troubleshooting Guide
URL: /troubleshooting

Learn how to diagnose and resolve issues when something breaks in your Inkeep agent system.

## Overview

This guide provides a structured methodology for debugging problems across different components of your agent system.

## Step 1: Check the Timeline

The timeline is your first stop for understanding what happened during a conversation or agent execution. Navigate to the **Traces** section to view in-depth details for each conversation. Within each conversation, you'll find a clickable **error card** whenever something goes wrong during agent execution.
### What to Look For

- **Execution flow**: Review the sequence of agent actions and tool calls
- **Timing**: Check for delays or bottlenecks in the execution
- **Agent transitions**: Verify that transfers and delegations happened as expected
- **Tool usage**: Confirm that tools were called correctly and returned expected results
- **Error cards**: Look for red error indicators in the timeline and click to view detailed error information

### Error Cards in the Timeline

Clicking on an error card reveals:

- **Error type**: The specific category of error (e.g., "Agent Generation Error")
- **Exception stacktrace**: The complete stack trace showing exactly where the error occurred in the code

This detailed error information helps you pinpoint exactly what went wrong and where in your agent's execution chain.

## Step 2: Check SigNoz

SigNoz provides distributed tracing and observability for your agent system, offering deeper insights when the built-in timeline isn't sufficient.

### Accessing SigNoz from the Timeline

You can access SigNoz directly from the timeline view. In the **Traces** section, click on any activity in the conversation timeline to view its details. Within the activity details, you'll find a **"View in SigNoz"** button that takes you directly to the corresponding span in SigNoz for deeper analysis.
### What SigNoz Shows

- **Distributed traces**: End-to-end request flows across services
- **Performance metrics**: Response times, throughput, and error rates

### Key Metrics to Monitor

- **Agent response times**: How long each agent takes to process requests
- **Tool execution times**: Performance of MCP servers and external APIs
- **Error rates**: Frequency and types of failures

## Agent Stopped Unexpectedly

### StopWhen Limits Reached

If your agent stops mid-conversation, it may have hit a configured stopWhen limit:

- **Transfer limit reached**: Check `transferCountIs` on your Agent or Project - the agent stops after this many transfers between Sub Agents
- **Step limit reached**: Check `stepCountIs` on your Sub Agent or Project - execution stops after this many tool calls + LLM responses

**How to diagnose:**

- Check the timeline for the last activity before stopping
- Look for messages indicating limits were reached
- Review your stopWhen configuration in Agent/Project settings

**How to fix:**

- Increase the limits if a legitimate use case requires more steps or transfers
- Optimize your agent flow to use fewer transfers
- Investigate whether the agent is stuck in a loop (the limits working as intended)

See [Configuring StopWhen](/typescript-sdk/agent-settings#configuring-stopwhen) for more details.

## Check service logs (local development)

When running `pnpm dev` from your [quickstart workspace](/quick-start/start-development), you will see an interactive terminal interface. This interface allows you to inspect the logs of each [running service](/quick-start/start-development#service-ports). You can navigate between services using the up and down arrow keys.

![Service logs in local development](/images/agents-quickstart-pnpm-dev.png)

- The `service-info` tab displays the health of each running service.
- The `manage-api` tab contains logs for all database operations. This is useful primarily for debugging issues with [`inkeep push`](/typescript-sdk/push-pull-workflows).
- The `run-api` tab contains logs for all agent execution and tool calls. This is useful for debugging issues with your agent's behavior.
- The `mcp` tab contains logs for your [custom MCP servers](/tutorials/how-to-create-mcp-servers/inkeep).
- The `dashboard` tab displays logs for the [Visual Builder](/visual-builder/overview) dashboard.

To terminate the running services, press `q` or `esc` in the terminal.

## Common Configuration Issues

### General Configuration Issues

- **Missing environment variables**: Ensure all required env vars are set
- **Incorrect API endpoints**: Verify you're using the right URLs
- **Network connectivity**: Check firewall and proxy settings
- **Version mismatches**: Ensure all packages are compatible

### MCP Server Connection Issues

- **MCP not able to connect**: Check that the MCP server is running and accessible
- **401 Unauthorized errors**: Verify that credentials are properly configured and valid
- **Connection timeouts**: Ensure network connectivity and firewall settings allow connections

### AI Provider Configuration Problems

- **AI Provider key not defined or invalid**:
  - Ensure you have one of these environment variables set: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`
  - Verify the API key is valid and has sufficient credits
  - Check that the key hasn't expired or been revoked
- **GPT-5 access issues**:
  - Individual users cannot access GPT-5 as it requires organization verification
  - Use GPT-4 or other available models instead
  - Contact OpenAI support if you need GPT-5 access for your organization

### Credit and Rate Limiting Issues

- **Running out of credits**:
  - Monitor your OpenAI usage and billing
  - Set up usage alerts to prevent unexpected charges
- **Rate limiting by AI providers**:
  - Especially common with high-frequency operations like summarizers
  - Monitor your API usage patterns and adjust accordingly

### Context Fetcher Issues

- **Context fetcher timeouts**: Check that
external services are responding within expected timeframes

# Inkeep Agents Manage API
URL: /api-reference

REST API for the management of the Inkeep Agent Framework.

# Inkeep Agents Run API
URL: /api-reference/run-api

Chat completions, MCP, and A2A run endpoints in the Inkeep Agent Framework.

# Join & Follow
URL: /community/inkeep-community

To get help, share ideas, and provide feedback, join our community. Feel free to tag us as `@inkeep` on 𝕏 or `@Inkeep` on LinkedIn with a video of what you're building; we like to highlight neat Agent use cases from the community where possible. Also feel free to submit a PR to our [template library](https://github.com/inkeep/agents/tree/main/agents-cookbook/template-projects).

To keep up to date with all news related to AI Agents, sign up for the Agents Newsletter.

# License
URL: /community/license

License for the Inkeep Agent Framework

The Inkeep Agent Framework is licensed under the **Elastic License 2.0** ([ELv2](https://www.elastic.co/licensing/elastic-license)) subject to **Inkeep's Supplemental Terms** ([SUPPLEMENTAL_TERMS.md](https://github.com/inkeep/agents/blob/main/SUPPLEMENTAL_TERMS.md)). This is a [fair-code](https://faircode.io/), source-available license that allows broad usage while protecting against certain competitive uses.

# Deploy to Vercel
URL: /deployment/vercel

Deploy the Inkeep Agent Framework to Vercel

## Deploy to Vercel

### Step 1: Create a GitHub repository for your project

If you do not have an Inkeep project already, [follow these steps](/get-started/quick-start) to create one. Then push your project to a repository on GitHub.

### Step 2: Create a Postgres Database

Create a Postgres database on the [**Vercel Marketplace**](https://vercel.com/marketplace/neon) or directly at [**Neon**](https://neon.tech/).
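Before wiring anything into Vercel, it helps to see the shape of the Postgres connection string the next step expects. A minimal sketch with made-up Neon-style credentials (host, user, password, and database names are placeholders, not real values):

```shell
# Hypothetical Neon-style values; substitute your own database credentials.
PGUSER=app
PGPASSWORD=secret
PGHOST=ep-example-123456.us-east-2.aws.neon.tech
PGDATABASE=inkeep

# Standard Postgres connection-string shape: postgresql://user:password@host:port/database
DATABASE_URL="postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}:5432/${PGDATABASE}"
echo "$DATABASE_URL"
```

The resulting value is what you paste into `DATABASE_URL` wherever the steps below call for it.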
### Step 3: Configure Database Connection

Set your database connection string as an environment variable:

```
DATABASE_URL=
```

### Step 4: Create a Vercel account

Sign up for a Vercel account [here](https://vercel.com/signup).

### Step 5: Create a Vercel project for Manage API

![Vercel New Project - Manage API](/images/vercel-new-project-manage-api-hono.png)

Required environment variables for Manage API:

```
ENVIRONMENT=production
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
DATABASE_URL=
NANGO_SECRET_KEY=
NANGO_SERVER_URL=https://api.nango.dev
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save it; you will reuse it in Step 7. |
| `DATABASE_URL` | Postgres connection string from Step 3 (e.g., `postgresql://user:password@host:5432/database`) |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. |
| `NANGO_SERVER_URL` | `https://api.nango.dev` |

### Step 6: Create a Vercel project for Run API

![Vercel New Project - Run API](/images/vercel-new-project-run-api-hono.png)

Required environment variables for Run API:

```
ENVIRONMENT=production
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
DATABASE_URL=
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces
OTEL_EXPORTER_OTLP_TRACES_HEADERS=signoz-ingestion-key=
NANGO_SECRET_KEY=
NANGO_SERVER_URL=https://api.nango.dev
INKEEP_AGENTS_JWT_SIGNING_SECRET=
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `ANTHROPIC_API_KEY` | Your Anthropic API key |
| `OPENAI_API_KEY` | Your OpenAI API key |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Your Google Gemini API key |
| `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save it; you will reuse it in Step 7. |
| `DATABASE_URL` | Postgres connection string from Step 3 (e.g., `postgresql://user:password@host:5432/database`) |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. |
| `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | `https://ingest.us.signoz.cloud:443/v1/traces` |
| `OTEL_EXPORTER_OTLP_TRACES_HEADERS` | `signoz-ingestion-key=`. Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your ingestion key. Note: Local SigNoz setup won't work with Vercel deployments. |
| `NANGO_SERVER_URL` | `https://api.nango.dev` |
| `INKEEP_AGENTS_JWT_SIGNING_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. |

### Step 7: Create a Vercel project for Manage UI

![Vercel New Project - Manage UI](/images/vercel-new-project-manage-ui-nextjs.png)

Required environment variables for Manage UI:

```
ENVIRONMENT=production
PUBLIC_INKEEP_AGENTS_RUN_API_URL=
PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=
INKEEP_AGENTS_MANAGE_API_URL=
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
PUBLIC_SIGNOZ_URL=https://.signoz.cloud
SIGNOZ_API_KEY=
PUBLIC_NANGO_SERVER_URL=https://api.nango.dev
PUBLIC_NANGO_CONNECT_BASE_URL=https://connect.nango.dev
NANGO_SECRET_KEY=
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `PUBLIC_INKEEP_AGENTS_RUN_API_URL` | Your Vercel deployment URL for Run API |
| `PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Your generated Run API bypass secret from Step 6 |
| `PUBLIC_INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API (skip if same as `INKEEP_AGENTS_MANAGE_API_URL`) |
| `INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API |
| `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Your generated Manage API bypass secret from Step 5 |
| `PUBLIC_SIGNOZ_URL` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz URL. Note: Local SigNoz setup won't work with Vercel deployments. |
| `SIGNOZ_API_KEY` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz API key. Note: Local SigNoz setup won't work with Vercel deployments. |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/get-started/credentials#option-1-nango-cloud-setup). Note: Local Nango setup won't work with Vercel deployments. |
| `PUBLIC_NANGO_SERVER_URL` | `https://api.nango.dev` |
| `PUBLIC_NANGO_CONNECT_BASE_URL` | `https://connect.nango.dev` |

### Step 8: Enable Vercel Authentication

To prevent unauthorized access to the UI, we recommend enabling Vercel authentication for all deployments: **Settings > Deployment Protection > Vercel Authentication > All Deployments**.

### Step 9: Create a Vercel project for your MCP server (optional)

![Vercel New Project - MCP Server](/images/vercel-new-project-mcp.png)

For more information on how to add MCP servers to your project, see [Create MCP Servers](/typescript-sdk/cli-reference#inkeep-add).

## Push your Agent

### Step 1: Configure your root .env file

```
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
```

### Step 2: Create a cloud configuration file

Create a new configuration file named `inkeep-cloud.config.ts` in your project's `src` directory, alongside your existing configuration file:

```typescript
// Note: the defineConfig import path below is an assumption; check the
// CLI reference for the exact export in your version of the Inkeep CLI.
import { defineConfig } from "@inkeep/agents-cli/config";

const config = defineConfig({
  tenantId: "default",
  agentsManageApi: {
    url: "https://",
    apiKey: process.env.INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET,
  },
  agentsRunApi: {
    url: "https://",
    apiKey: process.env.INKEEP_AGENTS_RUN_API_BYPASS_SECRET,
  },
});

export default config;
```

### Step 3: Push your Agent

```bash
cd /src/
inkeep push --config ../inkeep-cloud.config.ts
```

## Pull your Agent

```bash
cd /src
inkeep pull --config inkeep-cloud.config.ts
```

## Function Tools with Vercel Sandbox

When deploying to serverless environments like Vercel, you can configure [function tools](/typescript-sdk/tools/function-tools) to execute in [Vercel Sandbox](https://vercel.com/docs/vercel-sandbox) MicroVMs instead of your Agent's runtime service.
This is **required** for serverless platforms since child process spawning is restricted.

### Why Use Vercel Sandbox?

**When to use each provider:**

- **Native** - Use for traditional cloud deployments (VMs, Docker, Kubernetes), self-hosted servers, or local development
- **Vercel Sandbox** - Required for serverless platforms (Vercel, AWS Lambda, etc.), or if you'd like to isolate tool executions

### Setting Up Vercel Sandbox

#### Step 1: Get Vercel Credentials

You'll need three credentials from your Vercel account:

1. **Vercel Token** - Create an access token at [vercel.com/account/tokens](https://vercel.com/account/tokens)
2. **Team ID** - Find it in your team settings at [vercel.com/teams](https://vercel.com/teams)
3. **Project ID** - Find it in your Vercel project settings

#### Step 2: Configure Sandbox in Your Application

Update your Run API to use Vercel Sandbox. In the `apps/run-api/src` folder, create a `sandbox.ts` file:

```typescript sandbox.ts
const isProduction = process.env.ENVIRONMENT === "production";

export const sandboxConfig = isProduction
  ? {
      provider: "vercel",
      runtime: "node22", // or 'typescript'
      timeout: 60000, // 60 second timeout
      vcpus: 4, // Allocate 4 vCPUs
      teamId: process.env.SANDBOX_VERCEL_TEAM_ID!,
      projectId: process.env.SANDBOX_VERCEL_PROJECT_ID!,
      token: process.env.SANDBOX_VERCEL_TOKEN!,
    }
  : {
      provider: "native",
      runtime: "node22",
      timeout: 30000,
      vcpus: 2,
    };
```

Import it into your `index.ts` file:

```typescript index.ts
import { sandboxConfig } from "./sandbox";

// ...
const app: Hono = createExecutionApp({
  // ...
  sandboxConfig, // NEW
});
```

#### Step 3: Add Environment Variables to Run API

Add these [environment variables in your Vercel project](https://vercel.com/docs/environment-variables/managing-environment-variables#declare-an-environment-variable) to your **Run API** app:

```bash
SANDBOX_VERCEL_TOKEN=your_vercel_access_token
SANDBOX_VERCEL_TEAM_ID=team_xxxxxxxxxx
SANDBOX_VERCEL_PROJECT_ID=prj_xxxxxxxxxx
```

# Connect Your Data with Context7
URL: /connect-your-data/context7

Learn how to connect your code repositories and documentation to your agents using Context7

Context7 specializes in connecting code repositories and technical documentation to your agents. It's particularly well-suited for developers who want their agents to have access to library documentation, API specs, and code repositories.

## Supported data sources

With Context7 you can connect:

- **Code Repositories**: GitHub Repositories, GitLab, BitBucket
- **API Documentation**: OpenAPI Spec
- **Documentation Formats**: LLMs.txt
- **Websites**: Website crawling and indexing

## Getting started

### Step 1: Check if your library is already available

Context7 maintains a library of pre-indexed documentation for popular libraries and frameworks. Before creating an account, check if your library is already available: visit [context7.com](https://context7.com/) and browse the list of available libraries. If your library is listed, you can use it immediately without additional setup.

### Step 2: Create an account

If your library isn't listed or you want to connect custom sources:

1. [Sign up for Context7](https://context7.com/sign-in)
2. Complete the account setup process
3. Verify your email address if required

### Step 3: Connect your data sources

1. Log in to your Context7 dashboard
2. Add your repositories, websites, or OpenAPI specs
3. Wait for Context7 to index your content
4.
Verify that your content is accessible

### Step 4: Get the Context7 Library ID

To use a specific library with your agents, you'll need the Context7 Library ID. You can find this ID in the library's URL on context7.com.

**How to find the Library ID:**

1. Navigate to the library page on context7.com (e.g., `https://context7.com/supabase/supabase`)
2. Copy the path after the domain name
3. The Library ID is the path portion of the URL

**Example:**

- URL: `https://context7.com/supabase/supabase`
- Library ID: `supabase/supabase`

### Step 5: Register the MCP server

Register the Context7 MCP server as a tool in your agent configuration:

**Using TypeScript SDK:**

```typescript
import { mcpTool, subAgent } from "@inkeep/agents-sdk";

const context7Tool = mcpTool({
  id: "context7-docs",
  name: "context7_search",
  description: "Search code documentation and library references",
  serverUrl: "https://mcp.context7.com/mcp",
});

const devAgent = subAgent({
  id: "dev-agent",
  name: "Developer Assistant",
  description: "Helps with code questions using library documentation",
  prompt: `You are a developer assistant with access to code documentation.`,
  canUse: () => [context7Tool],
});
```

**Using Visual Builder:**

1. Go to the **MCP Servers** tab in the Visual Builder
2. Click "New MCP server"
3. Enter:
   - **Name**: `Context7 Documentation`
   - **URL**: `https://mcp.context7.com/mcp`
   - **Transport Type**: `Streamable HTTP` or `SSE`
4. Save the server
5. Add it to your agent by dragging it onto your agent canvas

### Step 6: Use the Context7 MCP server in your agent

Once your Context7 MCP server is registered, you can use it with a specific library ID (from Step 4) or let it automatically match libraries based on your query.

#### Specifying a library ID

If you know which library you want to use, specify its Context7 Library ID in your agent's prompt. This allows the Context7 MCP server to skip the library-matching step and directly retrieve documentation.
**Example:**

```typescript
const devAgent = subAgent({
  id: "react-agent",
  name: "React Assistant",
  description: "Helps with React development",
  prompt: `You are a React development assistant.
Use library ID "facebook/react" when searching for documentation.
Use the get-library-docs tool to find React documentation and examples.`,
  canUse: () => [context7Tool],
});
```

#### Automatic library matching

If you don't specify a library ID, the Context7 MCP server will automatically match libraries based on your query. The server uses the `resolve-library-id` tool to identify the appropriate library, then uses the `get-library-docs` tool to retrieve the relevant documentation.

This approach works well when:

- You're working with multiple libraries
- The library name is mentioned in the user's query
- You want the agent to dynamically select the most relevant library

# Connect Your Data with Firecrawl
URL: /connect-your-data/firecrawl

Connect websites to your agents using Firecrawl

## Overview

Firecrawl is a web scraping and web crawling platform that extracts clean content from web pages and converts it to markdown or structured JSON, ready for embedding and use in RAG pipelines.
With Firecrawl you can connect your agents to:

- **Websites**: Website crawling and indexing for extracting clean content from web pages
- **Web pages**: Individual page scraping with automatic content extraction

## RAG pipeline workflow

A complete RAG pipeline for connecting your websites to your agents works like this: crawl and scrape your site with Firecrawl, index the scraped content in Pinecone, then let your agents retrieve it through Pinecone's MCP server.

## Getting started

### Prerequisites

Before we get started, make sure you have the following:

- A [Firecrawl account](https://firecrawl.dev/)
- A [Pinecone account](https://app.pinecone.io/)
- [uv](https://docs.astral.sh/uv/) installed
- A Python virtual environment set up and activated

### Step 1: Set up Firecrawl and collect data

Install Firecrawl and retrieve your API key from [firecrawl.dev](https://firecrawl.dev):

```bash
uv pip install firecrawl-py python-dotenv
```

Save your API key to a `.env` file:

```bash title=".env"
FIRECRAWL_API_KEY=fc-YOUR-KEY-HERE
```

The following script uses Firecrawl to explore a website's structure, identify all available pages, and convert each page's content into markdown files.

```python
from firecrawl import Firecrawl
from dotenv import load_dotenv
from pathlib import Path

load_dotenv()

app = Firecrawl()

# Crawl to discover pages
crawl_result = app.crawl(
    "https://www.mayoclinic.org/drugs-supplements",
    limit=10,
    scrape_options={'formats': ['markdown']}
)

# Extract URLs
urls = [page.metadata.url for page in crawl_result.data if page.metadata and page.metadata.url]

# Batch scrape
batch_job = app.batch_scrape(urls, formats=["markdown"])

# Create a directory for our documents and save each scraped page
# as a numbered markdown file
output_dir = Path("data/documents")
output_dir.mkdir(parents=True, exist_ok=True)

for i, result in enumerate(batch_job.data):
    filename = f"doc_{i:02d}.md"
    with open(output_dir / filename, "w") as f:
        f.write(result.markdown)
```

### Step 2: Set up Pinecone Assistant and index your documents

We'll upload those markdown files to a Pinecone Assistant, which chunks and embeds them for retrieval.
First, install the required packages:

```bash
uv pip install langchain-pinecone langchain-openai pinecone langchain langchain-text-splitters
```

Set up your Pinecone API key environment variable:

```bash title=".env"
PINECONE_API_KEY=your-key
```

In your Pinecone Assistant, create a new assistant named "drug-info-rag". The code below uploads your documents to the assistant, which indexes them with their embeddings.

```python
import os
from pathlib import Path
from dotenv import load_dotenv
from pinecone import Pinecone

load_dotenv()

# Initialize Pinecone
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

# Get a handle to the assistant
assistant = pc.assistant.Assistant(
    assistant_name="drug-info-rag",
)

# Upload markdown files to the assistant
for md_file in Path("data/documents").glob("*.md"):
    response = assistant.upload_file(
        file_path=str(md_file.absolute()),
        timeout=None
    )
    print(f"Uploaded {md_file.name}: {response}")
```

### Step 3: Get your Pinecone Assistant MCP server URL

1. Navigate to the **Settings** tab in [Pinecone Assistant](https://app.pinecone.io/)
2. Copy the MCP URL provided

### Step 4: Register the MCP server

Register the Pinecone MCP server as a tool in your agent configuration. Replace `` with the MCP URL you copied in Step 3.

**Using TypeScript SDK:**

You can create your [credential](/typescript-sdk/credentials/overview) using keychain, Nango, or environment variables; in this example we use environment variables.

**Using Visual Builder:**

1. **Add a Pinecone credential:**
   - Go to the **Credentials** tab in the Visual Builder
   - Click **"New credential"**
   - Select **"Bearer authentication"**
   - Enter:
     - **Name**: `Pinecone API Key` (or your preferred name)
     - **API key**: Your Pinecone API key (found in your [Pinecone dashboard](https://app.pinecone.io/))
   - Click **"Create Credential"** to save

2.
**Register the MCP server:** - Go to the **MCP Servers** tab in the Visual Builder - Click **"New MCP server"** - Select **"Custom Server"** - Enter: - **Name**: `Pinecone Documents` - **URL**: Your MCP URL from Pinecone Settings tab - **Transport Type**: `Streamable HTTP` - **Credential**: Select the Pinecone credential you created - Click **"Create"** to save the server 3. **Add the MCP tool to your sub agent:** - Drag the Pinecone Documents MCP tool onto your agent canvas and connect it to the sub agent ### Step 5: Use the Pinecone Assistant MCP server in your agent Once you have registered your MCP server as a tool and connected it to your agent, your agent can use the Pinecone Assistant tool to search and retrieve relevant content from your uploaded documents. Ask an interesting question like, "What are the primary uses of amlodipine and atorvastatin, and how do they work in the body?" The Pinecone tool provides a `get_context` function that retrieves the most relevant document snippets from your knowledge base. **Parameters:** - `query` (required): The search query to retrieve context for - `top_k` (optional): The number of context snippets to retrieve. Defaults to 15. # Connect Your Data with Inkeep Unified Search URL: /connect-your-data/inkeep Learn how to connect your data sources to your agents using Inkeep Unified Search's comprehensive knowledge base platform Inkeep Unified Search is part of [Inkeep's Enterprise offering](https://inkeep.com/enterprise). Connect 25+ data sources to create a unified knowledge base that your agents can access. 
## Supported data sources Inkeep Unified Search supports a wide variety of data sources: - **Documentation**: OpenAPI Spec, Plain text - **Collaboration**: Confluence, Notion, Slack, Discord, Discourse, Missive - **Code Repositories**: GitHub - **Cloud Storage**: Google Drive - **Websites**: Website crawling and indexing - **Video**: YouTube - **Support Systems**: - Freshdesk Tickets - HelpScout Docs and Tickets - Zendesk Knowledge Base, Tickets, and Help Center - **Project Management**: Jira ## Getting started ### Step 1: Set up your Inkeep account If you don't have an Inkeep account yet, you can: - [Try Inkeep on your content](https://inkeep.com/demo) - Test Inkeep with your own content - [Schedule a call](https://inkeep.com/schedule-demo) - Get personalized setup assistance ### Step 2: Connect your data sources 1. Log in to your Inkeep dashboard 2. Navigate to the "Sources" tab 3. Add and configure your desired data sources (websites, GitHub repos, Notion pages, etc.) 4. Wait for Inkeep to index your content ### Step 3: Get your MCP server URL Once your data sources are connected and indexed, obtain your Inkeep MCP server URL from your Inkeep dashboard: 1. Go to the **Assistants** tab 2. Click **"Create Assistant"** 3. Select **MCP** from the dropdown 4. Copy the MCP server URL ### Step 4: Register the MCP server Register your Inkeep MCP server as a tool in your agent configuration: **Using TypeScript SDK:** ```typescript const inkeepTool = mcpTool({ id: "inkeep-knowledge-base", name: "knowledge_base", description: "Search the company knowledge base powered by Inkeep", serverUrl: "YOUR_INKEEP_MCP_SERVER_URL", // From your Inkeep dashboard }); const supportAgent = subAgent({ id: "support-agent", name: "Support Agent", description: "Answers questions using the company knowledge base", prompt: `You are a support agent with access to the company knowledge base.`, canUse: () => [inkeepTool], }); ``` **Using Visual Builder:** 1. 
Go to the **MCP Servers** tab in the Visual Builder 2. Click "New MCP server" 3. Select "Custom Server" 4. Enter: - **Name**: `Inkeep Knowledge Base` - **URL**: Your Inkeep MCP server URL - **Transport Type**: `Streamable HTTP` 5. Save the server 6. Add it to your agent by dragging it onto your agent canvas ### Step 5: Use the Inkeep MCP server in your agent Once you have registered your MCP server as a tool and connected it to your agent, it's ready to use! Click on the "Try it" button to test it out. # Connecting Your Data URL: /connect-your-data/overview Learn how to connect your data sources to your agents through MCP servers Connect your data sources to your agents through MCP servers. Supported sources include websites, GitHub repositories, documentation, knowledge bases, PDFs, and more. Your agents can search and access this content in real-time. ## How it works 1. **Set up your data source** - Configure your chosen provider and connect your data sources 2. **Get your MCP server URL** - Obtain the MCP server endpoint from your provider 3. **Register the MCP server** - Register it as a tool in the Visual Builder or TypeScript SDK 4. **Use in your agents** - Your agents can now search and access your connected data ## Choose a data provider Select a provider that supports your data sources: ## Other options Consider these additional options for connecting your data: - **[Reducto](https://reducto.ai/)** - Self-serve document processing with a free tier. Upload documents and access them via APIs or SDKs. Wrap their APIs in your own MCP server or invoke them directly using [function tools](/typescript-sdk/tools/function-tools). - **[Unstructured](https://unstructured.io/)** - Document processing platform with a free tier. Upload documents and access them via APIs or SDKs. Wrap their APIs in your own MCP server or invoke them directly using [function tools](/typescript-sdk/tools/function-tools). 
- **[Milvus](https://milvus.io/)** - Open-source vector database with self-hosting options. See their [MCP integration guide](https://milvus.io/docs/milvus_and_mcp.md) for setup instructions. Note that you must manage deployment and hosting of the MCP server yourself. - **[SingleStore](https://singlestore.com/)** - Relational database with a managed MCP server that converts natural language queries into SQL. Learn more in their [MCP server documentation](https://docs.singlestore.com/cloud/ai-services/singlestore-mcp-server/). # Connect Your Data with Pinecone URL: /connect-your-data/pinecone Connect documents to your agents using Pinecone Assistant's MCP server Pinecone is a vector database, and Pinecone Assistant helps you build production-grade chat and agent applications. Connect your documents and files to your agents using Pinecone Assistant's MCP server for semantic search and retrieval. ## Supported data sources With Pinecone Assistant you can connect: - **Documents**: DOCX (.docx), PDF (.pdf), Text (.txt) - **Structured Data**: JSON (.json) - **Documentation**: Markdown (.md) ## Getting started ### Step 1: Create a Pinecone account [Sign up for Pinecone](https://app.pinecone.io/) ### Step 2: Create an Assistant 1. Log in to your [Pinecone dashboard](https://app.pinecone.io/) 2. Navigate to the **Assistant** tab 3. Click **"Create an Assistant"** 4. Give your assistant a name ### Step 3: Upload your files 1. In your assistant, navigate to the **Files** tab (located in the top right corner) 2. Upload your documents (DOCX, JSON, Markdown, PDF, or Text files) 3. Wait for Pinecone to process and index your content ### Step 4: Get your MCP server URL 1. Navigate to the **Settings** tab in your assistant 2. Copy the MCP URL provided ### Step 5: Register the MCP server Register the Pinecone MCP server as a tool in your agent configuration. Replace `` with the MCP URL you copied in Step 4. 
**Using TypeScript SDK:** You can create your [credential](/typescript-sdk/credentials/overview) using keychain, nango, or environment variables, but in this example we use environment variables. **Using Visual Builder:** 1. **Add a Pinecone credential:** - Go to the **Credentials** tab in the Visual Builder - Click **"New credential"** - Select **"Bearer authentication"** - Enter: - **Name**: `Pinecone API Key` (or your preferred name) - **API key**: Your Pinecone API key (found in your [Pinecone dashboard](https://app.pinecone.io/)) - Click **"Create Credential"** to save 2. **Register the MCP server:** - Go to the **MCP Servers** tab in the Visual Builder - Click **"New MCP server"** - Select **"Custom Server"** - Enter: - **Name**: `Pinecone Documents` - **URL**: Your MCP URL from Pinecone Settings tab - **Transport Type**: `Streamable HTTP` - **Credential**: Select the Pinecone credential you created - Click **"Create"** to save the server 3. **Add the MCP tool to your sub agent:** - Drag the Pinecone Documents MCP tool onto your agent canvas and connect it to the sub agent ### Step 6: Use the Pinecone Assistant MCP server in your agent Once you have registered your MCP server as a tool and connected it to your agent, your agent can use the Pinecone Assistant tool to search and retrieve relevant content from your uploaded documents. The Pinecone tool provides a `get_context` function that retrieves the most relevant document snippets from your knowledge base. **Parameters:** - `query` (required): The search query to retrieve context for - `top_k` (optional): The number of context snippets to retrieve. Defaults to 15. # Connect Your Data with Ref URL: /connect-your-data/ref Learn how to connect your documentation and code repositories to your agents using Ref Ref provides a simple and focused solution for connecting your documentation and code repositories to your agents. 
With support for GitHub repositories, PDFs, and Markdown files, Ref is perfect for teams that need straightforward documentation access. ## Supported data sources With Ref you can connect: - **Code Repositories**: GitHub Repositories - **Documents**: PDF files - **Documentation**: Markdown files ## Getting started ### Step 1: Create an account 1. [Sign up for Ref](https://ref.tools/login) 2. Complete the account registration process ### Step 2: Upload your resources 1. Log in to your Ref dashboard 2. Navigate to the [Resources page](https://ref.tools/resources) 3. Upload your PDFs, Markdown files, or connect your GitHub repositories 4. Wait for Ref to process and index your content ### Step 3: Get your MCP server URL 1. Navigate to the [Install MCP page](https://ref.tools/install) 2. Copy the MCP server URL (it will start with `https://api.ref.tools/mcp?apiKey=ref-`) ### Step 4: Register the MCP server Register the Ref MCP server as a tool in your agent configuration. Replace `` with your actual API key from Step 3. **Using TypeScript SDK:** ```typescript const refTool = mcpTool({ id: "ref-documentation", name: "ref_search", description: "Search uploaded documentation and code repositories", serverUrl: "https://api.ref.tools/mcp?apiKey=ref-", }); const docAgent = subAgent({ id: "doc-agent", name: "Documentation Assistant", description: "Answers questions using uploaded documentation", prompt: `You are a documentation assistant with access to company documentation.`, canUse: () => [refTool], }); ``` **Using Visual Builder:** 1. Go to the **MCP Servers** tab in the Visual Builder 2. Click "New MCP server" 3. Select "Custom Server" 4. Enter: - **Name**: `Ref Documentation` - **URL**: `https://api.ref.tools/mcp?apiKey=ref-` (replace with your API key) - **Transport Type**: `Streamable HTTP` 5. Save the server 6. 
Add it to your agent by dragging it onto your agent canvas ### Step 5: Use the Ref MCP server in your agent Once you have registered your MCP server as a tool and connected it to your agent, it's ready to use! Click on the "Try it" button to test it out. # Push / Pull URL: /get-started/push-pull Push and pull your agents to and from the Visual Builder ## Push code to visual With Inkeep, you can define your agents in code, push them to the Visual Builder, and continue developing with the intuitive drag-and-drop interface. You can switch back to code any time. Let's walk through the process. ### Step 3: Push code to visual Navigate to your docs assistant project. ```bash cd docs-assistant ``` Use `inkeep push` to push the code to the Visual Builder. ```bash inkeep push ``` ### Step 4: Chat with your agent Refresh http://localhost:3000 and switch to the **Docs Assistant** project (in the bottom left). Under **Agents**, click on the Docs Assistant agent and press **Try it**. Ask a question about Inkeep. # Quick Start URL: /get-started/quick-start Get started with Inkeep Agents in <1min ## Launch your first agent ### Prerequisites Before getting started, ensure you have the following installed on your system: - [Node.js](https://nodejs.org/en/download/) version 22 or higher - [Docker](https://docs.docker.com/get-docker/) - [pnpm](https://pnpm.io/installation) version 10 or higher You can verify by running: ```bash node --version pnpm --version docker --version ``` ### Step 1: Create a new agents project Run the quickstart script on a target folder: ```bash npx @inkeep/create-agents my-agents ``` Navigate to the folder: ```bash cd my-agents ``` Open the folder using your coding editor. To open with Cursor, you can run `cursor .` ### Step 2: Run the setup script Ensure Docker Desktop (or Docker daemon) is running before running the setup script. 
```bash pnpm setup-dev ``` Or if you are using a cloud database, you can skip the docker database startup by running: ```bash pnpm setup-dev --skip-docker ``` Make sure your DATABASE_URL environment variable is configured for your cloud database. ### Step 3: Launch the dev environment ```bash pnpm dev ``` The Visual Builder will auto-open at http://localhost:3000. ### Step 4: Chat with your agent Navigate to the **Activities Planner** agent at http://localhost:3000, click **Try it**, and ask about fun activities at a location of your choice: # Live Debugger, Traces, and OTEL Telemetry URL: /get-started/traces Set up SigNoz to enable full observability with traces and live debugging capabilities for your agents. ## Overview The Inkeep Agent Framework provides powerful **traces** and **live debugging** capabilities powered by SigNoz. Setting up SigNoz gives you: - **Real-time trace visualization** - See exactly how your agents execute step-by-step - **Live debugging** - Debug agent conversations as they happen - **Export traces as JSON** - Copy complete traces for offline analysis and debugging - **Full observability** - Complete OpenTelemetry instrumentation for monitoring - **Performance insights** - Identify bottlenecks and optimize agent performance ## Setup Options You can set up SigNoz in two ways: 1. **Cloud Setup**: Use SigNoz Cloud 2. **Local Setup**: Run SigNoz locally using Docker ## Option 1: SigNoz Cloud Setup ### Step 1: Create a SigNoz Cloud Project 1. Sign up at [SigNoz](https://signoz.io/teams/) 2. Create a new project or use an existing one ### Step 2: Save Your SigNoz Credentials You'll need to collect three pieces of information from your SigNoz dashboard: 1. **API Key**: - Navigate to Settings → Workspace Settings → API Keys → New Key - Choose any role (Admin, Editor, or Viewer) - Viewer is sufficient for observability - Set the expiration field to "No Expiry" to prevent the key from expiring - Copy the generated API key 2. 
**Ingestion Key**: - Navigate to Settings → Workspace Settings → Ingestion - Set the expiration field to "No Expiry" to prevent the key from expiring - Copy the ingestion key 3. **SigNoz URL**: - Copy the URL from your browser's address bar - It will look like: `https://.signoz.cloud` ### Step 3: Configure Your Root `.env` File ```bash # SigNoz SIGNOZ_URL=https://.signoz.cloud SIGNOZ_API_KEY= OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces OTEL_EXPORTER_OTLP_TRACES_HEADERS="signoz-ingestion-key=" ``` ### Step 4: Verify Cloud Setup 1. Restart your development environment: ```bash pnpm dev ``` 2. Generate some traces by interacting with your agents 3. Open your SigNoz cloud dashboard and navigate to "Traces" to see your agent traces ## Option 2: Local SigNoz Setup ### Prerequisites - Docker installed on your machine ### Step 1: Clone the Optional Services Repository Clone the Inkeep optional local development services repository: ```bash git clone https://github.com/inkeep/agents-optional-local-dev cd agents-optional-local-dev ``` ### Step 2: Start SigNoz Services Run the following command to start SigNoz and related services: ```bash docker-compose --profile signoz up -d ``` This will start: - SigNoz frontend (accessible at `http://localhost:3080`) - SigNoz query service - SigNoz OTEL collector - ClickHouse database When you visit `http://localhost:3080`, you can sign up with your desired credentials. ### Step 3: Configure Environment Variables In your **root project directory** (e.g., `my-agents`), update your `.env` file: ```bash # SigNoz Configuration SIGNOZ_URL=http://localhost:3080 SIGNOZ_API_KEY=your-signoz-api-key # IMPORTANT: Comment out the OTEL Configuration # OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces # OTEL_EXPORTER_OTLP_TRACES_HEADERS="signoz-ingestion-key=" ``` To get your SigNoz API key: 1. Open SigNoz at `http://localhost:3080` 2. 
Navigate to Settings → Account Settings → API Keys → New Key 3. Create a new API key or copy an existing one. - Choose any role (Admin, Editor, or Viewer) - Viewer is sufficient for observability - Set the expiration field to "No Expiry" to prevent the key from expiring ### Step 4: Verify Setup 1. Restart your Inkeep agents: ```bash pnpm dev ``` 2. Make some requests to your agents to generate traces 3. Open SigNoz at `http://localhost:3080` and navigate to the "Traces" section to see your agent traces ## Viewing Traces and Using the Live Debugger Once SigNoz is set up, you can access traces and live debugging in two ways: ### 1. Visual Builder Traces Interface If you're using the Visual Builder: 1. Open your agent project in the Visual Builder 2. Navigate to the **Traces** section 3. You'll see real-time traces of your agent executions 4. Click on any trace to see detailed execution flow and timing The traces overview shows conversation metrics and recent activity: Click on any conversation to see detailed execution flow: ### 2. SigNoz Dashboard For detailed analysis and further debugging: 1. Open your SigNoz dashboard (cloud or local) 2. Navigate to **Traces** to see all agent executions 3. Use filters to find specific conversations or agents 4. Click on traces to see: - Step-by-step execution details - Performance metrics - Error information - Agent-to-agent communication flows For more detailed information on using traces, see the [SigNoz Usage guide](/typescript-sdk/signoz-usage). ## Additional Observability and Evals 👉 For additional observability or a dedicated Evals platform, you can connect to any OTEL-based provider. For example, check out the [Langfuse Usage guide](/typescript-sdk/langfuse-usage) for end-to-end instructions. ## Next steps Next, we recommend setting up the Nango credential store for production-ready credential management. See [Credentials](/typescript-sdk/credentials/overview) to get started. 
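To sanity-check your exporter configuration before relying on the framework's built-in instrumentation, you can hand-build a minimal OTLP/HTTP JSON trace payload and POST it to the traces endpoint from your `.env`. This is an illustrative stdlib-only sketch, not how the framework emits traces: the span fields follow the OTLP JSON encoding, and the `SIGNOZ_INGESTION_KEY` environment variable name is an assumption for this example (use whatever variable holds the key from your `OTEL_EXPORTER_OTLP_TRACES_HEADERS` line).

```python
import json
import os
import secrets
import time

def build_otlp_payload(span_name: str) -> dict:
    """Build a minimal OTLP/HTTP JSON trace payload containing one span."""
    now = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": "inkeep-smoke-test"},
            }]},
            "scopeSpans": [{
                "scope": {"name": "manual-check"},
                "spans": [{
                    "traceId": secrets.token_hex(16),  # 16 bytes -> 32 hex chars
                    "spanId": secrets.token_hex(8),    # 8 bytes -> 16 hex chars
                    "name": span_name,
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now),
                    "endTimeUnixNano": str(now + 1_000_000),  # 1 ms later
                }],
            }],
        }]
    }

headers = {
    "Content-Type": "application/json",
    # Same key your OTEL_EXPORTER_OTLP_TRACES_HEADERS line carries:
    "signoz-ingestion-key": os.environ.get("SIGNOZ_INGESTION_KEY", "<your-key>"),
}
body = json.dumps(build_otlp_payload("smoke-test"))
# POST `body` with `headers` to your OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
# (e.g. with urllib.request); a 2xx response means ingestion is reachable.
```

If the span then appears under "Traces" in SigNoz, the endpoint and ingestion key are wired correctly and any missing agent traces are a framework-side configuration issue.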
# Talk to your agent via A2A (JSON-RPC) URL: /talk-to-your-agents/a2a Use the Agent2Agent JSON-RPC protocol to send messages to your agent and receive results, with optional streaming. The A2A (Agent-to-Agent) endpoint lets third-party agents, agent platforms, or agent workspaces interact with your Inkeep Agent using a standard agent protocol. Here are some example platforms that you can add Inkeep Agents to: | Platform | Description | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- | | **[Google Gemini Enterprise](https://cloud.google.com/gemini-enterprise/faq)** | Bring‑your‑own A2A agents into an enterprise agentspace and orchestrate alongside Google/vendor agents. | | **[Microsoft Copilot Studio / Azure AI Foundry](https://microsoftlearning.github.io/mslearn-ai-agents/Instructions/06-multi-remote-agents-with-a2a.html)** | Copilots can invoke external A2A agents as peer services in multi‑agent flows. | | **[Salesforce Agentforce](https://architect.salesforce.com/fundamentals/agentic-patterns)** | Add third‑party A2A agents (e.g., via AgentExchange) and compose them in CRM workflows. | | **[SAP Joule](https://learning.sap.com/courses/boosting-ai-driven-business-transformation-with-joule-agents/enabling-interoperability-for-ai-agents)** | Federate non‑SAP A2A agents into SAP’s business agent fabric. | | **[ServiceNow AI Agent Fabric](https://www.servicenow.com/community/now-assist-articles/introducing-ai-agent-fabric-enable-mcp-and-a2a-for-your-agentic/ta-p/3373907)** | Discover and call external A2A agents within IT/business automations. 
| | **[Atlassian Rovo](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/getting-started-with-the-atlassian-remote-mcp-server/)** | Configure Rovo to call external A2A agents for cross‑tool tasks. | | **[Workday AI (Agent Gateway / ASOR)](https://investor.workday.com/2025-06-03-Workday-Announces-New-AI-Agent-Partner-Network-and-Agent-Gateway-to-Power-the-Next-Generation-of-Human-and-Digital-Workforces)** | Register external customer/partner A2A agents alongside Workday agents. | ## Agent Card Discovery - **Agent-level:** `GET /agents/.well-known/agent.json` (uses Agent's default Sub Agent) - **Agent-level (dev/bypass only):** Provide `x-inkeep-sub-agent-id` in headers to target a specific agent for discovery ## Notes & Behavior - **contextId resolution:** The server first tries `task.context.conversationId` (derived from the request), then `params.message.metadata.conversationId`. Final fallback is `'default'`. - **Artifacts in responses:** Message/Task responses may include `artifacts[0].parts` as the agent's output parts. - **Errors (JSON-RPC):** Standard JSON-RPC error codes: `-32600`, `-32601`, `-32602`, `-32603`, `-32700`, plus A2A-specific `-3200x` codes. ## Development Notes - **Base URL (local):** `http://localhost:3003` - **Route Mounting:** A2A routes are mounted under `/agents`; use `/agents/a2a` for RPC and `/agents/.well-known/agent.json` for discovery - **Streaming support:** Requires agent capabilities `streaming: true` in the agent card # How to call your AI Agent using the Chat API URL: /talk-to-your-agents/chat-api Learn about details of the Vercel AI SDK data stream protocol that powers the `/chat` API endpoint. ## Overview This guide shows how to call your agent directly over HTTP and stream responses using the Vercel AI SDK data stream format. It covers the exact endpoint, headers, request body, and the event stream response you should expect. 
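Since the response arrives as Server-Sent Events carrying JSON payloads, a client needs to accumulate `data:` fields until a blank line terminates each event, then decode the JSON. Here is a framework-agnostic sketch of that framing; the `text-delta`/`delta` payload shapes in the sample are hypothetical stand-ins for illustration, not the exact Vercel AI SDK v2 part names.

```python
import json

def parse_sse_events(raw_lines):
    """Decode SSE-framed JSON: gather `data:` fields per event, split on blank lines."""
    events, buf = [], []
    for line in raw_lines:
        if line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif line == "" and buf:
            payload = "\n".join(buf)
            buf = []
            if payload == "[DONE]":  # common stream terminator sentinel
                continue
            events.append(json.loads(payload))
    return events

# Illustrative stream (event shapes are hypothetical):
sample = [
    'data: {"type": "text-delta", "delta": "Hel"}',
    "",
    'data: {"type": "text-delta", "delta": "lo"}',
    "",
    "data: [DONE]",
    "",
]
parts = parse_sse_events(sample)
text = "".join(e.get("delta", "") for e in parts)
print(text)  # prints "Hello"
```

In a real client you would feed this parser the decoded lines of the streaming HTTP response body rather than a hardcoded list.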
## Endpoint - **Path (mounted by the Run API):** `/api/chat` - **Method:** `POST` - **Protocol:** Server-Sent Events (SSE) encoded JSON, using Vercel AI SDK data-stream v2 - **Content-Type (response):** `text/event-stream` - **Response Header:** `x-vercel-ai-data-stream: v2` # MCP Server URL: /talk-to-your-agents/mcp-server Learn how to use the MCP server to talk to your agents The MCP server allows you to talk to your agents through the Model Context Protocol. ## Configuration Notes - **URL**: Point to your `agents-run-api` instance (default: `http://localhost:3003`) - **Headers**: Use the appropriate authentication mode per the section above - **Authorization**: Only required outside development mode # Overview URL: /talk-to-your-agents/overview Learn how to talk to your agents # Upgrading your Inkeep version URL: /tutorials/upgrading Upgrade the packages that make up the Inkeep Agent Framework ## Overview The Inkeep Agent Framework is composed of several npm packages: - [`@inkeep/agents-manage-api`](https://www.npmjs.com/package/@inkeep/agents-manage-api): The API for managing projects and agent configurations. - [`@inkeep/agents-run-api`](https://www.npmjs.com/package/@inkeep/agents-run-api): The API for executing conversations with your agents. - [`@inkeep/agents-manage-ui`](https://www.npmjs.com/package/@inkeep/agents-manage-ui): The UI for the visual builder and dashboard. - [`@inkeep/agents-core`](https://www.npmjs.com/package/@inkeep/agents-core): The core shared functionality of the agent framework. - [`@inkeep/agents-sdk`](https://www.npmjs.com/package/@inkeep/agents-sdk): The TypeScript SDK for building multi-agent systems. - [`@inkeep/agents-cli`](https://www.npmjs.com/package/@inkeep/agents-cli): The CLI for managing and interacting with the agent framework. - [`@inkeep/agents-ui`](https://www.npmjs.com/package/@inkeep/agents-ui): The UI library containing chat widget components. 
## Upgrading the quickstart template If you used the `npx @inkeep/create-agents` CLI command to create your workspace, run the following command from the workspace root: ```bash pnpm upgrade-agents ``` This will update all the packages to the latest version and migrate your database schema to the latest version. ## Upgrading the Agent CLI If you installed the `@inkeep/agents-cli` package globally, you can upgrade it to the latest version by running the following command: ```bash inkeep update ``` # Context Fetchers URL: /visual-builder/context-fetchers Learn how to use context fetchers to fetch data from external sources and make it available to your agents ## Overview Context fetchers allow you to embed real-time data from external APIs into your agent prompts. Instead of hardcoding information in your agent prompt, context fetchers dynamically retrieve fresh data for each conversation. ## Key Features - **Dynamic data retrieval**: Fetch real-time data from APIs. - **Dynamic Prompting**: Use dynamic data in your agent prompts - **Headers integration**: Use request-specific parameters to customize data fetching. - **Data transformation**: Transform API responses into the exact format your agent needs. ## Context Fetchers vs Tools - **Context Fetchers**: Pre-populate agent prompts with dynamic data - Run automatically before/during conversation startup - Data becomes part of the agent's system prompt - Perfect for: Personalized agent personas, dynamic agent guardrails - Example Prompt: `You are an assistant for {{user.name}} and you work for {{user.organization}}` - **Tools**: Enable agents to take actions or fetch data during conversations - Called by the agent when needed during the conversation - Agent decides when and how to use them - Example Tool Usage: Agent calls a "send_email" tool or "search_database" tool ## Basic Usage 1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure. 2. 
On the right pane scroll down to the "Context Variables" section. 3. Add your context variables in JSON format. 4. Click on the "Save" button. ## Defining Context Variables The keys that you define in the Context Variables JSON object are used to reference fetched data in your agent prompts. Each key in the JSON should map to a fetch definition with the following properties: - **`id`** (required): Unique identifier for the fetch definition - **`name`** (optional): Human-readable name for the fetch definition - **`trigger`** (required): When to execute the fetch: - `"initialization"`: Fetch only once when a conversation is started with the agent - `"invocation"`: Fetch every time a request is made to the agent - **`fetchConfig`** (required): HTTP request configuration: - **`url`** (required): The API endpoint URL (supports template variables) - **`method`** (optional): HTTP method - `GET`, `POST`, `PUT`, `DELETE`, or `PATCH` (defaults to `GET`) - **`headers`** (optional): Object with string key-value pairs for HTTP headers - **`body`** (optional): Request body for POST/PUT/PATCH requests - **`transform`** (optional): JSONPath expression or JavaScript transform function to extract specific data from the response - **`timeout`** (optional): Request timeout in milliseconds (defaults to 10000) - **`responseSchema`** (optional): Valid JSON Schema object to validate the API response structure. 
- **`defaultValue`** (optional): Default value to use if the fetch fails or returns no data - **`credential`** (optional): Reference to stored credentials for authentication Here is an example of a valid Context Variables JSON object: ```json { "timeInfo": { "id": "time-info", "name": "Time Information", "trigger": "invocation", "fetchConfig": { "url": "https://world-time-api3.p.rapidapi.com/timezone/US/Pacific", "method": "GET", "headers": { "x-rapidapi-key": "YOUR_RAPIDAPI_KEY" } }, "defaultValue": "Unable to fetch time information", "responseSchema": { "type": "object", "$schema": "http://json-schema.org/draft-07/schema#", "required": [ "datetime" ], "properties": { "datetime": { "type": "string" }, "timezone": { "type": "string" } } } } } ``` ## Using Context Variables Once you have defined your context variables, you can use them in your agent prompts. 1. Click on the agent you want to modify. 2. In the "Prompt" section, you can embed fetched data in the prompt using the key defined in the "Context Variables" section. Reference them using double curly braces `{{}}`. Here is an example of an agent prompt using the context variable defined above: ``` You are a helpful assistant, the time in the US Pacific timezone is {{timeInfo.datetime}}. ``` # Headers URL: /visual-builder/headers Pass dynamic context to your agents via HTTP headers for personalized interactions ## Overview Headers allow you to pass request-specific values (like user IDs, authentication tokens, or organization metadata) to your agent at runtime via HTTP headers. These values are validated and made available throughout your agent system for: - **Context Fetchers**: Dynamic data retrieval based on request values - **External Tools**: Authentication and personalization for API calls - **Agent Prompts**: Personalized responses using context variables ## Configuring Headers 1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure. 2. 
On the right pane scroll down to the "Headers schema" section. 3. Enter the schema in JSON Schema format. 4. Click on the "Save" button. Here is an example of a valid headers schema: ```json { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "userId": { "type": "string" } }, "required": [ "userId" ], "additionalProperties": {} } ``` You can generate custom schemas using this [JSON Schema generator](https://transform.tools/json-to-json-schema). ## Sending Custom Headers 1. On your agent page, click on the **Try it** button in the top right corner. 2. Click on the "Custom headers" button in the top right corner. 3. Enter the custom headers in JSON format. 4. Click on the "Apply" button. ## Using Headers in Your Agent Prompts 1. Go to the Agents tab in the left sidebar. Then click on the agent you want to configure. 2. Either add a new agent or edit an existing agent by clicking on the agent you want to edit. 3. On the right pane scroll down to the "Prompt" section. 4. Use the double curly braces `{{}}` to reference the headers variables. Here is an example of a valid prompt using the `userId` header defined in the schema above: ```text You are a helpful assistant for {{headers.userId}}! ``` # Get started with the Visual Agent Builder URL: /visual-builder/sub-agents Create Agents with a No-Code Visual Agent Builder ## Overview An Agent is the top-level entity you can chat and interact with in the Visual Builder. An Agent is made up of one or more Sub Agents. The Sub Agents that make up an Agent can delegate or transfer control to each other, share context, or use tools to respond to a user or complete a task. You can use the Visual Builder to add Sub Agents to an Agent, give Sub Agents tools, and connect Sub Agents with each other to establish their relationships. 
## Creating your first Agent ## Add other Sub Agents When you have tasks that get more complex, you'll likely want to create more Sub Agents that are specialized in narrow tasks and have prompts and tools focused on their roles. To create more Sub Agents: 1. Drag and drop a **Sub Agent** block from the top left toolbar onto the canvas. 2. Configure the Sub Agent with its own prompt and settings 3. Connect it to the parent Sub Agent by clicking the connector at the top of the Sub Agent box and dragging it to the main Sub Agent When connected, a line will appear showing the relationship between the Sub Agents. A few notes: - To delete a Sub Agent, click on the Sub Agent box and then hit the backspace key. - An Agent must have at least one Sub Agent. If you delete the default Sub Agent, add a new one before trying the Agent. ### Sub Agent configuration The following are configurable options for the Sub Agent through the Visual Builder: | Parameter | Type | Required | Description | | --------------------- | ------ | -------- | ---------------------------------------------------------------------------------------------------------------------------- | | `name` | string | Yes | Human-readable name for the Sub Agent | | `description` | string | No | Brief description of the Sub Agent's purpose and capabilities | | `prompt` | string | Yes | Detailed behavior guidelines and system prompt for the Sub Agent | | `model` | string | No | AI model identifier (e.g., "anthropic/claude-sonnet-4-20250514"). Choose a model that you prefer | | `providerOptions` | object | No | Model-specific configuration options (temperature, maxOutputTokens, etc.) | | `Data components` | object | No | Data components that the Sub Agent can use. See [Data Components](/visual-builder/structured-outputs/data-components) for details | | `Artifact components` | array | No | Artifact components that the Sub Agent can use. 
See [Artifact Components](/visual-builder/structured-outputs/artifact-components) for details | ## Adding data components Data components are used to render rich UI components directly in the chat. Before adding a data component to a Sub Agent, you must first create one; see the [Data Components](/visual-builder/structured-outputs/data-components) page for more information. Then, add the data component to your Sub Agent by clicking the Sub Agent box and selecting which data components you want to add. ## Adding artifact components To create artifact components, see the [Artifact Components](/visual-builder/structured-outputs/artifact-components) page for details. Add the artifact component to your Sub Agent by clicking on the Sub Agent box in the Agent, then selecting which artifact components you want to add. When the Sub Agent uses tools or delegates to other Sub Agents, artifacts will be automatically created according to your defined schema, capturing the source and content of each interaction. ## Adding team agents Team agents are agents that are part of the same project. You can add team agents to your Agent by dragging and dropping the **Team Agent** block from the top left onto the canvas. You can configure custom headers to send on every request to the team agent by clicking on the team agent box and then populating the **Headers** field with your custom headers. ## Adding external agents External agents let you integrate agents built outside of Inkeep (using other frameworks or platforms) into your Agent. You can add an external agent to your Agent by dragging and dropping the **External Agent** block from the top left onto the canvas. You can configure custom headers to send on every request to the external agent by clicking on the external agent box and then populating the **Headers** field with your custom headers.
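The **Headers** field accepts a JSON object of header names and values. As an illustration (the header names and values below are hypothetical, not required by Inkeep), a value might look like:

```json
{
  "authorization": "Bearer my-api-key",
  "x-request-source": "support-agent"
}
```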
# Sub Agent Relationships URL: /typescript-sdk/agent-relationships Learn how to add Sub Agent relationships to your agent Sub Agent relationships coordinate specialized Sub Agents for complex workflows. The framework implements them through `canDelegateTo()` and `canTransferTo()`, which let a parent Sub Agent automatically route work to specialized Sub Agents. ## Understanding Sub Agent Relationships The framework supports two types of Sub Agent relationships: ### Transfer Relationships Transfer relationships **completely relinquish control** from one Sub Agent to another. When a Sub Agent hands off to another: - The source Sub Agent stops processing - The target Sub Agent takes full control of the conversation - Control stays with the target Sub Agent until it transfers back ```typescript // Create specialized Sub Agents first const qaSubAgent = subAgent({ id: "qa-agent", name: "QA Sub Agent", description: "Answers product and service questions", prompt: "Provide accurate information using available tools.
Hand back to router if unable to help.", canUse: () => [knowledgeBaseTool], canTransferTo: () => [routerSubAgent], }); const orderSubAgent = subAgent({ id: "order-agent", name: "Order Sub Agent", description: "Handles order-related inquiries and actions", prompt: "Assist with order tracking, modifications, and management.", canUse: () => [orderSystemTool], canTransferTo: () => [routerSubAgent], }); // Create router Sub Agent that coordinates other Sub Agents const routerSubAgent = subAgent({ id: "router-agent", name: "Router Sub Agent", description: "Routes customer inquiries to specialized Sub Agents", prompt: `Analyze customer inquiries and route them appropriately: - Product questions → Hand off to QA Sub Agent - Order issues → Hand off to Order Sub Agent - Complex issues → Handle directly or escalate`, canTransferTo: () => [qaSubAgent, orderSubAgent], }); // Create the agent with router as default entry point const supportAgent = agent({ id: "customer-support-agent", defaultSubAgent: routerSubAgent, subAgents: () => [routerSubAgent, qaSubAgent, orderSubAgent], models: { base: { model: "anthropic/claude-sonnet-4-5", providerOptions: { temperature: 0.5, }, }, structuredOutput: { model: "openai/gpt-4.1-mini", }, }, }); ``` ### Delegation Relationships Delegation relationships are used to **pass a task** from one Sub Agent to another while maintaining oversight: - The source Sub Agent remains in control - The target Sub Agent executes a specific task - Results are returned to the source Sub Agent - The source Sub Agent continues processing ```typescript // Sub Agents for specific tasks const numberProducerA = subAgent({ id: "number-producer-a", name: "Number Producer A", description: "Produces low-range numbers (0-50)", prompt: "Generate numbers between 0 and 50. 
Respond with a single integer.", }); const numberProducerB = subAgent({ id: "number-producer-b", name: "Number Producer B", description: "Produces high-range numbers (50-100)", prompt: "Generate numbers between 50 and 100. Respond with a single integer.", }); // Coordinating Sub Agent that delegates tasks const mathSupervisor = subAgent({ id: "math-supervisor", name: "Math Supervisor", description: "Coordinates mathematical operations", prompt: `When given a math task: 1. Delegate to Number Producer A for a low number 2. Delegate to Number Producer B for a high number 3. Add the results together and provide the final answer`, canDelegateTo: () => [numberProducerA, numberProducerB], }); const mathAgent = agent({ id: "math-delegation-agent", defaultSubAgent: mathSupervisor, subAgents: () => [mathSupervisor, numberProducerA, numberProducerB], models: { base: { model: "anthropic/claude-haiku-4-5", providerOptions: { temperature: 0.1, } } }, }); ``` ## When to Use Each Relationship ### Use Transfers for Complex Tasks Use `canTransferTo` when the task is complex and the user will likely want to ask follow-up questions to the specialized Sub Agent: - **Customer support conversations** - User may have multiple related questions - **Technical troubleshooting** - Requires back-and-forth interaction - **Order management** - User might want to modify, track, or ask about multiple aspects - **Product consultations** - Users often have follow-up questions ### Use Delegation for Simple Tasks Use `canDelegateTo` when the task is simple and self-contained: - **Data retrieval** - Get a specific piece of information and return it - **Calculations** - Perform a computation and return the result - **Single API calls** - Make one external request and return the data - **Simple transformations** - Convert data from one format to another ```typescript // TRANSFER: User will likely have follow-up questions about their order const routerSubAgent = subAgent({ id: "router", prompt: "For order 
inquiries, transfer to order specialist", canTransferTo: () => [orderSubAgent], }); // DELEGATION: Just need a quick calculation, then continue const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to number producers, then add results together", canDelegateTo: () => [numberProducerA, numberProducerB], }); ``` ## Types of Delegation Relationships ### Sub Agent Delegation Sub Agent delegation is used to delegate a task to another Sub Agent, as seen above. ### External Agent Delegation External agent delegation is used to delegate a task to an [external agent](/typescript-sdk/external-agents). ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the external agent to calculate the answer to the question", canDelegateTo: () => [myExternalAgent], }); ``` You can also specify headers to include with every request to the external agent. ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the external agent to calculate the answer to the question", canDelegateTo: () => [myExternalAgent.with({ headers: { "authorization": "my-api-key" } })], }); ``` ### Team Agent Delegation Team agent delegation is used to delegate a task to another agent in the same project. ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the team agent to calculate the answer to the question", canDelegateTo: () => [myAgent], }); ``` You can also specify headers to include with every request to the team agent. ```typescript const mathSupervisor = subAgent({ id: "supervisor", prompt: "Delegate to the team agent to calculate the answer to the question", canDelegateTo: () => [myAgent.with({ headers: { "authorization": "my-api-key" } })], }); ``` # Agents & Sub Agents URL: /typescript-sdk/agent-settings Learn how to customize your Agents. Agents and Sub Agents are the core building blocks of the Inkeep Agent framework.
An Agent is made up of one or more Sub Agents that can delegate or transfer control to each other, share context, and use tools to respond to a user or complete a task. ## Creating an Agent An Agent is the top-level entity that you interact with directly or trigger programmatically. An Agent is made up of Sub Agents, like so: ```typescript // Agent-level prompt that gets added to all Sub Agents const customerSupportAgent = agent({ id: "support-agent", prompt: `You work for Acme Corp. Always be professional and helpful.`, subAgents: () => [supportAgent, escalationAgent], }); ``` **The `prompt` is automatically put into context and added into each Sub Agent's system prompt.** This provides consistent behavior and tone to all Sub Agents so they can act and respond as one cohesive unit to the end-user. ## Creating a Sub Agent Like an Agent, a Sub Agent needs an id, a name, and a clear prompt that defines its behavior: ```typescript const supportAgent = subAgent({ id: "customer-support", name: "Customer Support Agent", prompt: `You are a customer support specialist.`, }); ``` ## Configuring Models Configure AI models for your agents. See [Model Configuration](/typescript-sdk/models) for detailed information about supported providers, configuration options, and examples. ## Configuring StopWhen Control stopping conditions to prevent infinite loops: ```typescript // Agent level - limit transfers between Sub Agents agent({ id: "support-agent", stopWhen: { transferCountIs: 5 // Max transfers in one conversation }, }); // Sub Agent level - limit generation steps subAgent({ id: "my-sub-agent", stopWhen: { stepCountIs: 20 // Max tool calls + LLM responses }, }); ``` **Configuration levels:** - `transferCountIs`: Project or Agent level - `stepCountIs`: Project or Sub Agent level Settings inherit from Project → Agent → Sub Agent.
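The inheritance described above can be pictured as a simple fallback chain. The sketch below is illustrative only — it is not the SDK's implementation, and the built-in fallback values of 10 and 50 are hypothetical:

```typescript
// Illustrative sketch of stopWhen inheritance: the most specific setting wins
// (Agent for transfers, Sub Agent for steps), then the Project setting,
// then a built-in default (the 10 and 50 here are hypothetical).
interface StopWhen {
  transferCountIs?: number; // max transfers per conversation (Project or Agent level)
  stepCountIs?: number;     // max tool calls + LLM responses (Project or Sub Agent level)
}

function resolveStopWhen(project: StopWhen, agent: StopWhen, subAgent: StopWhen) {
  return {
    transferCountIs: agent.transferCountIs ?? project.transferCountIs ?? 10,
    stepCountIs: subAgent.stepCountIs ?? project.stepCountIs ?? 50,
  };
}
```

In this sketch an Agent-level `transferCountIs` overrides the Project value, and a Sub Agent-level `stepCountIs` does the same for steps.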
## Sub Agent overview Beyond model configuration, Sub Agents define tools, structured outputs, and agent-to-agent relationships available to the Sub Agent. | Parameter | Type | Required | Description | | -------------------- | -------- | -------- | ----------- | | `id` | string | Yes | Stable Sub Agent identifier used for consistency and persistence | | `name` | string | Yes | Human-readable name for the Sub Agent | | `prompt` | string | Yes | Detailed behavior guidelines and system prompt for the Sub Agent | | `description` | string | No | Brief description of the Sub Agent's purpose and capabilities | | `models` | object | No | Model configuration for this Sub Agent. See [Model Configuration](/typescript-sdk/models) | | `stopWhen` | object | No | Stop conditions (`stepCountIs`). See [Configuring StopWhen](#configuring-stopwhen) | | `canUse` | function | No | Returns the list of MCP/tools the Sub Agent can use. See [MCP Servers](/tutorials/mcp-servers/overview) for how to find or build MCP servers | | `dataComponents` | array | No | Structured output components for rich, interactive responses. See [Data Components](/typescript-sdk/structured-outputs/data-components) | | `artifactComponents` | array | No | Components for handling tool or Sub Agent outputs. See [Artifact Components](/typescript-sdk/structured-outputs/artifact-components) | | `canTransferTo` | function | No | Function returning array of Sub Agents this Sub Agent can transfer to. See [Transfer Relationships](/typescript-sdk/agent-relationships#transfer-relationships) | | `canDelegateTo` | function | No | Function returning array of Sub Agents this Sub Agent can delegate to. See [Delegation Relationships](/typescript-sdk/agent-relationships#delegation-relationships) | ### Tools & MCPs Enable tools for a Sub Agent to perform actions like looking up information or calling external APIs. 
Tools can be: - **[MCP Servers](/typescript-sdk/tools/mcp-servers)** - Connect to external services and APIs using the Model Context Protocol - **[Function Tools](/typescript-sdk/tools/function-tools)** - Custom JavaScript functions that execute directly in secure sandboxes ```typescript const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", canUse: () => [ functionTool({ name: "get-current-time", description: "Get the current time", execute: async () => ({ time: new Date().toISOString() }), }), mcpTool({ id: "inkeep-kb-rag", name: "Knowledge Base Search", description: "Search the company knowledge base.", serverUrl: "https://rag.inkeep.com/mcp", }), ], }); ``` ### Data components Structured output components for rich, interactive responses. See [Data Components](/typescript-sdk/structured-outputs/data-components). ```typescript const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", dataComponents: [ { id: "customer-info", name: "CustomerInfo", description: "Customer information display component", props: z.object({ name: z.string().describe("Customer name"), email: z.string().describe("Customer email"), issue: z.string().describe("Customer issue description"), }), }, ], }); ``` ### Artifact components Components for handling tool or Sub Agent outputs. See [Artifact Components](/typescript-sdk/structured-outputs/artifact-components). ```typescript const mySubAgent = subAgent({ id: "my-agent-id", name: "My Sub Agent", prompt: "Detailed behavior guidelines", artifactComponents: [ { id: "customer-info", name: "CustomerInfo", description: "Customer information display component", props: z.object({ name: preview(z.string().describe("Customer name")), customer_info: z.string().describe("Customer information"), }), }, ], }); ``` ### Sub Agent relationships Define other Sub Agents this Sub Agent can transfer control to or delegate tasks to. 
```typescript const mySubAgent = subAgent({ // ... canTransferTo: () => [subAgent1], canDelegateTo: () => [subAgent2], }); ``` As a next step, see [Sub Agent Relationships](/typescript-sdk/agent-relationships) to learn how to design transfer and delegation relationships between Sub Agents. # CLI Reference URL: /typescript-sdk/cli-reference Complete reference for the Inkeep CLI commands ## Overview The Inkeep CLI is the primary tool for interacting with the Inkeep Agent Framework. It allows you to push Agent configurations and interact with your multi-agent system. ## Installation ```bash # Install the CLI globally npm install -g @inkeep/agents-cli # Install the dashboard package (for visual agents orchestration) npm install @inkeep/agents-manage-ui ``` ## Global Options All commands support the following global options: - `--version` - Display CLI version - `--help` - Display help for a command ## Commands ### `inkeep init` Initialize a new Inkeep configuration file in your project. ```bash inkeep init [path] ``` **Options:** - `--no-interactive` - Skip interactive path selection - `--config <path>` - Path to use as template for new configuration **Examples:** ```bash # Interactive initialization inkeep init # Initialize in specific directory inkeep init ./my-project # Non-interactive mode inkeep init --no-interactive # Use specific config as template inkeep init --config ./template-config.ts ``` ### `inkeep push` **Primary use case:** Push a project containing Agent configurations to your server. This command deploys your entire multi-agent project, including all Agents, Sub Agents, and tools.
```bash inkeep push ``` **Options:** - `--project <project-id>` - Project ID or path to project directory - `--env <environment>` - Load environment-specific credentials from `environments/<environment>.env.ts` - `--config <path>` - Override config file path (bypasses automatic config discovery) - `--tenant-id <tenant-id>` - Override tenant ID - `--agents-manage-api-url <url>` - Override the management API URL from config - `--agents-run-api-url <url>` - Override agents run API URL - `--json` - Generate project data as JSON file instead of pushing to server **Examples:** ```bash # Push project from current directory inkeep push # Push specific project directory inkeep push --project ./my-project # Push with development environment credentials inkeep push --env development # Generate project JSON without pushing inkeep push --json # Override tenant ID inkeep push --tenant-id my-tenant # Override API URLs inkeep push --agents-manage-api-url https://api.example.com inkeep push --agents-run-api-url https://run.example.com # Use specific config file inkeep push --config ./custom-config/inkeep.config.ts ``` **Environment Credentials:** The `--env` flag loads environment-specific credentials when pushing your project. This will look for files like `environments/development.env.ts` or `environments/production.env.ts` in your project directory and load the credential configurations defined there. **Example environment file (credential entries shown; the surrounding export is omitted here):** ```typescript // environments/development.env.ts credentials: { "api-key-dev": { id: "api-key-dev", type: CredentialStoreType.memory, credentialStoreId: "memory-default", retrievalParams: { key: "API_KEY_DEV", }, }, }, ``` #### Project Discovery and Structure The `inkeep push` command follows this discovery process: 1. **Config File Discovery**: Searches for `inkeep.config.ts` using this pattern: - Starts from current working directory - Traverses **upward** through parent directories until found - Can be overridden by providing a path to the config file with the `--config` flag 2.
**Workspace Structure**: Expects this directory layout: ``` workspace-root/ ├── package.json # Workspace package.json ├── tsconfig.json # Workspace TypeScript config ├── inkeep.config.ts # Inkeep configuration file ├── my-project/ # Individual project directory │ ├── index.ts # Project entry point │ ├── agents/ # Agent definitions │ │ └── *.ts │ ├── tools/ # Tool definitions │ │ └── *.ts │ ├── data-components/ # Data component definitions │ │ └── *.ts │ └── environments/ # Environment-specific configs │ ├── development.env.ts │ └── production.env.ts └── another-project/ # Additional projects └── index.ts ``` 3. **Resource Compilation**: Automatically discovers and compiles: - All project directories containing `index.ts` - All TypeScript files within each project directory - Categorizes files by type (agents, Sub Agents, tools, data components) - Resolves dependencies and relationships within each project #### Push Behavior When pushing, the CLI: - Finds and loads configuration from `inkeep.config.ts` at workspace root - Discovers all project directories containing `index.ts` - Applies environment-specific settings if `--env` is specified - Compiles all project resources defined in each project's `index.ts` - Validates Sub Agent relationships and tool configurations across all projects - Deploys all projects to the management API - Prints deployment summary with resource counts per project ### `inkeep pull` Pull project configuration from the server and update all TypeScript files in your local project using LLM generation. ```bash inkeep pull ``` **Options:** - `--project <project-id>` - Project ID to pull (or path to project directory). If in project directory, validates against local project ID. - `--config <path>` - Path to configuration file - `--env <environment>` - Environment file to generate (development, staging, production).
Defaults to development - `--json` - Output project data as JSON instead of generating files - `--debug` - Enable debug logging - `--verbose` - Enable verbose logging - `--force` - Force regeneration even if no changes detected - `--introspect` - Completely regenerate all files from scratch (no comparison needed) **Smart Project Detection:** The pull command intelligently handles different scenarios: 1. **Project Directory Detection**: If your current directory contains an `index.ts` file that exports a project, the command automatically uses that project's ID 2. **Project ID Validation**: If you specify `--project` while in a project directory, it validates that the ID matches your local project 3. **Subdirectory Creation**: When not in a project directory, creates a new subdirectory named after the project ID **`--project` Flag Behavior:** The `--project` flag has dual functionality: - **Directory Path**: If the value points to a directory containing `index.ts`, it uses that directory - **Project ID**: If the value doesn't match a valid directory path, it treats it as a project ID **Pull Modes:** | Scenario | Command | Behavior | |----------|---------|----------| | In project directory | `inkeep pull` | ✅ Automatically detects project, pulls to current directory | | In project directory | `inkeep pull --project <matching-id>` | ✅ Validates ID matches local project, pulls to current directory | | In project directory | `inkeep pull --project <different-id>` | ❌ Error: Project ID doesn't match local project | | Not in project directory | `inkeep pull` | ❌ Error: Requires --project flag | | Not in project directory | `inkeep pull --project <project-id>` | ✅ Creates `<project-id>/` subdirectory with project files | **How it Works:** The pull command discovers and updates all TypeScript files in your project based on the latest configuration from the server: 1. **File Discovery**: Recursively finds all `.ts` files in your project (excluding `environments/` and `node_modules/`) 2.
**Smart Categorization**: Categorizes files as index, agents, Sub Agents, tools, or other files 3. **Context-Aware Updates**: Updates each file with relevant context from the server: - **Agent files**: Updated with specific agent data - **Sub Agent files**: Updated with specific Sub Agent configurations - **Tool files**: Updated with specific tool definitions - **Other files**: Updated with full project context 4. **LLM Generation**: Uses AI to maintain code structure while updating with latest data #### TypeScript Updates (Default) By default, the pull command updates your existing TypeScript files using LLM generation: 1. **Context Preservation**: Maintains your existing code structure and patterns 2. **Selective Updates**: Only updates relevant parts based on server configuration changes 3. **File-Specific Context**: Each file type receives appropriate context (Agents get Agent data, Sub Agents get Sub Agent data, etc.) **Examples:** ```bash # Directory-aware pull: Automatically detects project from current directory cd my-project # Directory contains index.ts with project inkeep pull # Pulls to current directory # Validate project ID while in project directory cd my-project # Directory contains index.ts inkeep pull --project my-project # Validates ID matches, pulls to current directory # Error case: Wrong project ID in project directory cd my-project # Directory contains index.ts with different project ID inkeep pull --project different-project # ERROR: Project ID mismatch # Pull when NOT in a project directory (requires --project) cd ~/projects inkeep pull # ERROR: Requires --project flag # Pull specific project to new subdirectory cd ~/projects inkeep pull --project my-project # Creates ./my-project/ subdirectory # Save project data as JSON file instead of updating files inkeep pull --json # Enable debug logging inkeep pull --debug # Enable verbose logging inkeep pull --verbose # Force regeneration even if no changes detected inkeep pull --force # Completely regenerate all files from scratch inkeep
pull --introspect # Generate environment-specific credentials inkeep pull --env production # Use specific config file inkeep pull --config ./custom-config/inkeep.config.ts ``` #### Model Configuration The `inkeep pull` command automatically selects the best available model for LLM generation based on your API keys: 1. **Anthropic Claude** (if `ANTHROPIC_API_KEY` is set): `claude-sonnet-4-5` 2. **OpenAI GPT** (if `OPENAI_API_KEY` is set): `gpt-5.1` 3. **Google Gemini** (if `GOOGLE_GENERATIVE_AI_API_KEY` is set): `gemini-2.5-flash` The models are used for intelligent content merging when updating modified components, ensuring your local customizations are preserved while incorporating upstream changes. #### Validation Process The `inkeep pull` command includes a two-stage validation process to ensure generated TypeScript files accurately represent your backend configuration: **1. Basic File Verification** - Checks that all expected files exist (index.ts, agent files, tool files, component files) - Verifies file naming conventions match (kebab-case) - Ensures project **2. 
Round-Trip Validation** *(New in v0.24.0)* - Loads the generated TypeScript using the same logic as `inkeep push` - Serializes it back to JSON format - Compares the serialized JSON with the original backend data - Reports any differences found This round-trip validation ensures: - ✅ Generated TypeScript can be successfully loaded and imported - ✅ The serialization logic (TS → JSON) works correctly - ✅ Generated files will work with `inkeep push` without errors - ✅ No data loss or corruption during the pull process **Validation Output:** ```bash ✓ Basic file verification passed ✓ Round-trip validation passed - generated TS matches backend data ``` **If validation fails:** The CLI will display specific differences between the generated and expected data: ```bash ✗ Round-trip validation failed ❌ Round-trip validation errors: The generated TypeScript does not serialize back to match the original backend data. • Value mismatch at agents.my-agent.name: "Original Name" vs "Generated Name" • Missing tool in generated: tool-id ⚠️ This indicates an issue with LLM generation or schema mappings. The generated files may not work correctly with `inkeep push`. ``` **Troubleshooting:** **TypeScript generation fails:** - Check your network connection and confirm your API endpoints are correct - Check that your model provider credentials (if required by backend) are set up - Try using the `--json` flag as a fallback to get the raw project data **Validation errors during pull:** - The generated TypeScript may have syntax errors or missing dependencies - Check the generated file manually for obvious issues - Try pulling as JSON first to verify the source data: `inkeep pull --json` - If round-trip validation fails, report the issue with the error details ### `inkeep list-agents` List all available agents for a specific project.
```bash inkeep list-agents --project <project-id> ``` **Options:** - `--project <project-id>` - **Required.** Project ID to list agents for - `--tenant-id <tenant-id>` - Tenant ID - `--agents-manage-api-url <url>` - Agents manage API URL - `--config <path>` - Path to configuration file - `--config-file-path <path>` - Path to configuration file (deprecated, use --config) **Examples:** ```bash # List agents for a specific project inkeep list-agents --project my-project # List agents using a specific config file inkeep list-agents --project my-project --config ./inkeep.config.ts # Override tenant and API URL inkeep list-agents --project my-project --tenant-id my-tenant --agents-manage-api-url https://api.example.com ``` ### `inkeep dev` Start the Inkeep dashboard server, build the dashboard for production, or export its source files. > **Note:** This command requires `@inkeep/agents-manage-ui` to be installed for visual agents orchestration. ```bash inkeep dev ``` **Options:** - `--port <port>` - Port to run the server on (default: 3000) - `--host <host>` - Host to bind the server to (default: localhost) - `--build` - Build the Dashboard UI for production (packages standalone build) - `--export` - Export the Next.js project source files - `--output-dir <dir>` - Output directory for build files (default: ./inkeep-dev) - `--path` - Output the path to the Dashboard UI **Examples:** ```bash # Start development server inkeep dev # Start on custom port and host inkeep dev --port 3001 --host 0.0.0.0 # Build for production (packages standalone build) inkeep dev --build --output-dir ./build # Export Next.js project source files inkeep dev --export # Get dashboard path for deployment DASHBOARD_PATH=$(inkeep dev --path) echo "Dashboard built at: $DASHBOARD_PATH" # Use with Vercel vercel --cwd $(inkeep dev --path) -Q .vercel build # Use with Docker docker build -t inkeep-dashboard $(inkeep dev --path) # Use with other deployment tools rsync -av $(inkeep dev --path)/ user@server:/var/www/dashboard/ ``` ### `inkeep config` Manage Inkeep configuration values.
**Subcommands:** #### `inkeep config get [key]` Get configuration value(s). ```bash inkeep config get [key] ``` **Options:** - `--config <path>` - Path to configuration file - `--config-file-path <path>` - Path to configuration file (deprecated, use --config) **Examples:** ```bash # Get all config values inkeep config get # Get specific value inkeep config get tenantId ``` #### `inkeep config set <key> <value>` Set a configuration value. ```bash inkeep config set <key> <value> ``` **Options:** - `--config <path>` - Path to configuration file - `--config-file-path <path>` - Path to configuration file (deprecated, use --config) **Examples:** ```bash inkeep config set tenantId my-tenant-id inkeep config set apiUrl http://localhost:3002 ``` #### `inkeep config list` List all configuration values. ```bash inkeep config list ``` **Options:** - `--config <path>` - Path to configuration file - `--config-file-path <path>` - Path to configuration file (deprecated, use --config) ### `inkeep add` Pull a template project or MCP from the [Inkeep Agents Cookbook](https://github.com/inkeep/agents/tree/main/agents-cookbook).
```bash inkeep add [options] ``` **Options:** - `--project <template-name>` - Add a project template - `--mcp <template-name>` - Add a custom MCP server template for common use cases (adds server code to your project) - `--target-path <path>` - Target path to add the template to - `--config <path>` - Path to configuration file **Examples:** ```bash # List available templates (both project and MCP) inkeep add # Add a project template inkeep add --project activities-planner # Add project template to specific path inkeep add --project activities-planner --target-path ./examples # Add a custom MCP server template (auto-detects apps/mcp/app directory) inkeep add --mcp zendesk # Add MCP server template to specific path inkeep add --mcp zendesk --target-path ./apps/mcp/app # Using specific config file inkeep add --project activities-planner --config ./my-config.ts ``` **Behavior:** - When adding an MCP server template without `--target-path`, the CLI automatically searches for an `apps/mcp/app` directory in your project - If no app directory is found, you'll be prompted to confirm whether to add to the current directory - Project templates are added to the current directory or specified `--target-path` - Model configurations are automatically injected based on available API keys in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`) ### `inkeep update` Update the Inkeep CLI to the latest version from npm. ```bash inkeep update ``` **Options:** - `--check` - Check for updates without installing - `--force` - Force update even if already on latest version **How it Works:** The update command automatically: 1. **Detects Package Manager**: Identifies which package manager (npm, pnpm, bun, or yarn) was used to install the CLI globally 2. **Checks Version**: Compares your current version with the latest available on npm 3. **Updates CLI**: Executes the appropriate update command for your package manager 4.
**Displays Changelog**: Shows a link to the changelog for release notes **Examples:** ```bash # Check if an update is available (no installation) inkeep update --check # Update to latest version (with confirmation prompt) inkeep update # Force update even if already on latest version; also skips the confirmation prompt (useful for CI/CD) inkeep update --force ``` **Output Example:** ``` 📦 Version Information: • Current version: 0.22.3 • Latest version: 0.23.0 📖 Changelog: • https://github.com/inkeep/agents/blob/main/agents-cli/CHANGELOG.md 🔍 Detected package manager: pnpm ✅ Updated to version 0.23.0 ``` **Troubleshooting:** If you encounter permission errors, try running with elevated permissions: ```bash # For npm, pnpm, yarn sudo inkeep update # For bun sudo -E bun add -g @inkeep/agents-cli@latest ``` **Package Manager Detection:** The CLI automatically detects which package manager you used by checking global package installations: - npm: Checks `npm list -g @inkeep/agents-cli` - pnpm: Checks `pnpm list -g @inkeep/agents-cli` - bun: Checks `bun pm ls -g` - yarn: Checks `yarn global list` If automatic detection fails, the CLI will prompt you to select your package manager. ## Configuration File The CLI uses a configuration file (typically `inkeep.config.ts`) to store settings (the `defineConfig` wrapper shown below is an assumption; check your generated config file for the exact import): ```typescript import { defineConfig } from "@inkeep/agents-cli/config"; export default defineConfig({ tenantId: "your-tenant-id", agentsManageApiUrl: "http://localhost:3002", agentsRunApiUrl: "http://localhost:3003", }); ``` ### Configuration Priority Effective resolution order: 1. Command-line flags (highest) 2. Environment variables (override config values) 3.
`inkeep.config.ts` values ## Environment Variables The CLI and SDK respect the following environment variables: - `INKEEP_TENANT_ID` - Tenant identifier - `INKEEP_AGENTS_MANAGE_API_URL` - Management API base URL - `INKEEP_AGENTS_RUN_API_URL` - Run API base URL - `INKEEP_ENV` - Environment name for credentials loading during `inkeep push` - `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` - Optional bearer for Manage API (advanced) - `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` - Optional bearer for Run API (advanced) ## Troubleshooting **Project Not Found:** - Projects are automatically managed based on your tenantId - `inkeep push` will create resources as needed ### Getting Help For additional help with any command: ```bash inkeep [command] --help ``` For issues or feature requests, visit: [GitHub Issues](https://github.com/inkeep/agents/issues) # Configure Runtime Limits URL: /typescript-sdk/configure-runtime-limits Customize execution limits, timeouts, and constraints for your deployment # Configure Runtime Limits ## Overview The Inkeep Agent Framework includes sensible defaults for execution limits, timeouts, and constraints. However, certain use cases—like long-running orchestration agents—may require different configurations. ## How Runtime Configuration Works ### Two Types of Limits **1. API-Configurable Limits** (set per agent/sub-agent via manage-api) - Configured in your agent definitions using `stopWhen` - Controls agent-specific behavior - See [Agent Settings - StopWhen](/docs/typescript-sdk/agent-settings#configuring-stopwhen) **2. Runtime Environment Limits** (set via environment variables) - System-wide defaults and maximum allowed values - Controls infrastructure-level constraints - Documented in this guide ### Configuration Hierarchy When an agent executes: 1. Uses agent-specific `stopWhen` settings if configured 2. Falls back to runtime environment defaults 3. 
Cannot exceed runtime environment maximums

### Understanding the Constants Architecture

The runtime environment limits are defined as TypeScript constants in the `@inkeep/agents-core` package. These constants are organized into three categories based on their purpose:

**1. Schema Validation Constants** (`packages/agents-core/src/constants/schema-validation/defaults.ts`)

- Define **API-level validation limits** (min, max, default values)
- Control what values users can configure via the manage-api
- Used in Zod schemas to validate agent configuration requests
- Examples: Transfer count limits, generation step limits, prompt character limits

**2. Runtime Execution Constants - Shared** (`packages/agents-core/src/constants/execution-limits-shared/defaults.ts`)

- Define **internal runtime behavior** limits
- Control infrastructure-level timeouts, retries, and constraints
- Shared across both manage-api and run-api services
- Examples: MCP connection timeouts, conversation history token limits

**3. Runtime Execution Constants - Run-API** (`packages/agents-run-api/src/constants/execution-limits/defaults.ts`)

- Define **run-api specific runtime behavior** limits
- Control execution engine timeouts, retries, and resource constraints
- Only used by the run-api service (not shared with manage-api)
- Examples: LLM generation timeouts, function tool sandbox limits, streaming buffer sizes

Each constant includes detailed inline documentation explaining its purpose, behavior, and impact on the runtime.

**The TypeScript constant files are the source of truth** for default values and available overrides—not `.env.example`. All constants can be overridden via environment variables prefixed with `AGENTS_`.
For example: ```bash # Override a schema validation constant AGENTS_AGENT_EXECUTION_TRANSFER_COUNT_DEFAULT=50 # Override a runtime execution constant AGENTS_MCP_TOOL_CONNECTION_TIMEOUT_MS=5000 ``` ## Use Case: Long-Running Orchestration Agents For agents that orchestrate complex workflows (like coding agents that run for hours), you'll want to increase various limits and timeouts. ### Recommended Configuration Add these to your `.env` file: ```bash # Agent Execution - Allow more sub-agent transfers for complex orchestration AGENTS_EXECUTION_TRANSFER_COUNT_DEFAULT=50 # Default: 10 AGENTS_EXECUTION_TRANSFER_COUNT_MAX=2000 # Default: 1000 # Sub-Agent Turns - Allow more LLM generation steps per turn AGENTS_SUB_AGENT_TURN_GENERATION_STEPS_DEFAULT=30 # Default: 12 AGENTS_SUB_AGENT_TURN_GENERATION_STEPS_MAX=2000 # Default: 1000 # LLM Timeouts - Increase for longer-running operations AGENTS_LLM_GENERATION_FIRST_CALL_TIMEOUT_MS_STREAMING=600000 # 10min, Default: 4.5min AGENTS_LLM_GENERATION_SUBSEQUENT_CALL_TIMEOUT_MS=180000 # 3min, Default: 1.5min # Function Tools - Allow longer execution for complex operations AGENTS_FUNCTION_TOOL_EXECUTION_TIMEOUT_MS_DEFAULT=120000 # 2min, Default: 30sec # MCP Tools - Increase timeout for external tool operations AGENTS_MCP_TOOL_REQUEST_TIMEOUT_MS_DEFAULT=180000 # 3min, Default: 1min # Session Management - Keep tool results cached longer AGENTS_SESSION_TOOL_RESULT_CACHE_TIMEOUT_MS=1800000 # 30min, Default: 5min # Streaming - Allow streams to run longer AGENTS_STREAM_MAX_LIFETIME_MS=1800000 # 30min, Default: 10min ``` ### Why These Settings? 
- **Transfer Count**: Coding orchestration often involves many handoffs between specialist agents (planning → implementation → testing → refinement) - **Generation Steps**: Each coding operation may require multiple tool calls (read files, analyze, write code, run tests) - **Timeouts**: Code generation and analysis can take time, especially for large files or complex refactoring - **Cache Duration**: Keeps tool results available throughout long sessions for artifact processing ## All Available Configuration Variables All runtime configuration variables are defined as TypeScript constants. To explore available overrides and their detailed documentation: 1. **View the constant files** (source of truth): - Schema validation constants: `packages/agents-core/src/constants/schema-validation/defaults.ts` - Runtime execution constants (shared): `packages/agents-core/src/constants/execution-limits-shared/defaults.ts` - Runtime execution constants (run-api): `packages/agents-run-api/src/constants/execution-limits/defaults.ts` 2. Each constant includes inline documentation explaining its purpose, behavior, and impact 3. 
Convert constant names to environment variables by prefixing with `AGENTS_` ### Configuration Categories The runtime configuration is organized into functional categories: **Schema Validation Constants** (API-level limits): - **Agent Execution Transfer Count**: Limits transfers between sub-agents in a turn - **Sub-Agent Turn Generation Steps**: LLM generation steps within a single sub-agent activation - **Status Update Thresholds**: Event and time-based triggers for status updates - **Prompt Validation**: Character limits for agent and sub-agent system prompts - **Context Fetchers**: HTTP timeouts for external data fetching (CRM lookups, API calls) **Runtime Execution Constants - Shared** (infrastructure-level limits): - **MCP Tool Connection & Retry**: Connection timeouts and exponential backoff for MCP tools - **Conversation History**: Token limits for conversation context windows **Runtime Execution Constants - Run-API** (run-api service-specific limits): - **LLM Generation Timeouts**: Timeout behavior for AI model calls - **Function Tool Sandbox**: Execution limits for function tools in isolated sandboxes - **Delegation & A2A**: Inter-agent communication settings - **Artifact Processing**: UI component generation with retry mechanisms - **Session Management**: Tool result caching and cleanup - **Streaming**: Frequency, batching, and lifetime limits for streams # Context Fetchers URL: /typescript-sdk/context-fetchers Learn how to use context fetchers to fetch data from external sources and make it available to your agents ## Overview Context fetchers allow you to embed real-time data from external APIs into your agent prompts. Instead of hardcoding information in your agent prompt, context fetchers dynamically retrieve fresh data for each conversation. ## Key Features - **Dynamic data retrieval**: Fetch real-time data from APIs. 
- **Dynamic Prompting**: Use dynamic data in your agent prompts.
- **Headers integration**: Use request-specific parameters to customize data fetching.
- **Data transformation**: Transform API responses into the exact format your agent needs.

## Context Fetchers vs Tools

- **Context Fetchers**: Pre-populate agent prompts with dynamic data
  - Run automatically before/during conversation startup
  - Data becomes part of the agent's system prompt
  - Perfect for: Personalized agent personas, dynamic agent guardrails
  - Example Prompt: `You are an assistant for ${userContext.toTemplate('user.name')} and you work for ${userContext.toTemplate('user.organization')}`
- **Tools**: Enable agents to take actions or fetch data during conversations
  - Called by the agent when needed during the conversation
  - Agent decides when and how to use them
  - Example Tool Usage: Agent calls a "send_email" tool or "search_database" tool

## Basic Usage

Let's create a simple context fetcher that retrieves user information:

```typescript
// 1. Define a schema for headers validation. All header keys are converted to lowercase.
// In this example, all incoming headers are validated to make sure they include user_id and api_key.
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    api_key: z.string(),
  }),
});

// 2. Create the fetcher with fetchDefinition
const userFetcher = fetchDefinition({
  id: "user-info",
  name: "User Information",
  // Fetch only once when a conversation is started with the Agent. When set to
  // "invocation", the fetch is executed every time a request is made to the Agent.
  trigger: "initialization",
fetchConfig: {
    url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}`,
    method: "GET",
    headers: {
      Authorization: `Bearer ${personalAgentHeaders.toTemplate('api_key')}`,
    },
    // Extract `user` from the response. For example, if the response is
    // { "user": { "name": "John Doe", "email": "john.doe@example.com" } },
    // the transform returns the user object.
    transform: "user",
  },
  // Used to validate the result of the transformed API response.
  responseSchema: z.object({
    user: z.object({
      name: z.string(),
      email: z.string(),
    }),
  }),
  defaultValue: "Unable to fetch user information",
});

// 3. Configure context
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
  contextVariables: {
    user: userFetcher,
  },
});

// 4. Create and use the Sub Agent
const personalAssistant = subAgent({
  id: "personal-assistant",
  name: "Personal Assistant",
  description: "A personalized AI assistant",
  prompt: `Hello ${personalAgentContext.toTemplate('user.name')}! I'm your personal assistant.`,
});

// 5. Initialize the Agent
const personalAgent = agent({
  id: "personal-agent",
  name: "Personal Assistant Agent",
  defaultSubAgent: personalAssistant,
  subAgents: () => [personalAssistant],
  contextConfig: personalAgentContext,
});
```

## Using Context Variables

Context variables can be used in your agent prompts using JSONPath template syntax `{{contextVariableKey.field_name}}`. Use the context config's `toTemplate()` method for type-safe templating with autocomplete and validation.

```typescript
const personalGraphContext = contextConfig({
  headers: personalGraphHeaders,
  contextVariables: {
    user: userFetcher,
  },
});

const personalAgent = subAgent({
  id: "personal-agent",
  name: "Personal Assistant",
  description: "A personalized AI assistant",
  prompt: `Hello ${personalGraphContext.toTemplate('user.name')}! I'm your personal assistant.`,
});
```

Context variables are resolved using [JSONPath notation](https://jsonpath.com).
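To build intuition for how a `{{contextVariableKey.field_name}}` placeholder resolves against context variables, here is a minimal, illustrative sketch of dot-path template resolution. This is not the SDK's implementation — the SDK supports full JSONPath — and `resolveTemplate` and `Context` are names invented for this example:

```typescript
// Illustrative only: resolve simple {{dot.path}} templates against a context object.
// The real resolver supports full JSONPath; this sketch handles plain dot paths.
type Context = Record<string, unknown>;

function resolveTemplate(template: string, context: Context): string {
  return template.replace(/\{\{([^}]+)\}\}/g, (_match, path: string) => {
    // Walk the dot-separated path, e.g. "user.name" → context.user.name
    const value = path
      .trim()
      .split(".")
      .reduce<unknown>((obj, key) => (obj == null ? undefined : (obj as Context)[key]), context);
    // Missing paths resolve to an empty string rather than throwing
    return value === undefined ? "" : String(value);
  });
}

const context = { user: { name: "John Doe", organization: "Acme" } };
console.log(resolveTemplate("Hello {{user.name}} from {{user.organization}}!", context));
// → "Hello John Doe from Acme!"
```

In the real SDK you would not call a resolver yourself — `toTemplate()` emits the `{{...}}` placeholder string, and the runtime resolves it against the fetched context variables when the prompt is built.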
## Data transformation The `transform` property on fetch definitions lets you extract exactly what you need from API responses using JSONPath notation: ```typescript // API returns: { "user": { "profile": { "displayName": "John Doe" } } } transform: "user.profile.displayName"; // Result: "John Doe" // API returns: { "items": [{ "name": "First Item" }, { "name": "Second Item" }] } transform: "items[0].name"; // Result: "First Item" ``` ## Best Practices 1. **Use Appropriate Triggers** - `initialization`: Use when data rarely changes - `invocation`: Use for frequently changing data 2. **Handle Errors Gracefully** - Always provide a `defaultValue` - Use appropriate response schemas ## Related documentation - [Headers](/typescript-sdk/headers) - Learn how to pass dynamic context to your agents via HTTP headers # Data Operations URL: /typescript-sdk/data-operations Learn about data operations emitted by agents and how to use the x-emit-operations header to control their visibility. ## Overview Data operations are detailed, real-time events that provide visibility into what agents are doing during execution. They include agent reasoning, tool executions, transfers, delegations, and artifact creation. By default, these operations are hidden from end users to keep the interface clean, but they can be enabled for debugging and monitoring purposes. ## The x-emit-operations Header The `x-emit-operations` header controls whether data operations are included in the response stream. When set to `true`, the system will emit detailed operational events alongside the regular response content. ### Usage ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "Content-Type: application/json" \ -H "x-emit-operations: true" \ -d '{ "messages": [ { "role": "user", "content": "What can you do?" 
} ], "conversationId": "chat-1234" }' ``` ### CLI Usage In the CLI, you can toggle data operations using the `operations` command: ```bash # Start a chat session inkeep chat # Toggle data operations on/off > operations 🔧 Emit operations: ON Data operations will be shown during responses. > operations 🔧 Emit operations: OFF Data operations are hidden. ``` ## Data Operation Types ### Agent Events #### `agent_generate` Emitted when an agent generates content (text or structured data). ```json { "type": "data-operation", "data": { "type": "agent_generate", "label": "Agent search-agent generating response", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "parts": [ { "type": "text", "content": "I found 5 relevant documents..." } ], "generationType": "text_generation" } } } } ``` #### `agent_reasoning` Emitted when an agent is reasoning through a request or planning its approach. ```json { "type": "data-operation", "data": { "type": "agent_reasoning", "label": "Agent search-agent reasoning through request", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "parts": [ { "type": "text", "content": "I need to search for information about..." } ] } } } } ``` ### Tool Execution Events #### `tool_call` Emitted when an agent starts calling a tool or function. 
```json { "type": "data-operation", "data": { "type": "tool_call", "label": "Tool call: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "args": { "query": "machine learning best practices", "limit": 10 }, "toolCallId": "call_abc123", "toolId": "tool_xyz789" } } } } ``` **Tool Approval Events** When a tool requires user approval, the `tool_call` event includes additional fields: ```json { "type": "data-operation", "data": { "type": "tool_call", "label": "Tool call: delete_user_data", "details": { "timestamp": 1726247200000, "subAgentId": "data-agent", "data": { "toolName": "delete_user_data", "input": { "userId": "user_123", "confirm": true }, "toolCallId": "call_approval_xyz789", "needsApproval": true, "conversationId": "conv_abc123" } } } } ``` **Approval-specific fields:** - `needsApproval: true` - Indicates this tool requires user approval before execution - `conversationId` - The conversation context for sending approval responses - Client should show approval UI and call `/api/tool-approvals` endpoint See [Tool Approvals](/tools/tool-approvals) for complete configuration and integration details. #### `tool_result` Emitted when a tool execution completes (success or failure). 
```json { "type": "data-operation", "data": { "type": "tool_result", "label": "Tool result: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "result": { "documents": [ { "title": "ML Best Practices Guide", "url": "/docs/ml-guide", "relevance": 0.95 } ] }, "toolCallId": "call_abc123", "toolId": "tool_xyz789", "duration": 1250 } } } } ``` **Error Example:** ```json { "type": "data-operation", "data": { "type": "tool_result", "label": "Tool error: search_documents", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "toolName": "search_documents", "result": null, "toolCallId": "call_abc123", "toolId": "tool_xyz789", "duration": 500, "error": "API rate limit exceeded" } } } } ``` ### Agent Interaction Events #### `transfer` Emitted when control is transferred from one agent to another. ```json { "type": "data-operation", "data": { "type": "transfer", "label": "Agent transfer: search-agent → analysis-agent", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "fromSubAgent": "search-agent", "targetAgent": "analysis-agent", "reason": "Specialized analysis required", "context": { "searchResults": "...", "userQuery": "..." } } } } } ``` #### `delegation_sent` Emitted when an agent delegates a task to another agent. ```json { "type": "data-operation", "data": { "type": "delegation_sent", "label": "Task delegated: coordinator-agent → search-agent", "details": { "timestamp": 1726247200000, "agentId": "coordinator-agent", "data": { "delegationId": "deleg_xyz789", "fromSubAgent": "coordinator-agent", "targetAgent": "search-agent", "taskDescription": "Search for information about machine learning", "context": { "priority": "high", "deadline": "2024-01-15T10:00:00Z" } } } } } ``` #### `delegation_returned` Emitted when a delegated task is completed and returned. 
```json { "type": "data-operation", "data": { "type": "delegation_returned", "label": "Task completed: search-agent → coordinator-agent", "details": { "timestamp": 1726247200000, "agentId": "search-agent", "data": { "delegationId": "deleg_xyz789", "fromSubAgent": "search-agent", "targetAgent": "coordinator-agent", "result": { "status": "completed", "documents": [...], "summary": "Found 5 relevant documents" } } } } } ``` ### Artifact Events #### `artifact_saved` Emitted when an agent creates or saves an artifact (document, chart, file, etc.). ```json { "type": "data-operation", "data": { "type": "artifact_saved", "label": "Artifact saved: chart", "details": { "timestamp": 1726247200000, "agentId": "analysis-agent", "data": { "artifactId": "art_123456", "taskId": "task_789", "toolCallId": "tool_abc123", "artifactType": "chart", "summaryData": { "title": "Sales Performance Q4 2023", "type": "bar_chart" }, "fullData": { "chartData": [...], "config": {...} }, "metadata": { "createdBy": "analysis-agent", "version": "1.0" } } } } } ``` ## System Events ### `agent_initializing` Emitted when the agent runtime is starting up. ```json { "type": "data-operation", "data": { "type": "agent_initializing", "details": { "sessionId": "session_abc123", "agentId": "graph_xyz789" } } } ``` ### `completion` Emitted when an agent completes its task. ```json { "type": "data-operation", "data": { "type": "completion", "details": { "agent": "search-agent", "iteration": 1 } } } ``` ### `error` Emitted when an error occurs during execution. 
```json { "type": "data-operation", "data": { "type": "error", "message": "Tool execution failed: API rate limit exceeded", "agent": "search-agent", "severity": "error", "code": "RATE_LIMIT_EXCEEDED", "timestamp": 1726247200000 } } ``` ## Example: Complete Request with Data Operations Here's a complete example showing a request with data operations enabled: ```bash curl -N \ -X POST "http://localhost:3003/api/chat" \ -H "Authorization: Bearer $INKEEP_API_KEY" \ -H "Content-Type: application/json" \ -H "x-emit-operations: true" \ -d '{ "messages": [ { "role": "user", "content": "Create a sales report for Q4" } ], "conversationId": "chat-1234" }' ``` **Response Stream:** ```text data: {"type":"agent_initializing","details":{"sessionId":"session_abc123","agentId":"graph_xyz789"}} data: {"type":"data-operation","data":{"type":"agent_reasoning","label":"Agent coordinator-agent reasoning through request","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"parts":[{"type":"text","content":"I need to create a sales report for Q4. This will require gathering data and generating a chart."}]}}}} data: {"type":"data-operation","data":{"type":"tool_call","label":"Tool call: get_sales_data","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"toolName":"get_sales_data","args":{"quarter":"Q4","year":"2023"},"toolCallId":"call_abc123","toolId":"tool_xyz789"}}}} data: {"type":"data-operation","data":{"type":"tool_result","label":"Tool result: get_sales_data","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"toolName":"get_sales_data","result":{"sales":[...]},"toolCallId":"call_abc123","toolId":"tool_xyz789","duration":850}}}} data: {"type":"data-artifact","data":{ ... 
}}

data: {"type":"data-operation","data":{"type":"artifact_saved","label":"Artifact saved: chart","details":{"timestamp":1726247200000,"agentId":"coordinator-agent","data":{"artifactId":"art_123456","artifactType":"chart","summaryData":{"title":"Q4 Sales Report"}}}}}

data: {"type":"text-start","id":"1726247200-abc123"}

data: {"type":"text-delta","id":"1726247200-abc123","delta":"I've created a comprehensive Q4 sales report..."}

data: {"type":"text-end","id":"1726247200-abc123"}

data: {"type":"completion","details":{"agent":"coordinator-agent","iteration":1}}
```

This provides complete visibility into the agent's execution process, from initialization through reasoning, tool execution, artifact creation, and final response generation.

# Add External Agents to your Agent
URL: /typescript-sdk/external-agents
Learn how to configure and use external agents using the A2A protocol

External agents let you integrate agents built outside of Inkeep (using other frameworks or platforms) into your Agent. They communicate over the A2A (Agent‑to‑Agent) protocol, so your Inkeep sub-agents can delegate tasks to them as if they were native. Note that Inkeep Agents are themselves available via an [A2A endpoint](/talk-to-your-agents/a2a) and can be used from other platforms.

Learn more about A2A:

- A2A overview on the Google Developers Blog: [A2A — a new era of agent interoperability](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
- A2A protocol site: [a2a.how](https://a2a.how/)

Example platforms that expose Agents in A2A format:

| Platform | Type | Description |
| --- | --- | --- |
| [LangGraph](https://docs.langchain.com/langgraph-platform/server-a2a) | Native | Built-in A2A endpoint & Agent Card for graph agents. |
| [Google Agent Development Kit (ADK)](https://google.github.io/adk-docs/a2a/) | Native | Official guide to build agents that expose/consume A2A.
| | [Microsoft Semantic Kernel](https://devblogs.microsoft.com/foundry/semantic-kernel-a2a-integration/) | Native | “SK now speaks A2A” with sample to expose compliant agents. | | [Pydantic AI](https://ai.pydantic.dev/a2a/) | Native | Convenience method to publish a Pydantic AI agent as an A2A server. | | [AWS Strands Agents SDK](https://strandsagents.com/latest/documentation/docs/user-guide/concepts/multi-agent/agent-to-agent/) | Native | A2A support in Strands for cross‑platform agent communication. | | [CrewAI](https://codelabs.developers.google.com/intro-a2a-purchasing-concierge) | With Adapter | Use the A2A Python SDK to serve a CrewAI agent over A2A. | | [LlamaIndex](https://a2aprotocol.ai/blog/a2a-samples-llama-index-file-chat-openrouter) | With Adapter | Example Workflows app exposed via A2A (agent + card). | ## Creating an External Agent Every external agent needs a unique identifier, name, description, base URL for A2A communication, and optional authentication configuration: ```typescript const technicalSupportAgent = externalAgent({ id: "technical-support-agent", name: "Technical Support Team", description: "External technical support specialists for complex issues", baseUrl: "https://api.example.com/agents/technical-support", // A2A endpoint }); ``` ## External Agent Relationships Agents can be configured to delegate tasks to external agents. 
```typescript
// Define the customer support sub-agent with delegation capabilities
const supportSubAgent = subAgent({
  id: "support-agent",
  name: "Customer Support Sub-Agent",
  description: "Handles customer inquiries and escalates technical issues",
  prompt: `You are a customer support sub-agent that handles general customer inquiries.`,
  canDelegateTo: () => [myExternalAgent],
});

// Create the customer support agent with external agent capabilities
const customerSupportAgent = agent({
  id: "customer-support-agent",
  name: "Customer Support System",
  description: "Handles customer inquiries and escalates to technical teams when needed",
  defaultSubAgent: supportSubAgent,
  subAgents: () => [supportSubAgent],
});
```

## External Agent Options

Configure authentication by providing a [credential reference](/typescript-sdk/tools/credentials).

```typescript
const myExternalAgent = externalAgent({
  // Required
  id: "external-support-agent",
  name: "External Support Agent", // Human-readable agent name
  description: "External AI agent for specialized support", // Agent's purpose
  baseUrl: "https://api.example.com/agents/support", // A2A endpoint URL

  // Optional - Credential Reference
  credentialReference: myCredentialReference,
});
```

When delegating to an external agent, you can specify headers to include with every request to the external agent. These headers can be dynamic variables that are [resolved at runtime](/typescript-sdk/headers).
```typescript const supportSubAgent = subAgent({ id: "support-agent", name: "Customer Support Sub-Agent", description: "Handles customer inquiries and escalates technical issues", prompt: `You are a customer support sub-agent that handles general customer inquiries.`, canDelegateTo: () => [myExternalAgent.with({ headers: { Authorization: "Bearer {{headers.Authorization}}" } })], }); ``` | Parameter | Type | Required | Description | | --------------------- | ------------------- | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `id` | string | Yes | Stable agent identifier used for consistency and persistence | | `name` | string | Yes | Human-readable name for the external agent | | `description` | string | Yes | Brief description of the agent's purpose and capabilities | | `baseUrl` | string | Yes | The A2A endpoint URL where the external agent can be reached | | `credentialReference` | CredentialReference | No | Reference to dynamic credentials for authentication. See [Credentials](/typescript-sdk/tools/credentials) for details | # Headers URL: /typescript-sdk/headers Pass dynamic context to your Agents via HTTP headers for personalized interactions ## Overview Headers allow you to pass request-specific values (like user IDs, authentication tokens, or organization metadata) to your Agent at runtime via HTTP headers. These values are validated, cached per conversation, and made available throughout your Agent system for: - **Context Fetchers**: Dynamic data retrieval based on request values - **External Tools**: Authentication and personalization for API calls - **Agent Prompts**: Personalized responses using context variables ## Passing context via headers Include context values as HTTP headers when calling your agent API. These headers are validated against your configured schema and cached for the conversation. 
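As an illustrative sketch, a TypeScript client can attach these context headers when calling the chat endpoint. The `buildChatRequest` helper below is invented for this example; the URL, header names, and payload shape mirror the curl request shown in this guide, and the API key is a placeholder for your own:

```typescript
// Sketch: build a chat request with context headers attached.
// Header names (user_id, auth_token, org_name) match the headers() schema in this guide;
// the endpoint and bearer token are placeholders for your deployment.
interface ContextHeaders {
  user_id: string;
  auth_token: string;
  org_name?: string;
}

function buildChatRequest(
  apiKey: string,
  ctx: ContextHeaders,
  message: string,
  conversationId: string
) {
  return {
    url: "http://localhost:3003/api/chat",
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      // Context headers: validated against your headers() schema and cached per conversation
      ...ctx,
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: message }],
      conversationId,
    }),
  };
}

// Send it with fetch (or any HTTP client), e.g.:
// const req = buildChatRequest(apiKey, { user_id: "u_123", auth_token: "t_abc", org_name: "Acme Corp" }, "What can you help me with?", "conv-123");
// await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```

Because the values are cached per `conversationId`, subsequent requests in the same conversation can reuse the context without resending every header.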
```bash
curl -N \
  -X POST "http://localhost:3003/api/chat" \
  -H "Authorization: Bearer $INKEEP_API_KEY" \
  -H "user_id: u_123" \
  -H "auth_token: t_abc" \
  -H "org_name: Acme Corp" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What can you help me with?"
      }
    ],
    "conversationId": "conv-123"
  }'
```

## Configuring headers

Define a schema for your headers and configure how it's used in your agent. You must include the headers schema in your context config.

```typescript
// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional(),
  }),
});

// Create a context fetcher that uses header values with type-safe templating
const userFetcher = fetchDefinition({
  id: "user-info",
  name: "User Information",
  trigger: "initialization",
  fetchConfig: {
    url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}`,
    method: "GET",
    headers: {
      Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
    },
    // Extract `user` from the response. For example, if the response is
    // { "user": { "name": "John Doe", "email": "john.doe@example.com" } },
    // the transform returns the user object.
    transform: "user",
  },
  responseSchema: z.object({
    user: z.object({
      name: z.string(),
      email: z.string(),
    }),
  }),
  defaultValue: "Guest User",
});

// Configure context for your agent
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
  contextVariables: {
    user: userFetcher,
  },
});

// Create a Sub Agent that uses context variables
const personalAssistant = subAgent({
  id: "personal-assistant",
  name: "Personal Assistant",
  description: "Personalized AI assistant",
  prompt: `You are a helpful assistant for ${personalAgentContext.toTemplate('user.name')} from ${personalAgentHeaders.toTemplate('org_name')}.
User ID: ${personalAgentHeaders.toTemplate('user_id')}

Provide personalized assistance based on their context.`,
});

// Attach context to your Agent
const myAgent = agent({
  id: "personal-agent",
  name: "Personal Assistant Agent",
  defaultSubAgent: personalAssistant,
  subAgents: () => [personalAssistant],
  contextConfig: personalAgentContext,
});
```

## Using headers in your agents

Header values can be used in your agent prompts and fetch definitions using JSONPath template syntax `{{headers.field_name}}`. You can use the headers schema's `toTemplate()` method for type-safe templating with autocomplete and validation.

### In Context Fetchers

Use header values to fetch dynamic data from external APIs:

```typescript
// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional(),
  }),
});

const userDataFetcher = fetchDefinition({
  id: "user-data",
  name: "User Data",
  fetchConfig: {
    url: `https://api.example.com/users/${personalAgentHeaders.toTemplate('user_id')}/profile`,
    headers: {
      Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
      "X-Organization": personalAgentHeaders.toTemplate('org_name'),
    },
    body: {
      includePreferences: true,
      userId: personalAgentHeaders.toTemplate('user_id'),
    },
  },
  responseSchema: z.object({
    name: z.string(),
    preferences: z.record(z.unknown()),
  }),
});

// Configure context for your Agent
// You must include the headers schema and fetchers in your context config.
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
  contextVariables: {
    user: userDataFetcher,
  },
});
```

### In Agent Prompts

Reference context directly in agent prompts for personalization using the context config's template method:

```typescript
// Create context config with both headers and fetchers
const userContext = contextConfig({
  headers: requestHeaders,
  contextVariables: {
    userName: userDataFetcher,
  },
});

const assistantAgent = subAgent({
  prompt: `You are an assistant for ${userContext.toTemplate('userName')} from ${requestHeaders.toTemplate('org_name')}.

User context:
- ID: ${requestHeaders.toTemplate('user_id')}
- Organization: ${requestHeaders.toTemplate('org_name')}

Provide help tailored to their organization's needs.`,
});
```

### In External Tools

Configure external agents or MCP servers with dynamic headers using the headers schema:

```typescript
// Define schema for expected headers (use lowercase keys)
const personalAgentHeaders = headers({
  schema: z.object({
    user_id: z.string(),
    auth_token: z.string(),
    org_name: z.string().optional(),
  }),
});

// Configure external agent
const externalService = externalAgent({
  id: "external-service",
  baseUrl: "https://external.api.com",
  headers: {
    Authorization: `Bearer ${personalAgentHeaders.toTemplate('auth_token')}`,
    "X-User-Context": personalAgentHeaders.toTemplate('user_id'),
    "X-Org": personalAgentHeaders.toTemplate('org_name'),
  },
});

// Configure context for your Agent with your headers schema.
const personalAgentContext = contextConfig({
  headers: personalAgentHeaders,
});
```

## Best practices

- **Use lowercase keys**: Always define schema properties in lowercase and reference them as lowercase in templates
- **Validate early**: Test your schema configuration with sample headers before deploying
- **Cache wisely**: Remember that context persists per conversation - design accordingly
- **Secure sensitive data**: For long-lived secrets, use the [Credentials](/typescript-sdk/tools/credentials) system instead of headers
- **Keep it minimal**: Only include context values that are actually needed by your agents

## Common use cases

### Multi-tenant applications

Pass tenant-specific configuration to customize agent behavior per customer:

```
// Headers
"tenant_id: acme-corp"
"tenant_plan: enterprise"
"tenant_features: advanced-analytics,custom-branding"
```

### User authentication

Provide user identity and session information for personalized interactions:

```
// Headers
"user_id: user_123"
"user_role: admin"
"session_token: sk_live_..."
```

### API gateway integration

Forward headers from your API gateway for consistent authentication:

```
// Headers
"x-api-key: your-api-key"
"x-request-id: req_abc123"
"x-client-version: 2.0.0"
```

## Troubleshooting

### Invalid headers errors

If you receive a 400 error about invalid headers:

1. Verify your schema matches the headers you're sending
2. Ensure all header keys are lowercase
3. Check that required fields are present
4. Validate the data types match your schema

### Context not persisting

If context values aren't available in subsequent requests:

1. Ensure you're using the same `conversationId` across requests
2. Verify headers are being sent correctly
3.
Check that your context config is properly attached to the Agent

## Related documentation

- [Context Fetchers](/typescript-sdk/context-fetchers) - Learn about fetching and caching external data
- [External Agents](/typescript-sdk/external-agents) - Configure external agent integrations
- [Credentials](/typescript-sdk/tools/credentials) - Manage secure credentials for your Agents

# Conversation Memory
URL: /typescript-sdk/memory
Understand how conversation history is managed and included in the context window for both main and delegated agents

## Overview

Conversation memory determines how much of the conversation history is included in the context window when your Agent processes a new message. The Inkeep Agent Framework automatically manages conversation history to balance context retention with token efficiency, with specialized handling for delegated agents and tool results.

## What's Included in Memory

The conversation history includes:

- **Chat messages**: User messages and agent responses
- **Tool results**: Results from tool executions, providing context about what actions were performed
- **Agent communications**: Messages exchanged between agents during transfers and delegations

## Default Limits

By default, the system includes conversation history using these limits:

- **50 messages**: Up to the 50 most recent messages from the conversation
- **8,000 tokens**: Maximum of 8,000 tokens from previous conversation messages

## Memory for Delegated Agents

When agents delegate tasks to other agents, memory is intelligently filtered:

### Main Agents

- See complete conversation history, including all tool results
- Maintain full context of delegated actions and their results

### Delegated Agents

- See conversation history filtered to their delegation scope
- Receive tool results from:
  - Their own tool executions
  - Top-level (non-delegated) tool executions
- Cannot see tool results from unrelated delegations

This ensures delegated agents have
sufficient context while preventing memory pollution from unrelated parallel delegations.

## Tool Results in Memory

Tool execution results are automatically included in conversation history, helping agents:

- Understand what actions have already been performed
- Avoid duplicate tool calls
- Build on previous results when transferring between agents

The tool results include both the input parameters and output results, formatted as:

```
## Tool: search_knowledge_base
**Input:** { "query": "API authentication methods" }
**Output:** { "results": [...] }
```

# Model Configuration
URL: /typescript-sdk/models
Configure AI models for your Agents and Sub Agents

Configure models at the **Project** (required), **Agent**, or **Sub Agent** level. Settings inherit down the hierarchy.

## Configuration Hierarchy

You **must configure at least the base model** at the project level:

```typescript
// inkeep.config.ts (defineConfig import omitted)
export default defineConfig({
  models: {
    base: {
      model: "anthropic/claude-sonnet-4-5",
      providerOptions: {
        temperature: 0.7,
        maxOutputTokens: 2048
      }
    }
  }
});
```

Override at the Agent or Sub Agent level:

```typescript
const myAgent = agent({
  models: {
    base: { model: "openai/gpt-4.1" } // Override project default
  }
});

const mySubAgent = subAgent({
  models: {
    structuredOutput: { model: "openai/gpt-4.1-mini" } // Override for JSON output
  }
});
```

## Model Types

| Type | Purpose | Fallback |
|------|---------|----------|
| `base` | Text generation and reasoning | **Required at project level** |
| `structuredOutput` | JSON/structured output only | Falls back to `base` |
| `summarizer` | Summaries and status updates | Falls back to `base` |

## Supported Models

| Provider | Example Models | API Key |
|----------|----------------|---------|
| **Anthropic** | `anthropic/claude-sonnet-4-5`<br/>`anthropic/claude-haiku-4-5` | `ANTHROPIC_API_KEY` |
| **OpenAI** | `openai/gpt-4.1`<br/>`openai/gpt-4.1-mini`<br/>`openai/gpt-4.1-nano`<br/>`openai/gpt-5`* | `OPENAI_API_KEY` |
| **Google** | `google/gemini-2.5-flash`<br/>`google/gemini-2.5-flash-lite` | `GOOGLE_GENERATIVE_AI_API_KEY` |
| **OpenRouter** | `openrouter/anthropic/claude-sonnet-4-0`<br/>`openrouter/meta-llama/llama-3.1-405b` | `OPENROUTER_API_KEY` |
| **Gateway** | `gateway/openai/gpt-4.1-mini` | `AI_GATEWAY_API_KEY` |
| **NVIDIA NIM** | `nim/nvidia/llama-3.3-nemotron-super-49b-v1.5`<br/>`nim/nvidia/nemotron-4-340b-instruct` | `NIM_API_KEY` |
| **Custom OpenAI-compatible** | `custom/my-custom-model`<br/>`custom/llama-3-custom` | `CUSTOM_LLM_API_KEY` |

### Pinned vs Unpinned Models

**Pinned models** include a specific date or version (e.g., `anthropic/claude-sonnet-4-20250514`) and always use that exact version. **Unpinned models** use generic identifiers (e.g., `anthropic/claude-sonnet-4-5`) and let the provider choose the latest version, which may change over time as providers update their models.

```typescript
models: {
  base: {
    model: "anthropic/claude-sonnet-4-5", // Unpinned - provider chooses version
    // vs
    // model: "anthropic/claude-sonnet-4-20250514" // Pinned - exact version
  }
}
```

The TypeScript SDK also provides constants for common models:

```typescript
models: {
  base: {
    model: Models.ANTHROPIC_CLAUDE_SONNET_4_5, // Type-safe constants
  }
}
```

## Provider Options

Inkeep Agents supports all [Vercel AI SDK provider options](https://ai-sdk.dev/providers/ai-sdk-providers/).

### Complete Examples

**Basic configuration:** **OpenAI with reasoning:** **Anthropic with thinking:** **Google with thinking:** **Custom OpenAI-compatible provider:**

## CLI Defaults

When using `inkeep init`, defaults are set based on your chosen provider:

| Provider | Base | Structured Output | Summarizer |
|----------|------|-------------------|------------|
| **Anthropic** | `claude-sonnet-4-5` | `claude-sonnet-4-5` | `claude-sonnet-4-5` |
| **OpenAI** | `gpt-4.1` | `gpt-4.1-mini` | `gpt-4.1-nano` |
| **Google** | `gemini-2.5-flash` | `gemini-2.5-flash-lite` | `gemini-2.5-flash-lite` |

# Project Management
URL: /typescript-sdk/project-management
Learn how to manage projects in the Inkeep Agent Framework

# Workspace Configuration
URL: /typescript-sdk/workspace-configuration
Learn how to configure your workspace

## Overview

The `inkeep.config.ts` file at the workspace root defines settings for all projects in this workspace. See [Project Management](/typescript-sdk/project-management#workspace-layout) for where this file should be placed.
```typescript
import 'dotenv/config';
import { defineConfig } from '@inkeep/agents-cli/config'; // import path assumed; adjust if your CLI package differs

export default defineConfig({
  tenantId: 'my-company',
  agentsManageApi: {
    url: 'http://localhost:3002',
    apiKey: process.env.MANAGE_API_KEY, // Optional
  },
  agentsRunApi: {
    url: 'http://localhost:3003',
    apiKey: process.env.RUN_API_KEY, // Optional
  },
  outputDirectory: './output',
});
```

## Configuration hierarchy

Settings in `inkeep.config.ts` can be overridden from several sources, which take precedence in this order (highest to lowest priority):

```mermaid
graph LR
  A[CLI flags] --> B[Environment variables]
  B --> C[Config file values]
  C --> D[Built-in defaults]
```

### 1. CLI Flags

Command-line flags override all other settings:

```bash
# Override API URL
inkeep push --agents-manage-api-url https://api.production.com

# Override config file location
inkeep pull --config /path/to/custom.config.ts
```

### 2. Environment Variables

Environment variables override config file values:

```bash
# Set via environment
export INKEEP_TENANT_ID=my-tenant
export INKEEP_AGENTS_MANAGE_API_URL=https://api.production.com

# Now CLI commands use these values
inkeep push
```

**Supported Environment Variables:**

| Variable | Config Equivalent | Description |
|----------|-------------------|-------------|
| `INKEEP_TENANT_ID` | `tenantId` | Tenant identifier |
| `INKEEP_AGENTS_MANAGE_API_URL` | `agentsManageApiUrl` | Management API URL |
| `INKEEP_AGENTS_RUN_API_URL` | `agentsRunApiUrl` | Runtime API URL |

### 3. Config File Values

Values explicitly set in your `inkeep.config.ts`:

```typescript
export default defineConfig({ // defineConfig import omitted
  tenantId: 'my-tenant',
  agentsManageApi: {
    url: 'http://localhost:3002',
  },
  agentsRunApi: {
    url: 'http://localhost:3003',
  },
});
```

### 4.
Built-in Defaults

Default values used when not specified elsewhere:

```typescript
const defaults = {
  agentsManageApiUrl: 'http://localhost:3002',
  agentsRunApiUrl: 'http://localhost:3003',
};
```

## Working with multiple configurations

### Dynamic configuration

You can use environment-based logic in your workspace config:

```typescript
// inkeep.config.ts (defineConfig import omitted)
const isDevelopment = process.env.NODE_ENV === 'development';

export default defineConfig({
  tenantId: process.env.TENANT_ID || 'default-tenant',
  agentsManageApiUrl: isDevelopment
    ? 'http://localhost:3002'
    : 'https://api.production.com',
});
```

### Multiple configuration files

For workspaces requiring different configurations:

```typescript
// inkeep.config.ts (defineConfig import omitted)
export default defineConfig({
  tenantId: 'production-tenant',
  agentsManageApiUrl: 'https://api.production.com',
});
```

```typescript
// inkeep.dev.config.ts (defineConfig import omitted)
export default defineConfig({
  tenantId: 'dev-tenant',
  agentsManageApiUrl: 'http://localhost:3002',
});
```

```bash
# Use development config (specify from any project directory)
inkeep push --config ../inkeep.dev.config.ts

# or with absolute path
inkeep push --config /path/to/workspace/inkeep.dev.config.ts
```

# Manage API Authentication
URL: /api-reference/authentication/manage-api
Authentication modes for Manage API

The Manage API (`agents-manage-api`) has two authentication modes:

### Secured Mode

When `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` is set:

- Include the secret as a Bearer token in requests
- Recommended for production environments

### Open Mode

When `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` is not set:

- No authentication required
- Useful for local development

# Run API Authentication
URL: /api-reference/authentication/run-api
Authentication modes for Run API

The Run API (`agents-run-api`) has three authentication modes, depending on your environment configuration:

### Development Mode

When `ENVIRONMENT=development`:

- No API key required
- Must include scope headers:
  - `x-inkeep-tenant-id`
  - `x-inkeep-project-id`
  - `x-inkeep-agent-id`

```bash
curl -H "x-inkeep-tenant-id:
tenant-123" \
  -H "x-inkeep-project-id: project-456" \
  -H "x-inkeep-agent-id: agent-789" \
  http://localhost:3003/v1/chat/completions
```

### Standard Mode

When `ENVIRONMENT` is not `development` and `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` is not set:

- Use API keys created from the Manage UI
- No additional headers required (scope is encoded in the key)

```bash
curl -H "Authorization: Bearer sk_live_abc123..." \
  http://localhost:3003/v1/chat/completions
```

### Bypass Mode

When `ENVIRONMENT` is not `development` and `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` is set:

**Option 1: Bypass Secret**

- Use the bypass secret as the Bearer token
- Must include scope headers

```bash
curl -H "Authorization: Bearer YOUR_BYPASS_SECRET" \
  -H "x-inkeep-tenant-id: tenant-123" \
  -H "x-inkeep-project-id: project-456" \
  -H "x-inkeep-agent-id: agent-789" \
  https://run-api.example.com/v1/chat/completions
```

**Option 2: Standard API keys remain valid**

- Use API keys from the Manage UI

```bash
curl -H "Authorization: Bearer sk_live_xyz789..." \
  https://run-api.example.com/v1/chat/completions
```

## Running Multiple Instances

You can run multiple Run API instances with different auth configurations. When deploying Inkeep Agents to production, it is common to expose only the Run API deployment in Standard Mode and keep all other services internal.

```bash
# Instance 1: Port 3003 with bypass secret (intended for internal use)
PORT=3003 INKEEP_AGENTS_RUN_API_BYPASS_SECRET=secret123 pnpm dev

# Instance 2: Port 3004 without bypass secret (intended for external use)
PORT=3004 pnpm dev
```

## Security Best Practices

1. **Production**: Always use Standard API keys
2. **Bypass Secret**: Use for internal services only
3.
**API Keys**: Rotate regularly and set expiration dates

# Environment Configuration
URL: /community/contributing/environment-configuration
How to configure the environment variables for the Inkeep Agent Framework

## Overview

The Inkeep Agents framework uses a **centralized environment configuration**. This approach provides a single source of truth for all environment variables across the monorepo, eliminating duplication and simplifying configuration management.

## Configuration Structure

### Single Root Configuration

All packages in the monorepo reference a **single `.env` file** at the repository root. This differs from the typical approach of separate `.env` files per package.

```
agents-4/
├── .env             # Main configuration (gitignored)
├── .env.example     # Template with all variables
└── packages/
    └── agents-core/
        └── src/env.ts   # Centralized env loader
```

### Loading Priority

Environment variables are loaded in the following order (highest priority first):

1. **`/package-name/.env`** - Package-specific configuration
2. **`.env`** - Main configuration file
3. **`~/.inkeep/config`** - User-global settings (shared across all repo copies)
4. **`.env.example`** - Default values

This hierarchy allows for flexible configuration management across different scenarios. If you have a `.env` or `.env.local` in a package directory, it will override the root `.env` or `.env.local` for that package.

## Use Cases

### 1. Basic Local Development

For simple local development with a single repository copy:

```bash
# Copy the template
cp .env.example .env

# Edit .env with your configuration
vim .env

# Start development
pnpm dev
```

## Troubleshooting

### Environment variables not loading

1. Check the loading order - higher-priority sources override lower-priority ones
2. Verify file paths are correct
3.
Ensure `packages/agents-core` is built: `pnpm --filter @inkeep/agents-core build`

### Missing variables in production

Ensure all required variables are set in your deployment environment. The application will fail fast if critical variables are missing.

### Database connection issues

- For SQLite: Make sure you are using an absolute path to the db file

# Contribute to Inkeep Open Source project
URL: /community/contributing/overview
How to contribute to the Inkeep Agent Framework

# Making a Contribution

Thank you for your interest in contributing to the Agent Framework! This document provides guidelines and information for contributors.

# Launch the Visual Builder

### Prerequisites

Before getting started, ensure you have the following installed on your system:

- [Node.js](https://nodejs.org/en/download/) version 22 or higher
- [Docker](https://docs.docker.com/get-docker/)
- [pnpm](https://pnpm.io/installation) version 10 or higher

### Step 1: Clone the repository

```
git clone https://github.com/inkeep/agents.git
cd agents
```

### Step 2: Run the setup script

For first-time setup, run:

```bash
pnpm setup-dev
```

This will:

1. Create `.env` from the template
2. Set up user-global config at `~/.inkeep/config`
3. Install dependencies
4. Initialize the database, starting the Postgres Docker container and applying migrations

### Step 3: Add API keys

Add API keys for the AI providers you want to use to the root `.env`. You must have at least one AI provider configured.

```dotenv
ANTHROPIC_API_KEY=sk-ant-xyz789
OPENAI_API_KEY=sk-xxx
GOOGLE_GENERATIVE_AI_API_KEY=sk-xxx
```

### Step 4: Add optional service keys

Add your SigNoz API key to enable the traces feature (optional).

```dotenv
SIGNOZ_API_KEY=sk-xxx
```

Add your Nango secret key to enable the Nango credential store (optional).

```dotenv
NANGO_SECRET_KEY=sk-xxx
```

### Step 5: Run the agent framework

```bash
pnpm dev
```

### Step 6: Start building!

Open `http://localhost:3000` in the browser and start building agents.
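If you script your own environment checks, the "at least one AI provider" requirement above can be expressed as a small helper. This is a hypothetical sketch, not part of the framework; the variable names are the ones used in this guide:

```typescript
// Hypothetical setup check (not shipped with the framework):
// confirm at least one provider key from this guide is set before `pnpm dev`.
const PROVIDER_KEYS = [
  "ANTHROPIC_API_KEY",
  "OPENAI_API_KEY",
  "GOOGLE_GENERATIVE_AI_API_KEY",
];

function hasAiProvider(env: Record<string, string | undefined>): boolean {
  // A key only counts if it is present and non-blank.
  return PROVIDER_KEYS.some((key) => (env[key] ?? "").trim().length > 0);
}

if (!hasAiProvider(process.env)) {
  console.error(`Set at least one of: ${PROVIDER_KEYS.join(", ")}`);
}
```

Running it with none of the keys set prints the reminder; with any one key set, it stays silent.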
# Push your project

A key advantage of the Inkeep agent framework is its seamless code-to-visual workflow: define your agents programmatically, push them to the Visual Builder, and continue developing with the intuitive drag-and-drop interface. Follow the steps below to push your project using the [Inkeep CLI](/typescript-sdk/cli-reference).

### Step 1: Install the Inkeep CLI

```bash
pnpm install -g @inkeep/agents-cli
```

### Step 2: Push the project

```bash
cd agents-cookbook/template-projects/weather-project
inkeep push
```

### Step 3: Observe the Agent in the Visual Builder

# Set up live traces

### Step 1: Launch the Docker containers

```bash
cd deploy/docker
docker compose up -d
```

### Step 2: Fetch the SigNoz API key

Open `http://localhost:3080` in the browser. Go to **Settings** → **Workspace Settings** → **API Keys** and copy the API key.

### Step 3: Configure the environment variable

Create a `.env` file in the `agents-manage-ui/` directory with the following variable:

```dotenv
SIGNOZ_API_KEY=your-signoz-api-key-here
```

### Step 4: View your live traces

Refresh the live traces panel on the right to see your agents in action.

# Set up credentials

### Step 1: Create the .env file and generate an encryption key

```bash
cp deploy/docker/.env.nango.example deploy/docker/.env && \
encryption_key=$(openssl rand -base64 32) && \
sed -i '' "s|REPLACE_WITH_BASE64_256BIT_ENCRYPTION_KEY|$encryption_key|" deploy/docker/.env && \
echo "Docker environment file created with auto-generated encryption key"
```

### Step 2: Restart the containers

```bash
cd deploy/docker
docker compose up -d
```

### Step 3: Get your Nango API key

Open the Nango Dashboard at `http://localhost:3050`, navigate to **Environment Settings** → **API Keys**, and copy the API key.

### Step 4: Configure environment variables

Navigate back to the root directory and paste the below command.
Enter your Nango API key when prompted: ```bash cd ../../ printf "Enter your Nango API key: " && read key && sed -i '' "s|^NANGO_SECRET_KEY=.*|NANGO_SECRET_KEY=$key|" agents-manage-api/.env agents-run-api/.env agents-manage-ui/.env && echo "Application files updated with Nango API key" ``` ### Step 5: Start creating credentials! Navigate to the Credentials tab in the left sidebar and click "Create credential". # Development Workflow ### Git Hooks This project uses Husky for Git hooks to maintain code quality. The hooks are automatically installed when you run `pnpm install`. #### Pre-commit Hook The pre-commit hook runs the following checks before allowing a commit: 1. **Type checking** - Ensures type safety across all packages 2. **Tests** - Runs the test suite ##### Bypassing Checks While we recommend running all checks, there are legitimate cases where you might need to bypass them: **Skip typecheck only (tests still run):** ```bash SKIP_TYPECHECK=1 git commit -m "WIP: debugging issue" ``` **Skip all hooks (use sparingly):** ```bash git commit --no-verify -m "emergency: hotfix for production" ``` > **Note:** Use these bypass mechanisms sparingly. 
They're intended for: > > - Work-in-progress commits that you'll fix before pushing > - Emergency fixes where speed is critical > - Commits that only touch non-code files (though hooks are smart enough to handle this) ### Code Quality #### Type Checking Run type checking across all packages: ```bash pnpm typecheck ``` #### Linting Run the linter: ```bash pnpm lint ``` Format code automatically: ```bash pnpm format ``` #### Testing Run tests: ```bash pnpm test # Run all tests pnpm test:watch # Run tests in watch mode pnpm test:coverage # Run tests with coverage report ``` ### Building Build all packages: ```bash pnpm build ``` # CLI Development ## Running the CLI Locally When developing the CLI, you can run the local version directly without global installation: ```bash # Build the CLI first cd agents-cli pnpm build # Run the local CLI from the repository root node agents-cli/dist/index.js --version # Or use the convenience script (if available) ./scripts/inkeep-local.sh --version ``` ## Testing CLI Changes 1. **Build the CLI after making changes:** ```bash cd agents-cli pnpm build ``` 2. **Test commands locally:** ```bash # From repository root node agents-cli/dist/index.js push examples/agent-configurations node agents-cli/dist/index.js chat ``` 3. **Run CLI tests:** ```bash cd agents-cli pnpm test ``` ## Switching Between Local and Published CLI During development, you may need to test both the local development version and the published npm package: ```bash # Use local development version node /path/to/agents/agents-cli/dist/index.js --version # Use globally installed published version inkeep --version ``` For detailed information about switching between versions, see the `INKEEP_CLI_SWITCHING.md` file in the repository root. 
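If you alternate between the two versions often, the switching pattern above can be wrapped in a small helper. This is a hypothetical convenience sketch, not something shipped in the repo:

```typescript
// Hypothetical helper: prefer a local CLI build when it exists, otherwise
// fall back to the globally installed `inkeep` binary.
import { existsSync } from "node:fs";

function resolveCliCommand(localBuildPath: string): string {
  return existsSync(localBuildPath) ? `node ${localBuildPath}` : "inkeep";
}

// e.g. resolveCliCommand("agents-cli/dist/index.js") from the repository root
```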
# Commit Messages We follow conventional commit format: ``` type(scope): description [optional body] [optional footer] ``` Types: - `feat`: New feature - `fix`: Bug fix - `docs`: Documentation changes - `style`: Code style changes (formatting, etc.) - `refactor`: Code refactoring - `test`: Test changes - `chore`: Build process or auxiliary tool changes # Pull Requests 1. Fork the repository 2. Create a feature branch (`git checkout -b feat/amazing-feature`) 3. Make your changes 4. Ensure all checks pass (`pnpm typecheck && pnpm test`) 5. Commit your changes (following commit message guidelines) 6. Push to your fork 7. Open a pull request ### PR Guidelines - Keep PRs focused on a single feature or fix - Update tests for any behavior changes - Update documentation as needed - Ensure CI passes before requesting review # Continuous Integration Our CI pipeline runs on all pull requests and includes: - Type checking (`pnpm typecheck`) - Tests (`pnpm test`) - Build verification (`pnpm build`) These checks must pass before a PR can be merged. The same checks run in pre-commit hooks to catch issues early. # Questions? If you have questions about contributing, please: 1. Check existing issues and discussions 2. Open a new issue if your question isn't addressed 3. Reach out to the maintainers Thank you for contributing! # Data Layer constraints on Project relationships URL: /community/contributing/project-constraints How the Inkeep Agent Framework ensures data integrity with project constraints # Project Constraints and Validation The Inkeep Agent Framework implements multiple layers of validation to ensure that all agents and resources are associated with valid projects. 
## Database-Level Constraints

### Foreign Key Constraints

All tables that reference projects include foreign key constraints to ensure referential integrity:

```sql
FOREIGN KEY (tenant_id, project_id)
  REFERENCES projects(tenant_id, id)
  ON DELETE CASCADE
```

This ensures that:

- No data can be inserted for non-existent projects
- Deleting a project cascades to remove all associated resources
- Data integrity is maintained at the database level

### Tables with Project Constraints

The following tables have foreign key constraints to the `projects` table:

1. `agent` - Agent definitions (top-level Agents)
2. `sub_agents` - Sub Agent configurations
3. `sub_agent_relations` - Relationships between Sub Agents
4. `tools` - Tool configurations
5. `context_configs` - Context configurations
6. `external_agents` - External agent references
7. `conversations` - Chat conversations
8. `messages` - Chat messages
9. `tasks` - Task records
10. And more...

## Runtime Validation

### CLI Validation

The Inkeep CLI validates project existence before pushing agents:

```typescript
// CLI checks if project exists
const existingProject = await getProject(dbClient)({
  scopes: { tenantId, projectId },
});

if (!existingProject) {
  // Prompt user to create project
  // ...
}
```

### Data Access Layer Validation

The core package provides validation utilities for runtime checks:

```typescript
// Validate before any operation
await validateProjectExists(db, tenantId, projectId);
```

### Wrapped Operations

Data access functions can be wrapped with automatic validation:

```typescript
const validatedCreateAgent = withProjectValidation(db, createAgent);
// Now automatically validates project before creating agent
```

## Implementation Details

### Schema Definition

```typescript
// Table definition; the `sqliteTable` wrapper and export name are assumed here
export const agent = sqliteTable(
  "agent",
  {
    tenantId: text("tenant_id").notNull(),
    projectId: text("project_id").notNull(),
    // ...
other columns }, (table) => [ primaryKey({ columns: [table.tenantId, table.projectId, table.id] }), foreignKey({ columns: [table.tenantId, table.projectId], foreignColumns: [projects.tenantId, projects.id], name: "agent_project_fk", }), ] ); ``` ### Enabling Foreign Keys in SQLite SQLite requires explicit enabling of foreign key constraints: ```sql PRAGMA foreign_keys = ON; ``` This should be set when creating the database connection: ```typescript const client = createClient({ url: dbUrl, authToken: authToken, }); // Enable foreign keys await client.execute("PRAGMA foreign_keys = ON"); ``` ## Error Handling When a constraint violation occurs, appropriate error messages guide users: ### CLI Error ``` ⚠ Project "my-project" does not exist ? Would you like to create it? (Y/n) ``` ### Database Error ``` Error: Project with ID "my-project" does not exist for tenant "my-tenant". Please create the project first before adding resources to it. ``` ## Best Practices 1. **Always validate at multiple levels**: Database constraints, runtime validation, and UI validation 2. **Provide clear error messages**: Help users understand what went wrong and how to fix it 3. **Offer solutions**: When a project doesn't exist, offer to create it 4. **Use transactions**: Ensure atomicity when creating projects and related resources 5. **Test constraints**: Verify that constraints work as expected in tests ## Migration Guide For existing databases without constraints: 1. **Backup your database** before applying migrations 2. **Check for orphaned data**: Find resources referencing non-existent projects 3. **Clean up orphaned data** or create missing projects 4. **Apply foreign key constraints** using migrations 5. 
**Enable foreign key enforcement** in your database connection

```sql
-- Find orphaned agents
SELECT a.* FROM agent a
LEFT JOIN projects p
  ON a.tenant_id = p.tenant_id
  AND a.project_id = p.id
WHERE p.id IS NULL;
```

## Benefits

- **Data Integrity**: Prevents orphaned data and maintains consistency
- **Clear User Experience**: Users are guided to create projects when needed
- **Easier Debugging**: Constraint violations are caught early
- **Simplified Cleanup**: Cascading deletes remove all related data
- **Better Documentation**: Constraints document relationships in the schema

# Spans and Traces
URL: /community/contributing/spans
OpenTelemetry spans for distributed tracing and observability in the Inkeep Agent Framework

## Overview

The Inkeep Agent Framework uses OpenTelemetry for distributed tracing and observability. Spans provide detailed visibility into the execution flow of agents, context resolution, tool execution, and other framework operations.

## Getting Started with Spans

### 1. Import Required Dependencies

```typescript
// Import paths are illustrative; adjust them to your module layout
import type { Span } from '@opentelemetry/api';
import { getTracer, setSpanWithError } from '../tracer'; // framework utilities
```

### 2.
Get the Tracer ```typescript // Use the centralized tracer utility const tracer = getTracer("your-service-name"); ``` ## Creating and Using Spans ```typescript return tracer.startActiveSpan( "context.resolve", { attributes: { "context.config_id": contextConfig.id, "context.trigger_event": options.triggerEvent, }, }, async (span: Span) => { try { // Your operation logic here return result; } catch (error) { // Use setSpanWithError for consistent error handling setSpanWithError(span, error); throw error; } } ); ``` ## Setting Span Attributes ### Basic Attributes ```typescript span.setAttributes({ "user.id": userId, "request.method": "POST", }); ``` ## Adding Events to Spans ### Recording Important Milestones ```typescript // Add events for significant operations span.addEvent("context.fetch_started", { definitionId: definition.id, url: definition.fetchConfig.url, }); ``` ### Error Events ```typescript span.addEvent("error.validation_failed", { definitionId: definition.id, error_type: "json_schema_validation", error_details: errorMessage, }); ``` ## Error Handling and Status ### Using setSpanWithError Utility The framework provides a convenient `setSpanWithError` utility function that handles error recording and status setting: ```typescript try { // Your operation } catch (error) { // Use the setSpanWithError utility for consistent error handling setSpanWithError(span, error); throw error; } ``` ## Best Practices ### 1. Consistent Naming Convention The span naming convention follows a hierarchical structure that mirrors your code organization. ```typescript // Format: 'class.function' // Use descriptive span names that follow a hierarchical structure // Agent operations "agent.generate"; "agent.tool_execute"; "agent.transfer"; // Context operations "context.resolve"; "context.fetch"; ``` #### Naming Rules 1. **Class First**: Start with the class/module name (e.g., `agent`, `context`, `tool`) 2. 
**Function Second**: Follow with the specific function/method (e.g., `generate`, `resolve`)
3. **Use Underscores**: For multi-word functions, use underscores (e.g., `tool_execute`, `cache_lookup`)
4. **Consistent Casing**: Use lowercase with underscores for consistency

## Configuration and Setup

### Environment Variables

```bash
# OpenTelemetry configuration
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317
```

### Instrumentation Setup

The framework automatically sets up OpenTelemetry instrumentation in `src/instrumentation.ts`:

## Examples in the Codebase

### Agent Operations

See `src/agents/Agent.ts` for span usage in agent generation and tool execution.

**Example from Agent.ts:**

```typescript
// Class: Agent
// Function: generate
return tracer.startActiveSpan(
  "agent.generate",
  {
    attributes: {
      "agent.id": this.id,
      "agent.name": this.name,
    },
  },
  async (span: Span) => {
    // ... implementation
  }
);
```

## Summary

Spans provide powerful observability into your Inkeep Agent Framework operations. By following these patterns:

1. **Use `getTracer()`** for consistent tracing
2. **Use consistent naming**
3. **Set meaningful attributes** for searchability
4. **Handle errors properly** using `setSpanWithError` for consistent error handling
5. **Use `startActiveSpan`** for automatic lifecycle management

This will give you comprehensive visibility into your agent operations, making debugging and performance optimization much easier.

# Deploy to AWS EC2
URL: /deployment/aws-ec2
Deploy to AWS EC2 with Docker Compose

## Create a VM Instance

- Go to the [EC2 console](https://console.aws.amazon.com/ec2/v2/home).
- Launch an instance.
- Select an Amazon Machine Image (AMI).
- Recommended size is at least `t2.large` (2 vCPU, 8 GiB Memory).
- Click "Edit" in the "Network settings" section. Set up inbound security group rules for (TCP, 3000, 0.0.0.0/0), (TCP, 3002-3003, 0.0.0.0/0), (TCP, 3050-3051, 0.0.0.0/0), and (TCP, 3080, 0.0.0.0/0).
These are the ports exposed by the Inkeep services.
- Enable auto-assign public IP.
- Increase the size of storage to 30 GiB.

## Install Docker Compose

1. SSH into the EC2 instance
2. Install packages

```bash
sudo dnf update
sudo dnf install -y git
sudo dnf install -y docker
```

```bash
sudo mkdir -p /usr/libexec/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m) -o /usr/libexec/docker/cli-plugins/docker-compose
sudo chmod +x /usr/libexec/docker/cli-plugins/docker-compose
```

## Deploy SigNoz and Nango

Clone this repo, which includes Docker files with SigNoz and Nango:

```bash
git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services
cd inkeep-external-services
```

Run this command to autogenerate a `.env` file:

```bash
cp .env.example .env && \
encryption_key=$(openssl rand -base64 32) && \
tmp_file=$(mktemp) && \
sed "s||$encryption_key|" .env > "$tmp_file" && \
mv "$tmp_file" .env && \
echo "Docker environment file created with auto-generated encryption key"
```

Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited.

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.

```bash
NANGO_ENCRYPTION_KEY=

# Replace these with your server's address in production!
NANGO_SERVER_URL=http://:3050
NANGO_PUBLIC_CONNECT_URL=http://:3051

# Modify these in production environments!
NANGO_DASHBOARD_USERNAME=admin@example.com
NANGO_DASHBOARD_PASSWORD=adminADMIN!@12
```

Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger:

```bash
docker compose up -d
```

This may take up to 5 minutes to start.
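Once `docker compose up -d` returns, `docker compose ps` shows whether each container is running. As an optional sanity check, you can also probe the exposed ports from the instance itself — a minimal sketch, assuming the ports from the compose setup above (use `localhost` on the instance, or your server's address remotely):

```shell
# Probe the SigNoz (3080) and Nango (3050/3051) HTTP ports.
# An HTTP code of 000 means nothing is listening yet; during the
# roughly five-minute startup window, simply retry.
for port in 3080 3050 3051; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "http://localhost:${port}" || true)
  echo "port ${port}: HTTP ${code:-000}"
done
```

A `200` (or a redirect code such as `302`) indicates the dashboard is reachable.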
### Retrieve your SigNoz and Nango API Keys

To get your SigNoz API key `SIGNOZ_API_KEY`:

- Open SigNoz in a browser at `http://:3080`
- Navigate to Settings → Account Settings → API Keys → New Key
- Choose a role; Viewer is sufficient for observability
- Set the expiration field to "No Expiry" to prevent the key from expiring

To get your Nango secret key `NANGO_SECRET_KEY`:

- Open Nango in a browser at `http://:3050`
- Nango auto-creates two environments, Prod and Dev. Select the one you will use.
- Navigate to Environment Settings to find the secret key

## Deploy the Inkeep Agent Framework

From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework:

```bash
mkdir inkeep && cd inkeep
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example
```

Generate a `.env` file from the example:

```bash
cp .env.docker.example .env
```

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.
```bash
# Change to "production" if deploying to production
ENVIRONMENT=production

# AI Provider Keys (you need at least one)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=

# Nango
NANGO_SECRET_KEY=

# SigNoz
SIGNOZ_API_KEY=

# Uncomment and set each of these with (openssl rand -hex 32)
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
INKEEP_AGENTS_JWT_SIGNING_SECRET=

# Uncomment and set these for the Manage UI at http://:3000
PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002
PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003
PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
PUBLIC_NANGO_SERVER_URL=http://:3050
PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051
PUBLIC_SIGNOZ_URL=http://:3080

# Uncomment and set these to access the Manage UI at http://:3000
INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com
INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12
```

Run with Docker:

```bash
docker compose up -d
```

Then open `http://:3000` in a browser!

# Build a Custom Docker Image

URL: /deployment/docker-build

How to build your own Docker images

If you created a project from the quick start, the template includes a set of Dockerfiles and `docker-compose.yml` files. To build and run locally:

```sh
docker compose build
docker compose up -d
```

# Deploy using Docker (Local Development)

URL: /deployment/docker-local

## Install Docker

- [Install Docker Desktop](https://www.docker.com/products/docker-desktop/)

## Deploy SigNoz and Nango

For full functionality, the **Inkeep Agent Framework** requires [**SigNoz**](https://signoz.io/) and [**Nango**](https://www.nango.dev/). You can sign up for a cloud-hosted account with them directly, or you can self-host them. Follow these instructions to self-host both SigNoz and Nango.
Clone this repo, which includes Docker files with SigNoz and Nango:

```bash
git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services
cd inkeep-external-services
```

Run this command to autogenerate a `.env` file:

```bash
cp .env.example .env && \
encryption_key=$(openssl rand -base64 32) && \
tmp_file=$(mktemp) && \
sed "s||$encryption_key|" .env > "$tmp_file" && \
mv "$tmp_file" .env && \
echo "Docker environment file created with auto-generated encryption key"
```

Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited.

Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger:

```bash
docker compose up -d
```

This may take up to 5 minutes to start.

### Retrieve your SigNoz and Nango API Keys

To get your SigNoz API key `SIGNOZ_API_KEY`:

- Open SigNoz in a browser at `http://localhost:3080`
- Navigate to Settings → Account Settings → API Keys → New Key
- Choose a role; Viewer is sufficient for observability
- Set the expiration field to "No Expiry" to prevent the key from expiring

To get your Nango secret key `NANGO_SECRET_KEY`:

- Open Nango in a browser at `http://localhost:3050`
- Nango auto-creates two environments, Prod and Dev. Select the one you will use.
- Navigate to Environment Settings to find the secret key

## Deploy the Inkeep Agent Framework

From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework:

```bash
mkdir inkeep && cd inkeep
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example
```

Generate a `.env` file from the example:

```bash
cp .env.docker.example .env
```

Here's an overview of the important environment variables when deploying. Make sure to replace all of these in the `.env` file.
```bash
ENVIRONMENT=development

# AI Provider Keys (you need at least one)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=

# Nango
NANGO_SECRET_KEY=

# SigNoz
SIGNOZ_API_KEY=

# Default username and password for Manage UI (http://localhost:3000)
# INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com
# INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12
```

Run with Docker:

```bash
docker compose up -d
```

Then open http://localhost:3000 in a browser!

- Manage UI (http://localhost:3000)
- Manage API Docs (http://localhost:3002/docs)
- Run API Docs (http://localhost:3003/docs)
- Nango Dashboard (http://localhost:3050)
- SigNoz Dashboard (http://localhost:3080)

# Deploy to GCP Cloud Run

URL: /deployment/gcp-cloud-run

Deploy to GCP Cloud Run with Docker Containers

## Coming soon

# Deploy to GCP Compute Engine

URL: /deployment/gcp-compute-engine

Deploy to GCP Compute Engine with Docker Compose

## Create a VM Instance

- Go to [Compute Engine](https://console.cloud.google.com/compute/instances) in your GCP project.
- Create an instance; a recommended size is at least `e2-standard-2` (2 vCPU, 1 core, 8 GB memory).
- Use Debian GNU/Linux 12 (bookworm).
- Increase the size of the boot disk to 30 GB.
- Allow HTTP traffic.
- Allow ingress traffic from source IPv4 ranges `0.0.0.0/0` to TCP ports: `3000, 3002, 3003, 3050, 3051, 3080`. These are the ports exposed by the Inkeep services.
- Retrieve an external IP address (if applicable, set up a static IP or a load balancer).

## Install Docker Compose

1. [SSH into the VM](https://cloud.google.com/compute/docs/connect/standard-ssh)
2. [Set up Docker's apt repository](https://docs.docker.com/engine/install/debian/#install-using-the-repository)

```bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```

3. Install the Docker packages

```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

4. Grant permissions

```bash
sudo usermod -aG docker $USER
newgrp docker
```

## Deploy SigNoz and Nango

Clone this repo, which includes Docker files with SigNoz and Nango:

```bash
git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services
cd inkeep-external-services
```

Run this command to autogenerate a `.env` file:

```bash
cp .env.example .env && \
encryption_key=$(openssl rand -base64 32) && \
tmp_file=$(mktemp) && \
sed "s||$encryption_key|" .env > "$tmp_file" && \
mv "$tmp_file" .env && \
echo "Docker environment file created with auto-generated encryption key"
```

Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited.

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.

```bash
NANGO_ENCRYPTION_KEY=

# Replace these with your server's address in production!
NANGO_SERVER_URL=http://:3050
NANGO_PUBLIC_CONNECT_URL=http://:3051

# Modify these in production environments!
NANGO_DASHBOARD_USERNAME=admin@example.com
NANGO_DASHBOARD_PASSWORD=adminADMIN!@12
```

Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger:

```bash
docker compose up -d
```

This may take up to 5 minutes to start.

### Retrieve your SigNoz and Nango API Keys

To get your SigNoz API key `SIGNOZ_API_KEY`:

- Open SigNoz in a browser at `http://:3080`
- Navigate to Settings → Account Settings → API Keys → New Key
- Choose a role; Viewer is sufficient for observability
- Set the expiration field to "No Expiry" to prevent the key from expiring

To get your Nango secret key `NANGO_SECRET_KEY`:

- Open Nango in a browser at `http://:3050`
- Nango auto-creates two environments, Prod and Dev. Select the one you will use.
- Navigate to Environment Settings to find the secret key

## Deploy the Inkeep Agent Framework

From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework:

```bash
mkdir inkeep && cd inkeep
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example
```

Generate a `.env` file from the example:

```bash
cp .env.docker.example .env
```

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.
```bash
# Change to "production" if deploying to production
ENVIRONMENT=production

# AI Provider Keys (you need at least one)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=

# Nango
NANGO_SECRET_KEY=

# SigNoz
SIGNOZ_API_KEY=

# Uncomment and set each of these with (openssl rand -hex 32)
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
INKEEP_AGENTS_JWT_SIGNING_SECRET=

# Uncomment and set these for the Manage UI at http://:3000
PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002
PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003
PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
PUBLIC_NANGO_SERVER_URL=http://:3050
PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051
PUBLIC_SIGNOZ_URL=http://:3080

# Uncomment and set these to access the Manage UI at http://:3000
INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com
INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12
```

Run with Docker:

```bash
docker compose up -d
```

Then open `http://:3000` in a browser!

# Deploy to Hetzner

URL: /deployment/hetzner

Deploy to Hetzner with Docker Compose

## Create a server

- Create a server; a recommended size is at least CPX32 (4 vCPUs, 8 GB RAM, >30 GB storage).
- Select the Ubuntu 24.04 image.
- Create an inbound firewall rule to allow TCP ports: 3000, 3002, 3003, 3050, 3051, and 3080. These are the ports exposed by the Inkeep services.

## Install Docker Compose

1. SSH into the server as root
2. [Set up Docker's apt repository](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository)

```bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```

3. Install the Docker packages

```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

## Deploy SigNoz and Nango

Clone this repo, which includes Docker files with SigNoz and Nango:

```bash
git clone https://github.com/inkeep/agents-optional-local-dev inkeep-external-services
cd inkeep-external-services
```

Run this command to autogenerate a `.env` file:

```bash
cp .env.example .env && \
encryption_key=$(openssl rand -base64 32) && \
tmp_file=$(mktemp) && \
sed "s||$encryption_key|" .env > "$tmp_file" && \
mv "$tmp_file" .env && \
echo "Docker environment file created with auto-generated encryption key"
```

Nango requires a `NANGO_ENCRYPTION_KEY`. Once you create this, it cannot be edited.

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.

```bash
NANGO_ENCRYPTION_KEY=

# Replace these with your server's address in production!
NANGO_SERVER_URL=http://:3050
NANGO_PUBLIC_CONNECT_URL=http://:3051

# Modify these in production environments!
NANGO_DASHBOARD_USERNAME=admin@example.com
NANGO_DASHBOARD_PASSWORD=adminADMIN!@12
```

Build and deploy SigNoz, Nango, OTEL Collector, and Jaeger:

```bash
docker compose up -d
```

This may take up to 5 minutes to start.

### Retrieve your SigNoz and Nango API Keys

To get your SigNoz API key `SIGNOZ_API_KEY`:

- Open SigNoz in a browser at `http://:3080`
- Navigate to Settings → Account Settings → API Keys → New Key
- Choose a role; Viewer is sufficient for observability
- Set the expiration field to "No Expiry" to prevent the key from expiring

To get your Nango secret key `NANGO_SECRET_KEY`:

- Open Nango in a browser at `http://:3050`
- Nango auto-creates two environments, Prod and Dev. Select the one you will use.
- Navigate to Environment Settings to find the secret key

## Deploy the Inkeep Agent Framework

From the root directory, create a new project directory for the Docker Compose setup for the Inkeep Agent Framework:

```bash
mkdir inkeep && cd inkeep
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/docker-compose.yml
wget https://raw.githubusercontent.com/inkeep/agents/refs/heads/main/.env.docker.example
```

Generate a `.env` file from the example:

```bash
cp .env.docker.example .env
```

Here's an overview of the important environment variables when deploying to production. Make sure to replace all of these in the `.env` file.

```bash
# Change to "production" if deploying to production
ENVIRONMENT=production

# AI Provider Keys (you need at least one)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=

# Nango
NANGO_SECRET_KEY=

# SigNoz
SIGNOZ_API_KEY=

# Uncomment and set each of these with (openssl rand -hex 32)
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
INKEEP_AGENTS_JWT_SIGNING_SECRET=

# Uncomment and set these for the Manage UI at http://:3000
PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=http://:3002
PUBLIC_INKEEP_AGENTS_RUN_API_URL=http://:3003
PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
PUBLIC_NANGO_SERVER_URL=http://:3050
PUBLIC_NANGO_CONNECT_BASE_URL=http://:3051
PUBLIC_SIGNOZ_URL=http://:3080

# Uncomment and set these to access the Manage UI at http://:3000
INKEEP_AGENTS_MANAGE_UI_USERNAME=admin@example.com
INKEEP_AGENTS_MANAGE_UI_PASSWORD=adminADMIN!@12
```

Run with Docker:

```bash
docker compose up -d
```

Then open `http://:3000` in a browser!

# Datadog

URL: /deployment/add-other-services/datadog-apm

Add Datadog APM to your Inkeep Agent Framework services

## Overview

Learn how to add Datadog APM to your Inkeep Agent Framework services.
## Step 1: Install Datadog APM

From the root of your workspace, run the following command:

```bash
pnpm install dd-trace
```

## Step 2: Set up tracer.ts

In `apps/run-api` and `apps/manage-api`, create a new file called `tracer.ts` and add the following code:

```typescript
import tracer from "dd-trace";

tracer.init(); // initialized in a different file to avoid hoisting.
```

In `apps/run-api/src/index.ts` and `apps/manage-api/src/index.ts`, add the following code to the top of the file, before all other imports:

```typescript
import "./tracer";
```

## Additional Resources

For more information on how to configure APM, consult the official Datadog Node.js [documentation](https://app.datadoghq.com/apm/service-setup?architecture=host-based&framework=typescript&language=node&product=apm).

# Sentry

URL: /deployment/add-other-services/sentry

Add Sentry monitoring to your agent services

## Overview

Learn how to add Sentry monitoring to your Inkeep Agent Framework services.

## Step 1: Install Sentry

```bash
pnpm install @sentry/node
```

## Step 2: Update your `.env` file

Add your Sentry DSN to the `.env` file in the root of your workspace.

```bash
SENTRY_DSN=https://@sentry.io/
```

## Step 3: Configure Sentry

In `apps/run-api` and `apps/manage-api`, create a new file called `sentry.ts` and add the following code:

```typescript
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  sampleRate: 1.0,
  tracesSampleRate: 1.0,
});
```

In `apps/run-api/src/index.ts` and `apps/manage-api/src/index.ts`, add the following code to the top of the file, before all other imports:

```typescript
import "./sentry";
```

## Forward Error Logs to Sentry

You can use [pino-sentry-transport](https://github.com/tomer-yechiel/pino-sentry-transport) to forward error logs to Sentry.
### Step 1: Install pino-sentry-transport

```bash
pnpm install pino-sentry-transport
```

### Step 2: Configure pino-sentry-transport

Add the following code to the top of your `apps/run-api/src/index.ts` file:

```typescript
const logger = getLogger('agents-run-api');
logger.addTransport({
  target: 'pino-sentry-transport',
  options: {
    sentry: {
      dsn: process.env.SENTRY_DSN,
    },
  },
});
```

Add the following code to the top of your `apps/manage-api/src/index.ts` file:

```typescript
const logger = getLogger('agents-manage-api');
logger.addTransport({
  target: 'pino-sentry-transport',
  options: {
    sentry: {
      dsn: process.env.SENTRY_DSN,
    },
  },
});
```

## Additional Resources

For more information on how to configure Sentry, consult the official Sentry Node.js [documentation](https://docs.sentry.io/platforms/node/).

# Get Started with Inkeep Cloud

URL: /deployment/inkeep-cloud/get-started

Get started with Inkeep Cloud

## Overview

With Inkeep Cloud, we manage the infrastructure for you so you can focus on building your agents. In this guide, we'll walk you through the steps to build your first agent in the Inkeep Cloud Visual Builder.

### Prerequisites

- An Inkeep Cloud account - if you don't have one, you can sign up for the [waitlist](https://inkeep.com/cloud-waitlist)

### Step 1: Login using your credentials

### Step 2: Enter a project

Once you're logged in, you'll be redirected to the Inkeep Cloud project overview page. If you are part of a team, you might see a list of existing projects like this:

![Inkeep Cloud Project Overview](/images/inkeep-cloud-project-overview.png)

If you are part of a team, click into the project your team has created. If not, you can create a new project by clicking the "Create Project" button.

### Step 3: Inspect the project

Once you're in the project, you'll see the project overview page. You can see the list of agents that are part of the project.
![Inkeep Cloud Agent Overview](/images/inkeep-cloud-agent-overview.png)

In the MCP Servers section, you can see the list of MCP servers that are part of the project:

![Inkeep Cloud MCP Servers](/images/inkeep-cloud-mcp-overview.png)

Notice the Inkeep Enterprise Search MCP server, part of [Inkeep's Enterprise offering](https://inkeep.com/enterprise). It allows your agent to connect to 25+ data sources to create a unified knowledge base that your agents can access.

### Step 4: Build your agent

Back on the agent overview page, click the "Create Agent" button to create a new agent. Give your agent a name; let's call it "Knowledge agent".