Inkeep AI API Overview

Inkeep's AI API endpoints make it easy to develop chatbot or copilot experiences powered by your own knowledge base.

Because these endpoints are compatible with OpenAI's Chat Completion API format, you can use most LLM application frameworks, libraries, or SDKs with zero code changes.

For example, you can build a customer-facing support assistant, an auto-reply automation, or a custom AI agent.

Check out our examples for quickstarts for the various APIs.

Available Models

We offer several APIs tailored to different use cases and use the model concept to differentiate between the different scenarios and functionality.

  • inkeep-qa: Provides sensible defaults for customer-facing support assistant or "question-answer" scenarios. Useful for auto-reply automations or custom chat experiences.
  • inkeep-context: A flexible "passthrough" proxy that injects Inkeep's RAG context into calls to underlying models from Anthropic or OpenAI. Fully compatible with all chat completion endpoint functionality, including tool/function calling, JSON mode, and image inputs. Ideal for custom AI agents, LLM applications, or workflows requiring full output control while benefiting from a managed RAG system. Provides high flexibility but requires standard LLM application prompting and experimentation.
  • inkeep-context-lite: A fast version of the inkeep-context mode that skips RAG and instead injects only a general overview of your product into calls to base models. Best for when you want the flexibility of the inkeep-context mode but don't need the full comprehensive context.
  • inkeep-rag: Provides structured RAG chunks directly from your knowledge base. Returns chunks with URLs, excerpts from the original documents, and other source information. Ideal for custom implementations where you need direct access to the retrieved knowledge base content. You'd typically pass these chunks ('documents') to an LLM of your choice for any open-ended purpose.
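
Because the context models pass through the full chat completions feature set, standard tool calling works unchanged. Here is a minimal sketch using the OpenAI Node SDK; the getOrderStatus tool and its schema are hypothetical placeholders for your own function, not part of Inkeep's API, and INKEEP_API_KEY is just a conventional environment variable name:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.INKEEP_API_KEY, // conventional env var, not required by the API
  baseURL: 'https://api.inkeep.com/v1',
});

const completion = await client.chat.completions.create({
  model: 'inkeep-context-gpt-4o',
  messages: [{ role: 'user', content: 'What is the status of order 123?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'getOrderStatus', // hypothetical tool, for illustration only
        description: 'Look up the status of an order by its ID',
        parameters: {
          type: 'object',
          properties: { orderId: { type: 'string' } },
          required: ['orderId'],
        },
      },
    },
  ],
});

// The model may answer with a tool call instead of plain text
console.log(completion.choices[0].message.tool_calls);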

Note
Only an API key generated through an API integration can be used. API keys from other integration types are not supported.

Get an API key

  1. Log in to the Inkeep Dashboard.
  2. Navigate to the Projects section and select your project.
  3. Open the Integrations tab.
  4. Click Create Integration and choose API from the options.
  5. Enter a Name for your new API integration.
  6. Click Create.
  7. A generated API key will appear that you can use to authenticate API requests.
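
Avoid hard-coding the key in source control; a common pattern is to read it from an environment variable. A minimal sketch (the INKEEP_API_KEY variable name is a convention used in these docs' examples, not required by the API):

import OpenAI from 'openai';

// Read the key from the environment rather than committing it to source control
const apiKey = process.env.INKEEP_API_KEY;
if (!apiKey) throw new Error('INKEEP_API_KEY is not set');

const client = new OpenAI({
  apiKey,
  baseURL: 'https://api.inkeep.com/v1',
});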

Using the API

To use the API with an OpenAI-compatible client:

  1. Set the base URL (aiApiBaseUrl, baseURL, or base_url, depending on your client) to https://api.inkeep.com/v1
  2. Specify the mode by setting model to a value in the format {inkeep-mode}-{model}

If you'd like to let Inkeep manage which model is used, set model to one of:

  • inkeep-qa-expert
  • inkeep-context-expert
  • inkeep-context-lite
  • inkeep-rag

If you'd like to pin the qa or context APIs to a preferred underlying LLM, use one of the following:

  • inkeep-qa-sonnet-3-5
  • inkeep-qa-gpt-4o
  • inkeep-qa-gpt-4-turbo
  • inkeep-context-sonnet-3-5
  • inkeep-context-gpt-4-turbo
  • inkeep-context-gpt-4o
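
Because the mode and the underlying model are both encoded in the model string, switching between them is a one-line change; the rest of the request stays the same. A minimal sketch with the OpenAI Node SDK, reusing the client configured above:

// Pinned: qa mode backed by Claude 3.5 Sonnet
const pinned = await client.chat.completions.create({
  model: 'inkeep-qa-sonnet-3-5',
  messages: [{ role: 'user', content: 'How do I get started?' }],
});

// Managed: Inkeep chooses the underlying model
const managed = await client.chat.completions.create({
  model: 'inkeep-qa-expert',
  messages: [{ role: 'user', content: 'How do I get started?' }],
});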

Calling the API

All modes use the OpenAI chat completion API format. You can use any OpenAI-compatible SDK in any language to make requests to the API. If you don't need streaming, a regular HTTP request works fine. If you use streaming, we recommend using one of the SDKs.

cURL

curl https://api.inkeep.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_INKEEP_API_KEY>" \
  -d '{
    "model": "inkeep-context-expert",
    "messages": [
      {
        "role": "user",
        "content": "How do I get started?"
      }
    ]
  }'
TypeScript (Vercel AI SDK)

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
 
// Create an OpenAI provider configured for Inkeep
const inkeep = createOpenAI({
  apiKey: 'YOUR_INKEEP_API_KEY',
  baseURL: 'https://api.inkeep.com/v1',
});
 
const { text } = await generateText({
  model: inkeep('inkeep-context-expert'),
  messages: [
    { role: 'user', content: 'How do I get started?' },
  ],
});
TypeScript (OpenAI SDK)

import OpenAI from 'openai';
 
const client = new OpenAI({
  apiKey: 'YOUR_INKEEP_API_KEY',
  baseURL: 'https://api.inkeep.com/v1'
});
 
const completion = await client.chat.completions.create({
  model: 'inkeep-context-expert',
  messages: [
    { role: 'user', content: 'How do I get started?' },
  ],
});
Python (OpenAI SDK)

from openai import OpenAI
 
client = OpenAI(
  api_key="YOUR_INKEEP_API_KEY",
  base_url="https://api.inkeep.com/v1"
)
 
completion = client.chat.completions.create(
    model="inkeep-context-expert",
    messages=[
        {"role": "user", "content": "How do I get started?"},
    ],
)
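
For streaming, the SDKs handle the server-sent-events wire format for you. A minimal streaming sketch with the OpenAI Node SDK (the model choice is illustrative, and INKEEP_API_KEY is a conventional env var name):

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.INKEEP_API_KEY,
  baseURL: 'https://api.inkeep.com/v1',
});

// Request a streamed response; the SDK parses the SSE chunks
const stream = await client.chat.completions.create({
  model: 'inkeep-qa-expert',
  messages: [{ role: 'user', content: 'How do I get started?' }],
  stream: true,
});

// Print tokens as they arrive
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}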

Follow the guides for the mode you want to use for details on the expected response schema and usage patterns.
