---
title: AI for Customers
icon: LuUsers
keywords: customer support AI, documentation search, content ingestion, UI components, chat integration, Slack integration, Discord integration, analytics dashboard, support tools, AI assistant, custom actions, API integration, self-serve support, technical documentation, knowledge base integration
---

Use AI to help your users find what they need, get guidance on their scenarios, and have access to 24/7 troubleshooting help.

## Ingest your content

Ingest all of your customer-facing content with our out-of-the-box data connectors and scrapers.

**Supported Sources**

* Technical docs (Docusaurus, ReadMe, GitBook, Fumadocs, Mintlify, etc.)
* Help centers and FAQs
* Websites and blogs
* GitHub
* Community threads (Slack, Discord, and Discourse)
* OpenAPI and GraphQL API specs
* Knowledge bases (Notion, Confluence, SharePoint, etc.)
* PDF, Markdown, CSV, and other files
* YouTube and other video sources

## Choose your UI

Pick the [UI components](/ui-components/overview) you'd like to add to your marketing site, docs, help center, or app; a minimal example follows the list below. With the `@inkeep/cxkit-js` (vanilla JavaScript) or `@inkeep/cxkit-react` (React) library, you can:

* Add a floating `Ask AI` chat button to any site or app.
* Add a search bar to your docs or marketing site.
* Use your own button or UI element to trigger the assistant UI.
* Integrate an intelligent form to deflect new tickets.
* Embed the chat directly in a dedicated standalone page.
* Embed a search UI directly on a dedicated page.
* Embed the chat in a sidebar on the left or right side of the page.
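For example, a minimal React sketch of the floating chat button using `@inkeep/cxkit-react`. The API key, brand color, organization name, and example questions are placeholders; see the UI components reference for the full set of options:

```jsx
import { InkeepChatButton } from "@inkeep/cxkit-react";

export default function AskAI() {
  return (
    <InkeepChatButton
      baseSettings={{
        apiKey: "YOUR_INKEEP_API_KEY", // placeholder -- use your web assistant API key
        primaryBrandColor: "#26D6FF", // the widget color scheme is derived from this
        organizationDisplayName: "Acme",
      }}
      aiChatSettings={{
        exampleQuestions: ["How do I get started?", "How do I authenticate?"],
      }}
    />
  );
}
```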
## Add to your site

Follow a quickstart for your platform:

* [Docusaurus](/integrations/docusaurus/chat-button)
* [GitBook](/integrations/gitbook/chat-button)
* [ReadMe](/integrations/readme/chat-button)
* [Fumadocs](/integrations/fumadocs/chat-button)
* [Mintlify](/integrations/mintlify)
* [Next.js](/integrations/nextjs/chat-button)
* [Webflow](/integrations/webflow/chat-button)
* [Zendesk](/integrations/zendesk/help-center/chat-button)
* [HelpScout](/integrations/helpscout/chat-button)
* [Discourse](/integrations/discourse)
* [Framer](/integrations/framer/chat-button)
* [WordPress](/integrations/wordpress/chat-button)
* [Astro](/integrations/astro/chat-button)
* [Gatsby](/integrations/gatsby/chat-button)
* [Remix](/integrations/remix/chat-button)
* [Nextra](/integrations/nextra/chat-button)
* [VitePress](/integrations/vitepress/chat-button)
* [VuePress](/integrations/vuepress/chat-button)
* [Document360](/integrations/document360/chat-button)
* [Redocly](/integrations/redocly/api-docs/chat-button)
* [Sphinx](/integrations/sphinx/chat-button)
* [Hugo](/integrations/hugo/ananke)
* [Bettermode](/integrations/bettermode)
* [MkDocs](/integrations/mkdocs/chat-button)
* [Google Tag Manager](/integrations/google-tag-manager/chat-button)
* [Zudoku](/integrations/zudoku)

## Add to Slack or Discord

Add an `✨ask-ai` channel or `@Ask AI` bot to:

* A Slack workspace
* A Discord server

## Monitor

Use our [dashboard](https://portal.inkeep.com) to monitor usage and close the feedback loop:

* Get actionable reports on specific gaps in your documentation and monitor trending topics.
* View all conversations and track thumbs up, thumbs down, and other stats.

## Tools and Actions

Add dynamic logic to your AI assistant:

* Define custom functions available to the assistant.
* Create support tickets, transfer to live chat, or book a sales call.

## AI APIs

Use our AI API to create custom experiences, including copilots or agents that can take product-specific actions ("tools") or be used in workflow automation tasks like form-filling, all powered with context about your product. The API is an OpenAI-compatible Chat Completions API, so it works with the Vercel AI SDK, the OpenAI SDKs, and other LLM frameworks.
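As a sketch of what "OpenAI-compatible" means in practice, you can point the standard OpenAI Node SDK at Inkeep instead of OpenAI. The base URL and model name below are placeholders; take the real values from your Inkeep AI API settings:

```js
import OpenAI from "openai";

// Placeholder base URL and model -- copy the actual values from your Inkeep AI API settings.
const client = new OpenAI({
  baseURL: "https://api.inkeep.com/v1", // assumption: your Inkeep OpenAI-compatible endpoint
  apiKey: process.env.INKEEP_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "inkeep-qa", // placeholder model name
  messages: [{ role: "user", content: "How do I rotate my API keys?" }],
});

console.log(completion.choices[0].message.content);
```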
## Get started

Ready to use AI to improve your self-serve search and support experiences? [Try Inkeep on your content](https://inkeep.com/demo) or [Schedule a call](https://inkeep.com/schedule-demo).

***

---
title: AI for Support Teams
icon: PiHeadsetBold
keywords: support team AI, AI copilot, support automation, ticket deflection, auto-reply, Zendesk integration, Slack integration, support analytics, knowledge base management, AI assistant, support workflow, ticket routing, FAQ generation, support metrics, customer support tools, support team productivity, AI-powered support, support automation tools, support team efficiency
---

All the AI tools your support team needs to deliver fast, high-quality support.

## Tools for your team

### Support Copilot

Keep, our copilot for support teams, is a conversational AI sidekick that helps your team resolve tickets quickly. Keep intelligently provides:

* Draft answers
* Relevant sources
* Summaries
* To-dos

...and other relevant suggestions based on the context of a ticket.

Get started:

* Install Keep as a native app for Zendesk.
* Use Keep as a browser sidepane with any support platform.

### Slack Bot

Add the Inkeep Slack bot to your `#support-triage` or other internal channels where your team collaborates on customer questions. You can:

* Tag Inkeep in new or existing threads.
* Have Inkeep automatically give suggestions on any new thread.

### Private Sources

Your internal-facing assistant can leverage private sources your customer-facing assistant can't. These include:

* Support tickets
* Slack conversations
* Internal knowledge bases (Notion, Confluence, Google Drive, SharePoint, etc.)

Use our out-of-the-box connectors to tap into any of these sources.

## Ticket Deflection

### AI Assistant

Use Inkeep's [customer-facing integrations](/overview/ai-for-customers) to embed an AI assistant anywhere users have questions: docs, help center, community, in-app, and more.

### AI Support Forms

Inkeep AI knows when it can answer and when it needs to hand off to humans. Incorporate it into your ticket creation process to deflect questions:

* Provide an AI answer when confident, and otherwise smartly categorize the new ticket.
* Embed the form into the AI chat so a user can seamlessly escalate to human support.

### Auto-reply & Automations

Auto-reply to new support tickets, emails, GitHub issues, or forum posts only when confident, using our APIs or integrations. Our APIs are OpenAI compatible, so they can be easily integrated with any LLM framework (a sketch follows the list below). For example, you can:

* Automatically respond to support tickets with AI-powered answers when confident.
* Have an AI agent handle live chats and hand off to your team when needed.
* Use and customize AI responses to auto-reply to any user question in any channel.
* Categorize tickets, pre-fill forms, and use custom tools to automate any support workflow.
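A minimal sketch of the confidence-gated auto-reply pattern, assuming the OpenAI-compatible endpoint from the AI APIs section, a hypothetical `answerConfidence` field, and hypothetical helpdesk helpers. The actual confidence signal and field names come from the AI API reference:

```js
import OpenAI from "openai";

// Placeholder base URL and model name -- take the real values from your Inkeep AI API settings.
const client = new OpenAI({
  baseURL: "https://api.inkeep.com/v1",
  apiKey: process.env.INKEEP_API_KEY,
});

// Hypothetical stand-ins for your helpdesk's API calls.
async function postPublicReply(ticketId, body) { /* e.g. post a public reply in Zendesk */ }
async function addInternalNote(ticketId, body) { /* e.g. leave a draft note for a human agent */ }

export async function maybeAutoReply(ticket) {
  const completion = await client.chat.completions.create({
    model: "inkeep-qa", // placeholder model name
    messages: [{ role: "user", content: ticket.body }],
  });

  const message = completion.choices[0].message;
  // Hypothetical confidence signal -- check the AI API reference for how
  // confidence is actually returned before relying on this.
  const confidence = message.answerConfidence;

  if (confidence === "very_confident") {
    await postPublicReply(ticket.id, message.content); // reply automatically
  } else {
    await addInternalNote(ticket.id, message.content); // hand off to a human with a draft
  }
}
```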
## Close the Content Loop

Continuously improve your knowledge base and support content:

* Get actionable reports on specific gaps in your documentation and monitor trending topics.
* Automatically reformat closed tickets into private or public FAQs.

## Measure Impact

Track the effectiveness of your AI-powered support tools:

* Monitor key metrics like deflection, documentation coverage, and 👍 / 👎 feedback.
* Log chat events to your analytics tools and measure impact on your key metrics.

## Get started

Ready to use AI that your support team and users can trust? [Schedule a call](https://inkeep.com/schedule-demo).

***

---
title: Developer Platform
sidebarTitle: Developer Platform
icon: LuCodeXml
keywords: developer platform, UI components, React components, JavaScript components, custom AI assistants, API integration, analytics API, RAG API, chat UI, search UI, form UI, CSS theming, custom instructions, tool calls, OpenAI compatibility, LLM framework, conversation logging, user feedback, developer tools
---

Inkeep's developer-first primitives let you get started quickly while providing an extensible foundation for custom AI assistants and support automations.

## UI Component Library

Use `cxkit` React or JavaScript components to create custom search, chat, and form UIs.

### Components

With the `@inkeep/cxkit-js` (vanilla JavaScript) or `@inkeep/cxkit-react` (React) library, you can:

* Add a floating `Ask AI` chat button to any site or app.
* Add a search bar to your docs or marketing site.
* Use your own button or UI element to trigger the assistant UI.
* Integrate an intelligent form to deflect new tickets.
* Embed the chat directly in a dedicated standalone page.
* Embed a search UI directly on a dedicated page.
* Embed the chat in a sidebar on the left or right side of the page.

### Customize

Customize the UI components to match your brand and embed them in a way that feels native to your product or end-user experience:

* Customize theme tokens and style components using semantic class names.
* Add user-specific information and custom guidance with dynamic prompts.
* Trigger your own app logic and add dynamic interactions to the chat (see the sketch below).
* Create support tickets right from the chat.
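As an example of the third item, a sketch of forwarding widget events to your own tooling with the `onEvent` callback. `myAnalytics` is a hypothetical stand-in for your analytics client, and the callback is shown on `baseSettings` here; see the common settings reference for exactly where callbacks are configured:

```jsx
import { InkeepChatButton } from "@inkeep/cxkit-react";
import { myAnalytics } from "./analytics"; // hypothetical analytics client

export function AskAIButton() {
  return (
    <InkeepChatButton
      baseSettings={{
        apiKey: "YOUR_INKEEP_API_KEY", // placeholder
        primaryBrandColor: "#26D6FF",
        organizationDisplayName: "Acme",
        // Forward every widget event to your own app logic or analytics.
        onEvent: (event) => {
          myAnalytics.track("inkeep_event", event);
        },
      }}
    />
  );
}
```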
## AI API

Build custom agents, auto-replies, and workflows with programmatic access to your knowledge base. All APIs follow the OpenAI Chat Completions format, so you can use them with any LLM framework or observability tools:

* Optimized for customer-facing support scenarios and auto-replies.
* Add dynamic functionality based on user requests.
* Use tools and structured outputs that leverage your knowledge base context.
* Use Inkeep's RAG API or MCP server to get relevant content for any LLM app.

## Analytics API

When building your own AI assistant UI or user experience, use the analytics API to log and monitor all conversations, feedback, and usage events:

* Log custom conversations or AI chats to the Inkeep analytics system.
* Capture user feedback on responses.
* Log custom events and user interactions.
* Query and analyze all analytics data.

## Get started

Ready to build with Inkeep? [Schedule a call](https://inkeep.com/schedule-demo) with our team to get recommendations on your specific scenario.

***

---
title: Why Inkeep?
icon: LuStar
keywords: AI support, generative AI, RAG, retrieval augmented generation, conversational AI, customer support, knowledge base, content ingestion, neural search, hallucination prevention, enterprise support, self-serve support, semantic search, LLM, machine learning
---

## Quality

Our #1 goal is to help our customers ship AI support experiences to their users with confidence. For us, quality means developing our AI solution to:

* Admit when it doesn't know and intelligently guide users to support channels
* Consistently find the right content and not hallucinate
* Provide rich citations that help users inspect answers
* Leverage many sources while prioritizing authoritative content

It also means providing the end-to-end tooling needed for the entire support lifecycle. Beyond customer-facing AI assistants, Inkeep provides a [copilot for support team members](https://inkeep.com/blog/copilot-for-support-teams) and [actionable reporting](/analytics/content-gaps) for content and product teams. We built our product and [team](https://inkeep.com/team) to deliver best-in-class experiences for each of these steps.

That said, don't take our word for it! Use our [free trial](https://inkeep.com/demo-form) to test Inkeep with your toughest questions.

## Hands-on Support

Quality for us also means working closely with every team we partner with to accomplish their goals. Consider us a partner who will work with you every step of the way:

* Reach us over email, Slack, Discord, or Teams.
* Dedicated onboarding, support, and SLAs.

## Technical Deep Dive ✨

In our journey, we've talked to hundreds of companies who are eager to use generative AI to provide better self-serve experiences for their users. Many of them had experimented with creating their own LLM Q\&A apps or tried other services, but often didn't ship because they felt the quality and reliability weren't there.

### Ingesting content from many sources

Knowledge about technical products often lives in many places: documentation, GitHub, forums, Slack and Discord channels, blogs, StackOverflow, support systems, and elsewhere. Smartly ingesting all of this content, and keeping it up to date over time, quickly becomes the full-time task of a team of data engineers.

Inkeep addresses this by:

* Automatically ingesting content from common public and private content sources with out-of-the-box integrations (while prioritizing them appropriately)
* Frequently re-crawling your sources to find differences and keep your knowledge base up to date

### Finding the most relevant content

[Retrieval augmented generation (RAG)](https://arxiv.org/abs/2005.11401) is the best way to use LLMs to answer questions about domain-specific content. At a high level, it involves taking a user question, finding the most relevant content, and feeding it to an LLM.

RAG relies on finding the relevant documents and "chunks" within those documents needed to answer user questions. The problem is, popular ways of doing retrieval, like slicing up all content into `n`-character chunks, are often arbitrary and ineffective. Retrieval becomes even more challenging as the number of documents and sources increases. More content can mean higher coverage of user questions, but it often also means more noise and the need for a precise retrieval system.

Our retrieval and neural search engines address this with:

* **Time, author, source type, and other metadata** that's important for prioritizing trustworthy content.
* **Custom embedding and chunking strategies** for each content source. The most effective embedding and chunking strategy for a Slack conversation is very different from one for a "How-to" article.
* **Neural search** that combines semantic and keyword search to balance vector similarity and keyword matching.
* **Tailoring of the embedding space** to your specific organization and content. Out-of-the-box embedding models don't account for what we call the "semantic space" of your company and your products. For example, "Retrieval system" is much closer in semantic meaning to "Feature" for Inkeep than for other companies.
* ...and more.

If you're curious about our technical approach, join [our newsletter](https://inkeep.typeform.com/to/IgrltCbO?utm_source=why_inkeep_article) where we share product updates and engineering deep-dives.

### Minimizing hallucinations

Conversational large language models are trained to provide satisfying answers to users. Unfortunately, this makes them prone to providing answers that are unsubstantiated, i.e. "hallucinating". Dealing with hallucinations is notoriously difficult and a common blocker for many companies.
Here are some of the key ways in which we minimize hallucinations with our grounded-answer system:

1. **Retrieving the right content** - When models are not given content that helps them answer a question, they are significantly more likely to hallucinate. That's in part why we focus so heavily on our search and retrieval engines.
2. **Providing citations** - Citations give end-users easy ways to learn more and introspect answers. We include rich citations in our UI to make it easy to compare reference content, and we use citations to automatically evaluate, alert, and fix drift from source material.
3. **Staying on topic** - We've implemented a variety of protections to keep model answers on topic. For example, the bot won't answer questions unrelated to your company and will guard against giving answers that create a poor perception of your product.
4. **Rapidly experimenting at scale** - We continuously test and evaluate our entire retrieval and LLM stack against both historical and new user questions. This allows us to identify and adopt new techniques while monitoring for regressions.

### Incorporating feedback

Even with a best-in-class retrieval and grounded-answer system, feedback loops are the key to continuous improvement of model performance over time. Our platform has built-in mechanisms for this, including:

* Thumbs up/down feedback from end-users
* An "Edit answer" feature for administrators
* The ability to batch-test a set of test questions
* Custom FAQs

We also provide usage, topical, and sentiment analysis on all user questions. Product and content teams often use these insights to prioritize content creation and product improvements that address root causes of user questions.

### Production-ready service

To launch something confidently to end-users, it's essential to have:

* Highly available, geo-distributed, low-latency search and chat services
* API and UX monitoring
* Continuous evaluation of search and chat results

Our platform already handles this at scale and answers hundreds of thousands of questions per month.

## The Team

Our [team](https://inkeep.com/team) is made up of engineers passionate about machine learning, data engineering, and user experiences. We're excited to solve the challenges in this space and help companies provide the best self-serve support and search experiences possible to their users.

We're fortunate to have the backing of reputable investors, including [Y Combinator](https://www.ycombinator.com/launches/IAP-inkeep-conversational-search-for-your-developer-product) and [Khosla Ventures](https://www.khoslaventures.com/).

***

---
title: Privacy
description: How Inkeep protects your data.
icon: LuLock
keywords: data privacy, SOC 2 compliance, data protection, LLM privacy, AI data security, data retention, HIPAA compliance, user privacy, data processing, security controls, enterprise security, data governance, privacy policy, data collection, user data protection, information security, privacy controls, data handling, security compliance
---

## Overview

We know that privacy and security are critical for many organizations. Inkeep is committed to protecting your organization's and your users' data and privacy.
Inkeep is **SOC 2 Type II compliant**. We use industry-standard security practices, reputable subprocessors with SOC 2 Type II compliance, and offer controls to help organizations meet their own data protection and privacy requirements. We follow security best practices, including regular external security reviews and penetration tests.

## Use of LLMs

To provide our search and chat services, we use foundational large language models (LLMs) and artificial intelligence (AI) services from the following providers:

* [Anthropic](https://legal.anthropic.com)
* [Azure OpenAI](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy)
* [OpenAI](https://openai.com/policies/api-data-usage-policies)

Inkeep and our providers will never use your data to train AI or LLM models. See the links above for details of each provider's policies.

## Custom Controls

For enterprise customers with strict data protection requirements, we offer:

* Custom retention periods for conversations, including no retention.
* Choice of which LLM or vector database provider is used.
* Data de-identification services (powered by Google Cloud's Sensitive Data Protection).
* Role-based access control to limit who can view your conversation logs.
* Restricting end-user access to AI functions to authenticated users only.
* A HIPAA Business Associate Agreement (BAA).

We will work with your legal and security teams to meet your requirements, reviewing with them our service agreement, data processing agreement, security policies, and other relevant documents.

This article is provided solely for general informational purposes and does not create or constitute contractual obligations, rights, or warranties between you and Inkeep. Refer to your organization's Service Agreement, Data Processing Agreement (if applicable), and our [Privacy Policy](https://privacy.inkeep.com/privacy-policy) for details on your relationship with Inkeep. For any questions, please contact [privacy@inkeep.com](mailto:privacy@inkeep.com?subject=Inkeep%20Privacy%20Question).

## Categories of Individuals

We distinguish between:

* **Users:** Your end users or customers who interact with the platform.
* **Customer Agents:** Your employees or consultants authorized to access and manage the platform.

## Categories of Collected Data

### Knowledge Base Documents

**Knowledge Base Documents** are materials or data sources you provide to Inkeep for ingestion into the platform. They may include:

* Technical documentation
* Website content and blogs
* Sources with end-user generated content (e.g., StackOverflow, GitHub, Discourse, Slack, Discord)
* Support desk tickets and FAQs
* Internal documents

These documents are considered **Customer Data**, which you control and can view or delete at any time. This data is used to power the AI Functions for your (and only your) organization.

### Prompts and Responses

* **Prompts:** The text or input provided by Users or Customer Agents to the AI functions (e.g., search and chat services). Prompts are a form of **User Content**, which is a subset of Customer Data.
* **Responses:** The text or output generated by the platform's AI functions in reply to a Prompt. Responses are referred to as **Output**.

We retain Prompts (User Content) and Responses (Output) for:

1. **Usage Analytics:** Providing authorized Customer Agents with topical, sentiment, and other analytics.
2. **End User Functionality:** Enabling features like thumbs-up/down feedback, "Share chat," and conversation history.
3. **Abuse and Misuse Monitoring**
4. **Service Improvements:** Monitoring and improving the quality of our services, without using your data to train AI models.

Enterprise customers can customize the following:

* Retention period of Prompts
* Enabling or disabling of Usage Analytics or End User Functionality

### User Metadata

**User Metadata** may include identifying information such as IP addresses, browser session details, or other user-related identifiers you or your Users choose to provide. This data is considered a subset of Customer Data. You can configure what User Metadata is collected, whether it is associated with Prompts and Responses, and whether to opt out of its collection entirely. See [here](/ui-components/common-settings/base) for instructions on configuring Inkeep's web widgets.

### Usage Data

We collect **Usage Data** (technical logs, performance metrics, and other non-Customer Data related to platform usage) to help troubleshoot, measure, and improve the performance and availability of our services.

## Contacting Us

If you have a data request or questions about our use of data and processing, please reach out to [privacy@inkeep.com](mailto:privacy@inkeep.com?subject=Inkeep%20Privacy%20Question).

If you are aware of an information security incident, unauthorized access, policy violation, security weakness, or suspicious activity related to Inkeep, please send an email with any relevant details to [incidents@inkeep.com](mailto:incidents@inkeep.com).

***

---
title: Add Chat Button to Docusaurus
description: Integrate Inkeep's chat button into your Docusaurus documentation for real-time user assistance.
keywords: Docusaurus chat, React integration, documentation chat, Docusaurus customization, chat implementation, documentation enhancement, chat configuration, Docusaurus setup, React components, user assistance, MDX integration
sidebarTitle: Chat Button
---

## What is Docusaurus?

[Docusaurus](https://docusaurus.io/) is an open-source documentation platform powered by MDX and React.

## Get an API key

Follow [these steps](/projects/overview#create-a-web-assistant) to create an API key for your web assistant.

## Install the Inkeep plugin

```bash
npm install @inkeep/cxkit-docusaurus
```

```bash
bun add @inkeep/cxkit-docusaurus
```

```bash
pnpm add @inkeep/cxkit-docusaurus
```

```bash
yarn add @inkeep/cxkit-docusaurus
```

## Define the widget

Add the chat button as a plugin in your `docusaurus.config.js` file:

```js title="docusaurus.config.js"
plugins: ["@inkeep/cxkit-docusaurus"],
```

### Configuration settings

You have two configuration options:

1. Configure the widget in the plugin `options`. Use this option if you are loading your `apiKey` from an environment variable; see [here](https://docusaurus.io/docs/deployment#using-environment-variables) for more information.
2. Configure the widget in a standalone `config`. Use this option if you are using any callback functions (like `transformSource`, `onEvent`, or `getTools`) in your config.

These options can also be used together, and the settings will be merged automatically.

#### Configure the widget in the plugin `options`

Docusaurus plugins can accept a tuple of `[pluginName, options]`. In this case, the plugin name is `@inkeep/cxkit-docusaurus`. Note that if you are using any callback functions (like `transformSource`, `onEvent`, or `getTools`) in your config, you will need to use the [standalone config](#configure-the-widget-in-standalone-config) to pass those settings.
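For instance, a trimmed-down sketch that reads the key from an environment variable instead of hard-coding it (Docusaurus evaluates `docusaurus.config.js` in Node at build time, so `process.env` is available; `INKEEP_API_KEY` is a placeholder variable name):

```js title="docusaurus.config.js"
plugins: [
  [
    "@inkeep/cxkit-docusaurus",
    {
      ChatButton: {
        baseSettings: {
          apiKey: process.env.INKEEP_API_KEY, // placeholder env var set in your build environment
          primaryBrandColor: "#26D6FF",
          organizationDisplayName: "Inkeep",
        },
      },
    },
  ],
],
```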
The full example looks like this. You will need to replace `REPLACE_WITH_YOUR_INKEEP_API_KEY` with your actual Inkeep API key in the code below.

```js title="docusaurus.config.js"
plugins: [
  [
    "@inkeep/cxkit-docusaurus",
    {
      ChatButton: {
        baseSettings: {
          // see https://docusaurus.io/docs/deployment#using-environment-variables to use Docusaurus environment variables
          apiKey: "REPLACE_WITH_YOUR_INKEEP_API_KEY", // required - replace with your own API key
          primaryBrandColor: "#26D6FF", // required -- your brand color, the widget color scheme is derived from this
          organizationDisplayName: "Inkeep",
          // ...optional settings
          theme: {
            // optional path to a custom stylesheet
            styles: [
              {
                key: "main",
                type: "link",
                value: "/path/to/stylesheets",
              },
            ],
            syntaxHighlighter: {
              lightTheme: lightCodeTheme, // optional -- pass in the Prism theme you're using
              darkTheme: darkCodeTheme, // optional -- pass in the Prism theme you're using
            },
          },
        },
        modalSettings: {
          // optional settings
        },
        searchSettings: {
          // optional settings
        },
        aiChatSettings: {
          // optional settings
          aiAssistantAvatar: "/img/logo.svg", // optional -- use your own AI assistant avatar
          exampleQuestions: [
            "Example question 1?",
            "Example question 2?",
            "Example question 3?",
          ],
        },
      },
    },
  ],
],
```

#### Configure the widget in standalone `config`

In this case, the plugin name is `@inkeep/cxkit-docusaurus/chatButton`. With this approach, you create a config file in your project: by default, an `inkeep.config.js` or `inkeep.config.ts` file in the root of your project, which Inkeep will automatically pick up.

```js title="inkeep.config.js"
window.inkeepConfig = {
  ChatButton: {
    // ...options
  },
};
```

You can customize the path to the config file in the plugin options:

```js title="docusaurus.config.js"
plugins: [
  ["@inkeep/cxkit-docusaurus", { config: "./path/to/my-inkeep-config.js" }],
],
```

We also export a fully typed `defineConfig` function that you can use to create your config:

```js title="inkeep.config.js"
import { defineConfig } from "@inkeep/cxkit-docusaurus";

export default defineConfig({
  ChatButton: {
    // ...options
  },
});
```

```js title="inkeep.config.js"
const { defineConfig } = require("@inkeep/cxkit-docusaurus");

module.exports = defineConfig({
  ChatButton: {
    // ...options
  },
});
```

For a full list of customizations, check out the [Common Settings](/ui-components/common-settings).

### FAQ

**Supported versions:** We support Docusaurus versions `2.0.1` and above.

**SearchBar swizzling:** Our plugin swizzles the `SearchBar` component. If you have another plugin that also swizzles the `SearchBar` component, you need to ensure that plugin comes after our plugin in your `docusaurus.config.js` file. This way, your custom `SearchBar` will override our default one.

**Custom styles:** To add custom styles to the Inkeep widget in your Docusaurus site, first create a CSS file in your `static` directory (e.g., `static/inkeep-overrides.css`). Then specify the URL of the stylesheet in the `styles` array within the `theme` object (inside of `baseSettings`). For example, if you created a file at `static/inkeep-overrides.css`, you should set the `styles` array to:

```js
theme: {
  styles: [
    {
      key: "main",
      type: "link",
      value: "/inkeep-overrides.css",
    },
  ],
},
```

For additional details on how Docusaurus manages static assets, please refer to the [official documentation](https://docusaurus.io/docs/static-assets). In context, that looks like:

```js title="docusaurus.config.js"
//..
SearchBar: {
  // ... rest of your settings
  baseSettings: {
    // ... rest of your baseSettings
    theme: {
      styles: [
        {
          key: "main",
          type: "link",
          value: "/inkeep-overrides.css",
        },
      ],
    },
  },
},
```

**Using the React components directly:** If you need more control or customizations, you can use the React components directly. To do so, follow the React guides: [chat button](/ui-components/react/chat-button), [search bar](/ui-components/react/search-bar), [embedded chat](/ui-components/react/in-page/embedded-chat), [custom modal trigger](/ui-components/react/custom-modal-trigger). A few things to keep in mind:

* If you're using Docusaurus 3.4.0, you will need to load the Inkeep widget component dynamically and wrap it in a [`<BrowserOnly>`](https://docusaurus.io/docs/advanced/ssg#browseronly) tag. For example:

```js
import React, { useEffect, useState } from "react";
import BrowserOnly from "@docusaurus/BrowserOnly";

export default function Widget() {
  const [isOpen, setIsOpen] = useState(false);
  const [ModalSearchAndChat, setModalSearchAndChat] = useState(null);

  useEffect(() => {
    (async () => {
      const { InkeepModalSearchAndChat } = await import("@inkeep/cxkit-react");
      setModalSearchAndChat(() => InkeepModalSearchAndChat);
    })();
  }, []);

  const handleOpenChange = (newOpen) => {
    setIsOpen(newOpen);
  };

  const inkeepModalSearchAndChatProps = {
    baseSettings: {
      apiKey: "INKEEP_API_KEY", // required
      primaryBrandColor: "#26D6FF", // required -- your brand color, the widget color scheme is derived from this
      organizationDisplayName: "Inkeep",
    },
    modalSettings: {
      isOpen,
      onOpenChange: handleOpenChange,
    },
  };

  return (
    <>
      {/* your own trigger element */}
      <button onClick={() => setIsOpen(true)}>Ask AI</button>
      <BrowserOnly fallback={<div />}>
        {() => {
          return (
            ModalSearchAndChat && (
              <ModalSearchAndChat {...inkeepModalSearchAndChatProps} />
            )
          );
        }}
      </BrowserOnly>
    </>
  );
}
```

* If you're using the ChatButton component, you can create a `Footer.js` in the `src/theme` directory, then import the original Footer component and add the chat button so that it will be present on each page. For example:

```js
import Footer from "@theme-original/Footer";
import React, { useEffect, useState } from "react";
import BrowserOnly from "@docusaurus/BrowserOnly";

export default function FooterWrapper(props) {
  const [ChatButton, setChatButton] = useState(null);

  useEffect(() => {
    (async () => {
      const { InkeepChatButton } = await import("@inkeep/cxkit-react");
      setChatButton(() => InkeepChatButton);
    })();
  }, []);

  const InkeepChatButtonProps = {
    baseSettings: {
      apiKey: "INKEEP_API_KEY", // required
      primaryBrandColor: "#26D6FF", // required -- your brand color, the widget color scheme is derived from this
      organizationDisplayName: "Inkeep",
    },
  };

  return (
    <>
      <BrowserOnly fallback={<div />}>
        {() => {
          return ChatButton && <ChatButton {...InkeepChatButtonProps} />;
        }}
      </BrowserOnly>
      <Footer {...props} />
    </>
  );
}
```
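A note on the pattern in both examples: the dynamic `import()` combined with `<BrowserOnly>` keeps `@inkeep/cxkit-react` out of Docusaurus's static (server-side) rendering pass, so the widget code, which expects a browser environment, only runs on the client.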