# Core concepts (/concepts)

Learn about the key building blocks of Inkeep - Agents, Sub Agents, tools, data components, and more.

## Agents

In Inkeep, an **Agent** is the top-level entity you can interface with via conversational experiences (chat) or trigger programmatically (via API). Under the hood, an Agent is made up of one or more **Sub Agents** that work together to respond to a user or complete a task.

When you send a message to an Agent, it is first received by a **Default Sub Agent** that decides what to do next. In a simple Agent, there may be only one Sub Agent with a few tools available to it.

## Tools

**Tools** are actions that a Sub Agent can take, like looking up information or performing a task on apps and APIs. In Inkeep, tools can be added to Sub Agents as:

* **MCP Servers**: Connect to external services and APIs via the Model Context Protocol. You can:
  * **Connect to Native MCP servers** provided directly by SaaS vendors (no building required)
  * **Access Composio's platform** for 10,000+ out-of-box MCP servers for popular services (no building required)
  * **Use Gram** to convert OpenAPI specs into MCP servers
  * **Build and deploy Custom servers** for your own APIs and business logic

  Register any of these with their associated **Credentials** for your Agents to use.
* **Function Tools**: Custom JavaScript functions that Agents can execute directly, without needing to stand up an MCP server.

Typically, you want a Sub Agent to handle narrow, well-defined tasks. As a general rule of thumb, limit each Sub Agent to 5-7 related tools at a time.

## Sub Agent relationships

When your scenario gets complex, it can be useful to break up your logic into multiple Sub Agents that specialize in specific parts of your task or workflow. This is often referred to as a "Multi-agent" system. A Sub Agent can be configured to:

* **Transfer** control of the chat to another Sub Agent.
When a transfer happens, the receiving Sub Agent becomes the primary driver of the thread and can respond to the user directly.
* **Delegate** a subtask for another ('child') Sub Agent to complete, waiting for its response before proceeding with the next step. A child Sub Agent *cannot* respond directly to a user.

## Sub Agent 'turn'

When it's a Sub Agent's turn, it can choose to:

1. Send an update message to the user
2. Call a tool to collect information or take an action
3. Transfer or delegate to another Sub Agent

An Agent's execution stays in this loop until one of the Sub Agents chooses to respond to the user with a final result. Sub Agents in Inkeep are designed to respond to the user as a single, cohesive unit by default.

## Chatting with an Agent

You can talk to an Inkeep Agent in a few ways, including:

* **UI Chat Components**: Drop-in React components for chat UIs with built-in streaming and rich UI customization. See [`agents-ui`](/talk-to-your-agents/react/chat-button).
* **As an MCP server**: Use your Inkeep Agent as if it were an MCP server. This allows you to connect it to any MCP client, like Claude, ChatGPT, and other Agents. See [MCP server](/talk-to-your-agents/mcp-server).
* **Via API (Vercel format)**: An API that streams responses over server-sent events (SSE). Use it from any language/runtime, including Vercel's `useChat` and AI Elements primitives for custom UIs. See [API (Vercel format)](/talk-to-your-agents/api).
* **Via API (A2A format)**: An API that follows the Agent-to-Agent ('A2A') JSON-RPC protocol. Great when combining Inkeep with other Agent frameworks that support the A2A format. See [A2A protocol](/talk-to-your-agents/a2a).
* **Via Webhook Triggers**: Create webhook endpoints that allow external services (GitHub, Slack, Stripe, etc.) to invoke your Agents. See [Triggers](/talk-to-your-agents/triggers).
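The Vercel-format API streams over server-sent events (SSE). As a rough illustration of what arrives on the wire, here is a minimal parser for standard SSE `data:` lines. This covers generic SSE framing only, not the Inkeep- or Vercel-specific event schema layered on top, and `parseSseData` is a hypothetical helper, not part of any SDK:

```typescript
// Minimal parser for Server-Sent Events (SSE) frames, as received from a
// text/event-stream response body. Each event's payload arrives on lines
// prefixed with "data: "; blank lines separate events.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}

// Example: two events, one JSON payload and one sentinel.
const payloads = parseSseData('event: message\ndata: {"ok":true}\n\ndata: [DONE]\n');
// payloads → ['{"ok":true}', '[DONE]']
```

In a real client you would accumulate chunks from `response.body` and buffer partial lines across chunk boundaries before parsing.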
The wire formats, at a glance:

* **Vercel format**: `POST /api/chat`, SSE (`text/event-stream`), `x-vercel-ai-data-stream: v2`
* **A2A**: JSON-RPC messages at `/agents/a2a` with blocking and streaming modes
* **MCP**: HTTP JSON-RPC endpoint at `/v1/mcp` with session header management
* **Triggers**: webhook endpoints for event-driven Agent invocation

## Triggers

**Triggers** are webhook endpoints that allow external services to invoke your Agents. When a webhook is received, the payload is validated, transformed into a message, and used to start a new conversation. Triggers are useful for:

* **Event-driven workflows** - Respond to events from external services like GitHub, Slack, or Stripe
* **Third-party integrations** - Connect any service that can send HTTP webhooks to your Agents
* **Automated pipelines** - Kick off Agent tasks from CI/CD, cron jobs, or other automation systems

Each trigger can be configured with:

* **Input validation** - JSON Schema to validate incoming payloads
* **Message templates** - Transform webhook payloads into natural language messages using `{{placeholder}}` syntax
* **Authentication** - API keys, bearer tokens, or basic auth to secure the endpoint
* **Signature verification** - HMAC-SHA256 verification for services like GitHub that sign webhooks

When a webhook is received, the trigger creates a new conversation and invokes the Agent asynchronously, returning immediately with an invocation ID for tracking.

## Authentication & API Keys

You can authenticate with your Agent using:

* **API Keys**: Securely hashed keys that are scoped to specific Agents
* **Development Mode**: No API key required, perfect for local development and testing
* **Bypass Secrets**: For internal services and infrastructure that need direct access

API keys are the recommended approach for production use, providing secure, scoped access to your Agents.
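The **Signature verification** mentioned under Triggers above follows the standard HMAC-SHA256 pattern that services like GitHub use (`x-hub-signature-256`). A minimal sketch of the idea — `signPayload` and `verifySignature` are illustrative helpers, not part of the Inkeep SDK:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute the signature header a GitHub-style sender would attach
// to the raw request body.
function signPayload(secret: string, rawBody: string): string {
  const digest = createHmac("sha256", secret).update(rawBody).digest("hex");
  return `sha256=${digest}`;
}

// Verify a received signature header against the raw body.
// timingSafeEqual avoids leaking information through comparison timing.
function verifySignature(secret: string, rawBody: string, header: string): boolean {
  const expected = Buffer.from(signPayload(secret, rawBody));
  const received = Buffer.from(header);
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

Note that verification must run against the *raw* request body bytes; re-serializing parsed JSON can change whitespace and break the signature.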
## Agent replies with Structured Data

Sometimes you want your Agent to reply not in plain text but with specific types of well-defined information, often called 'Structured Outputs' (JSON). With Inkeep, there are a few ways to do this:

* **Data Components**: Structured Outputs that Sub Agents can include in their messages to render rich, interactive UIs (lists, buttons, forms, etc.) or convey structured information.
* **Artifacts**: A Sub Agent can save information from a **tool call result** as an artifact in order to make it available to others. For example, a Sub Agent that did a web search can save the contents of a webpage it looked at as an artifact. Once saved, a Sub Agent can cite or reference artifacts in its response, and other Sub Agents or users can fetch the full artifacts if they'd like.
* **Status Updates**: Real-time progress updates, as plain text or Structured Outputs, that keep users informed about what the Sub Agent is doing during longer operations.

## Passing context to Sub Agents

Beyond using Tools to fetch information, Sub Agents can also receive information via:

* **Headers**: In the API request to an Agent, the calling application can include headers for a Sub Agent. Learn more [here](/typescript-sdk/headers).
* **Context Fetchers**: Can be configured for an Agent so that at the beginning of a conversation, an API call is automatically made to an external service to fetch information that is then made available to any Sub Agent. For example, your Headers may include a `user-id`, which can be used to auto-fetch information about the user from a CRM for any Sub Agent to use.

Headers and fetched context can then be referenced explicitly as `{{variables}}` in Sub Agent prompts. Learn more [here](/typescript-sdk/headers).

## Ways to build

Quick reference to the key docs for building with the Visual Builder or the TypeScript SDK.

* Configure and manage MCP servers for your Sub Agents.
* Create and manage Agents visually.
* Build rich UI elements Sub Agents can render in conversations.
* Define structured outputs generated by tools or Sub Agents.
* Show progress updates during longer operations.
* Manage secrets and auth for MCP servers.
* Organize agents, MCP Servers, and other entities in Projects.
* Configure Sub Agents with prompts, tools, and data components.
* Add tools as MCP servers.
* Create custom JavaScript functions that run in secure sandboxes.
* Define how Sub Agents transfer and delegate tasks.
* Dynamically fetch and cache external context.
* Store and retrieve credentials for MCP tools.
* Create webhook endpoints for external services.

The Visual Builder and TypeScript SDK work seamlessly together: define your Sub Agents in code, push them to the Visual Builder, and iterate visually.

## Projects

You can organize your related MCP Servers, Credentials, Agents, and more into **Projects**. A Project generally represents a set of related scenarios. For example, you may create one Project for your support team that has all the MCP servers and Agents related to customer support.

## CLI: Push and pull

The Inkeep CLI bridges your TypeScript SDK project and the Visual Builder. Run the following from your project (the folder that contains your `inkeep.config.ts` and an `index.ts` file that exports a project).

* **Push (code → Builder)**: Sync locally defined agents, Sub Agents, tools, and settings from your SDK project into the Visual Builder.

  ```bash
  inkeep push
  ```

* **Pull (Builder → code)**: Fetch your project from the Visual Builder back into your SDK project. By default, the CLI uses LLM assistance to update your local TypeScript files to reflect Builder changes.

  ```bash
  inkeep pull
  ```

Push and pull operate at the project level (not individual agents). Define agents in your project and push/pull the whole project.
See the [CLI Reference](/typescript-sdk/cli-reference) for full command details.

## Deployment

Once you've built your Agents, you can deploy them by:

* Self-hosting your Agents using Docker for full control and flexibility.
* Deploying your Agents to Vercel for easy serverless hosting.

## Architecture

The Inkeep Agent framework is composed of several key services and libraries that work together:

* **agents-api**: A REST API that handles configuration of Agents, Sub Agents, MCP Servers, Credentials, and Projects.
* **agents-manage-ui**: Visual Builder web interface for creating and managing Agents. Writes to the `agents-api`.
* **agents-sdk**: TypeScript SDK (`@inkeep/agents-sdk`) for declaratively defining Agents and custom tools in code. Writes to `agents-api`.
* **agents-cli**: Handy utilities, including `inkeep push` and `inkeep pull`, which sync your TypeScript SDK code with the Visual Builder.
* **agents-ui**: A UI component library of chat interfaces for embedding rich, dynamic conversational AI experiences in web apps.

# The No-Code + Code Agent Builder (/overview)

Inkeep is a platform for building Agent Chat Assistants and AI Workflows. With Inkeep, you can build AI Agents with a **No-Code Visual Builder** and a **Developer SDK**. Agents can be edited in either with **full 2-way sync**, so technical and non-technical teams can create and manage their Agents in one platform.

## Two ways to build

### No-Code Visual Builder

A drag-and-drop canvas so any team can create and own the Agents they care about.

### TypeScript Agents SDK

A code-first framework so engineering teams can build with the tools they expect.
```typescript
import { agent, subAgent } from "@inkeep/agents-sdk";
import { consoleMcp } from "./mcp";

const helloAgent = subAgent({
  id: "hello-agent",
  name: "Hello Agent",
  description: "Says hello",
  canUse: () => [consoleMcp],
  prompt: `Reply to the user and console log "hello world" with fun variations like h3llo world`,
});

export const basicAgent = agent({
  id: "basic-agent",
  name: "Basic Agent",
  description: "A basic agent",
  defaultSubAgent: helloAgent,
  subAgents: () => [helloAgent],
});
```

The **Visual Builder and TypeScript SDK are fully interoperable**: your technical and non-technical teams can edit and manage Agents in either format and switch or collaborate with others at any time.

## Use cases

Inkeep Agents can operate as **Agentic AI Chat Assistants**, for example:

* a customer experience agent for help centers, technical docs, or in-app experiences
* an internal copilot to assist your support, sales, marketing, ops, and other teams

Agents can also be used for **Agentic Workflow Automation** like:

* Creating and updating knowledge bases, documentation, and blogs
* Updating CRMs, triaging helpdesk tickets, and tackling repetitive tasks

## Platform Overview

**Inkeep Open Source** includes:

* A Visual Builder & TypeScript SDK with 2-way sync
* Multi-agent architecture to support teams of agents
* MCP Tools with credentials management
* A UI component library for dynamic chat experiences
* Triggering Agents via MCP, A2A, Webhooks, & Vercel SDK APIs
* Observability via a Traces UI & OpenTelemetry
* Easy deployment to Vercel or with Docker

Interested in a managed platform? Sign up for the [Inkeep Cloud waitlist](https://inkeep.com/cloud-waitlist) or learn about [Inkeep Enterprise](https://inkeep.com/enterprise). You can view a full feature comparison [here](/pricing#feature-comparison).
## Our Approach

Inkeep is designed to be extensible and open: you can use the LLM provider of your choice, use Agents via open protocols, and, with a [fair-code](/community/license) license and great devex, easily deploy and self-host Agents in your own infra.

[Join our community](https://docs.inkeep.com/community/inkeep-community) to get support, stay up to date, and share feedback.

## Next Steps

* Get started with the Visual Builder and TypeScript SDK in under 5 minutes.
* Learn about the key concepts of building Agents with Inkeep.

# Pricing (/pricing)

Learn about Inkeep's pricing plans and features

Inkeep offers three ways to get started: **Open Source** (free forever), **Cloud** (managed deployment), and **Enterprise** (managed platform with dedicated support).

**Open Source** - Everything you need to create AI Agents:

* Visual Builder & SDK
* MCP Servers & Tools
* Observability & UI Lib
* Use with Claude/Cursor
* Deploy to Vercel or Docker

Follow the \<1min Quick Start →

**Cloud** - Everything in Open Source plus:

* Fully managed cloud hosting
* No infra management
* Transparent, usage-based pricing

Sign up for the Cloud waitlist →

**Enterprise** - Everything in Open Source plus:

* Dedicated forward deployed engineer
* Unified AI Search (Managed RAG)
* Use from Slack & Support Platforms
* PII removal and data controls
* Cloud Hosting & User Management
* Trainings, enablement, and support

Schedule a Demo →

## Feature Comparison

### Building Agents

| Feature | Open Source | Cloud | Enterprise |
| -------------------------------- | :---------: | :---: | :--------: |
| No-Code Visual Builder | ✓ | ✓ | ✓ |
| Agent Developer SDK (TypeScript) | ✓ | ✓ | ✓ |
| 2-way Sync: Edit in Code or UI | ✓ | ✓ | ✓ |

### Core Framework

| Feature | Open Source | Cloud | Enterprise |
| ------------------------------------------------------ | :---------: | :---: | :--------: |
| Take actions on any MCP Server, App, or API | ✓ | ✓ | ✓ |
| Multi-agent Architecture (Teams of Agents) | ✓ | ✓ | ✓ |
| Agent Credential and Permissions Management | ✓ | ✓ | ✓ |
| Agent Traces available in UI and OTEL | ✓ | ✓ | ✓ |
| Talk to Agents via A2A, MCP, and Vercel AI SDK formats | ✓ | ✓ | ✓ |

### Talk to Your Agents (Out of the Box)

| Feature | Open Source | Cloud | Enterprise |
| -------------------------------------------------- | :---------: | :---: | :--------: |
| With Claude, ChatGPT, and Cursor | ✓ | ✓ | ✓ |
| With Slack, Discord, and Teams integrations | — | — | ✓ |
| With Zendesk, Salesforce, and support integrations | — | — | ✓ |

### Building Agent UIs

| Feature | Open Source | Cloud | Enterprise |
| --------------------------------------------------- | :---------: | :---: | :--------: |
| Agent Messages with Custom UIs (forms, cards, etc.) | ✓ | ✓ | ✓ |
| Custom UIs using Vercel AI SDK format | ✓ | ✓ | ✓ |
| Out-of-box Chat Components (React, JS) | ✓ | ✓ | ✓ |
| Answers with Inline Citations | ✓ | ✓ | ✓ |

### Unified AI Search (Managed RAG)

| Feature | Open Source | Cloud | Enterprise |
| ---------------------------------------------------- | :---------: | :---: | :--------: |
| Real-time fetch from databases, APIs, and the web | ✓ | ✓ | ✓ |
| Public sources ingestion (docs, help center, etc.) | — | — | ✓ |
| Private sources ingestion (Notion, Confluence, etc.) | — | — | ✓ |
| Optimized Retrieval and Search (Managed RAG) | — | — | ✓ |
| Semantic Search | — | — | ✓ |

### Insights & Analytics

| Feature | Open Source | Cloud | Enterprise |
| ---------------------------------- | :---------: | :---: | :--------: |
| AI Reports on Knowledge Gaps | — | — | ✓ |
| AI Reports on Product Feature Gaps | — | — | ✓ |

### Authentication and Authorization

| Feature | Open Source | Cloud | Enterprise |
| ------------------------- | :---------: | :---: | :--------: |
| Single Sign-on | — | — | ✓ |
| Role-Based Access Control | — | — | ✓ |
| Audit Logs | — | — | ✓ |

### Security

| Feature | Open Source | Cloud | Enterprise |
| ------------------------------------- | :---------: | :---: | :--------: |
| PII Removal | — | — | ✓ |
| Uptime and Support SLAs | — | — | ✓ |
| SOC 2 Type II and Pentest Reports | — | — | ✓ |
| GDPR, HIPAA, DPA, and Infosec Reviews | — | — | ✓ |

### Deployment

| Feature | Open Source | Cloud | Enterprise |
| ------------- | :---------: | :-------: | :---------------------------: |
| Hosting Types | Self-hosted | Cloud | Cloud, Hybrid, or Self-hosted |
| Support Type | Community | Community | Dedicated Engineering Team |

### Forward Deployed Engineer Program

| Feature | Open Source | Cloud | Enterprise |
| ------------------------------------------ | :---------: | :---: | :--------: |
| Dedicated Architect and AI Agents Engineer | — | — | ✓ |
| 1:1 Office Hours and Trainings | — | — | ✓ |
| Structured Pilot | — | — | ✓ |

# Troubleshooting Guide (/troubleshooting)

Learn how to diagnose and resolve issues when something breaks in your Inkeep agent system.

## Overview

This guide provides a structured methodology for debugging problems across different components of your agent system.

## Step 1: Check the Timeline

The timeline is your first stop for understanding what happened during a conversation or agent execution. Navigate to the **Traces** section to view in-depth details per conversation.
Within each conversation, you'll find a clickable **error card** whenever something goes wrong during agent execution.

### What to Look For

* **Execution flow**: Review the sequence of agent actions and tool calls
* **Timing**: Check for delays or bottlenecks in the execution
* **Agent transitions**: Verify that transfers and delegations happened as expected
* **Tool usage**: Confirm that tools were called correctly and returned expected results
* **Error cards**: Look for red error indicators in the timeline and click to view detailed error information

### Error Cards in the Timeline

Clicking an error card reveals:

* **Error type**: The specific category of error (e.g., "Agent Generation Error")
* **Exception stacktrace**: The complete stack trace showing exactly where the error occurred in the code

This detailed error information helps you pinpoint exactly what went wrong and where in your agent's execution chain.

### Copy Trace for Debugging

The `Copy Trace` button in the timeline view allows you to export the entire conversation trace as JSON. This is particularly useful for offline analysis and debugging complex flows.

Copy Trace button in the timeline view for exporting conversation traces

#### What's Included in the Trace Export

When you click `Copy Trace`, the system exports a JSON object containing:

```json
{
  "metadata": {
    "conversationId": "unique-conversation-id",
    "traceId": "distributed-trace-id",
    "agentId": "agent-identifier",
    "agentName": "Agent Name",
    "exportedAt": "2025-10-14T12:00:00.000Z"
  },
  "timing": {
    "startTime": "2025-10-14T11:59:00.000Z",
    "endTime": "2025-10-14T12:00:00.000Z",
    "durationMs": 60000
  },
  "timeline": [
    // Array of all activities with complete details:
    // - Agent messages and responses
    // - Tool calls and results
    // - Agent transfers
    // - Artifact information
    // - Execution context
  ]
}
```

#### How to Use Copy Trace

1. Navigate to the **Traces** section in the management UI
2.
Open the conversation you want to debug
3. Click the **Copy Trace** button at the top of the timeline
4. The complete trace JSON is copied to your clipboard
5. Paste it into your preferred tool for analysis

This exported trace contains all the activities shown in the timeline, making it easy to share complete execution context with team members or support.

## Step 2: Check SigNoz

SigNoz provides distributed tracing and observability for your agent system, offering deeper insights when the built-in timeline isn't sufficient.

### Accessing SigNoz from the Timeline

You can access SigNoz directly from the timeline view. In the **Traces** section, click on any activity in the conversation timeline to view its details. Within the activity details, you'll find a **"View in SigNoz"** button that takes you directly to the corresponding span in SigNoz for deeper analysis.

### What SigNoz Shows

* **Distributed traces**: End-to-end request flows across services
* **Performance metrics**: Response times, throughput, and error rates

### Key Metrics to Monitor

* **Agent response times**: How long each agent takes to process requests
* **Tool execution times**: Performance of MCP servers and external APIs
* **Error rates**: Frequency and types of failures

## Agent Stopped Unexpectedly

### StopWhen Limits Reached

If your agent stops mid-conversation, it may have hit a configured stopWhen limit:

* **Transfer limit reached**: Check `transferCountIs` on your Agent or Project - the agent stops after this many transfers between Sub Agents
* **Step limit reached**: Check `stepCountIs` on your Sub Agent or Project - execution stops after this many tool calls + LLM responses

**How to diagnose:**

* Check the timeline for the last activity before stopping
* Look for messages indicating limits were reached
* Review your stopWhen configuration in Agent/Project settings

**How to fix:**

* Increase the limits if a legitimate use case requires more steps/transfers
* Optimize your agent flow to
use fewer transfers
* Investigate whether the agent is stuck in a loop (the limits may be working as intended)

See [Configuring StopWhen](/typescript-sdk/agent-settings#configuring-stopwhen) for more details.

## Check service logs (local development)

When running `pnpm dev` from your [quickstart workspace](/quick-start/start-development), you will see an interactive terminal interface. This interface allows you to inspect the logs of each [running service](/quick-start/start-development#service-ports). You can navigate between services using the up and down arrow keys.

Service logs in local development

* The `service-info` tab displays the health of each running service.
* The `manage-api` tab contains logs for all database operations. This is useful primarily for debugging issues with [`inkeep push`](/typescript-sdk/cli-reference#inkeep-push).
* The `run-api` tab contains logs for all agent execution and tool calls. This is useful for debugging issues with your agent's behavior.
* The `mcp` tab contains logs for your [custom MCP servers](/tutorials/mcp-servers/custom-mcp-servers).
* The `dashboard` tab displays logs for the [Visual Builder](/visual-builder/sub-agents) dashboard.

To terminate the running services, press `q` or `esc` in the terminal.
## Common Configuration Issues

### General Configuration Issues

* **Missing environment variables**: Ensure all required env vars are set
* **Incorrect API endpoints**: Verify you're using the right URLs
* **Network connectivity**: Check firewall and proxy settings
* **Version mismatches**: Ensure all packages are compatible

### MCP Server Connection Issues

* **MCP not able to connect**: Check that the MCP server is running and accessible
* **401 Unauthorized errors**: Verify that credentials are properly configured and valid
* **Connection timeouts**: Ensure network connectivity and firewall settings allow connections

### AI Provider Configuration Problems

* **AI Provider key not defined or invalid**:
  * Ensure you have one of these environment variables set: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`
  * Verify the API key is valid and has sufficient credits
  * Check that the key hasn't expired or been revoked
* **GPT-5 access issues**:
  * Individual users cannot access GPT-5 as it requires organization verification
  * Use GPT-4 or other available models instead
  * Contact OpenAI support if you need GPT-5 access for your organization

### Credit and Rate Limiting Issues

* **Running out of credits**:
  * Monitor your OpenAI usage and billing
  * Set up usage alerts to prevent unexpected charges
* **Rate limiting by AI providers**:
  * Especially common with high-frequency operations like summarizers
  * Monitor your API usage patterns and adjust accordingly

### Context Fetcher Issues

* **Context fetcher timeouts**: Check that external services are responding within expected timeframes

## Error Retry Behavior

When calling agents, the system automatically retries certain errors using exponential backoff.
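Exponential backoff means each retry waits roughly twice as long as the previous one, up to a cap. A minimal sketch of the idea, classifying the retryable status codes from the tables below — the base delay, cap, and helper names are illustrative, not Inkeep's actual retry schedule:

```typescript
// HTTP status codes treated as retryable (see the tables below).
const RETRYABLE_STATUS = new Set([429, 500, 502, 503, 504]);

function isRetryable(status: number): boolean {
  return RETRYABLE_STATUS.has(status);
}

// Exponential backoff: baseMs * 2^attempt, capped at maxMs.
// Real implementations usually add random jitter to avoid thundering herds.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

With these illustrative defaults, attempts 0, 1, 2, 3 would wait 500 ms, 1 s, 2 s, 4 s, and so on until hitting the 30 s cap.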
The following errors are automatically retried:

| Status Code | Meaning |
| ----------- | -------------------------------- |
| `429` | Too Many Requests (rate limited) |
| `500` | Internal Server Error |
| `502` | Bad Gateway |
| `503` | Service Unavailable |
| `504` | Gateway Timeout |

These transient network issues are also automatically retried:

* Network connectivity failures
* Connection timeouts
* `ECONNRESET` - Connection reset by peer
* `ECONNREFUSED` - Connection refused (network level)
* `ENOTFOUND` - DNS lookup failures
* Fetch/request failures

# Inkeep API (/api-reference)

Explore the Inkeep Agents API endpoints for managing, running, and evaluating agents.

***

# CrewAI vs Inkeep (/comparisons/crewai)

Compare CrewAI with Inkeep

## Overview

CrewAI is a Python-only, developer-focused platform with process-based (sequential/hierarchical) agent orchestration, whereas Inkeep provides true autonomous agents with 2-way code-UI sync, out-of-box chat components, native data ingestion, and ready-to-deploy integrations for customer-facing AI experiences.

### Building Agents

### Developer Platform

### Unified AI Search & RAG

### Interact with your AI Agents in...

### Building Agent UIs

### AI Agents for...

### Insights & Analytics

### Authentication and Authorization

### Deployment

### Security

# Lindy vs Inkeep (/comparisons/lindy)

Compare Lindy with Inkeep

## Overview

Inkeep is a developer-first platform with a TypeScript SDK, 2-way code-UI sync, and graph-based multi-agent orchestration for building sophisticated AI systems, while Lindy is a no-code workflow automation tool designed for business users who prefer visual-only configuration.

### Building Agents

### Developer Platform

### Unified AI Search & RAG

### Interact with your AI Agents in...

### Building Agent UIs

### AI Agents for...
### Insights & Analytics

### Authentication and Authorization

### Deployment

### Security

# n8n vs Inkeep (/comparisons/n8n)

Compare n8n with Inkeep

## Overview

n8n excels at deterministic, rule-based workflow automation where each step follows a predefined path. Inkeep, by contrast, is built for agentic workflows and conversational AI-driven systems that reason, adapt, and make decisions dynamically.

### Building Agents

### Workflow Automation

### Developer Platform

### Unified AI Search & RAG

### Interact with your AI Agents in...

### Building Agent UIs

### AI Agents for...

### Insights & Analytics

### Authentication and Authorization

### Deployment

### Security

# OpenAI AgentKit vs Inkeep (/comparisons/openai-agent-kit)

Compare OpenAI AgentKit with Inkeep

## Overview

OpenAI AgentKit provides strong UI components (ChatKit) and basic agent building, but limits you to OpenAI models with manual knowledge management. Inkeep offers multi-agent orchestration, 2-way code-UI sync, automated knowledge ingestion, multi-provider support, and enterprise-grade content intelligence.

### Building Agents

### Developer Platform

### Unified AI Search & RAG

### Interact with your AI Agents in...

### Building Agent UIs

### AI Agents for...

### Insights & Analytics

### Authentication and Authorization

### Deployment

### Security

# Join & Follow (/community/inkeep-community)

To get help, share ideas, and provide feedback, join our community: Get support, share ideas, and connect with other builders.

You can also find us on:

* Updates, tips, and shout‑outs for builders.
* Star the repo, open issues, and contribute.
* Practical demos, tutorials, and deep dives.
* Company updates, launches, and hiring news.

Feel free to tag us as `@inkeep` on 𝕏 or `@Inkeep` on LinkedIn with a video of what you're building — we like to highlight neat Agent use cases from the community where possible.
Also feel free to submit a PR to our [template library](https://github.com/inkeep/agents/tree/main/agents-cookbook/template-projects).

To keep up to date with all news related to AI Agents, sign up for the Agents Newsletter: Get the latest AI Agents news, tips, and updates delivered to your inbox.

# License (/community/license)

License for the Inkeep Agent Framework

The Inkeep Agent Framework is licensed under the **Elastic License 2.0** ([ELv2](https://www.elastic.co/licensing/elastic-license)) subject to **Inkeep's Supplemental Terms** ([SUPPLEMENTAL\_TERMS.md](https://github.com/inkeep/agents/blob/main/SUPPLEMENTAL_TERMS.md)). This is a [fair-code](https://faircode.io/), source-available license that allows broad usage while protecting against certain competitive uses.

# Connect Your Data with Context7 (/connect-your-data/context7)

Learn how to connect your code repositories and documentation to your agents using Context7

Context7 specializes in connecting code repositories and technical documentation to your agents. It's particularly well-suited for developers who want their agents to have access to library documentation, API specs, and code repositories.

## Supported data sources

With Context7 you can connect:

* **Code Repositories**: GitHub Repositories, GitLab, BitBucket
* **API Documentation**: OpenAPI Spec
* **Documentation Formats**: LLMs.txt
* **Websites**: Website crawling and indexing

## Getting started

### Step 1: Check if your library is already available

Context7 maintains a library of pre-indexed documentation for popular libraries and frameworks. Before creating an account, check if your library is already available: visit [context7.com](https://context7.com/) and browse the list of available libraries.
If your library is already listed, you can use it immediately without additional setup: skip directly to Step 4 to get the library ID and register the MCP server.

### Step 2: Create an account

If your library isn't listed or you want to connect custom sources:

1. [Sign up for Context7](https://context7.com/sign-in)
2. Complete the account setup process
3. Verify your email address if required

### Step 3: Connect your data sources

1. Log in to your Context7 dashboard
2. Add your repositories, websites, or OpenAPI specs
3. Wait for Context7 to index your content
4. Verify that your content is accessible

### Step 4: Get the Context7 Library ID

To use a specific library with your agents, you'll need the Context7 Library ID. You can find this ID in the library's URL on context7.com.

**How to find the Library ID:**

1. Navigate to the library page on context7.com (e.g., `https://context7.com/supabase/supabase`)
2. Copy the path after the domain name
3. The Library ID is the path portion of the URL

**Example:**

* URL: `https://context7.com/supabase/supabase`
* Library ID: `supabase/supabase`

If you don't know the library ID, the Context7 MCP server can automatically match libraries based on your query using the `resolve-library-id` tool. See the "Automatic library matching" section below for details.
### Step 5: Register the MCP server

Register the Context7 MCP server as a tool in your agent configuration:

**Using TypeScript SDK:**

```typescript
import { mcpTool, subAgent } from "@inkeep/agents-sdk";

const context7Tool = mcpTool({
  id: "context7-docs",
  name: "context7_search",
  description: "Search code documentation and library references",
  serverUrl: "https://mcp.context7.com/mcp",
});

const devAgent = subAgent({
  id: "dev-agent",
  name: "Developer Assistant",
  description: "Helps with code questions using library documentation",
  prompt: `You are a developer assistant with access to code documentation.`,
  canUse: () => [context7Tool],
});
```

**Using Visual Builder:**

1. Go to the **MCP Servers** tab in the Visual Builder
2. Click "New MCP server"
3. Enter:
   * **Name**: `Context7 Documentation`
   * **URL**: `https://mcp.context7.com/mcp`
   * **Transport Type**: `Streamable HTTP` or `SSE`
4. Save the server
5. Add it to your agent by dragging it onto your agent canvas

### Step 6: Use the Context7 MCP server in your agent

Once your Context7 MCP server is registered, you can use it with a specific library ID (from Step 4) or let it automatically match libraries based on your query.

#### Specifying a library ID

If you know which library you want to use, specify its Context7 Library ID in your agent's prompt. This lets the Context7 MCP server skip the library-matching step and retrieve documentation directly.

**Example:**

```typescript
const devAgent = subAgent({
  id: "react-agent",
  name: "React Assistant",
  description: "Helps with React development",
  prompt: `You are a React development assistant.
Use library ID "facebook/react" when searching for documentation.
Use the get-library-docs tool to find React documentation and examples.`,
  canUse: () => [context7Tool],
});
```

#### Automatic library matching

If you don't specify a library ID, the Context7 MCP server will automatically match libraries based on your query. The server uses the `resolve-library-id` tool to identify the appropriate library, then uses the `get-library-docs` tool to retrieve the relevant documentation.

This approach works well when:

* You're working with multiple libraries
* The library name is mentioned in the user's query
* You want the agent to dynamically select the most relevant library

# Connect Your Data with Firecrawl (/connect-your-data/firecrawl)

Connect websites to your agents using Firecrawl

## Overview

Firecrawl is a web scraping and web crawling platform that extracts clean content from web pages and converts it to markdown or structured JSON, ready for embedding and use in RAG pipelines. With Firecrawl you can connect your agents to:

* **Websites**: Website crawling and indexing for extracting clean content from web pages
* **Web pages**: Individual page scraping with automatic content extraction

## RAG pipeline workflow

Here's what a complete RAG pipeline looks like to connect your websites to your agents:

1. **Scrape web content with Firecrawl** - Extract clean markdown from websites
2. **Save the clean markdown** - Store scraped content locally
3. **Index the documents in Pinecone** - Using the Pinecone Assistant SDK
4. **Agent queries via MCP server** - Retrieve relevant content using semantic search

## Getting started

### Prerequisites

Before we get started, make sure you have the following:

* A [Firecrawl account](https://firecrawl.dev/)
* A [Pinecone account](https://app.pinecone.io/)
* [uv](https://docs.astral.sh/uv/) installed
* An active Python virtual environment

### Step 1: Set up Firecrawl and collect data
Install Firecrawl and retrieve your API key from [firecrawl.dev](https://firecrawl.dev):

```bash
uv pip install firecrawl-py python-dotenv
```

Save your API key to a `.env` file:

```bash title=".env"
FIRECRAWL_API_KEY=fc-YOUR-KEY-HERE
```

The following script uses Firecrawl to explore a website's structure, identify all available pages, and convert each page's content into markdown files.

```python
from firecrawl import Firecrawl
from dotenv import load_dotenv
from pathlib import Path

load_dotenv()
app = Firecrawl()

# Crawl to discover pages
crawl_result = app.crawl(
    "https://www.mayoclinic.org/drugs-supplements",
    limit=10,
    scrape_options={'formats': ['markdown']}
)

# Extract URLs
urls = [page.metadata.url for page in crawl_result.data if page.metadata and page.metadata.url]

# Batch scrape
batch_job = app.batch_scrape(urls, formats=["markdown"])

# Create a directory for our documents and save each scraped page
# as a numbered markdown file
output_dir = Path("data/documents")
output_dir.mkdir(parents=True, exist_ok=True)

for i, result in enumerate(batch_job.data):
    filename = f"doc_{i:02d}.md"
    with open(output_dir / filename, "w") as f:
        f.write(result.markdown)
```

For a more in-depth tutorial on programmatically scraping with Firecrawl, follow Firecrawl's [guide](https://www.firecrawl.dev/blog/best-vector-databases-2025) under the "Building a RAG Pipeline" section.

### Step 2: Set up Pinecone Assistant and index your documents

We'll load those markdown files and store them in Pinecone. First, install the required packages:

```bash
uv pip install langchain-pinecone langchain-openai pinecone langchain langchain-text-splitters
```

Set up your Pinecone API key environment variable:

```bash title=".env"
PINECONE_API_KEY=your-key
```

In your Pinecone Assistant, create a new assistant named "drug-info-rag".
The code below indexes your documents in the assistant with their embeddings.

```python
import os
from pathlib import Path
from dotenv import load_dotenv
from pinecone import Pinecone

load_dotenv()

# Initialize Pinecone
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

# Create assistant
assistant = pc.assistant.Assistant(
    assistant_name="drug-info-rag",
)

# Upload markdown files to assistant
for md_file in Path("data/documents").glob("*.md"):
    response = assistant.upload_file(
        file_path=str(md_file.absolute()),
        timeout=None
    )
    print(f"Uploaded {md_file.name}: {response}")
```

### Step 3: Get your Pinecone Assistant MCP server URL

1. Navigate to the **Settings** tab in [Pinecone Assistant](https://app.pinecone.io/)
2. Copy the MCP URL provided

### Step 4: Register the MCP server

Register the Pinecone MCP server as a tool in your agent configuration, using the MCP URL you copied in Step 3 as the `serverUrl`.

**Using TypeScript SDK:**

You can create your [credential](/typescript-sdk/credentials/overview) using keychain, Nango, or environment variables; in this example we use environment variables.
```dotenv title=".env"
PINECONE_API_KEY=
```

```typescript title="agents/documentation-agent.ts"
import { mcpTool, subAgent, credential } from "@inkeep/agents-sdk";
import { CredentialStoreType } from "@inkeep/agents-core";

const pineconeCredential = credential({
  id: "pinecone-credential",
  name: "Pinecone Credential",
  type: CredentialStoreType.memory,
  credentialStoreId: "memory-default",
  retrievalParams: {
    key: "PINECONE_API_KEY", // the environment variable holding your Pinecone API key
  },
});

const pineconeTool = mcpTool({
  id: "pinecone-documents",
  name: "pinecone_search",
  description: "Search uploaded documents and files using semantic search",
  serverUrl: "", // paste the MCP URL from the Pinecone Settings tab
  credential: pineconeCredential, // attach the credential so requests are authenticated
});

const docAgent = subAgent({
  id: "doc-agent",
  name: "Documentation Assistant",
  description: "Answers questions using uploaded documents",
  prompt: `You are a documentation assistant with access to company documents and files.`,
  canUse: () => [pineconeTool],
});
```

**Using Visual Builder:**

1. **Add a Pinecone credential:**
   * Go to the **Credentials** tab in the Visual Builder
   * Click **"New credential"**
   * Select **"Bearer authentication"**
   * Enter:
     * **Name**: `Pinecone API Key` (or your preferred name)
     * **API key**: Your Pinecone API key (found in your [Pinecone dashboard](https://app.pinecone.io/))
   * Click **"Create Credential"** to save
2. **Register the MCP server:**
   * Go to the **MCP Servers** tab in the Visual Builder
   * Click **"New MCP server"**
   * Select **"Custom Server"**
   * Enter:
     * **Name**: `Pinecone Documents`
     * **URL**: Your MCP URL from the Pinecone Settings tab
     * **Transport Type**: `Streamable HTTP`
     * **Credential**: Select the Pinecone credential you created
   * Click **"Create"** to save the server
3. **Add the MCP tool to your sub agent:**
   * Drag the Pinecone Documents MCP tool onto your agent canvas and connect it to the sub agent

### Step 5: Use the Pinecone Assistant MCP server in your agent

Once you have registered your MCP server as a tool and connected it to your agent, your agent can use the Pinecone Assistant tool to search and retrieve relevant content from your uploaded documents. Ask an interesting question like, "What are the primary uses of amlodipine and atorvastatin, and how do they work in the body?"

The Pinecone tool provides a `get_context` function that retrieves relevant document snippets from your knowledge base. When your agent calls this tool, it will:

* **Search semantically**: Use vector similarity search to find the most relevant content based on the query
* **Return formatted snippets**: Each result includes:
  * `file_name`: The name of the file containing the snippet
  * `pages`: The page numbers where the snippet appears (for PDF and DOCX files)
  * `content`: The actual text content of the snippet
**Parameters:**

* `query` (required): The search query to retrieve context for
* `top_k` (optional): The number of context snippets to retrieve. Defaults to 15.

# Connect Your Data with Inkeep Unified Search (/connect-your-data/inkeep)

Learn how to connect your data sources to your agents using Inkeep Unified Search's comprehensive knowledge base platform

Inkeep Unified Search is part of [Inkeep's Enterprise offering](https://inkeep.com/enterprise). Connect 25+ data sources to create a unified knowledge base that your agents can access.

## Supported data sources

Inkeep Unified Search supports a wide variety of data sources:

* **Documentation**: OpenAPI Spec, Plain text
* **Collaboration**: Confluence, Notion, Slack, Discord, Discourse, Missive
* **Code Repositories**: GitHub
* **Cloud Storage**: Google Drive
* **Websites**: Website crawling and indexing
* **Video**: YouTube
* **Support Systems**:
  * Freshdesk Tickets
  * HelpScout Docs and Tickets
  * Zendesk Knowledge Base, Tickets, and Help Center
* **Project Management**: Jira

## Getting started

### Step 1: Set up your Inkeep account

If you don't have an Inkeep account yet, you can:

* [Try Inkeep on your content](https://inkeep.com/demo) - Test Inkeep with your own content
* [Schedule a call](https://inkeep.com/schedule-demo) - Get personalized setup assistance

### Step 2: Connect your data sources

1. Log in to your Inkeep dashboard
2. Navigate to the "Sources" tab
3. Add and configure your desired data sources (websites, GitHub repos, Notion pages, etc.)
4. Wait for Inkeep to index your content

### Step 3: Get your MCP server URL

Once your data sources are connected and indexed, obtain your Inkeep MCP server URL from your Inkeep dashboard:

1. Go to the **Assistants** tab
2. Click **"Create Assistant"**
3. Select **MCP** from the dropdown
4. Copy the MCP server URL

### Step 4: Register the MCP server

Register your Inkeep MCP server as a tool in your agent configuration:

**Using TypeScript SDK:**

```typescript
import { mcpTool, subAgent } from "@inkeep/agents-sdk";

const inkeepTool = mcpTool({
  id: "inkeep-knowledge-base",
  name: "knowledge_base",
  description: "Search the company knowledge base powered by Inkeep",
  serverUrl: "YOUR_INKEEP_MCP_SERVER_URL", // From your Inkeep dashboard
});

const supportAgent = subAgent({
  id: "support-agent",
  name: "Support Agent",
  description: "Answers questions using the company knowledge base",
  prompt: `You are a support agent with access to the company knowledge base.`,
  canUse: () => [inkeepTool],
});
```

**Using Visual Builder:**

1. Go to the **MCP Servers** tab in the Visual Builder
2. Click "New MCP server"
3. Select "Custom Server"
4. Enter:
   * **Name**: `Inkeep Knowledge Base`
   * **URL**: Your Inkeep MCP server URL
   * **Transport Type**: `Streamable HTTP`
5. Save the server
6. Add it to your agent by dragging it onto your agent canvas

### Step 5: Use the Inkeep MCP server in your agent

Once you have registered your MCP server as a tool and connected it to your agent, it's ready to use! Click the "Try it" button to test it out.

# Connecting Your Data (/connect-your-data/overview)

Learn how to connect your data sources to your agents through MCP servers

Connect your data sources to your agents through MCP servers. Supported sources include websites, GitHub repositories, documentation, knowledge bases, PDFs, and more. Your agents can search and access this content in real time.

## How it works

1. **Set up your data source** - Configure your chosen provider and connect your data sources
2. **Get your MCP server URL** - Obtain the MCP server endpoint from your provider
3. **Register the MCP server** - Register it as a tool in the Visual Builder or TypeScript SDK
4. **Use in your agents** - Your agents can now search and access your connected data

## Choose a data provider

Select a provider that supports your data sources:

* **Inkeep Unified Search**: Connect 25+ data sources including websites, GitHub, Notion, Slack, Confluence, and more. Comprehensive knowledge base solution.
* **Pinecone**: Connect DOCX, JSON, Markdown, PDF, and Text files. Ideal for document-heavy workflows and semantic search applications.
* **Context7**: Connect GitHub, GitLab, and BitBucket repositories, OpenAPI specs, and websites. Great for code documentation.
* **Ref**: Connect GitHub repositories, PDFs, and Markdown files. Simple and focused solution for documentation.
* **Firecrawl**: Connect websites. Great for extracting LLM-friendly content from the web.

## Other options

Consider these additional options for connecting your data:

* **[Reducto](https://reducto.ai/)** - Self-serve document processing with a free tier. Upload documents and access them via APIs or SDKs. Wrap their APIs in your own MCP server or invoke them directly using [function tools](/typescript-sdk/tools/function-tools).
* **[Unstructured](https://unstructured.io/)** - Document processing platform with a free tier. Upload documents and access them via APIs or SDKs. Wrap their APIs in your own MCP server or invoke them directly using [function tools](/typescript-sdk/tools/function-tools).
* **[Milvus](https://milvus.io/)** - Open-source vector database with self-hosting options. See their [MCP integration guide](https://milvus.io/docs/milvus_and_mcp.md) for setup instructions. You must manage deployment and hosting for the MCP server yourself.
* **[SingleStore](https://singlestore.com/)** - Relational database with a managed MCP server that converts natural language queries into SQL. Learn more in their [MCP server documentation](https://docs.singlestore.com/cloud/ai-services/singlestore-mcp-server/).

# Connect Your Data with Pinecone (/connect-your-data/pinecone)

Connect documents to your agents using Pinecone Assistant's MCP server

Pinecone is a vector database, and Pinecone Assistant helps you build production-grade chat and agent applications. Connect your documents and files to your agents using Pinecone Assistant's MCP server for semantic search and retrieval.

## Supported data sources

With Pinecone Assistant you can connect:

* **Documents**: DOCX (.docx), PDF (.pdf), Text (.txt)
* **Structured Data**: JSON (.json)
* **Documentation**: Markdown (.md)

## Getting started

### Step 1: Create a Pinecone account

[Sign up for Pinecone](https://app.pinecone.io/)

### Step 2: Create an Assistant

1. Log in to your [Pinecone dashboard](https://app.pinecone.io/)
2. Navigate to the **Assistant** tab
3. Click **"Create an Assistant"**
4. Give your assistant a name

### Step 3: Upload your files

1. In your assistant, navigate to the **Files** tab (located in the top right corner)
2. Upload your documents (DOCX, JSON, Markdown, PDF, or Text files)
3. Wait for Pinecone to process and index your content

Test your assistant in the Assistant playground. Try uploading an Apple 10-K PDF file from [Apple's investor relations page](https://investor.apple.com/sec-filings/default.aspx) and asking it to summarize the document.

### Step 4: Get your MCP server URL

1. Navigate to the **Settings** tab in your assistant
2. Copy the MCP URL provided

### Step 5: Register the MCP server

Register the Pinecone MCP server as a tool in your agent configuration, using the MCP URL you copied in Step 4 as the `serverUrl`.
**Using TypeScript SDK:**

You can create your [credential](/typescript-sdk/credentials/overview) using keychain, Nango, or environment variables; in this example we use environment variables.

```dotenv title=".env"
PINECONE_API_KEY=
```

```typescript title="agents/documentation-agent.ts"
import { mcpTool, subAgent, credential } from "@inkeep/agents-sdk";
import { CredentialStoreType } from "@inkeep/agents-core";

const pineconeCredential = credential({
  id: "pinecone-credential",
  name: "Pinecone Credential",
  type: CredentialStoreType.memory,
  credentialStoreId: "memory-default",
  retrievalParams: {
    key: "PINECONE_API_KEY", // the environment variable holding your Pinecone API key
  },
});

const pineconeTool = mcpTool({
  id: "pinecone-documents",
  name: "pinecone_search",
  description: "Search uploaded documents and files using semantic search",
  serverUrl: "", // paste the MCP URL from the Pinecone Settings tab
  credential: pineconeCredential, // attach the credential so requests are authenticated
});

const docAgent = subAgent({
  id: "doc-agent",
  name: "Documentation Assistant",
  description: "Answers questions using uploaded documents",
  prompt: `You are a documentation assistant with access to company documents and files.`,
  canUse: () => [pineconeTool],
});
```

**Using Visual Builder:**

1. **Add a Pinecone credential:**
   * Go to the **Credentials** tab in the Visual Builder
   * Click **"New credential"**
   * Select **"Bearer authentication"**
   * Enter:
     * **Name**: `Pinecone API Key` (or your preferred name)
     * **API key**: Your Pinecone API key (found in your [Pinecone dashboard](https://app.pinecone.io/))
   * Click **"Create Credential"** to save
2. **Register the MCP server:**
   * Go to the **MCP Servers** tab in the Visual Builder
   * Click **"New MCP server"**
   * Select **"Custom Server"**
   * Enter:
     * **Name**: `Pinecone Documents`
     * **URL**: Your MCP URL from the Pinecone Settings tab
     * **Transport Type**: `Streamable HTTP`
     * **Credential**: Select the Pinecone credential you created
   * Click **"Create"** to save the server
3. **Add the MCP tool to your sub agent:**
   * Drag the Pinecone Documents MCP tool onto your agent canvas and connect it to the sub agent

### Step 6: Use the Pinecone Assistant MCP server in your agent

Once you have registered your MCP server as a tool and connected it to your agent, your agent can use the Pinecone Assistant tool to search and retrieve relevant content from your uploaded documents.

The Pinecone tool provides a `get_context` function that retrieves relevant document snippets from your knowledge base. When your agent calls this tool, it will:

* **Search semantically**: Use vector similarity search to find the most relevant content based on the query
* **Return formatted snippets**: Each result includes:
  * `file_name`: The name of the file containing the snippet
  * `pages`: The page numbers where the snippet appears (for PDF and DOCX files)
  * `content`: The actual text content of the snippet
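To make those fields concrete, here is a rough TypeScript model of one result (an illustrative sketch only; the authoritative response schema is defined by Pinecone, and the file name and content below are hypothetical):

```typescript
// Illustrative shape of one get_context snippet, based on the fields
// described above; the exact wire format is defined by Pinecone.
interface ContextSnippet {
  file_name: string;  // source file the snippet came from
  pages?: number[];   // page numbers (PDF and DOCX files only)
  content: string;    // the snippet text itself
}

// A hypothetical snippet from an uploaded 10-K PDF:
const snippet: ContextSnippet = {
  file_name: "apple-10k.pdf",
  pages: [23, 24],
  content: "Management's discussion of results of operations...",
};
```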
**Parameters:**

* `query` (required): The search query to retrieve context for
* `top_k` (optional): The number of context snippets to retrieve. Defaults to 15.

# Connect Your Data with Ref (/connect-your-data/ref)

Learn how to connect your documentation and code repositories to your agents using Ref

Ref provides a simple and focused solution for connecting your documentation and code repositories to your agents. With support for GitHub repositories, PDFs, and Markdown files, Ref is a good fit for teams that need straightforward documentation access.

## Supported data sources

With Ref you can connect:

* **Code Repositories**: GitHub Repositories
* **Documents**: PDF files
* **Documentation**: Markdown files

## Getting started

### Step 1: Create an account

1. [Sign up for Ref](https://ref.tools/login)
2. Complete the account registration process

### Step 2: Upload your resources

1. Log in to your Ref dashboard
2. Navigate to the [Resources page](https://ref.tools/resources)
3. Upload your PDFs, Markdown files, or connect your GitHub repositories
4. Wait for Ref to process and index your content

### Step 3: Get your MCP server URL

1. Navigate to the [Install MCP page](https://ref.tools/install)
2. Copy the MCP server URL (it will start with `https://api.ref.tools/mcp?apiKey=ref-`)

### Step 4: Register the MCP server

Register the Ref MCP server as a tool in your agent configuration, using your actual API key from Step 3 in the `serverUrl`.
**Using TypeScript SDK:**

```typescript
import { mcpTool, subAgent } from "@inkeep/agents-sdk";

const refTool = mcpTool({
  id: "ref-documentation",
  name: "ref_search",
  description: "Search uploaded documentation and code repositories",
  serverUrl: "https://api.ref.tools/mcp?apiKey=ref-", // replace with your full API key
});

const docAgent = subAgent({
  id: "doc-agent",
  name: "Documentation Assistant",
  description: "Answers questions using uploaded documentation",
  prompt: `You are a documentation assistant with access to company documentation.`,
  canUse: () => [refTool],
});
```

**Using Visual Builder:**

1. Go to the **MCP Servers** tab in the Visual Builder
2. Click "New MCP server"
3. Select "Custom Server"
4. Enter:
   * **Name**: `Ref Documentation`
   * **URL**: `https://api.ref.tools/mcp?apiKey=ref-` (replace with your API key)
   * **Transport Type**: `Streamable HTTP`
5. Save the server
6. Add it to your agent by dragging it onto your agent canvas

### Step 5: Use the Ref MCP server in your agent

Once you have registered your MCP server as a tool and connected it to your agent, it's ready to use! Click the "Try it" button to test it out.

# Deploy to Vercel (/deployment/vercel)

Deploy the Inkeep Agent Framework to Vercel

## Deploy to Vercel

### Step 1: Create a GitHub repository for your project

If you do not have an Inkeep project already, [follow these steps](/get-started/quick-start) to create one. Then push your project to a repository on GitHub.

### Step 2: Create a Postgres Database

Create a Postgres database on the [**Vercel Marketplace**](https://vercel.com/marketplace/neon) or directly at [**Neon**](https://neon.tech/).

### Step 3: Create a Doltgres Database

Create a Doltgres database at [**DoltHub**](https://hosted.doltdb.com).
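Both databases are addressed with standard Postgres-style connection strings of the form `postgresql://user:password@host:port/database`. Before wiring them into Vercel, a quick sanity check of the parts can catch a malformed string early (a hypothetical check, using only Node's built-in `URL` parser):

```typescript
// Hypothetical sanity check for a Postgres-style connection string,
// using Node's built-in WHATWG URL parser (no dependencies).
function checkConnectionString(conn: string): void {
  const url = new URL(conn);
  if (url.protocol !== "postgresql:") {
    throw new Error(`Unexpected scheme: ${url.protocol}`);
  }
  if (!url.hostname || !url.username) {
    throw new Error("Connection string is missing a host or user");
  }
  if (url.pathname.length < 2) {
    throw new Error("Connection string is missing a database name");
  }
}

checkConnectionString("postgresql://user:password@host:5432/database"); // passes silently
```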
### Step 4: Configure Database Connection

Set your database connection strings as environment variables:

```dotenv
# Doltgres Database
INKEEP_AGENTS_MANAGE_DATABASE_URL=

# Postgres Database
INKEEP_AGENTS_RUN_DATABASE_URL=
```

### Step 5: Create a Vercel account

Sign up for a Vercel account [here](https://vercel.com/signup).

### Step 6: Create a Vercel project for Manage API

Vercel New Project - Manage API

The Framework Preset should be "Hono" and the Root Directory should be `apps/manage-api`. Required environment variables for Manage API:

```dotenv
ENVIRONMENT=production
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=

# Doltgres Database
INKEEP_AGENTS_MANAGE_DATABASE_URL=

# Postgres Database
INKEEP_AGENTS_RUN_DATABASE_URL=

NANGO_SECRET_KEY=
NANGO_SERVER_URL=https://api.nango.dev
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save it; you will reuse it as the Manage API bypass secret in Step 8. |
| `INKEEP_AGENTS_MANAGE_DATABASE_URL` | Doltgres connection string from Step 3 (e.g., `postgresql://user:password@host:5432/database`) |
| `INKEEP_AGENTS_RUN_DATABASE_URL` | Postgres connection string from Step 2 (e.g., `postgresql://user:password@host:5433/database`) |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/typescript-sdk/credentials/nango). Note: Local Nango setup won't work with Vercel deployments. |
| `NANGO_SERVER_URL` | `https://api.nango.dev` |

### Step 7: Create a Vercel project for Run API

Vercel New Project - Run API

The Framework Preset should be "Hono" and the Root Directory should be `apps/run-api`. Required environment variables for Run API:

```dotenv
ENVIRONMENT=production
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_GENERATIVE_AI_API_KEY=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=

# Postgres Database
INKEEP_AGENTS_RUN_DATABASE_URL=

OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.us.signoz.cloud:443/v1/traces
OTEL_EXPORTER_OTLP_TRACES_HEADERS=signoz-ingestion-key=
NANGO_SECRET_KEY=
NANGO_SERVER_URL=https://api.nango.dev
INKEEP_AGENTS_JWT_SIGNING_SECRET=
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `ANTHROPIC_API_KEY` | Your Anthropic API key |
| `OPENAI_API_KEY` | Your OpenAI API key |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Your Google Gemini API key |
| `INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. Save it; you will reuse it as the Run API bypass secret in Step 8. |
| `INKEEP_AGENTS_RUN_DATABASE_URL` | Postgres connection string from Step 2 (e.g., `postgresql://user:password@host:5432/database`) |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/typescript-sdk/credentials/nango). Note: Local Nango setup won't work with Vercel deployments. |
| `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | `https://ingest.us.signoz.cloud:443/v1/traces` |
| `OTEL_EXPORTER_OTLP_TRACES_HEADERS` | `signoz-ingestion-key=`. Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your ingestion key. Note: Local SigNoz setup won't work with Vercel deployments. |
| `NANGO_SERVER_URL` | `https://api.nango.dev` |
| `INKEEP_AGENTS_JWT_SIGNING_SECRET` | Run `openssl rand -hex 32` in your terminal to generate this value. |

### Step 8: Create a Vercel project for Manage UI

Vercel New Project - Manage UI

The Framework Preset should be "Next.js" and the Root Directory should be `apps/manage-ui`. Required environment variables for Manage UI:

```dotenv
ENVIRONMENT=production
PUBLIC_INKEEP_AGENTS_RUN_API_URL=
PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
PUBLIC_INKEEP_AGENTS_MANAGE_API_URL=
INKEEP_AGENTS_MANAGE_API_URL=
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
PUBLIC_SIGNOZ_URL=https://.signoz.cloud
SIGNOZ_API_KEY=
PUBLIC_NANGO_SERVER_URL=https://api.nango.dev
PUBLIC_NANGO_CONNECT_BASE_URL=https://connect.nango.dev
NANGO_SECRET_KEY=
```

| Environment Variable | Value |
| --- | --- |
| `ENVIRONMENT` | `production` |
| `PUBLIC_INKEEP_AGENTS_RUN_API_URL` | Your Vercel deployment URL for Run API |
| `PUBLIC_INKEEP_AGENTS_RUN_API_BYPASS_SECRET` | Your generated Run API bypass secret from Step 7 |
| `PUBLIC_INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API (skip if same as `INKEEP_AGENTS_MANAGE_API_URL`) |
| `INKEEP_AGENTS_MANAGE_API_URL` | Your Vercel deployment URL for Manage API |
| `INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET` | Your generated Manage API bypass secret from Step 6 |
| `PUBLIC_SIGNOZ_URL` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz URL. Note: Local SigNoz setup won't work with Vercel deployments. |
| `SIGNOZ_API_KEY` | Use the instructions from [SigNoz Cloud Setup](/get-started/traces#option-1-signoz-cloud-setup) to configure your SigNoz API key. Note: Local SigNoz setup won't work with Vercel deployments. |
| `NANGO_SECRET_KEY` | Nango secret key from your [Nango Cloud account](/typescript-sdk/credentials/nango). Note: Local Nango setup won't work with Vercel deployments. |
| `PUBLIC_NANGO_SERVER_URL` | `https://api.nango.dev` |
| `PUBLIC_NANGO_CONNECT_BASE_URL` | `https://connect.nango.dev` |

### Step 9: Enable Vercel Authentication

To prevent unauthorized access to the UI, we recommend enabling Vercel authentication for all deployments: **Settings > Deployment Protection > Vercel Authentication > All Deployments**.

### Step 10: Create a Vercel project for your MCP server (optional)

Vercel New Project - MCP Server

The Framework Preset should be "Next.js" and the Root Directory should be `apps/mcp`. For more information on how to add MCP servers to your project, see [Create MCP Servers](/typescript-sdk/cli-reference#inkeep-add).

## Push your Agent

### Step 1: Configure your root .env file

```dotenv
INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET=
INKEEP_AGENTS_RUN_API_BYPASS_SECRET=
```

### Step 2: Create a cloud configuration file

Create a new configuration file named `inkeep-cloud.config.ts` in your project's `src` directory, alongside your existing configuration file.
```typescript
import { defineConfig } from "@inkeep/agents-cli/config";

const config = defineConfig({
  tenantId: "default",
  agentsManageApi: {
    url: "https://",
    apiKey: process.env.INKEEP_AGENTS_MANAGE_API_BYPASS_SECRET,
  },
  agentsRunApi: {
    url: "https://",
    apiKey: process.env.INKEEP_AGENTS_RUN_API_BYPASS_SECRET,
  },
});

export default config;
```

### Step 3: Push your Agent

```bash
cd /src/
inkeep push --config ../inkeep-cloud.config.ts
```

## Pull your Agent

```bash
cd /src
inkeep pull --config inkeep-cloud.config.ts
```

## Function Tools with Vercel Sandbox

When deploying to serverless environments like Vercel, you can configure [function tools](/typescript-sdk/tools/function-tools) to execute in [Vercel Sandbox](https://vercel.com/docs/vercel-sandbox) MicroVMs instead of your Agent's runtime service. This is **required** for serverless platforms since child process spawning is restricted.

### Why Use Vercel Sandbox?

**When to use each provider:**

* **Native** - Use for traditional cloud deployments (VMs, Docker, Kubernetes), self-hosted servers, or local development
* **Vercel Sandbox** - Required for serverless platforms (Vercel, AWS Lambda, etc.), or if you'd like to isolate tool executions

### Setting Up Vercel Sandbox

#### Step 1: Get Vercel Credentials

You'll need three credentials from your Vercel account:

1. **Vercel Token** - Create an access token at [vercel.com/account/tokens](https://vercel.com/account/tokens)
2. **Team ID** - Find it in your team settings at [vercel.com/teams](https://vercel.com/teams)
3. **Project ID** - Find it in your Vercel project settings

#### Step 2: Configure Sandbox in Your Application

Update your Run API to use Vercel Sandbox.
In the `apps/run-api/src` folder, create a `sandbox.ts` file:

```typescript sandbox.ts
const isProduction = process.env.ENVIRONMENT === "production";

export const sandboxConfig = isProduction
  ? {
      provider: "vercel",
      runtime: "node22", // or 'typescript'
      timeout: 60000, // 60 second timeout
      vcpus: 4, // Allocate 4 vCPUs
      teamId: process.env.SANDBOX_VERCEL_TEAM_ID!,
      projectId: process.env.SANDBOX_VERCEL_PROJECT_ID!,
      token: process.env.SANDBOX_VERCEL_TOKEN!,
    }
  : {
      provider: "native",
      runtime: "node22",
      timeout: 30000,
      vcpus: 2,
    };
```

Import it into your `index.ts` file:

```typescript index.ts
import { sandboxConfig } from "./sandbox.ts";

// ...

const app: Hono = createExecutionApp({
  // ...
  sandboxConfig, // NEW
});
```

#### Step 3: Add Environment Variables to Run API

Add these [environment variables in your Vercel project](https://vercel.com/docs/environment-variables/managing-environment-variables#declare-an-environment-variable) to your **Run API** app:

```dotenv
SANDBOX_VERCEL_TOKEN=your_vercel_access_token
SANDBOX_VERCEL_TEAM_ID=team_xxxxxxxxxx
SANDBOX_VERCEL_PROJECT_ID=prj_xxxxxxxxxx
```

### Troubleshooting

**"Failed to refresh OIDC token" error:**
* This occurs when you're not running in a Vercel environment or haven't provided a Vercel access token
* Solution: Use a Vercel access token from [vercel.com/account/tokens](https://vercel.com/account/tokens)

**Function execution timeouts:**

* Increase the `timeout` value in your sandbox configuration
* Consider allocating more `vcpus` for resource-intensive functions
* Check the Vercel Sandbox limits for your plan

**Dependency installation failures:**

* Ensure dependencies are compatible with Node.js 22
* Check that package versions are specified correctly
* Verify network access to the npm registry

**High costs:**

* Reduce the `vcpus` allocation if functions don't need maximum resources
* Optimize function code to execute faster
* Consider caching results when possible
### Best Practices

1. **Use environment variables**: Never hardcode credentials
2. **Start with fewer vCPUs**: Scale up only if needed
3. **Set reasonable timeouts**: Prevent runaway executions
4. **Monitor usage**: Track sandbox execution metrics in the Vercel dashboard
5. **Test thoroughly**: Verify functions work in the sandbox environment before deploying
6. **Choose the right provider**: Use native for VMs/Docker/K8s, and Vercel Sandbox for serverless
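Best practices 1-3 can be combined into a fail-fast startup check. The helper below is a hypothetical sketch (not part of the Inkeep SDK) that assumes the variable names from Step 3; it reports which sandbox credentials are missing before any tool execution is attempted:

```typescript
// Hypothetical startup guard: verify the sandbox credentials from Step 3
// are present before the Run API starts, instead of failing later inside
// a MicroVM. Variable names are the assumed ones from Step 3.
const REQUIRED_SANDBOX_VARS = [
  "SANDBOX_VERCEL_TOKEN",
  "SANDBOX_VERCEL_TEAM_ID",
  "SANDBOX_VERCEL_PROJECT_ID",
] as const;

export function missingSandboxVars(
  env: Record<string, string | undefined>
): string[] {
  // Return the names of any required variables that are unset or empty.
  return REQUIRED_SANDBOX_VARS.filter((name) => !env[name]);
}

// Example with a partial env object (swap in `process.env` at startup):
// only the token is set, so the two ID variables are reported as missing.
console.log(missingSandboxVars({ SANDBOX_VERCEL_TOKEN: "tok_123" }));
```

At startup you would call `missingSandboxVars(process.env)` and refuse to boot when the returned list is non-empty.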

For more information, see the [function tools](/typescript-sdk/tools/function-tools) documentation.

## Deployment Protection

The Inkeep Agent Framework provides health endpoints for Kubernetes-style probes and optional integration with Vercel Checks for automated deployment protection.

### Health Endpoints

Two health endpoints are available for monitoring deployment readiness:

#### `/health` - Liveness Probe

A lightweight endpoint that returns immediately to indicate the service is running.

* **Response**: HTTP 204 (No Content)
* **Use case**: Kubernetes liveness probes, load balancer health checks
* **Latency**: Sub-millisecond (no external calls)

```bash
curl -I https://your-api.vercel.app/health
# HTTP/2 204
```

#### `/ready` - Readiness Probe

A comprehensive endpoint that verifies database connectivity before serving traffic.

* **Response (healthy)**: HTTP 200 with JSON status
* **Response (unhealthy)**: HTTP 503 with RFC 7807 Problem Details
* **Use case**: Kubernetes readiness probes, deployment verification

```bash
# Healthy response
curl https://your-api.vercel.app/ready
{
  "status": "ok",
  "manageDb": true,
  "runDb": true
}

# Unhealthy response (503 Service Unavailable)
{
  "type": "about:blank",
  "title": "Service Unavailable",
  "status": 503,
  "detail": "One or more health checks failed",
  "checks": {
    "manageDb": false,
    "runDb": true
  }
}
```

### Vercel Deployment Checks with GitHub Actions

Enable automated deployment protection using [Vercel Deployment Checks](https://vercel.com/docs/deployment-checks) combined with a GitHub Action. This approach blocks production deployments from being promoted until your health checks pass.

#### How It Works

1. **Deployment Created**: Vercel creates a production deployment but doesn't promote it yet
2. **GitHub Action Runs**: A workflow triggers when the deployment is ready and hits the `/ready` endpoint
3. **Status Reported**: The GitHub Action reports success/failure back to GitHub
4. **Vercel Reads Status**: Vercel reads the GitHub commit status and promotes the deployment only if checks pass

#### Step 1: Add the GitHub Action Workflow

The Inkeep Agent Framework includes a pre-configured workflow at `.github/workflows/deployment-health-check.yml`:

```yaml
name: Deployment Health Check
on:
  deployment_status:
jobs:
  health-check:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Wait for cold start
        run: sleep 10
      - name: Check /health endpoint (liveness)
        run: |
          curl -f -s -o /dev/null -w "%{http_code}" \
            "${{ github.event.deployment_status.target_url }}/health" \
            --retry 3 --retry-delay 5
      - name: Check /ready endpoint (readiness)
        run: |
          response=$(curl -f -s "${{ github.event.deployment_status.target_url }}/ready" \
            --retry 3 --retry-delay 5)
          echo "Response: $response"
          echo "$response" | jq -e '.status == "ok"'
```

#### Step 2: Enable Deployment Checks in Vercel

1. Ensure your project is connected to GitHub using [Vercel for GitHub](https://vercel.com/docs/git/vercel-for-github)
2. Go to **Project Settings > Deployment Checks**
3. Click **Add Checks** and select the `health-check` job from your GitHub Actions
4. Production deployments will now wait for the health check to pass before being promoted

#### Step 3: Test the Integration

1. Push a change to your default branch
2. Vercel creates a production deployment
3. The GitHub Action runs and checks the `/ready` endpoint
4. Once the check passes, Vercel promotes the deployment to your production domains

#### Troubleshooting

**GitHub Action not running:**
* Ensure the workflow file exists at `.github/workflows/deployment-health-check.yml`
* Verify the workflow is enabled in your repository's Actions settings

**Check not appearing in Vercel:**

* Verify your project is connected to GitHub via Vercel for GitHub
* Ensure the check is selected in **Project Settings > Deployment Checks**

**Health check fails but service is healthy:**

* The `/ready` endpoint checks database connectivity
* Ensure database connection strings are correct in the deployment
* Check database accessibility from the Vercel deployment region
* Increase the cold start wait time if needed

**Bypassing checks:**

* Use **Force Promote** from the deployment details page in Vercel
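The `/ready` responses shown earlier follow a simple aggregation pattern: run each health check, report per-check booleans, and return 503 with an RFC 7807 Problem Details body if any check fails. A minimal sketch of that logic (illustrative only, not the framework's actual implementation):

```typescript
// Illustrative sketch of /ready-style aggregation (not the framework's
// actual code): 200 with per-check booleans when every check passes,
// otherwise a 503 RFC 7807 Problem Details body listing the failures.
type Checks = Record<string, boolean>;

function readiness(checks: Checks): { status: number; body: unknown } {
  const healthy = Object.values(checks).every(Boolean);
  if (healthy) {
    return { status: 200, body: { status: "ok", ...checks } };
  }
  return {
    status: 503,
    body: {
      type: "about:blank",
      title: "Service Unavailable",
      status: 503,
      detail: "One or more health checks failed",
      checks,
    },
  };
}

// Healthy: both database checks pass.
console.log(readiness({ manageDb: true, runDb: true }).status); // 200
// Unhealthy: one failing check makes the whole endpoint report 503.
console.log(readiness({ manageDb: false, runDb: true }).status); // 503
```

Returning 503 whenever *any* check fails is what lets Kubernetes probes and the GitHub Action above treat the endpoint as a single pass/fail signal.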
# Add Inkeep Skills and MCP Server to your IDE or coding tools (/get-started/ai-coding-setup-for-ide)

Install Skills and MCP to help AI coding assistants build Inkeep agents.

Set up your IDE so AI coding assistants can help you build Inkeep agents.

## Install Inkeep skills

Install [Inkeep skills](https://github.com/inkeep/skills) so AI assistants know how to use the SDK:

```bash
npx skills add inkeep/skills
```

## Install Inkeep MCP

If you didn't opt in during `npx @inkeep/create-agents`, add the Inkeep MCP server to your IDE:

### Cursor

Click the button below to add the Inkeep MCP server to Cursor.

### VS Code

Click the button below to add the Inkeep MCP server to VS Code.

### Claude Code

To add to **Claude Code**, run this in your terminal:

```bash
claude mcp add --transport http inkeep-agents https://agents.inkeep.com/mcp --scope project
```

### Other MCP clients

Manually add `https://agents.inkeep.com/mcp` as an MCP server to any MCP client.

# Push / Pull (/get-started/push-pull)

Push and pull your agents to and from the Visual Builder.

## Push code to visual

With Inkeep, you can define your agents in code, push them to the Visual Builder, and continue developing with the intuitive drag-and-drop interface. You can switch back to code at any time. Let's walk through the process.