🧠 The Neural Backbone for Autonomous AI Agents
Bridge the gap between your codebase and your AI editor. CCM transforms static source code into a dynamic, queryable Knowledge Graph, enabling AI agents to navigate, understand, and reason about your project with surgical precision.
Modern AI coding assistants (Claude, Cursor, Windsurf) are powerful but structurally blind:
| Problem | Impact |
|---|---|
| Context Limits | Can't "see" your entire 100,000-line project |
| Hallucination | Guesses dependencies without structure |
| Lost Context | Vector search finds similar words, not connected logic |
CCM turns your AI from a text predictor into a Senior Architect.
Unlike tools that dump raw code, CCM injects AI-Optimized Context:
- Logical Reasoning - Explains why code was retrieved
- Relational Edges - Maps how files talk to each other
- Confidence Scores - Shows certainty in results
- Two-Pass Indexing - Links function definitions to call sites
- Deep Traversal - Ask "Who calls this?" and get accurate answers
- Rust-Powered - Blazing fast indexing and queries
- Batch Embedding - Thousands of lines in seconds
- LanceDB - Millisecond-latency vector storage
- Tree-sitter - Robust AST parsing for Rust, Python, TypeScript, and JavaScript
- Binary Checksums - Release artifacts include `checksums.txt` for integrity
- MCP Allowlist - Restrict project access with `CCM_ALLOWED_ROOTS`
- Safe Defaults - Configurable timeouts and file-size limits
- Plug & Play - Works with Claude Desktop, Antigravity, Zed, Cursor
- Lazy Indexing - Auto-indexes on first query
- Zero-Config - Auto-detects project root
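The two-pass indexing idea from the feature list above, linking function definitions to call sites, can be sketched roughly as follows. This is a simplified illustration, not CCM's actual code: the real engine works on Tree-sitter ASTs in Rust, and the data shapes here are hypothetical.

```python
# Sketch of two-pass indexing: pass 1 collects definitions,
# pass 2 resolves call sites against them.

def two_pass_index(files):
    # files: {path: [("def", name) or ("call", name), ...]}
    definitions = {}   # name -> path where it is defined
    call_edges = []    # (caller_file, callee_name, definition_file)

    # Pass 1: record every function definition across all files.
    for path, items in files.items():
        for kind, name in items:
            if kind == "def":
                definitions[name] = path

    # Pass 2: link each call site to the definition found in pass 1.
    for path, items in files.items():
        for kind, name in items:
            if kind == "call" and name in definitions:
                call_edges.append((path, name, definitions[name]))

    return definitions, call_edges

files = {
    "auth.py": [("def", "login"), ("call", "hash_pw")],
    "crypto.py": [("def", "hash_pw")],
}
defs, edges = two_pass_index(files)
print(edges)  # [('auth.py', 'hash_pw', 'crypto.py')]
```

Two passes are needed because a call site may reference a function defined in a file that has not been parsed yet; collecting all definitions first makes resolution order-independent.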
```bash
# 1. Configure MCP for your AI editor
npx @senoldogann/context-manager install

# 2. Index your project
npx @senoldogann/context-manager index --path .
```

To build from source:

```bash
git clone https://github.com/senoldogann/LLM-Context-Manager.git
cd LLM-Context-Manager
cargo build --release
# Binary location: target/release/ccm-cli
```

Create `~/.ccm/.env`:
```env
# Option A: Local (Recommended - Privacy)
EMBEDDING_PROVIDER=ollama
EMBEDDING_HOST=http://127.0.0.1:11434
EMBEDDING_MODEL=mxbai-embed-large

# Option B: Cloud (OpenAI)
EMBEDDING_PROVIDER=openai
EMBEDDING_API_KEY=sk-your-key
EMBEDDING_MODEL=text-embedding-3-small

# Networking & Limits
EMBEDDING_TIMEOUT_SECS=30
CCM_MAX_FILE_BYTES=2097152

# MCP Security
CCM_ALLOWED_ROOTS=/Users/you/projects:/Users/you/sandbox
CCM_REQUIRE_ALLOWED_ROOTS=0

# MCP Runtime
CCM_MCP_ENGINE_CACHE_SIZE=8
CCM_MCP_DEBUG=0

# Optional: disable embeddings entirely (semantic search disabled)
CCM_DISABLE_EMBEDDER=0

# Optional: embed data files (md/json/yaml) into vector search
CCM_EMBED_DATA_FILES=0

# npm wrapper security (0 = enforce checksum, 1 = bypass)
CCM_ALLOW_UNVERIFIED_BINARIES=0
```

Note: Requires Ollama running (`ollama serve`) with the model pulled (`ollama pull mxbai-embed-large`).
Production Tip: Set `CCM_ALLOWED_ROOTS` and enable `CCM_REQUIRE_ALLOWED_ROOTS=1` to prevent unintended project access.
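The allowlist check behind `CCM_ALLOWED_ROOTS` can be approximated as follows. This is a sketch, not CCM's implementation: only the colon-separated root format comes from the example above, and the function name is made up.

```python
import os

def is_allowed(path, allowed_roots):
    """Return True if `path` lies inside one of the colon-separated roots.

    Sketch of an allowlist check in the spirit of CCM_ALLOWED_ROOTS.
    """
    path = os.path.realpath(path)
    for root in allowed_roots.split(":"):
        root = os.path.realpath(root)
        # A path is inside `root` when their common prefix is `root` itself.
        if os.path.commonpath([path, root]) == root:
            return True
    return False

roots = "/Users/you/projects:/Users/you/sandbox"
print(is_allowed("/Users/you/projects/app", roots))  # True
print(is_allowed("/etc/passwd", roots))              # False
```

Resolving symlinks with `realpath` before the prefix comparison matters: a naive string prefix check would accept `/Users/you/projects-evil` or a symlink escaping the sandbox.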
```bash
# Index a project
ccm-cli index --path .

# Search semantically
ccm-cli query --text "authentication logic"

# Cursor prediction (file:line format)
ccm-cli query --text "src/main.rs:50"

# Watch mode - auto-reindex
ccm-cli index --path . --watch

# Evaluate retrieval quality
ccm-cli eval --tasks eval/golden_tasks.json
```

| Tool | Purpose | Example |
|---|---|---|
| `search_code` | Semantic search | "Find auth handling" |
| `read_graph` | Structural navigation | "Who calls this function?" |
| `get_context` | Cursor-based retrieval | Context at file:line |
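A "Who calls this function?" query like the `read_graph` example is, at its core, a traversal over reversed call-graph edges. A minimal sketch (the graph contents and function names are invented; CCM's real graph lives in petgraph, in Rust):

```python
from collections import deque

# calls[caller] = set of callees; a tiny stand-in for the code graph.
calls = {
    "main": {"run_server"},
    "run_server": {"handle_request"},
    "handle_request": {"authenticate"},
    "login_cli": {"authenticate"},
}

def callers_of(target, calls):
    """BFS over reversed edges: everything that transitively calls target."""
    reverse = {}
    for caller, callees in calls.items():
        for callee in callees:
            reverse.setdefault(callee, set()).add(caller)

    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for caller in reverse.get(node, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(sorted(callers_of("authenticate", calls)))
# ['handle_request', 'login_cli', 'main', 'run_server']
```

This is the kind of question vector search alone answers poorly: "main" shares no words with "authenticate", yet it is a transitive caller.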
```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  AI Agent   │─────▶│ MCP Server  │─────▶│ Core Engine │
│  (Claude)   │◀─────│  (ccm-mcp)  │◀─────│   (Rust)    │
└─────────────┘      └─────────────┘      └─────────────┘
                            │
       ┌────────────────────┼────────────────────┐
       ▼                    ▼                    ▼
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│ Code Graph  │      │  Vector DB  │      │   Parser    │
│ (Petgraph)  │      │  (LanceDB)  │      │(Tree-sitter)│
└─────────────┘      └─────────────┘      └─────────────┘
```
| Language | Extensions | Analysis |
|---|---|---|
| Rust | `.rs` | Full AST |
| Python | `.py` | Full AST |
| TypeScript | `.ts`, `.tsx` | Full AST |
| JavaScript | `.js`, `.jsx` | Full AST |
| Config/Data | `.md`, `.json`, `.yaml` | Full File |
CCM includes a golden-task evaluation framework:

```bash
# Run evaluation
ccm-cli eval --tasks eval/golden_tasks.v3.ccm.json

# Compare structural vs hybrid scoring
ccm-cli eval --tasks eval/golden_tasks.json --compare
```

Latest Results: 100% pass rate on golden tasks.
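A golden-task evaluation of this kind reduces to: for each task, run the query and check whether the expected file appears in the results. A toy sketch (the task format and stand-in retriever are invented for illustration, not CCM's actual schema):

```python
def evaluate(tasks, retrieve):
    """Pass rate: fraction of tasks whose expected file is retrieved."""
    passed = sum(
        1 for task in tasks
        if task["expected_file"] in retrieve(task["query"])
    )
    return passed / len(tasks)

# Stand-in retriever; CCM would answer from its index instead.
def retrieve(query):
    return {"auth": ["src/auth.rs"], "db": ["src/db.rs"]}.get(query, [])

tasks = [
    {"query": "auth", "expected_file": "src/auth.rs"},
    {"query": "db", "expected_file": "src/db.rs"},
]
print(evaluate(tasks, retrieve))  # 1.0
```

Pinning retrieval quality to a fixed task set like this makes regressions visible: any refactor that drops the pass rate below 1.0 fails the eval.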
- Run `ccm-cli index --path .` first
- Check that `CCM_PROJECT_ROOT` matches the indexed directory
- Ensure Ollama is running
- First run downloads embedding model (~1.5GB)
- Subsequent runs are fast (incremental)
- Ensure the GitHub Release includes `checksums.txt`
- Re-run the install once
- As a last resort, set `CCM_ALLOW_UNVERIFIED_BINARIES=1` to bypass verification
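The verification step that `CCM_ALLOW_UNVERIFIED_BINARIES` bypasses is standard SHA-256 matching against `checksums.txt`. A sketch of the idea (file names here are made up for the demo):

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in chunks to bound memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(binary_path, expected_hex):
    # With CCM_ALLOW_UNVERIFIED_BINARIES=0, a mismatch aborts the install.
    return sha256_of(binary_path) == expected_hex

# Demo: write a file and verify it against a known digest.
with open("demo.bin", "wb") as f:
    f.write(b"hello")
known = hashlib.sha256(b"hello").hexdigest()
print(verify("demo.bin", known))  # True
```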
- Set `CCM_ALLOWED_ROOTS` to include the project root
- Or disable strict mode with `CCM_REQUIRE_ALLOWED_ROOTS=0`
- Increase `CCM_MAX_FILE_BYTES` if you need larger text files indexed
- By default, data files (`.md`, `.json`, `.yaml`) are indexed but not embedded
- Enable `CCM_EMBED_DATA_FILES=1` to include them in semantic search
- NPM Package: @senoldogann/context-manager
- Getting Started: GETTING_STARTED.md
- Contributing: CONTRIBUTING.md
MIT License - Open source and free to use.
SENOL DOGAN ❤️