# Synaptic

A Rust agent framework with LangChain-compatible architecture and Rust-native async interfaces.
## Features

- **LLM Providers** — OpenAI, Anthropic, Gemini, Ollama, AWS Bedrock, and 10 OpenAI-compatible APIs via `compat::` submodules (Groq, DeepSeek, Mistral, Fireworks, Together, xAI, Perplexity, HuggingFace, Cohere, OpenRouter)
- **LCEL Composition** — `Runnable` trait with pipe operator (`|`), streaming, bind, parallel, branch, assign/pick, fallbacks
- **Graph Orchestration** — LangGraph-style `StateGraph` with conditional edges, persistent checkpointing (Redis + PostgreSQL), human-in-the-loop, streaming
- **ReAct Agent** — `create_react_agent(model, tools)` with automatic tool dispatch
- **Tool System** — `Tool` trait, `ToolRegistry`, `SerialToolExecutor`, `ParallelToolExecutor`, built-in tools (Tavily, DuckDuckGo, Wikipedia, SQL Toolkit)
- **Memory** — Buffer, Window, Summary, SummaryBuffer, TokenBuffer strategies with `RunnableWithMessageHistory`
- **Prompt Templates** — Chat templates, few-shot prompting, placeholder interpolation
- **Output Parsers** — String, JSON, `Structured<T>`, List, Boolean, Enum, XML — all composable as `Runnable`
- **RAG Pipeline** — Document loaders (Text, JSON, CSV, Markdown, Directory, Web, PDF), text splitters, embeddings (OpenAI, Ollama, Cohere, HuggingFace), vector stores (InMemory, Qdrant, pgvector, Pinecone, Chroma, MongoDB, Elasticsearch, Weaviate), 7 retriever types
- **Caching** — In-memory (TTL), semantic (embedding similarity), Redis, SQLite, `CachedChatModel` wrapper
- **Evaluation** — ExactMatch, JsonValidity, Regex, EmbeddingDistance, LLMJudge evaluators
- **Structured Output** — `StructuredOutputChatModel<T>` with JSON schema enforcement
- **Observability** — `TracingCallback` (structured spans), `CompositeCallback`, `StdOutCallback`
- **MCP** — `MultiServerMcpClient` with Stdio/SSE/HTTP transport adapters
- **Macros** — `#[tool]`, `#[chain]`, `#[entrypoint]`, `#[task]`, `#[traceable]` proc-macros
- **Deep Agent** — Filesystem-aware deep research agent harness (`create_deep_agent`)
- **Middleware** — `AgentMiddleware` trait: Retry, PII redaction, Prompt Caching, Summarization
## Installation

```toml
[dependencies]
synaptic = { version = "0.2", features = ["agent"] }
```
## Feature flags

| Feature | Contents |
|---|---|
| `default` | runnables + prompts + parsers + tools + callbacks |
| `openai` | OpenAI ChatModel + Embeddings + 10 OpenAI-compatible providers via `compat::` |
| `anthropic` | Anthropic Claude ChatModel |
| `gemini` | Google Gemini ChatModel |
| `ollama` | Ollama ChatModel + Embeddings |
| `bedrock` | AWS Bedrock ChatModel |
| `models` | All chat model provider crates (openai + anthropic + gemini + ollama + bedrock + cohere) |
| `qdrant` | Qdrant vector store |
| `postgres` | PostgreSQL store, cache, vector store, graph checkpointer |
| `redis` | Redis store + LLM cache + graph checkpointer |
| `weaviate` | Weaviate vector store |
| `pinecone` | Pinecone vector store |
| `chroma` | Chroma vector store |
| `mongodb` | MongoDB Atlas vector search |
| `elasticsearch` | Elasticsearch vector store |
| `sqlite` | SQLite LLM cache |
| `huggingface` | HuggingFace Inference API embeddings |
| `cohere` | Cohere reranker + embeddings |
| `tavily` | Tavily search tool |
| `sqltoolkit` | SQL database toolkit (ListTables, DescribeTable, ExecuteQuery) |
| `pdf` | PDF document loader |
| `graph` | LangGraph-style StateGraph |
| `memory` | Conversation memory strategies |
| `retrieval` | Retriever types (BM25, Ensemble, etc.) |
| `cache` | LLM response caching |
| `eval` | Evaluators |
| `mcp` | MCP server client |
| `macros` | Proc-macros |
| `deep` | Deep Agent harness |
| `agent` | default + graph + memory + middleware + store (provider-agnostic) |
| `agent-openai` | agent + openai |
| `rag` | default + embeddings + retrieval + loaders + splitters + vectorstores (provider-agnostic) |
| `rag-openai` | rag + openai |
| `full` | Everything |
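Bundle flags compose with backend flags. For example, a RAG application using OpenAI models with a Qdrant vector store might combine flags like this (an illustrative `Cargo.toml` sketch; adjust the version to the release you use):

```toml
[dependencies]
synaptic = { version = "0.2", features = ["rag-openai", "qdrant"] }
# The framework runs on tokio.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```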
## Quick Start

```rust
use synaptic::core::{ChatModel, Message, ChatRequest, ToolChoice};
use synaptic::runnables::{Runnable, RunnableLambda};
use synaptic::graph::{create_react_agent, MessageState};

// LCEL pipe composition
let chain = step1.boxed() | step2.boxed() | step3.boxed();
let result = chain.invoke(input, &config).await?;

// ReAct agent
let graph = create_react_agent(model, tools)?;
let state = MessageState { messages: vec![Message::human("Hello")] };
let result = graph.invoke(state).await?;
```
## Workspace Layout

38+ library crates in `crates/`, examples in `examples/`:
### Core

| Crate | Description |
|---|---|
| `synaptic-core` | Shared traits and types: ChatModel, Message, ToolChoice, SynapticError |
| `synaptic-models` | ProviderBackend + HttpBackend + FakeBackend, wrappers (Retry, RateLimit, StructuredOutput) |
| `synaptic-runnables` | LCEL: Runnable, BoxRunnable, pipe, Lambda, Parallel, Branch, Assign, Pick, Fallbacks |
| `synaptic-prompts` | ChatPromptTemplate, FewShotChatMessagePromptTemplate |
| `synaptic-parsers` | Str, JSON, Structured, List, Boolean, Enum, XML output parsers |
| `synaptic-tools` | ToolRegistry, SerialToolExecutor, ParallelToolExecutor, DuckDuckGo, Wikipedia |
| `synaptic-memory` | Buffer, Window, Summary, SummaryBuffer, TokenBuffer, RunnableWithMessageHistory |
| `synaptic-callbacks` | RecordingCallback, TracingCallback, CompositeCallback |
| `synaptic-graph` | StateGraph, CompiledGraph, ToolNode, create_react_agent, MemorySaver |
| `synaptic-retrieval` | BM25, MultiQuery, Ensemble, Compression, SelfQuery, ParentDocument retrievers |
| `synaptic-loaders` | Text, JSON, CSV, Markdown, Directory, Web, PDF document loaders |
| `synaptic-splitters` | Character, Recursive, Markdown, Token, HTML text splitters |
| `synaptic-embeddings` | Embeddings trait, FakeEmbeddings, CacheBackedEmbeddings |
| `synaptic-vectorstores` | VectorStore trait, InMemoryVectorStore, VectorStoreRetriever |
| `synaptic-cache` | InMemory + Semantic LLM caches, CachedChatModel |
| `synaptic-eval` | Evaluator trait, 5 evaluators, Dataset, batch evaluate() |
| `synaptic-store` | InMemoryStore with semantic search |
| `synaptic-middleware` | AgentMiddleware trait, PII, Retry, Prompt Caching, Summarization |
| `synaptic-mcp` | MultiServerMcpClient, Stdio/SSE/HTTP transports |
| `synaptic-macros` | Proc-macros: #[tool], #[chain], #[entrypoint], #[task], #[traceable] |
| `synaptic-deep` | Deep Agent harness with filesystem tools + create_deep_agent() |
| `synaptic` | Unified facade with feature-gated re-exports |
### Chat Model Providers

| Crate | Provider |
|---|---|
| `synaptic-openai` | OpenAI (GPT-4o, o1, o3...) + Azure OpenAI + 10 OpenAI-compatible APIs via `compat::` submodules (Groq, DeepSeek, Mistral, Fireworks, Together, xAI, Perplexity, HuggingFace, Cohere, OpenRouter) |
| `synaptic-anthropic` | Anthropic (Claude 4.6, Claude Haiku...) |
| `synaptic-gemini` | Google Gemini (1.5 Pro, 2.0 Flash...) |
| `synaptic-ollama` | Ollama (local models) |
| `synaptic-bedrock` | AWS Bedrock (Titan, Claude, Llama via Bedrock) |
### Embeddings

| Crate | Provider |
|---|---|
| `synaptic-openai` | OpenAI text-embedding-3-small/large |
| `synaptic-ollama` | Ollama local embedding models |
| `synaptic-cohere` | Cohere embed-english-v3.0, embed-multilingual-v3.0 |
| `synaptic-huggingface` | HuggingFace Inference API (BAAI/bge, sentence-transformers…) |
### Vector Stores

| Crate | Backend |
|---|---|
| `synaptic-vectorstores` | In-memory (cosine similarity) |
| `synaptic-qdrant` | Qdrant |
| `synaptic-postgres` | PostgreSQL |
| `synaptic-pinecone` | Pinecone |
| `synaptic-chroma` | Chroma |
| `synaptic-mongodb` | MongoDB Atlas Vector Search |
| `synaptic-elasticsearch` | Elasticsearch |
| `synaptic-weaviate` | Weaviate |
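The in-memory store ranks documents by cosine similarity between embedding vectors. As an illustration of that ranking (a std-only sketch of the general technique, not the crate's actual API):

```rust
/// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Indices of the `k` stored vectors most similar to `query`, best first.
fn top_k(store: &[Vec<f32>], query: &[f32], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = store
        .iter()
        .enumerate()
        .map(|(i, v)| (i, cosine(v, query)))
        .collect();
    // Sort by descending similarity.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    let store = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![0.7, 0.7]];
    let hits = top_k(&store, &[1.0, 0.1], 2);
    println!("{:?}", hits); // [0, 2]
}
```

The external backends (Qdrant, pgvector, etc.) move this scoring server-side over approximate-nearest-neighbor indexes rather than the linear scan shown here.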
### Store, Cache & Graph Persistence

| Crate | Backend |
|---|---|
| `synaptic-redis` | Redis Store + LLM Cache + Graph Checkpointer |
| `synaptic-postgres` | PostgreSQL (Store + Cache + VectorStore + Graph Checkpointer) |
| `synaptic-sqlite` | SQLite LLM Cache |
### Tools

| Crate | Tools |
|---|---|
| `synaptic-tavily` | Tavily AI search (API key required) |
| `synaptic-tools` | DuckDuckGo search, Wikipedia (no API key required) |
| `synaptic-sqltoolkit` | ListTables, DescribeTable, ExecuteQuery (read-only SQL) |
## Examples

```sh
cargo run -p tool_calling_basic    # Tool registry and execution
cargo run -p memory_chat           # Session-based conversation memory
cargo run -p react_basic           # ReAct agent with tool calling
cargo run -p graph_visualization   # Graph state machine visualization
cargo run -p lcel_chain            # LCEL pipe composition and parallel
cargo run -p prompt_parser_chain   # Prompt template -> model -> parser
cargo run -p streaming             # Streaming chat and runnables
cargo run -p rag_pipeline          # RAG: load -> split -> embed -> retrieve
cargo run -p memory_strategy      # Memory strategies comparison
cargo run -p structured_output     # Structured output with JSON schema
cargo run -p callbacks_tracing     # Callbacks and tracing
cargo run -p evaluation            # Evaluator pipeline
cargo run -p caching               # LLM response caching
cargo run -p macros_showcase       # Proc-macro usage
```

All examples use `ScriptedChatModel` and `FakeEmbeddings` — no API keys required.
## Documentation
- Book: dnw3.github.io/synaptic — tutorials, how-to guides, concepts, integration reference
- API Reference: docs.rs/synaptic — full Rustdoc API documentation
## Design Principles

- Core abstractions first; feature crates expanded incrementally
- LangChain concept compatibility with Rust-idiomatic APIs
- All traits are async via `#[async_trait]`; the runtime is tokio
- Type-erased composition via `BoxRunnable` with the `|` pipe operator
- `Arc<RwLock<_>>` for shared registries; session-keyed memory isolation
- MSRV: 1.88
## Contributing
See CONTRIBUTING.md for guidelines, or the full guide.
## License
MIT — see LICENSE for details.