LangChain Cheat Sheet
LangChain Summary
LangChain = Framework for LLM apps.
Adds tools, memory, data & reasoning.
Supports OpenAI, Anthropic, etc.
Core Modules
1. Model I/O: interface with LLMs.
2. Retrieval: search docs/knowledge.
3. Chains: link prompt → model → output.
4. Agents: the LLM decides actions/tools.
5. Memory: stores chat context.
6. Callbacks: for logging/debugging.
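The modules above fit together as a pipeline. A minimal sketch, using plain functions as stand-ins for LangChain components (the names here are illustrative, not the real API):

```python
# Toy stand-ins for LangChain's core modules; a "chain" pipes a
# prompt template into a model, then into an output parser.

def prompt(question: str) -> str:
    # Model I/O: format the input for the LLM
    return f"Answer briefly: {question}"

def fake_model(formatted: str) -> str:
    # Stand-in for a real LLM call (e.g., OpenAI or Anthropic)
    return f"ECHO[{formatted}]"

def parser(raw: str) -> str:
    # Output parsing: strip model-specific wrapping
    return raw.removeprefix("ECHO[").removesuffix("]")

def chain(question: str) -> str:
    # Chains: compose prompt -> model -> parser
    return parser(fake_model(prompt(question)))

result = chain("What is LangChain?")
```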
Chat Message Types
SystemMessage: rules/instructions.
HumanMessage: user input.
AIMessage: the LLM's reply.
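A chat history is just an ordered list of these three message types. A sketch with illustrative dataclasses standing in for LangChain's message classes:

```python
from dataclasses import dataclass

# Illustrative stand-ins for LangChain's chat message classes,
# not the real implementations.

@dataclass
class SystemMessage:
    content: str  # rules/instructions for the model

@dataclass
class HumanMessage:
    content: str  # user input

@dataclass
class AIMessage:
    content: str  # the LLM's reply

history = [
    SystemMessage("You are a terse assistant."),
    HumanMessage("What is LangChain?"),
    AIMessage("A framework for LLM apps."),
]
```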
Prompt Engineering
Use PromptTemplate / ChatPromptTemplate.
Compose components with the `|` pipe (LCEL).
Few-shot: include worked examples in the prompt.
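The template-plus-pipe idea can be sketched in a few lines. This is a toy in the spirit of LCEL composition; the `Runnable` class here is illustrative, not LangChain's:

```python
# Toy sketch of template formatting and `|` composition in the style
# of LangChain's LCEL; these classes are illustrative, not the real API.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `a | b` runs a, then feeds its output into b
        return Runnable(lambda x: other(self(x)))

# A template with named slots, like a prompt template
template = Runnable(lambda vars: "Translate to {lang}: {text}".format(**vars))

# A stand-in model that just upper-cases its prompt
fake_model = Runnable(str.upper)

chain = template | fake_model
out = chain({"lang": "French", "text": "hello"})
```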
Output Parsing
Use Output Parsers (not regex).
PydanticOutputParser: structured data.
Use .get_format_instructions() + .parse().
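The parser pattern is: inject format instructions into the prompt, then parse the model's reply into a typed object. A stdlib-only sketch (real code would use PydanticOutputParser; the `Person`/`PersonParser` names here are made up):

```python
import json
from dataclasses import dataclass

# Illustrative sketch of the output-parser pattern: format
# instructions go into the prompt, structured objects come out.

@dataclass
class Person:
    name: str
    age: int

class PersonParser:
    def get_format_instructions(self) -> str:
        # Text appended to the prompt so the LLM emits parseable JSON
        return 'Reply as JSON: {"name": <string>, "age": <int>}'

    def parse(self, llm_output: str) -> Person:
        data = json.loads(llm_output)
        return Person(name=data["name"], age=int(data["age"]))

parser = PersonParser()
person = parser.parse('{"name": "Ada", "age": 36}')
```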
Evaluation
Use LangChain Evals.
e.g., labeled_pairwise_string for comparing answers.
Check accuracy, similarity, rankings.
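The shape of a pairwise evaluator can be sketched without an LLM judge. LangChain's `labeled_pairwise_string` asks an LLM to pick the better answer given a reference; this toy version scores candidates by string similarity instead, just to show the inputs and output:

```python
from difflib import SequenceMatcher

# Illustrative stand-in for pairwise evaluation: score two candidate
# answers against a labeled reference and pick a winner. The real
# evaluator uses an LLM judge, not string similarity.

def pairwise_compare(reference: str, answer_a: str, answer_b: str) -> str:
    score_a = SequenceMatcher(None, reference, answer_a).ratio()
    score_b = SequenceMatcher(None, reference, answer_b).ratio()
    return "A" if score_a >= score_b else "B"

winner = pairwise_compare(
    reference="Paris is the capital of France.",
    answer_a="Paris is the capital of France.",
    answer_b="France's capital city is Paris.",
)
```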
OpenAI Function Calling
LLMs return JSON to trigger external tools.
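The application-side half of that loop is a dispatch table: parse the JSON the model returned, look up the named tool, and call it with the given arguments. A sketch with a hypothetical `get_weather` tool and a hand-written response payload:

```python
import json

# Sketch of the function-calling loop: the model returns JSON naming
# a tool and its arguments; app code dispatches. Tool name and payload
# below are hypothetical.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# What an LLM function-call response might look like
llm_response = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

call = json.loads(llm_response)
result = TOOLS[call["name"]](**call["arguments"])
```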
Connecting to Data (RAG Pipelines)
Load → Chunk → Embed → Store → Retrieve.
Use unstructured (PDF, CSV) or structured (SQL) data.
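The five RAG stages can be sketched end-to-end in miniature. Here a bag-of-words `Counter` stands in for a real embedding model, and a list stands in for a vector store; everything else is illustrative:

```python
from collections import Counter

# Toy RAG pipeline: load -> chunk -> embed -> store -> retrieve.
# Bag-of-words counts stand in for real embeddings.

def chunk(text: str, size: int = 5):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> int:
    return sum((a & b).values())  # count of shared words

# "Load" a document, then chunk + embed + store it
doc = "LangChain connects LLMs to data. Retrieval finds relevant chunks for a query."
store = [(c, embed(c)) for c in chunk(doc)]

def retrieve(query: str) -> str:
    q = embed(query)
    return max(store, key=lambda item: similarity(q, item[1]))[0]

best = retrieve("which chunks are relevant")
```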
Text Splitting
CharacterTextSplitter: by characters.
TokenTextSplitter: by tokens (e.g., tiktoken).
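Character splitting with overlap can be sketched in one function (a toy in the spirit of CharacterTextSplitter, not the real class; token splitting would count tokens via a tokenizer such as tiktoken instead of characters):

```python
# Minimal character splitter sketch: fixed-size windows with overlap
# so context isn't lost at chunk boundaries.

def split_text(text: str, chunk_size: int, overlap: int = 0):
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghij", chunk_size=4, overlap=2)
```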
Saving Prompts
Save as .json or .yaml with .save().
Reuse with load_prompt().
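The save/load round trip is just serializing the template and its input variables. A stdlib-only sketch mirroring what `.save()` / `load_prompt()` do (the file layout here is illustrative, not LangChain's exact schema):

```python
import json
import os
import tempfile

# Sketch of persisting a prompt template to JSON and loading it back,
# mirroring the .save() / load_prompt() round trip with stdlib calls.

prompt = {"template": "Summarize: {text}", "input_variables": ["text"]}

path = os.path.join(tempfile.mkdtemp(), "prompt.json")
with open(path, "w") as f:
    json.dump(prompt, f)      # rough equivalent of prompt.save(path)

with open(path) as f:
    loaded = json.load(f)     # rough equivalent of load_prompt(path)

rendered = loaded["template"].format(text="LangChain basics")
```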
Query Planning
Use a QueryPlan schema to split complex queries into ordered sub-steps with dependencies.
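The idea can be sketched as a list of sub-query steps with dependencies, executed in dependency order. The `Step` dataclass and `run_order` helper below are hypothetical illustrations, not LangChain's QueryPlan API:

```python
from dataclasses import dataclass, field

# Hypothetical query-plan sketch: decompose a complex question into
# sub-queries with dependencies, then order them so each step runs
# only after the steps it depends on.

@dataclass
class Step:
    id: int
    question: str
    depends_on: list = field(default_factory=list)

plan = [
    Step(1, "Who founded the company?"),
    Step(2, "When was it founded?"),
    Step(3, "Combine the two answers.", depends_on=[1, 2]),
]

def run_order(steps):
    done, order = set(), []
    while len(order) < len(steps):
        for s in steps:
            if s.id not in done and all(d in done for d in s.depends_on):
                order.append(s.id)
                done.add(s.id)
    return order

order = run_order(plan)
```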