
langchain

LangChain is a framework that facilitates the integration of large language models (LLMs) into applications, allowing for complex workflows and monitoring through tools like LangSmith. It supports various LLMs, prompt engineering, and modular chains for multi-step tasks, while also incorporating agents for decision-making and memory for context retention. Key features include temperature and top_p settings for model behavior, as well as connections to external data sources for enhanced functionality.

Uploaded by

Narmadha


langchain

LangChain is a framework that facilitates the integration of LLMs into applications.


If the application grows complex, we can inspect the chain using LangSmith, which
provides tools to monitor, build, and evaluate components. It helps trace and
evaluate the model and supports moving it from the development to the production
phase.
OpenAI and Anthropic have their own integration packages, so they are installed
separately, e.g. pip install langchain-openai.
Integrations that have not been split into their own packages still come under
pip install langchain-community.

temperature and top_p control the behaviour of the language model within the
LangChain framework.
temp: randomness. top_p: nucleus sampling, which limits how many candidate
words are considered.
temp: 0 -> very predictable (always the most likely next word), 1 -> more random.
temp low, top_p high --> predictable choices drawn from a wide vocabulary (since top_p is high).
temp high, top_p low --> creative output that still makes sense, since only the most likely words are kept.
For legal documents, a low temperature is usually preferred.
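The interplay of the two knobs can be illustrated with a toy next-token filter. This is a sketch of the sampling math only, not LangChain code; the token probabilities are made up for the example.

```python
import math

def filter_next_token(probs, temperature=1.0, top_p=1.0):
    """Toy sketch: apply temperature, then top_p, to a next-token distribution."""
    if temperature <= 0:
        # temperature 0 -> always pick the single most likely token
        best = max(probs, key=probs.get)
        return {best: 1.0}
    # Temperature rescales log-probabilities: <1 sharpens, >1 flattens.
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    scaled = {t: v / total for t, v in scaled.items()}
    # top_p (nucleus sampling): keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalize.
    kept, cumulative = {}, 0.0
    for token, p in sorted(scaled.items(), key=lambda kv: -kv[1]):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "dog": 0.15, "zebra": 0.05}
print(filter_next_token(probs, temperature=0))  # {'the': 1.0}
print(filter_next_token(probs, top_p=0.7))      # only 'the' and 'a' survive
```

With temperature 0 the distribution collapses onto the single most likely word; with a small top_p the long tail ("dog", "zebra") is cut off before sampling.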

agents:
______
Agents in LangChain are used as a reasoning engine to determine which actions
to take and in which order.
Agents combine:
language model: decision making and reasoning
tools: external functions or APIs the agent can call to perform specific tasks
decision-making loop: choose which tool to use, call the tool, and process
the result, repeating until the task is complete
Agent types are compared in the documentation by: Intended Model Type, Supports
Chat History, Supports Multi-Input Tools, Supports Parallel Function Calling, and
Required Model Params.


Core Components of LangChain


Models:
Supports various LLMs (e.g., OpenAI, Anthropic, Hugging Face) and chat models.
Allows switching between models with minimal code changes.
Prompts:
Tools for prompt engineering, including templating (dynamic prompts with variables)
and optimization for better model output. Example:

from langchain.prompts import PromptTemplate

template = "Translate this to French: {text}"
prompt = PromptTemplate(input_variables=["text"], template=template)
print(prompt.format(text="Good morning"))  # Translate this to French: Good morning

Chains:
Sequences of modular components (e.g., models, prompts) to handle multi-step tasks.
Chains can be simple (e.g., prompt → model → output) or complex (e.g., retrieval →
summarization).

from langchain.chains import LLMChain

# `model` is any LLM or chat model instance (e.g. ChatOpenAI) and `prompt`
# is the PromptTemplate defined above
chain = LLMChain(llm=model, prompt=prompt)
result = chain.run("Hello world!")

Agents:
Autonomous components that use LLMs to decide actions (e.g., calling APIs,
searching the web). Agents leverage tools (e.g., calculators, search engines)
dynamically.
Example: An agent might use a search tool to answer real-time questions.
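The decide → act → observe loop behind an agent can be sketched in plain Python. This is a toy illustration, not the LangChain agent API: the `decide` function is a hard-coded stand-in for the LLM's reasoning step, and both tools are hypothetical.

```python
# Toy agent loop: a stand-in "reasoner" picks a tool, calls it, and feeds
# the observation back until it decides to finish. Real LangChain agents
# use an LLM for the decision step; the loop shape is the same.
def calculator(expression):
    return str(eval(expression))  # toy only; never eval untrusted input

def search(query):
    return "Paris"  # canned result standing in for a real search tool

TOOLS = {"calculator": calculator, "search": search}

def decide(task, observation):
    # Stand-in for the LLM: return an (action, argument) pair.
    if observation is not None:
        return ("finish", observation)  # we already have a result -> answer
    if any(ch.isdigit() for ch in task):
        return ("calculator", task)     # numeric task -> calculator tool
    return ("search", task)             # otherwise -> search tool

def run_agent(task):
    observation = None
    while True:
        action, argument = decide(task, observation)
        if action == "finish":
            return argument
        observation = TOOLS[action](argument)  # call the chosen tool

print(run_agent("2 + 3"))              # 5
print(run_agent("capital of France"))  # Paris
```

Swapping the rule-based `decide` for an LLM call is essentially what LangChain's agents do: the model emits the next action and input, and the executor loops until a final answer is produced.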

Memory:
Retains context across interactions (e.g., conversation history). Supports short-
term (e.g., chat history) and long-term memory (e.g., summarization of past
interactions).
Example:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi"}, {"output": "Hello!"})
print(memory.load_memory_variables({}))  # {'history': 'Human: Hi\nAI: Hello!'}

Indexes & Retrieval:
Connects LLMs to external data (e.g., databases, documents) using Retrieval-Augmented
Generation (RAG). Tools include text splitters, vector databases, and retrievers.
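The retrieve-then-generate flow can be sketched with a toy keyword retriever. This is not LangChain's API: real pipelines use embedding models and a vector store for similarity search, but the split → retrieve → build-prompt shape is the same.

```python
# Toy RAG sketch: split a document into chunks, retrieve the chunk that
# best overlaps the question, and build a grounded prompt from it.
def split_text(text, chunk_size=8):
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(chunks, question, k=1):
    # Score chunks by word overlap with the question; real RAG uses
    # embedding similarity instead of keyword overlap.
    q = {w.strip(".,?").lower() for w in question.split()}
    def score(chunk):
        return len(q & {w.strip(".,?").lower() for w in chunk.split()})
    return sorted(chunks, key=score, reverse=True)[:k]

document = ("LangChain connects language models to external data. "
            "Vector databases store embeddings for fast similarity search.")
chunks = split_text(document)
best = retrieve(chunks, "What do vector databases store?")[0]
prompt = f"Answer using only this context.\nContext: {best}\nQuestion: ..."
print(best)  # the chunk about vector databases and embeddings
```

The assembled prompt is then sent to the LLM, so the answer is grounded in the retrieved chunk rather than the model's parameters alone.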
