| Copyright | (c) 2025 Tushar Adhatrao |
|---|---|
| License | MIT |
| Maintainer | Tushar Adhatrao <[email protected]> |
| Safe Haskell | None |
| Language | Haskell2010 |
Langchain.Runnable.ConversationChain
Description
This module provides the ConversationChain implementation, which manages stateful conversations with language models. It combines:
- A memory component for storing conversation history
- An LLM for generating responses
- A prompt template for formatting the conversation
ConversationChain handles the full conversation lifecycle, including:
- Adding user messages to memory
- Retrieving conversation history
- Formatting the conversation context for the LLM
- Getting responses from the LLM
- Storing AI responses back to memory
This creates a complete conversation loop that maintains context across multiple turns.
Synopsis
- data ConversationChain m l = ConversationChain {
    - memory :: m
    - llm :: l
    - prompt :: PromptTemplate
  }
Types
data ConversationChain m l
Manages a stateful conversation between a user and a language model.
The ConversationChain combines three key components:

- memory: Stores and retrieves conversation history
- llm: The language model that generates responses
- prompt: Template for formatting the conversation for the LLM
When invoked with a user message, the ConversationChain:
- Adds the user message to memory
- Retrieves the updated conversation history
- Formats the conversation for the LLM using the prompt template
- Gets a response from the LLM
- Stores the AI response in memory
- Returns the AI response
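As a rough illustration, these six steps might compose like the sketch below. The Memory and LLM classes here, along with the helper names addUserMessage, addAiMessage, renderHistory, and generate, are hypothetical stand-ins invented for this sketch, not the library's actual API:

```haskell
import Data.Text (Text)

-- Illustrative interfaces standing in for the library's memory and LLM types.
class Memory m where
  addUserMessage :: m -> Text -> IO (Either String m)
  addAiMessage   :: m -> Text -> IO (Either String m)
  renderHistory  :: m -> Text

class LLM l where
  generate :: l -> Text -> IO (Either String Text)

-- One conversation turn: store the user message, build the prompt from
-- history, call the model, store the reply, and return it together with
-- the updated memory.
conversationTurn
  :: (Memory m, LLM l)
  => m
  -> l
  -> (Text -> Text -> Text)  -- prompt formatter: history -> input -> prompt
  -> Text                    -- user input
  -> IO (Either String (Text, m))
conversationTurn mem model format input = do
  withUser <- addUserMessage mem input
  case withUser of
    Left err -> pure (Left err)
    Right mem' -> do
      reply <- generate model (format (renderHistory mem') input)
      case reply of
        Left err -> pure (Left err)
        Right answer -> do
          withAi <- addAiMessage mem' answer
          pure (fmap (\finalMem -> (answer, finalMem)) withAi)
```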
Example:
```haskell
{-# LANGUAGE OverloadedStrings #-} -- needed for the Text literals below

import Data.Text (Text)
import qualified Data.Text as T
import Langchain.LLM.OpenAI (OpenAI (..))
import Langchain.Memory.ConversationBufferMemory (ConversationBufferMemory (..))
import Langchain.PromptTemplate (PromptTemplate (..), createPromptTemplate)
import Langchain.Runnable.ConversationChain (ConversationChain (..))
-- invoke is the Runnable class method; the module path below is assumed
-- and may differ across library versions.
import Langchain.Runnable.Core (Runnable (..))

main :: IO ()
main = do
  -- Create memory component
  let memory = ConversationBufferMemory
        { messages = []
        , returnMessages = True
        }

  -- Create LLM
  let llm = OpenAI
        { model = "gpt-4"
        , temperature = 0.7
        }

  -- Create prompt template
  promptTemplate <-
    createPromptTemplate
      "You are a helpful assistant. {history}\nHuman: {input}\nAI:"
      ["history", "input"]

  -- Create conversation chain
  let conversation = ConversationChain
        { memory = memory
        , llm = llm
        , prompt = promptTemplate
        }

  -- Start conversation
  response1 <- invoke conversation "Hello, who are you?"
  case response1 of
    Left err -> putStrLn $ "Error: " ++ T.unpack err
    Right answer -> do
      putStrLn $ "AI: " ++ T.unpack answer

      -- Continue conversation with context
      response2 <- invoke conversation "What can you help me with?"
      case response2 of
        Left err -> putStrLn $ "Error: " ++ T.unpack err
        Right answer2 -> putStrLn $ "AI: " ++ T.unpack answer2
```
You can customize the behavior by using different memory implementations:
- ConversationBufferMemory: Stores the full conversation history
- ConversationBufferWindowMemory: Keeps only the most recent N exchanges
- ConversationSummaryMemory: Summarizes older conversations to save tokens
- ConversationEntityMemory: Tracks entities mentioned in the conversation
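To make the windowed idea concrete, here is a toy model of a windowed buffer. WindowMemory and its helpers are names invented for this sketch (and it trims by message count rather than by user/AI exchange pairs), not the library's actual ConversationBufferWindowMemory API:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- Toy windowed buffer: only the most recent `window` messages are retained.
data WindowMemory = WindowMemory
  { window   :: Int     -- maximum number of messages to keep
  , buffered :: [Text]  -- newest message first
  }

-- Prepend the new message, then trim to the window size.
addMessage :: Text -> WindowMemory -> WindowMemory
addMessage msg m = m { buffered = take (window m) (msg : buffered m) }

-- History in chronological order, ready for prompt formatting.
history :: WindowMemory -> [Text]
history = reverse . buffered

-- With a window of 4, the oldest of these 5 messages is dropped.
main :: IO ()
main = mapM_ print (history demo)
  where
    demo = foldl (flip addMessage) (WindowMemory 4 [])
             ["hi", "hello!", "what's up?", "not much", "cool"]
```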
The prompt template can be customized to give the LLM specific instructions, persona characteristics, or to format the conversation history in different ways.
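For instance, a persona-flavored template can be constructed exactly like the default one in the example above, assuming createPromptTemplate behaves as shown there:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Langchain.PromptTemplate (createPromptTemplate)

main :: IO ()
main = do
  -- Same {history}/{input} variables as the default template; only the
  -- persona and instructions change.
  personaTemplate <-
    createPromptTemplate
      "You are a pirate captain. Stay in character and keep answers short.\n{history}\nHuman: {input}\nAI:"
      ["history", "input"]
  -- Use personaTemplate as the `prompt` field of a ConversationChain.
  personaTemplate `seq` pure ()
```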
Constructors

ConversationChain

Fields

- memory :: m
- llm :: l
- prompt :: PromptTemplate