
LangChain Expression Language (LCEL)

Last Updated : 28 Aug, 2025

LCEL or LangChain Expression Language is used to connect AI building blocks like prompts, models, data retrievers and parsers by using a “pipe” symbol (|) so that information flows smoothly from one part to another. Instead of writing complicated code, we just stack these blocks in the order we need and LCEL makes sure each step passes its output to the next. It’s designed to help developers build AI apps quickly, keep their code clean and modular and take advantage of features like parallel processing and easy debugging.

  • Runnable Interface: At the heart of LCEL are Runnables, modular components that encapsulate functions or operations and can be chained. Any two Runnables can be combined using the pipe | operator thanks to the overloaded __or__ method, forming data flows where the output of one component becomes the input of the next (a minimal sketch is shown below).
  • Declarative Chains: By using pipe operators, LCEL constructs chains differently from traditional LangChain objects, enhancing readability and developer experience.
Difference between With and Without LCEL
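
To make the pipe composition concrete, here is a minimal sketch; the two RunnableLambda helpers and the sample string are illustrative assumptions, not taken from the article's later walkthrough:

Python
from langchain_core.runnables import RunnableLambda

# Two simple Runnables wrapping plain Python functions (illustrative helpers)
to_upper = RunnableLambda(lambda s: s.upper())
add_exclaim = RunnableLambda(lambda s: s + "!")

# The pipe operator calls the overloaded __or__ method and returns a sequence:
# the output of to_upper becomes the input of add_exclaim
chain = to_upper | add_exclaim

print(chain.invoke("hello lcel"))  # HELLO LCEL!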

Key Features of LCEL

Let's see the key features of LCEL,

  • Declarative Syntax Using Pipe Operators: Chains are constructed by connecting Runnables with the pipe | operator, creating a clear left-to-right data flow.
  • Parallel Execution: Supports execution of independent tasks concurrently using RunnableParallel, reducing end-to-end latency.
  • Guaranteed Asynchronous Support: Any chain built with LCEL automatically gets async counterparts of its methods (ainvoke, abatch, astream), supporting high-throughput use cases like web servers (see the sketch after this list).
  • Streaming Output: Supports incremental streaming to allow faster time-to-first-token from language models, enhancing responsiveness.
  • Seamless Deployment with LangServe: Chains can be directly deployed in production environments with support for retries, fallbacks and scaling.
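
As a brief illustration of the async support, here is a minimal sketch that chains two RunnableLambda steps (the lambdas are illustrative assumptions) and calls the async methods:

Python
import asyncio

from langchain_core.runnables import RunnableLambda

# A tiny chain of two illustrative steps
chain = RunnableLambda(lambda s: s.upper()) | RunnableLambda(lambda s: s + "!")

async def main():
    # Every LCEL chain exposes async counterparts of the sync methods
    single = await chain.ainvoke("hello")
    several = await chain.abatch(["async", "support"])
    print(single, several)

asyncio.run(main())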

LCEL Syntax

Using LCEL, we create our chain with pipe operators (|) rather than Chain objects. A basic LLM chain consists of the following three components, tied together by a chain; there can be many variations on this, which we will learn later.

  • LLM: LangChain's abstraction over the language model that generates completions, such as Claude, OpenAI GPT-3.5 and so on.
  • Prompt: The input given to the LLM to pose questions and specify its goal. It is basically a string template we define with placeholders for our variables.
  • Output Parser: Defines how to extract the output from the model's response and present it as the final answer.
  • Chain: Ties all the above components together. It is a series of calls to an LLM or to any other stage in the data-processing pipeline.

For Example:

LCEL Chain Example

Simple LLM Chain Using LCEL

LangChain Expression Language (LCEL) makes it easy to build chains in a clear and readable way. In this example, we’ll use Cohere’s LLM to create a simple chain that solves a word problem step by step.

Step 1: Import Libraries

We import all the necessary libraries:

  • PromptTemplate / ChatPromptTemplate: Used to define the structure of prompts sent to the model.
  • BaseModel, Field: Utilities for structured inputs (not directly used here but useful for schema validation).
  • ChatCohere: Wrapper to interact with Cohere’s LLM.
  • StrOutputParser: Ensures the LLM output is parsed into plain text.
Python
from langchain_core.prompts import PromptTemplate
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_cohere import ChatCohere
from langchain_core.output_parsers import StrOutputParser

Step 2: Set Up the API Key and Initialize the LLM

We need the Cohere API key, which can be obtained by following the steps below:

  • Go to the official website of Cohere.
  • Login/Signup using Google account or GitHub Account.
  • After a successful login, we are redirected to the dashboard page; locate the API Keys tab there.
  • In the API Keys menu, select New Trial Key and copy the generated key.

We attach the Cohere API key in our code and initialize the LLM:

model="command-r": Cohere’s reasoning-focused model.

  • temperature=0: Makes responses deterministic (less randomness).
  • cohere_api_key: Authenticates with Cohere’s API.
Python
import os

# Store the key in an environment variable (replace with your actual key)
os.environ["COHERE_API_KEY"] = "your_key_here"

llm = ChatCohere(model="command-r", temperature=0,
                 cohere_api_key=os.environ["COHERE_API_KEY"])

Step 3: Define the Prompt Template and Set Up the Output Parser

We will define the prompt template,

  • Creates a structured template where {question} will be replaced with user input.
  • The phrase “Let’s think step by step” nudges the LLM to reason logically before answering.

We also set up the output parser, which converts the raw response from the LLM into a simple string (removing metadata).

Python
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
output_parser = StrOutputParser()
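
As a quick, optional sanity check (not part of the original walkthrough), we can inspect what the template produces before it reaches the LLM; the sample question here is an illustrative assumption:

Python
# Format the template with a sample question and inspect the resulting prompt text
print(prompt.invoke({"question": "What is 2 + 2?"}).to_string())
# Question: What is 2 + 2?
#
# Answer: Let's think step by step.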

Step 4: Build the LCEL Chain

We build the LCEL chain using the pipe operator (|) to connect components:

  • Prompt: Formats the user’s question.
  • LLM: Processes the formatted input and generates an answer.
  • Output Parser: Extracts plain text output.
Python
chain = prompt | llm | output_parser

Step 5: Run the Test

We will run a query to test and fetch results,

  • Defines a simple word problem.
  • Passes it to the chain using .invoke().
  • The LLM applies the step-by-step reasoning prompt and returns the answer.
  • The final response is printed to the console.
Python
question = """
I have five apples. I throw two away. I eat one. How many apples do I have left?
"""
response = chain.invoke({"question": question})
print(response)

Output:

You started with five apples, removed two by throwing them away and then consumed one more, which leaves you with two apples.

So, the final answer is you have **two apples** left.

Runnables Interface in LangChain

When working with the LangChain Expression Language (LCEL), we often need to modify how values flow between components or even transform the values themselves. For this purpose, LangChain provides the Runnables interface.

How Runnables Work

1. Any two Runnables can be chained together into a sequence.

2. The output of one Runnable's .invoke() call automatically becomes the input to the next Runnable.

3. Chaining can be done using:

  • The pipe operator | (short form).
  • The .pipe() method (explicit form).

This makes pipelines modular, flexible and easy to read. Both chaining forms are sketched below.
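
Here is a minimal sketch of both forms; the two lambda steps are illustrative assumptions, not from the article:

Python
from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2)
increment = RunnableLambda(lambda x: x + 1)

# Short form: the pipe operator
chain_pipe = double | increment

# Explicit form: the .pipe() method
chain_method = double.pipe(increment)

print(chain_pipe.invoke(3))    # 7
print(chain_method.invoke(3))  # 7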

Types of Runnables in LangChain

1. RunnablePassthrough

  • Simply passes the input unchanged to the next component in the chain.
  • Useful when we want to preserve original data while performing transformations elsewhere.

Example:

Python
from langchain_core.runnables import RunnablePassthrough

passthrough = RunnablePassthrough()

result = passthrough.invoke("Hello, Geek!")
print(result)

Output:

Hello, Geek!
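
To illustrate the "preserve the original data" use mentioned above, here is a small sketch that keeps the raw input alongside a transformed copy; the uppercase step is an illustrative assumption, and RunnableParallel is covered in the next section:

Python
from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

# Keep the original input while computing a transformed version in parallel
keep_and_transform = RunnableParallel({
    "original": RunnablePassthrough(),
    "upper": RunnableLambda(lambda s: s.upper()),
})

print(keep_and_transform.invoke("Hello, Geek!"))
# {'original': 'Hello, Geek!', 'upper': 'HELLO, GEEK!'}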

2. RunnableParallel

  • Sends the input to multiple branches in parallel.
  • Enables simultaneous processing, e.g., sending the same query to two different LLMs or retrievers at once.

Example:

Python
from langchain_core.runnables import RunnableParallel, RunnableLambda

def to_uppercase(x): return x.upper()
def word_count(x): return len(x.split())

uppercase = RunnableLambda(to_uppercase)
count_words = RunnableLambda(word_count)

parallel = RunnableParallel({
    "upper": uppercase,
    "count": count_words
})

result = parallel.invoke("LangChain makes AI development easier")
print(result)

Output:

{'upper': 'LANGCHAIN MAKES AI DEVELOPMENT EASIER', 'count': 5}

3. RunnableLambda

  • Wraps a Python function and converts it into a runnable.
  • This allows custom logic (e.g., text cleaning, preprocessing, formatting) to be injected into the chain as a runnable component.

Example:

Python
from langchain_core.runnables import RunnableLambda

def add_five(x):
    return x + 5

def multiply_by_two(x):
    return x * 2

add_five = RunnableLambda(add_five)
multiply_by_two = RunnableLambda(multiply_by_two)

chain = add_five | multiply_by_two
print(chain.invoke(3))

Output:

16
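
A related convenience worth noting: when a plain Python function is composed with an existing Runnable using |, it is coerced into a RunnableLambda automatically. The helpers in this sketch are illustrative:

Python
from langchain_core.runnables import RunnableLambda

add_five = RunnableLambda(lambda x: x + 5)

# The plain lambda on the right is coerced into a RunnableLambda by the pipe
chain = add_five | (lambda x: x * 2)
print(chain.invoke(3))  # 16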

Other Features of LCEL

LCEL has a number of other features as well, such as async execution, streaming and batch processing.

  • .invoke(): Passes in a single input and receives a single output, nothing more and nothing less.
  • .batch(): Supplies several inputs to get multiple outputs; this is faster than calling invoke repeatedly because it handles the parallelization for us.
  • .stream(): Lets us start printing the response before the entire response is complete.

Let's see their implementation and use:

Python
prompt_str = "You know 1 short line about {topic}?"
prompt = ChatPromptTemplate.from_template(prompt_str)

chain = prompt | llm | output_parser

# .invoke(): a single input, a single output
result_with_invoke = chain.invoke({"topic": "AI"})
print(result_with_invoke)

# .batch(): several inputs processed in parallel
result_with_batch = chain.batch(
    [{"topic": "AI"}, {"topic": "LLM"}, {"topic": "Vector Database"}])
print(result_with_batch)

# .stream(): print chunks as they arrive instead of waiting for the full answer
for chunk in chain.stream({"topic": "Artificial Intelligence write 5 lines"}):
    print(chunk, flush=True, end="")

Output:

Output of Features

Advantages of Using LCEL

  • Simplicity & Developer Productivity: Dramatically reduces boilerplate code. Developers describe what the chain does rather than how it works, enabling faster iteration.
  • Optimized Performance: Runtime optimizations like parallel execution and streaming improve latency and make workflows efficient for real-time applications.
  • Improved Debugging and Monitoring: Integration with LangSmith automatically tracks all intermediate steps and data flows, making troubleshooting painless.
  • Flexibility: Suitable for a wide range of applications including retrieval-augmented generation, conversational AI, business automation and more.

Limitations

  • Linear Structure: LCEL chains generally run one step after another, making it hard to build workflows with dynamic branching or complex decision-making.
  • Complex State Management: Managing conversation or workflow state across multiple turns is tricky and requires manual handling, increasing code complexity.
  • Tool Integration Challenges: Using and coordinating multiple external tools within LCEL chains is not intuitive, especially when tool usage needs to change dynamically.
  • Debugging & Scalability Issues: Debugging long or nested LCEL chains can be difficult and its newer design means stability and performance may vary in complex production use cases.
