What is LCEL?

LangChain Expression Language

LCEL (LangChain Expression Language) is a declarative language for building and composing chains of components for working with Large Language Models (LLMs). It allows users to easily create complex chains from basic components and supports features like streaming, parallelism, and logging out of the box.


LCEL provides a unified interface for all components, making it easy to combine and chain them together. It also offers composition primitives for building and customizing chains, including support for parallelism, fallbacks, and dynamic configuration.

LangChain Ecosystem

Basic example

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let’s create a chain that takes a topic and generates a joke:

  • Install dependencies:

pip install --upgrade --quiet langchain-core langchain-community langchain-openai

  • Set environment variables:

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream"})

Notice this line of the code, where we piece together these different components into a single chain using LCEL:

chain = prompt | model | output_parser

The | symbol is similar to the Unix pipe operator: it chains the components together, feeding the output of one component as input to the next.
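To make the pipe idea concrete, here is a simplified, hypothetical sketch of how `|` composition can be implemented in Python via the `__or__` operator. This is not LangChain's actual implementation (which lives in langchain_core.runnables); the `Step` class and the stand-in components are illustrative assumptions only.

```python
# Simplified sketch of pipe-style composition (NOT LangChain's real code).
class Step:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `self | other` returns a new step that runs self, then feeds
        # its output into other -- just like a Unix pipe.
        return Step(lambda value: other.invoke(self.invoke(value)))


# Hypothetical stand-ins for prompt, model, and output parser:
prompt = Step(lambda d: f"tell me a short joke about {d['topic']}")
model = Step(lambda text: text.upper())        # pretend this calls an LLM
output_parser = Step(lambda text: text.strip())

chain = prompt | model | output_parser
print(chain.invoke({"topic": "ice cream"}))
# -> TELL ME A SHORT JOKE ABOUT ICE CREAM
```

Because every step exposes the same `invoke` method, any step can be chained with any other, which is the essence of what LCEL's unified interface enables.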

Understanding Chain

Let’s take a look at each component individually to really understand what’s going on.

chain = prompt | model | output_parser


  • prompt is a BasePromptTemplate, which means it takes in a dictionary of template variables and produces a PromptValue
  • A PromptValue is a wrapper around a completed prompt that can be passed to either an LLM (which takes a string as input) or ChatModel (which takes a sequence of messages as input). 
  • It can work with either language model type because it defines logic both for producing BaseMessages and for producing a string.
prompt_value = prompt.invoke({"topic": "ice cream"})
prompt_value.to_string()

'Human: tell me a short joke about ice cream'


  • The PromptValue is then passed to model. In this case our model is a ChatModel, meaning it will output a BaseMessage.
message = model.invoke(prompt_value)

If our model were an LLM, it would output a string:

from langchain_openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct")
llm.invoke(prompt_value)

'\n\nRobot: Why did the ice cream truck break down? Because it had a meltdown!'

Output Parser

  • Lastly, we pass our model output to the output_parser, which is a BaseOutputParser, meaning it takes either a string or a BaseMessage as input. 
  • The specific StrOutputParser simply converts any input into a string.
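Conceptually, the parser's job is just to normalize either input type to a string. The sketch below illustrates that idea with a hypothetical FakeAIMessage class and a parse_to_str function; neither is part of LangChain's API.

```python
# Conceptual sketch of string output parsing (not the real StrOutputParser).
class FakeAIMessage:
    """Hypothetical stand-in for a chat model's message object."""
    def __init__(self, content):
        self.content = content


def parse_to_str(output):
    # If the model returned a message object, extract its text content;
    # a plain string (from an LLM) passes through unchanged.
    if hasattr(output, "content"):
        return output.content
    return output


print(parse_to_str(FakeAIMessage("Why did the ice cream melt?")))
# -> Why did the ice cream melt?
print(parse_to_str("already a string"))
# -> already a string
```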

Advantages of LCEL

LCEL offers several advantages that simplify the process of building applications with Large Language Models (LLMs) and combining related components. These advantages include:

  1. A unified interface: Every LCEL object implements the Runnable interface, which defines a common set of invocation methods (invoke, batch, stream, ainvoke, …). This makes it possible for chains of LCEL objects to also automatically support useful operations like batching and streaming of intermediate steps, since every chain of LCEL objects is itself an LCEL object.
  2. Composition primitives: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.
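To see why a unified interface makes operations like batching and streaming come "for free", here is a minimal, hypothetical sketch of a Runnable-style protocol: once invoke is defined, default batch and stream behaviors follow from it. The MiniRunnable class below is an illustrative assumption, not LangChain's real Runnable (which provides these methods, plus async variants, in langchain_core.runnables).

```python
# Hedged sketch of a Runnable-style interface (not LangChain's actual class).
class MiniRunnable:
    def invoke(self, value):
        raise NotImplementedError

    def batch(self, values):
        # Default batch: invoke each input in turn.
        # (Real LCEL can run these in parallel.)
        return [self.invoke(v) for v in values]

    def stream(self, value):
        # Default stream: yield the full result as a single chunk.
        # (Real LCEL components can yield incremental chunks.)
        yield self.invoke(value)


class Shout(MiniRunnable):
    def invoke(self, value):
        return value.upper()


shout = Shout()
print(shout.invoke("hi"))        # -> HI
print(shout.batch(["a", "b"]))   # -> ['A', 'B']
print(list(shout.stream("hi")))  # -> ['HI']
```

Since a chain of such objects is itself an object implementing the same interface, the whole chain automatically supports invoke, batch, and stream too.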
LCEL Advantages
