LangChain’s generative UI tooling helps developers deliver personalized experiences. Its personalization features and intuitive interface set a high standard for building engaging applications tailored to individual preferences.

Generative UI

LangChain acts as a catalyst for developers, enabling them to fully utilize the capabilities of large language models (LLMs) such as Google’s Gemini AI and OpenAI’s models.

1. Personalization features

  • Customizing applications to meet individual needs boosts user engagement.
  • Offering customizable options ensures a unique experience for each user.

2. User-friendly interface

  • Intuitive design elements enable effortless navigation within applications.
  • Streamlined interfaces focus on user experience, enhancing accessibility and ease of use.

This integration provides easy access to advanced language processing features, opening up new possibilities for intelligent, interactive applications that understand, respond to, and anticipate user needs. With LangChain, developers can streamline the creation of generative UI interfaces, transforming the development of state-of-the-art Natural Language Processing (NLP) applications.


The example demonstrates a tool-calling agent that outputs an interactive UI element by streaming intermediate outputs of tool calls to the client.

We introduce two utilities designed to simplify the integration of the AI SDK with React elements inside runnables and tool calls: `createRunnableUI` and `streamRunnableUI`.

  • `streamRunnableUI`: executes the provided Runnable using the `streamEvents` method and sends each stream event to the client via the React Server Components stream.
  • `createRunnableUI`: wraps the `createStreamableUI` function from the AI SDK so it integrates properly with the Runnable event stream.
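To make the mechanism concrete before the full example, here is a minimal, dependency-free sketch of the forwarding loop that `streamRunnableUI` performs conceptually: consume a `streamEvents`-style async iterable and push each event into a streamable UI handle. `StreamEvent`, `StreamableUI`, and `createMockStreamableUI` are hypothetical stand-ins for the AI SDK and LangChain types, used here only to illustrate the pattern.

```typescript
// Hypothetical stand-in for a LangChain streamEvents payload.
type StreamEvent = { event: string; data: unknown };

// Hypothetical stand-in for the handle returned by the AI SDK's createStreamableUI.
interface StreamableUI {
  update(value: unknown): void;
  done(): void;
}

// A mock handle that records updates, so the pattern is observable
// without a real React Server Components stream.
function createMockStreamableUI(): StreamableUI & { values: unknown[] } {
  const values: unknown[] = [];
  return {
    values,
    update(value) {
      values.push(value);
    },
    done() {},
  };
}

// Forward each event from a streamEvents-style async iterable to the UI handle,
// then close the stream once the runnable finishes.
async function forwardEvents(
  events: AsyncIterable<StreamEvent>,
  ui: StreamableUI
): Promise<void> {
  for await (const ev of events) {
    ui.update(ev.data); // each intermediate output becomes a UI update
  }
  ui.done();
}
```

The real utilities do more (serializing React nodes across the RSC wire, multiplexing multiple tool streams), but the core loop is this event-to-UI forwarding.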
"use server";

const tool = new DynamicStructuredTool({
  // ...
  func: async (input, config) => {
    // create a new streamable UI and wire it up to the streamEvents
    const stream = createRunnableUI(config);

    const result = await images(input);

    // update the UI element with the rendered results
          .map((image) => image.thumbnail)
          .slice(0, input.limit)}

    return `[Returned ${result.images_results.length} images]`;

// add LLM, prompt, etc...

const tools = [tool];

export const agentExecutor = new AgentExecutor({
  agent: createToolCallingAgent({ llm, tools, prompt }),
async function agent(inputs: { input: string }) {
  "use server";
  return streamRunnableUI(agentExecutor, inputs);

export const EndpointsContext = exposeEndpoints({ agent });

To include all client components in the bundle, we must wrap all Server Actions using the `exposeEndpoints` method. 

The client can then access these endpoints through the Context API, as demonstrated in the `useActions` hook.
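The essence of that contract can be sketched without React: a typed record of server actions travels through a single context object, so the client keeps full type information when calling them. `exposeEndpointsSketch` and `useActionsSketch` below are hypothetical simplifications, not the real helpers; the real `exposeEndpoints` additionally returns a React context provider that pulls the actions' client components into the bundle.

```typescript
// A record of server actions: name -> async function.
type ServerActions = Record<string, (...args: any[]) => Promise<unknown>>;

// Simplified model of the context object exposeEndpoints produces.
interface Endpoints<T extends ServerActions> {
  actions: T;
}

// Hypothetical simplification: in the real helper this returns a React
// context provider; here we only model the typed pass-through.
function exposeEndpointsSketch<T extends ServerActions>(actions: T): Endpoints<T> {
  return { actions };
}

// What a useActions-style accessor boils down to in this simplified model:
// read the typed action record back out of the context object.
function useActionsSketch<T extends ServerActions>(ctx: Endpoints<T>): T {
  return ctx.actions;
}
```

A client component would then call `actions.agent({ input: "cats" })` and receive the streamed UI node, as the page example shows.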

"use client";
import type { EndpointsContext } from "./agent";

export default function Page() {
  const actions = useActions<typeof EndpointsContext>();
  const [node, setNode] = useState();

  return (

        onClick={async () => {
          setNode(await actions.agent({ input: "cats" }));
        Get images of cats

What’s Next?

  • Check out the video
