Flattered With Flutter

What are LangChain agents in LLM?

Agents revolutionize language processing by leveraging a language model as a decision-making engine, dynamically selecting a sequence of actions to perform.

LangChain agents are a technology that enables Large Language Models (LLMs) to perform complex tasks by interacting with external tools and resources.

LangChain Agents in LLM

While a chain follows a fixed order of operations, an agent uses a language model to decide which steps to take and in what order.

Agents put a language model in a loop of:

  • Making decisions
  • Taking actions
  • Observing outcomes
  • Adapting until they achieve their desired goals
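This decide, act, observe, adapt loop can be sketched in plain Python. Everything here is a toy stand-in: `decide` plays the role of the language model and `calculator` is a hypothetical tool; a real agent would let the LLM choose both the action and its input.

```python
# Toy decide-act-observe loop; `decide` stands in for the LLM.
def calculator(expression):
    """A minimal 'tool': evaluate an arithmetic expression (trusted input only)."""
    return str(eval(expression))

def decide(question, observations):
    """Stand-in for the LLM: pick the next action, or finish if we can."""
    if not observations:
        return ("calculator", "2 + 2")          # act: call a tool
    return ("final_answer", observations[-1])   # adapt: we have what we need

def run_agent(question, max_iterations=3):
    observations = []
    for _ in range(max_iterations):
        action, action_input = decide(question, observations)
        if action == "final_answer":
            return action_input
        observations.append(calculator(action_input))  # observe the outcome
    return "Agent stopped after reaching max_iterations."

print(run_agent("What is 2 + 2?"))
```

The `max_iterations` guard mirrors the same parameter in the real LangChain examples below: it stops an agent that never converges on a final answer.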

LangChain agents are versatile language-processing powerhouses. From answering complex questions and generating creative text to translating languages and summarizing lengthy documents, they can tackle a wide range of tasks with ease. With their advanced language understanding and generation capabilities, the possibilities are endless!

Types of LangChain Agents in LLM

LangChain agents leverage the capabilities of Large Language Models (LLMs) to make intelligent decisions about which actions to take and in what order.

  1. LLM Agents: These agents use a Large Language Model (LLM) to generate text, answer questions, or perform other language-related tasks.
  2. Tool Agents: These agents interact with external tools and utilities, such as databases, APIs, or file systems, to perform tasks like data retrieval or file manipulation.
  3. Environment Agents: These agents interact with external environments, such as web browsers or operating systems, to perform tasks like web scraping or system automation.
  4. Conversation Agents: These agents engage in natural language conversations with users, using LLMs to generate responses and respond to user input.
  5. Task Agents: These agents perform specific tasks, such as text classification, sentiment analysis, or entity extraction, using LLMs and other tools.
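Whatever the agent type, the common building block is a tool: a named function plus a description the LLM reads when deciding which tool to call. Conceptually, LangChain's `Tool` boils down to something like the following simplified stand-in (not the real class):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """Simplified stand-in for langchain.agents.Tool: a name, a callable,
    and a description the LLM reads when choosing which tool to use."""
    name: str
    func: Callable
    description: str

    def run(self, tool_input):
        return self.func(tool_input)

tools = [
    Tool(
        name="Calculator",
        func=lambda q: str(eval(q)),  # demo only; never eval untrusted input
        description="Useful for when you need to answer questions about math.",
    ),
]
print(tools[0].run("3 * 7"))
```

The description string matters as much as the function: it is the only thing the LLM sees when deciding whether this tool fits the current step.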

Zero-shot ReAct

Zero-shot ReAct is an agent in LangChain that uses a Large Language Model (LLM) to decide which action to take next, with no task-specific examples in the prompt. Here are some key points about Zero-shot ReAct:

  • Zero-shot learning: this technique enables models to recognize objects, texts, or tasks they have never seen during training; it is particularly valuable when collecting or labeling a comprehensive dataset for every category of interest is impractical.
  • Agent: an “enabling tool” for LLMs, allowing them to use external tools and resources to perform tasks.
  • ReAct: the agent alternates between reasoning (Thought) and tool use (Action), observing each result before deciding its next step.

Advantages: Zero-shot ReAct enables LLMs to perform tasks they would not be able to do on their own, and it can be used in a variety of applications, such as question answering, calculations, data lookups, and more.

from langchain.agents import initialize_agent

# `tools` (here, a Calculator and a Stock DB tool) and `llm` are assumed
# to be defined earlier.
zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
)
print(zero_shot_agent.agent.llm_chain.prompt.template)

Answer the following questions as best you can. You have access to the 
following tools:

Calculator: Useful for when you need to answer questions about math.
Stock DB: Useful for when you need to answer questions about stocks and their prices.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Calculator, Stock DB]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}

  • We first tell the LLM the tools it can use (Calculator and Stock DB).
  • Following this, an example format is defined; this follows the flow of Question (from the user), Thought, Action, Action Input, Observation, and repeats until reaching the Final Answer.
  • The final line is “Thought:{agent_scratchpad}”.

The agent_scratchpad is where we add every thought or action the agent has already performed.
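Concretely, the scratchpad is just the transcript of earlier steps rendered back into the prompt. A simplified sketch of that formatting (the step values are illustrative):

```python
def render_scratchpad(steps):
    """Render (thought, action, action_input, observation) tuples into the
    Thought/Action/Action Input/Observation transcript fed back to the LLM."""
    lines = []
    for thought, action, action_input, observation in steps:
        lines.append(f"Thought: {thought}")
        lines.append(f"Action: {action}")
        lines.append(f"Action Input: {action_input}")
        lines.append(f"Observation: {observation}")
    return "\n".join(lines)

steps = [("I need the price", "Stock DB", "ABC on 2023-01-01", "200.0")]
print(render_scratchpad(steps))
```

Each new iteration appends one more block, so the LLM always sees everything it has already tried.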

Conversational ReAct

Conversational ReAct is a type of agent in LangChain that combines the capabilities of Zero-shot ReAct with conversational memory. It’s designed to facilitate more advanced and context-aware conversations, enabling chatbots and conversational AI systems to remember previous interactions and adapt to user preferences.

Conversational ReAct agents possess the following key features:

  • Conversational memory: They can store and retrieve conversation history, allowing them to maintain context and recall previous interactions.
  • Zero-shot capabilities: They can perform tasks and reason like Zero-shot ReAct agents, using Large Language Models (LLMs) and external tools to generate responses.
  • Contextual understanding: They can leverage conversation history to better comprehend user requests and provide more personalized and relevant responses.

By integrating conversational memory with zero-shot capabilities, Conversational ReAct agents can engage in more natural and human-like conversations, making them ideal for applications like customer support, language translation, and virtual assistants.
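The idea behind conversational memory is straightforward: every user and AI turn is appended to a buffer that gets replayed into the next prompt. A toy sketch of what a buffer memory does (this is an illustration, not LangChain's actual ConversationBufferMemory API):

```python
class ToyBufferMemory:
    """Sketch of conversational memory: store turns, replay them as history."""
    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        """Record one exchange between the human and the AI."""
        self.turns.append(("Human", user_input))
        self.turns.append(("AI", ai_output))

    def load_history(self):
        """Render the stored turns as text to prepend to the next prompt."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = ToyBufferMemory()
memory.save_context("Stock price for ABC on January 1st?", "It was 200.0.")
print(memory.load_history())
```

Because the history rides along in the prompt, the agent can resolve references like “and on January 2nd?” without the user restating the ticker.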

# `memory` is a conversation memory object, for example
# ConversationBufferMemory(memory_key="chat_history") from langchain.memory.
conversational_agent = initialize_agent(
    agent='conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    memory=memory,
)

result = conversational_agent(
    "Please provide me the stock prices for ABC on January the 1st"
)

> Entering new AgentExecutor chain...

Thought: Do I need to use a tool? Yes
Action: Stock DB
Action Input: ABC on January the 1st

> Entering new SQLDatabaseChain chain...
ABC on January the 1st 
SQLQuery: SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-01'
SQLResult: [(200.0,)]
Answer: The price of ABC on January the 1st was 200.0.
> Finished chain.

Observation:  The price of ABC on January the 1st was 200.0.
Thought: Do I need to use a tool? No
AI: Is there anything else I can help you with?

> Finished chain.

  • In the Action Input, we can see the agent passing “ABC on January the 1st” to the Stock DB tool, combining the ticker and the date from our question. Because the agent has conversational memory, a follow-up such as “What about XYZ?” would reuse January 1st from this earlier interaction.

ReAct Docstore

ReAct Docstore is an agent type in LangChain that pairs ReAct-style reasoning with a document store. Rather than arbitrary tools, the agent is given exactly two operations over the docstore, which it uses to find and read documents while answering a question.

ReAct Docstore provides the following functionalities:

  • Search: retrieve a relevant document from the docstore by name or query.
  • Lookup: find a specific term or passage within the most recently retrieved document.
  • Grounded answers: by reasoning over retrieved passages, the agent can support its answers with stored documents instead of relying only on what the LLM memorized during training.

LangChain docstores allow us to store and retrieve information using traditional retrieval methods. One of these docstores is Wikipedia, which gives us access to the information on the site.

from langchain import Wikipedia
from langchain.agents import Tool
from langchain.agents.react.base import DocstoreExplorer

# Wraps Wikipedia with the two operations the agent needs: search and lookup.
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(
        name="Search",
        func=docstore.search,
        description='search wikipedia'
    ),
    Tool(
        name="Lookup",
        func=docstore.lookup,
        description='lookup a term in wikipedia'
    )
]

docstore_agent = initialize_agent(
    tools, 
    llm, 
    agent="react-docstore", 
    verbose=True,
    max_iterations=3
)
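To make the two tools concrete, here is a toy in-memory docstore (the documents and behaviour are illustrative): search selects a document, and lookup finds a term inside the most recently selected one, mirroring what DocstoreExplorer does against Wikipedia.

```python
class ToyDocstore:
    """Toy docstore: search picks a document, lookup scans within it."""
    def __init__(self, documents):
        self.documents = documents  # title -> text
        self.current = None         # document selected by the last search

    def search(self, title):
        """Retrieve a document by title, remembering it for later lookups."""
        self.current = self.documents.get(title, "")
        return self.current or "No document found."

    def lookup(self, term):
        """Find the first sentence of the current document containing `term`."""
        if self.current is None:
            return "Search for a document first."
        hits = [s for s in self.current.split(". ") if term.lower() in s.lower()]
        return hits[0] if hits else f"'{term}' not found."

docs = ToyDocstore({
    "Python": "Python is a programming language. It was created by Guido van Rossum.",
})
print(docs.search("Python"))
print(docs.lookup("Guido"))
```

The statefulness is the key point: lookup only makes sense relative to whichever document the last search retrieved, which is why the agent needs both tools together.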

Self-Ask With Search

The self-ask-with-search agent is a powerful tool in LangChain that combines the capabilities of Large Language Models (LLMs) with search engines to retrieve accurate and relevant information. 

The agent will perform searches and ask follow-up questions as often as required to get a final answer.

Here’s how the self-ask-with-search agent works:

  • Initial Question: The user asks a question or provides a prompt.
  • Search: The agent performs a search using a connected search engine (e.g., Google, Bing, or a custom knowledge base).
  • LLM Processing: The search results are passed through an LLM, which processes and analyzes the information.
  • Follow-up Questions: If the initial search results are unclear or incomplete, the agent automatically generates follow-up questions to refine the search query.
  • Repeat Search and Processing: The agent performs additional searches and LLM processing until it obtains a satisfactory answer or reaches a predetermined limit.

  • Final Answer: The agent returns the final answer or result to the user.
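That flow can be sketched without any external services. In the toy version below, `fake_search` is a canned stand-in for a real search engine and `decompose` stands in for the LLM generating follow-up questions:

```python
# Toy self-ask loop: decompose the question, search each follow-up,
# then take the last intermediate answer as the final one.
def fake_search(query):
    """Canned stand-in for a real search engine."""
    knowledge = {
        "Who founded SpaceX?": "Elon Musk",
        "Where was Elon Musk born?": "Pretoria, South Africa",
    }
    return knowledge.get(query, "unknown")

def decompose(question):
    """Stand-in for the LLM: break a question into follow-up questions."""
    return ["Who founded SpaceX?", "Where was Elon Musk born?"]

def self_ask(question):
    # Each intermediate answer can feed the next follow-up question.
    intermediate = [fake_search(q) for q in decompose(question)]
    return intermediate[-1]  # the last follow-up resolves the original question

print(self_ask("Where was the founder of SpaceX born?"))
```

In the real agent, the LLM writes each follow-up only after seeing the previous intermediate answer, which is what lets it chain facts it could not retrieve in a single search.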