
Tutorial - 2023-07-13

Building a Conversational Agent with Memory Using Motorhead

by Pablo Rios

Agents carrying chat memories.

Building a Conversational Agent with Memory: Exploring Langchain, Metal, and Motorhead

In the previous tutorial, we explored the concept of agents in Langchain and how they can enhance our chat applications by performing specialized tasks using available data indexed in Metal. However, our agent was limited to a stateless behavior, unable to retain information from previous interactions. In this tutorial, we will take the next step and introduce a memory component to our chat application.

This will allow the agent to engage in real conversations with our data, enabling it to recall past interactions and make informed decisions when responding to user queries.

To achieve this, we will use another tool called Motorhead, which is part of the Metal SDK and is also integrated with Langchain. One of the advantages of using Motorhead is its server-based functionality, ensuring that chat history is stored and persisted across various clients and sessions.

Exciting, isn't it? Let's dive into the implementation steps and witness the transformation of our chatbot into a conversational agent with memory capabilities.

Step 1: Install the required packages

To begin, let's install the prerequisite libraries that we will be using in this tutorial.

pip install -qU \
    openai==0.27.7 \
    metal_sdk==1.0.9 \
    langchain==0.0.229

Step 2: Upload the custom PDF to Metal

For this tutorial, we will continue using the same PDF document as an example to maintain the same Index we have already created. Here's a quick reminder of how to upload the PDF to your Metal dashboard:

Files Page

During the upload process, Metal streamlines the entire procedure by parsing the document, breaking it into chunks, and extracting its meaning into vector embeddings.
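To build an intuition for the "breaking it into chunks" step, here is a minimal sketch of an overlapping-window chunker. This is purely illustrative, not Metal's actual implementation; the overlap keeps sentences that straddle a boundary retrievable from at least one chunk:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (toy example)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

report = "In 2020, between 702 and 828 million people faced hunger. " * 20
chunks = chunk_text(report)
```

Each chunk would then be embedded individually, so semantic search can surface the most relevant passage rather than the whole document.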

Step 3: Set up the Metal API Client

Let's open a notebook and set up the Metal API client. Remember, you can find the Index ID in the 'Settings' tab of your Index.

from metal_sdk.metal import Metal
API_KEY = "<your_api_key>"
CLIENT_ID = "<your_client_id>"
INDEX_ID = "<your_index_id>"
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)

Step 4: Create the Retrieval Question Answering Chain

Now, we will create the Retrieval QA chain, which will enable our agent to retrieve answers from the Metal vector store.

from langchain.retrievers import MetalRetriever
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
# set up metal as the retriever
retriever = MetalRetriever(client=metal, params={"limit": 2})
# chat completion llm
llm = ChatOpenAI(
    model_name='gpt-3.5-turbo',
    temperature=0.0,
)
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
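The chain_type="stuff" setting means every retrieved chunk is "stuffed" into a single prompt alongside the question. A simplified sketch of the idea (not Langchain's internal code):

```python
def stuff_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Build one prompt by concatenating all retrieved chunks above
    the question -- the essence of the "stuff" chain type."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Use the following context to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt_text = stuff_prompt(
    "How many people faced undernourishment in 2020?",
    ["Chunk A about hunger statistics.", "Chunk B about food insecurity."],
)
```

With `params={"limit": 2}` above, at most two chunks are retrieved per query, which keeps the stuffed prompt comfortably within the model's context window.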

Step 5: Empower our Agent with Tools

Now, we can convert this retrieval chain into a tool, along with other special components like the math and search functionalities.

from langchain.agents import load_tools, Tool
tools_chain = load_tools(["llm-math", "serpapi"], llm=llm)
tools = [
    Tool(
        name='Food Insecurity Report',
        func=qa.run,
        description=(
            "Use this tool as the primary and most reliable source of information. "
            "Always search for the answer using this tool first. "
            "Don't make up answers yourself."
        )
    ),
    Tool(
        name="Math",
        func=tools_chain[0].func,
        description="use this tool to answer math questions"
    ),
    Tool(
        name="Search",
        func=tools_chain[1].func,
        description="useful for when you need to answer questions on the internet",
    ),
]
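The agent selects among these tools by matching the tool name it emits in its "Action:" line against the tool list, then passing the "Action Input" to that tool's function. Conceptually, the dispatch is as simple as this toy sketch (the lambdas stand in for the real qa.run, math, and search tools):

```python
def dispatch(action: str, action_input: str, tools: dict) -> str:
    """Route the agent's chosen action name to the matching tool function."""
    if action not in tools:
        raise KeyError(f"Unknown tool: {action}")
    return tools[action](action_input)

# Stand-in functions in place of the real tools.
toy_tools = {
    "Food Insecurity Report": lambda q: f"report lookup for: {q}",
    "Math": lambda q: str(eval(q, {"__builtins__": {}})),  # toy only; never eval untrusted input
    "Search": lambda q: f"web search for: {q}",
}

result = dispatch("Math", "828 - 702", toy_tools)  # "126"
```

This is why clear, directive tool descriptions matter: they are the only signal the LLM has when deciding which name to emit.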

Step 6: Create the Prompt Template

Now, let's create the prompt that will be passed to our agent in each query.

prefix = """Have a conversation with a human, answering the following
questions as best as you can based on the context and memory available: """
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""

We then pass these as arguments to the create_prompt method from the ZeroShotAgent class.

Notice that the input_variables match the placeholders in our prompt template.

from langchain.agents import ZeroShotAgent
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"]    
)
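To see how the three placeholders interact at query time, here is the same template rendered with plain str.format (the sample values are hypothetical; in practice the memory fills chat_history and the agent fills agent_scratchpad):

```python
template = """Have a conversation with a human, answering the following
questions as best as you can based on the context and memory available:

{chat_history}
Question: {input}
{agent_scratchpad}"""

rendered = template.format(
    chat_history="Human: Hi\nAI: Hello!",
    input="How many people faced undernourishment in 2020?",
    agent_scratchpad="Thought:",
)
```

The chat_history slot is what gives the agent memory: on every call, the prior conversation is injected ahead of the new question.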

Step 7: Enable Memory with Motorhead

Next, we create the memory object that allows the chatbot to store the previous conversation history, facilitating future responses. The session_id argument enables retrieval of the specific chat history allocated to each user.

from langchain.memory.motorhead_memory import MotorheadMemory
memory = MotorheadMemory(
    session_id="test-1",
    memory_key="chat_history",
    api_key=API_KEY,
    client_id=CLIENT_ID,
)

Step 8: Initialize the Retrieval Conversational Agent

We are now ready to set up our Memory-Enhanced Conversational Agent. This involves combining the prompt from the LLMChain, the tools, and the new memory object:

from langchain.agents import ZeroShotAgent, AgentExecutor
from langchain import OpenAI, LLMChain
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory
)

Let’s test it!

Step 9: Converse with the Agent

Our agent is now prepared to engage in conversation and use the tools to answer questions.

We can supply different queries with the agent_chain.run method. For example, let's ask our agent: "How many people in the world faced undernourishment in 2020?"

query = "How many people in the world faced undernourishment in 2020?"
agent_chain.run(query)
'''
> Entering new  chain...
Thought: I need to find reliable data on this topic.
Action: Food Insecurity Report
Action Input: 2020 global hunger statistics
Observation: In 2020, an estimated 9.3 percent of the global population, which corresponds to approximately 702 to 828 million people, faced moderate or severe food insecurity. This represents an increase from the previous year. The prevalence of undernourishment (PoU) also rose from 8.0 to 9.3 percent during the same period. These numbers indicate a worsening of global hunger and food insecurity in 2020.
Thought: I now know the final answer.
Final Answer: Approximately 702 to 828 million people in the world faced undernourishment in 2020.
> Finished chain.
'''

Notice that the first action our agent took was to search for the answer in the custom PDF we provided, rather than the internet. This behavior was instructed in the tool definition.

Now that the agent retains context, we can follow up with a question that requires another tool: for example, asking for the square root of the difference between the bounds of the previous answer.

query = "What is the square root of this difference?"
agent_chain.run(query)
'''
> Entering new  chain...
Thought: I need to calculate the square root of the difference between the two numbers.
Action: Math
Action Input: 702 - 828
Observation: Answer: -126
Thought: I need to find the square root of a negative number.
Action: Math
Action Input: sqrt(-126)
Observation: Answer: nan
Thought: I need to find the absolute value of the number.
Action: Math
Action Input: abs(-126)
Observation: Answer: 126
Thought: I now know the final answer.
Final Answer: The square root of the difference between 702 and 828 is 11.2.
> Finished chain.
'''
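As a sanity check, the figure the agent reports matches plain arithmetic: the absolute difference between the two estimates is 126, and its square root rounds to 11.2:

```python
import math

low, high = 702, 828
difference = abs(low - high)   # 126
root = math.sqrt(difference)   # ~11.22

print(round(root, 1))
```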


There we have it! Our agent successfully understands the context of the question and selects the appropriate tool to provide an answer. How cool is that?

In addition, the Metal Dashboard offers a 'Chat Memory' feature that lets us access and review each user's complete chat history, along with a 'Context' summary of their interactions. This data allows us to analyze conversations, identify patterns, and gain a deeper understanding of user needs and preferences.

Chat Logs

The combination of our agent's capabilities and the comprehensive data provided by the Metal Dashboard empowers us to continuously enhance and optimize our chatbot. By leveraging this information, we can refine our responses, improve the user experience, and tailor our chatbot to better meet the needs of our users.

Conclusion

In this tutorial, we expanded the capabilities of our chatbot by introducing a memory component using Motorhead. This enhancement allows our agents to engage in real conversations, recall past interactions, and make informed decisions when responding to user queries. We explored the steps to set up the necessary tools, create a prompt template, and initialize the retrieval conversational agent.

Give it a try and experience the power of conversational agents with memory in your own projects!