Tutorial - 2023-07-06

Building a Custom Question-Answering GPT Agent with Langchain and Metal

by Pablo Rios

An agent reflecting on their purpose.

As artificial intelligence advances, developers gain access to increasingly powerful tools and frameworks for creating smart applications. In this post, we'll explore how Langchain and Metal can work together to build applications that use Large Language Models (LLMs) to understand natural language queries and provide intelligent responses.

In this first example, we'll create a simple GPT agent that can answer questions based on information from a PDF file. Langchain will assist us in integrating additional tools like a calculator or a search engine, enhancing our application's ability to reason and perform complex computations. Metal will streamline the entire process by processing the documents, breaking them into smaller parts, and extracting their meaning so we can have engaging conversations with our data.

Moreover, Metal provides us with valuable insights into our vector store and engagement metrics, allowing us to understand how our application is being utilized and can be improved over time.

Let's get started!

Step 1: Importing and Processing Custom Data

To begin, go to your Metal Dashboard and create an Index. From there, you can easily import your custom data file. The platform conveniently supports PDF, DOCX, and CSV formats, allowing you to add multiple documents to your index.

Files Page

The platform will automatically read the contents of the file. It will then split the text into smaller chunks, create the embeddings (vector representations) of these chunks, and add them to our vector store. For this example, we will use The State of Food Security and Nutrition in the World 2022 - UNICEF DATA.

We will also upload a CSV file with data from the tables in the document to provide our index with better retrieval capabilities.
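To build some intuition for what happens during ingestion, here is a minimal sketch of fixed-size chunking with overlap. This is purely illustrative; Metal's actual chunking and embedding logic may differ:

```python
def chunk_text(text: str, size: int = 40, overlap: int = 8) -> list[str]:
    """Split text into fixed-size character chunks that overlap slightly,
    so content cut at a boundary still appears whole in one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

report = ("Strategies to lower the cost of nutritious foods include "
          "social protection mechanisms and interventions along food supply chains.")
chunks = chunk_text(report)
# Each chunk would then be embedded and stored in the vector index.
```

In practice the chunking is done on sentences or tokens rather than raw characters, but the idea is the same: small, overlapping pieces that each fit comfortably into an embedding model.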

Step 2: Setting up the Metal API Client

Now, let's open a notebook and set up the Metal API client. Remember you can find the Index Id in the Settings tab of your Index.

from metal_sdk.metal import Metal

API_KEY = "<your_api_key>"
CLIENT_ID = "<your_client_id>"
INDEX_ID = "<your_index_id>"

metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)

Step 3: Create Question Answering Chain

Now, let's initialize the Question Answering (QA) chain using the MetalRetriever module from Langchain. You can specify the maximum number of source documents to retrieve. In this example, we'll use the OpenAI LLM, but feel free to choose the LLM that suits your needs.

from langchain.retrievers import MetalRetriever
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
retriever = MetalRetriever(metal, params={"limit": 2})

qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
)

The chosen chain type, 'stuff', indicates that all retrieved documents will be stuffed into the prompt at once. The prompt template used by the chain takes the form:

Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
____________________________________________
{context}
{question}

Here, the 'context' corresponds to the information obtained through retrieval and the 'question' represents the specific query posed by the user.
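To make the 'stuff' behavior concrete, here is a small sketch of how retrieved chunks are assembled into the final prompt. This is a simplified stand-in for what the chain does internally, not Langchain's actual implementation:

```python
# Illustrative only: how a "stuff" chain assembles its prompt.
TEMPLATE = (
    "Use the following pieces of context to answer the users question.\n"
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n"
    "____________________________________________\n"
    "{context}\n"
    "{question}"
)

def build_prompt(docs: list[str], question: str) -> str:
    # All retrieved documents are concatenated ("stuffed") into one context block.
    return TEMPLATE.format(context="\n\n".join(docs), question=question)

docs = ["Chunk about food prices.", "Chunk about supply chains."]
prompt = build_prompt(docs, "How can nutritious food be made cheaper?")
```

Because everything is stuffed into a single prompt, this chain type works best when the retrieved chunks are few and short enough to fit within the LLM's context window, which is why we set `limit: 2` on the retriever.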

Step 4: Querying the QA Model

With our QA model ready, we can now pose questions and receive responses based on the information extracted from the documents.

query = "What are the strategies to lower the cost of nutritious foods?"
response = qa_chain(query)
result = response["result"]

For instance, if we query about strategies to lower the cost of nutritious foods, our agent might respond with:

"Strategies to lower the cost of nutritious foods include social protection mechanisms, primary healthcare services, interventions along food supply chains to increase the availability of safe and nutritious foods, and the empowerment of women, children, and youth."

Step 5: Empowering the Application with Agency

Now to take our application to the next level, we can empower it with the ability to interact with data and make informed decisions for different tasks. This concept is known as agency.

To achieve this, we can combine the RetrievalQA chain with additional tools like math and search functionalities. By integrating these resources, our application gains the power to perform calculations using available data and seek information from external sources. This integration enables our application to handle a wider range of queries and tasks effectively.

from langchain.agents import load_tools, Tool

llm = OpenAI(temperature=0)

# Load the built-in math and search tools
tools_chain = load_tools(["llm-math", "serpapi"], llm=llm)

# Define the tools for the agent
tools = [
    Tool(
        name="Food Security Report",
        func=qa_chain.run,
        description="use this as the primary source of context information when you are asked the question. Always search for the answers using this tool first, don't make up answers yourself",
    ),
    Tool(
        name="Math",
        func=tools_chain[0].func,
        description="use this tool to answer math questions",
    ),
    Tool(
        name="Search",
        func=tools_chain[1].func,
        description="use this tool to search questions on the internet",
    ),
]

Here, we define a set of tools for our agent, including the "Food Security Report," which utilizes the QA chain for answering questions, the "Math" tool for mathematical inquiries, and the "Search" tool for searching the internet for relevant information. Each tool is accompanied by a custom name and a descriptive explanation, helping our agent make informed decisions.

Step 6: Initialize the Agent Chain

With our tools in place, it's time to initialize the agent to assist us in answering more complex questions.

from langchain.agents import initialize_agent

agent_chain = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True,
    max_iterations=3,
)

In this step, we initialize the agent using the "zero-shot-react-description" type. This agent keeps no memory between queries, treating each question on its own, but it excels at reasoning about the question, gathering information from the provided tools, and formulating an answer.

Now, let's put our agent to the test by asking a multifaceted question:

query = (
    "What is the average rate of undernourishment in Central America from 2015 to 2020? "
    "Furthermore, how many people does this average represent in the region today?"
)
agent_chain.run(input=query)
> Entering new AgentExecutor chain...
I need to find the average percentage of undernourishment in Central America from 2015 to 2020 and the number of people that represents.
> Action: Food Security Report
Action Input: Central America, 2015-2020
> Observation: The undernourishment rate in Central America was 7.5% in 2015, 8.1% in 2016, 7.9% in 2017, 8% in 2018, 8.1% in 2019, and 10.6% in 2020.
Thought: I need to calculate the average percentage of undernourishment and the number of people that represents.
> Action: Math
Action Input: 7.5%, 8.1%, 7.9%, 8%, 8.1%, 10.6%
> Observation: Answer: 8.366666666666667
Thought: I need to find the number of people that represents.
> Action: Search
Action Input: Number of people in Central America
> Observation: The current population of Central America in 2023 is 180,643,728, a 0.88% increase from 2022. The population of Central America in 2022 was 179,060,359, a 0.79% increase from 2021. The population of Central America in 2021 was 177,661,929, a 0.75% increase from 2020.
Thought: I now know the final answer.
> Final Answer: The average percentage of undernourishment in Central America from 2015 to 2020 is 8.37%, representing 15,323,845 people in the region today.

Upon running the query, our agent springs into action, engaging in a series of tasks. It retrieves the relevant chunks from the vector store, employs the calculator tool to determine the average percentage of undernourishment, and then utilizes the search tool to find the current population of Central America. After gathering all the necessary information, our agent delivers the final answer:

"The average percentage of undernourishment in Central America from 2015 to 2020 is 8.37%, representing 15,323,845 people in the region today."

Impressive, isn't it? Our agent seamlessly performs a range of tasks to arrive at a comprehensive answer. It retrieves relevant data from the vector store, employs mathematical calculations, and even conducts internet searches to provide a holistic and accurate response.
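The average the Math tool reports can be reproduced directly from the annual rates in the Observation above:

```python
# Annual undernourishment rates (%) for 2015-2020, from the agent's Observation
rates = [7.5, 8.1, 7.9, 8.0, 8.1, 10.6]
average = sum(rates) / len(rates)
print(round(average, 2))  # → 8.37
```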

Step 7: Query Analytics

Metal also gives us observability into our application, allowing us to understand how it is being used. The Analytics tab provides insights into the number of queries made over time, serving as a valuable measure of engagement. Additionally, the average distance over time provides a measure of relevance: the smaller the distance between a query and its retrieved results, the more relevant the responses.
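As a rough intuition for why smaller distances mean more relevant results, here is a toy cosine-distance calculation on two-dimensional vectors. The exact distance metric Metal reports, and the real embedding dimensionality, may differ:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0 means identical direction, larger means less similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1 - dot / norm

query = [1.0, 0.0]
close = [0.9, 0.1]  # embedding near the query: more relevant
far = [0.1, 0.9]    # embedding far from the query: less relevant

assert cosine_distance(query, close) < cosine_distance(query, far)
```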

Query Analytics

Furthermore, the Logs tab presents a comprehensive record of all queries made against the index. This information proves invaluable in understanding user behavior and the types of questions being asked.

Query Logs

Conclusion

Together, Metal and Langchain give developers a potent framework for building applications that tap into the capabilities of Large Language Models. Metal handles efficient storage and retrieval of information, while Langchain simplifies the construction of complex question-answering systems and agents. By following the integration steps outlined in this tutorial, developers can harness the power of LLMs to create intelligent applications that captivate and engage users.