Release - 2023-05-17
Build a Simple ChatGPT CLI with memory
Build a CLI tool to talk to an LLM, using Motorhead and Langchain.

(Image: Metal memories in space)
In this tutorial, we will walk through how to create a command-line interface (CLI) chat tool that uses memory. We'll leverage Langchain for all LLM calls, Motorhead for memory management, and Redis as the storage backend used by Motorhead.
Let's get started.
Prerequisites
To follow along with this tutorial, you should have:
- Basic knowledge of JavaScript and Node.js
- Node.js v18+ (we recommend using nvm)
- Docker installed - more info here
- An OpenAI API key - get one here
Step 1: Set up your environment
First, we need to create the project directory and install the necessary packages. Create a new directory for your project and run the following commands to initialize a new Node.js project and create the required files:
mkdir motorhead-cli-chatgpt
cd motorhead-cli-chatgpt
npm init -y
npm i langchain dotenv chalk
touch index.js .env docker-compose.yml
After running npm init, you need to manually add the line "type": "module" to your package.json file. This tells Node.js to treat .js files within this package as ECMAScript modules. Your package.json file should look like this:
{"name": "motorhead-cli-chatgpt","version": "1.0.0","description": "","type": "module", // Add this line"main": "index.js","scripts": {"test": "echo \"Error: no test specified\" && exit 1",},"keywords": [],"author": "","license": "ISC"}
Now, we need to configure our environment variables. We are using the dotenv package to load environment variables from a .env file. Create a .env file in your project root and add the necessary environment variables.
# .env file
MOTORHEAD_URL=http://localhost:8080
SESSION_ID=ozzy6666
OPENAI_API_KEY=<YOUR_KEY>
Then we need to fill in our docker-compose.yml file with the Motorhead and Redis services. This will allow us to run Motorhead locally.
version: '3'
services:
  motorhead:
    image: ghcr.io/getmetal/motorhead:v2.0.1
    ports:
      - '8080:8080'
    links:
      - redis
    environment:
      PORT: 8080
      MOTORHEAD_LONG_TERM_MEMORY: 'true'
      MOTORHEAD_MODEL: 'gpt-3.5-turbo'
      REDIS_URL: 'redis://redis:6379'
    env_file:
      - .env
  redis:
    image: redis/redis-stack-server:latest
    ports:
      - '6379:6379'
Step 2: Import required packages
Next, we import the necessary packages and modules. chalk is used for styling our console outputs. The Langchain package provides tools for handling OpenAI calls.
// index.js
import readline from "readline";
import chalk from "chalk";
import { CallbackManager } from "langchain/callbacks";
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
} from "langchain/prompts";
import { MotorheadMemory } from "langchain/memory";
import * as dotenv from "dotenv";

dotenv.config();
Step 3: Set up the readline interface
We use Node's built-in readline module to handle user input and output in the console. Create an interface using readline.createInterface().
// index.js
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
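Optionally, you can also close the interface cleanly when the user hits Ctrl+C. Here is a minimal sketch using the SIGINT event that Node's readline interface emits:

// index.js (optional): exit cleanly on Ctrl+C
rl.on("SIGINT", () => {
  rl.close();
  process.exit(0);
});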
Step 4: Implement the Chat and Memory Management Features
This section of your code handles the actual chat and memory management operations using the Langchain and Motorhead libraries. Here's how it's organized:
Step 4.1: Create a New Chat Instance
First, we create an instance of ChatOpenAI from Langchain. We pass in some configuration options to the constructor, such as temperature, streaming mode, and callback manager:
// index.js
const chat = new ChatOpenAI({
  temperature: 0,
  streaming: true,
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token) {
      process.stdout.write(chalk.green(token));
    },
  }),
});
This setup calls the handleLLMNewToken handler every time the language model streams a new token. We use process.stdout.write() to print each token to the console as it arrives.
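If you'd rather print the whole reply at once instead of streaming it, a minimal variant (an optional tweak, not part of the main tutorial) is to drop the streaming options and rely on the response the chain returns after each call:

// Non-streaming variant (optional): replace the constructor above with this,
// then print res.response after each chain.call() instead of relying on
// handleLLMNewToken.
const chat = new ChatOpenAI({ temperature: 0 });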
Step 4.2: Create a New Memory Instance
Next, we create an instance of MotorheadMemory to manage our chat context. This allows us to maintain a history of chat messages across different sessions:
// index.js
const memory = new MotorheadMemory({
  returnMessages: true,
  memoryKey: "history",
  sessionId: process.env.SESSION_ID,
  motorheadURL: process.env.MOTORHEAD_URL,
});

await memory.init(); // loads previous state from Motorhead 🤘
Step 4.3: Set up the Chat Prompt Template
We then set up a chat prompt template. This determines how our chat messages are structured:
// index.js
let context = "";
if (memory.context) {
  context = `Here's previous context: ${memory.context}`;
}

const systemPrompt = `You are a helpful assistant.${context}`;

const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(systemPrompt),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);
Step 4.4: Set up the Conversation Chain
With our chat and memory instances and our chat prompt template, we can set up a conversation chain. This will handle the back-and-forth of our chat:
// index.js
const chain = new ConversationChain({
  memory,
  prompt: chatPrompt,
  llm: chat,
});
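Before wiring up the shell loop, you can optionally sanity-check the chain with a single round trip (the prompt text here is just an example; remove this snippet once you've confirmed everything works):

// index.js (optional): one test round trip through the chain.
// With streaming on, tokens print via handleLLMNewToken as they arrive;
// res.response also holds the complete reply.
const res = await chain.call({ input: "Say hello in five words." });
console.log("\n" + res.response);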
Step 4.5: Create a Function to Post Messages to Shell
Next, we create a recursive function that prompts for input in the shell, passes the answer to the conversation chain (the reply streams to the console token by token via our callback handler), and then calls itself to wait for the next message:
// index.js
const postToShell = async () => {
  rl.question(chalk.green(`\n`), async function (answer) {
    await chain.call({ input: answer }); // reply streams via handleLLMNewToken
    await postToShell();
  });
};
Step 4.6: Start the Conversation
Finally, we start the conversation: we print a banner, take the first message, run it through the chain, and then hand off to postToShell for the rest of the session:
// index.js
rl.question(chalk.blue(`\nMotorhead 🤘chat start\n`), async function (answer) {
  await chain.call({ input: answer });
  await postToShell();
});
Step 5: Run the chat tool
First, we need to start Motorhead and Redis. From your project root, run:
docker-compose up
Once Motorhead is running, you can run your CLI chat tool in a second terminal. Use the following command to start it:
node index.js
You should now be able to interact with your chat tool directly from the command line. If you exit the CLI and restart it, the conversation history will persist!
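If you're curious what Motorhead has stored, you can query its REST API directly. The sketch below assumes Motorhead's GET /sessions/:id/memory endpoint and reuses the variables from your .env file (Node 18+ ships a global fetch):

// inspect-memory.js: a minimal sketch that fetches the stored messages and
// summary for a session. The endpoint shape is an assumption based on
// Motorhead's API; adjust if your version differs.
import * as dotenv from "dotenv";
dotenv.config();

const res = await fetch(
  `${process.env.MOTORHEAD_URL}/sessions/${process.env.SESSION_ID}/memory`
);
console.log(JSON.stringify(await res.json(), null, 2));

Run it with node inspect-memory.js while the Docker containers are up.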
Congratulations! You have successfully created a CLI chat tool with memory using Langchain and Motorhead. You can find the full code for this tutorial on GitHub.