Release - 2023-09-21
How to Get to Production with Metal
by Taylor Lowe

On the way to a production deployment
It’s been almost a year since the public release of ChatGPT, and as we’ve written about before, large language models (LLMs) have changed how we think about and build software. For anyone working closely with this technology, this is not a controversial statement.
In fact, we’ve spent the entire year helping organizations launch their first LLM applications and get them in front of real users. And for enterprise applications, there’s more to it than hitting an API and calling it a day. At Metal, we are intimately familiar with how to scope these applications for success and the steps necessary to move from idea to production.
What’s more, every live Metal application has provided insights into how we can help developers ship even faster. These improvements have been built into our product, a few of which we are announcing today!
Let’s dive in 🤘
Start small, think big
If this is your organization's first LLM application, we recommend starting with a relatively simple use case. This approach will help you learn the stack as you build, and help you get something in front of users faster. If you’re not sure where to start, check out the examples in our documentation or let our team know.
Otherwise, here are a few typical starting points:
- Semantic search over data relevant to your business, such as a product catalog, set of documents, or company-specific knowledge like a help center or knowledge base.
- A context-augmented chatbot with an in-depth understanding of company-specific knowledge that you provide.
When scoping these applications, it’s usually best to start with a single team’s workflow or dataset. You’ll deal with fewer moving parts and be able to build on top of initial versions as you go.
Critically, these first use cases let you see LLMs in action using your own data. With Metal’s fully managed platform, you can build and deploy these applications in under an hour.
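To make the semantic-search starting point concrete, here is a rough sketch of what a search request against your own data could look like. The base URL, route, and field names below are illustrative assumptions for this post, not Metal’s documented API — always consult the official API docs for the real interface.

```python
import json

# NOTE: base URL, route, and payload shape are illustrative assumptions,
# not Metal's documented API -- check the official docs before using.
METAL_API_BASE = "https://api.getmetal.io"

def build_search_request(index_id: str, query: str, limit: int = 5) -> dict:
    """Assemble a semantic-search request (hypothetical payload shape)."""
    return {
        "url": f"{METAL_API_BASE}/v1/indexes/{index_id}/search",
        "body": {"text": query, "limit": limit},
    }

request = build_search_request("help-center", "How do I reset my password?")
print(json.dumps(request, indent=2))
```

The point is the shape of the workflow: one index scoped to a single dataset, one natural-language query, a small result limit to start.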
Divide and conquer
The process of getting to production can be broken down into three distinct parts:
- Getting data in
- Getting data out
- Checking your work
Let’s take a look at each of these, why they’re important, and how we address them in Metal.
Data-In
This step is about preparing your data for use by an LLM. This is important, as how your data is pre-processed will have a big impact on your application. Metal tackles this problem through our ingestion pipeline, transforming your data so it’s LLM-ready.
We start by defining a data source, which is a collection of data that can later be used in a Metal application. Data sources can handle static or live data, connecting to your business systems through Metal’s API. When files are processed through Metal’s pipeline, they are transformed into data entities, making them ready for use in an application.
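As a sketch, a data source definition might carry little more than a name and whether it is static or live. The shape below is a hypothetical illustration of the concept, not Metal’s actual schema:

```python
def define_data_source(name: str, source_type: str, auto_extract: bool = True) -> dict:
    """Describe a data source (hypothetical shape -- see Metal's docs
    for the real fields and values)."""
    if source_type not in {"static", "live"}:
        raise ValueError("source_type must be 'static' or 'live'")
    return {"name": name, "type": source_type, "autoExtract": auto_extract}

crm_source = define_data_source("crm-notes", "live")
print(crm_source)
```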

Metal pre-processing pipeline
For example, let’s say you are building a chatbot to help your sales team close more deals. In this scenario, you would likely create a data source for your CRM. As you push customer emails, sales notes, contracts, and other relevant information into Metal, our ingestion pipeline automatically extracts text, parses tables, and performs various functions so your CRM data can be interpreted by an LLM.
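Conceptually, the ingestion pipeline turns raw CRM records into plain-text data entities an LLM can consume. The toy function below mimics that transformation for illustration only; Metal’s real pipeline handles file parsing, table extraction, and much more:

```python
def to_data_entity(record: dict) -> dict:
    """Flatten a raw CRM record into an LLM-ready text entity.
    A toy stand-in for Metal's ingestion pipeline, not the real thing."""
    text = "\n".join(f"{k}: {v}" for k, v in record.items())
    return {"text": text, "metadata": {"source": "crm"}}

email = {
    "from": "buyer@example.com",
    "subject": "Pricing question",
    "body": "Can you share enterprise pricing?",
}
entity = to_data_entity(email)
print(entity["text"])
```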

Metal data source
Data-Out
Now let’s put your data to use. This step is focused on how we can organize and make use of your data entities in service of an application.
First, you can create an application directly from Metal’s apps dashboard.

Next, specify which data source you want to connect to your app.

Once you connect a data source, Metal will handle the rest: chunking your data, creating embeddings from these chunks, and storing these embeddings in a vector database.
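Under the hood, “chunking” just means splitting long documents into pieces small enough to embed. A naive word-based chunker with overlap, roughly the kind of step Metal automates for you, looks like this (a simplified sketch, not Metal’s actual implementation):

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping word-based chunks (simplified sketch
    of the chunking step an ingestion pipeline performs)."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

doc = "lorem " * 500  # a 500-word stand-in document
print(len(chunk_text(doc)))  # -> 3 overlapping chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, at the cost of a little redundancy in the index.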
Congratulations! Your app can now consume data via Metal’s API. How you proceed from here will depend on your application’s use case – so please reference our API documentation or SDKs (Node.js, Python) for more details.
Check your work
Once your application is live, it's important to understand how it’s being used. Are your users engaging with your app? How useful are your LLM’s responses?
Every Metal application comes with metrics and observability out of the box. We provide visibility into your users’ queries, query volume over time, and how accurate the LLM's responses are. You can also use Metal’s API to return user queries if further analysis is needed outside of the platform.
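If you pull queries out for offline analysis, the flow is simple: fetch recent user queries via the API, then aggregate locally. The response shape below is a stubbed assumption for illustration; the real fields come from Metal’s API:

```python
from collections import Counter

# A stubbed API response -- the real shape comes from Metal's API.
recent_queries = [
    {"text": "reset password", "helpful": True},
    {"text": "enterprise pricing", "helpful": True},
    {"text": "reset password", "helpful": False},
]

def summarize(queries: list[dict]) -> dict:
    """Aggregate query volume and response helpfulness locally."""
    volume = Counter(q["text"] for q in queries)
    helpful_rate = sum(q["helpful"] for q in queries) / len(queries)
    return {"top_queries": volume.most_common(2), "helpful_rate": helpful_rate}

print(summarize(recent_queries))
```

Even a rough aggregation like this surfaces which topics users actually ask about, which is the signal you need to improve your data sources over time.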


Metal application metrics and observability
Wrapping up
And there you have it! Getting to production with Metal in three simple steps.
Whether your goal is to empower teams through contextual chatbots, upgrade your company’s search capabilities, or more – Metal will help you get there by covering the necessary areas of the stack.
If you’re new to LLMs, remember to start with a small use case that you can build on top of over time. And while getting to production is important, the job is not done yet. Understanding how your application is being used in production is key to making improvements over time.
As always, support and guidance are available from our team if you need it. We’re excited to see what you build 🤘