Deploy your first model
From model weights to API endpoint
In this guide, you will package and deploy Phi-3-mini-4k-instruct, a 3.8-billion-parameter large language model.
We’ll cover:
- Loading model weights from Hugging Face
- Running model inference on a GPU
- Configuring your infrastructure and Python environment
- Iterating on your model server in a live reload development environment
- Deploying your finished model serving instance for production use
By the end of this tutorial, you will have built a production-ready API endpoint for an open source LLM on autoscaling infrastructure.
This tutorial is a comprehensive introduction to deploying models from scratch. If you want to quickly deploy an off-the-shelf model, start with our model library and Truss examples.
Setup
Before we dive into the code:
- Sign up for or sign in to your Baseten account.
- Generate an API key and store it securely.
- Install Truss, our open-source model packaging framework.
New Baseten accounts come with free credits to experiment with model inference. Completing this tutorial should consume less than a dollar of GPU resources.
What is Truss?
Truss is a framework for writing model serving code in Python and configuring the model’s production environment without touching Docker. It also includes a CLI to power a robust developer experience that will be introduced shortly.
A Truss contains:
- A file `model.py` where the `Model` class is implemented as a serving interface for an AI model.
- A file `config.yaml` that specifies GPU resources, Python environment, metadata, and more.
- Optional folders for bundling model weights (`data/`) and custom dependencies (`packages/`).
Truss is designed to map directly from model development code to production-ready model serving code.
Create a Truss
To get started, create a Truss with the following terminal command:
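```sh
truss init phi-3-mini
```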
When prompted, give your Truss a name like `Phi 3 Mini`.
Then, navigate to the newly created directory:
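```sh
cd phi-3-mini
```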
You should see the following file structure:
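```
phi-3-mini/
  config.yaml
  model/
    __init__.py
    model.py
```

The exact scaffolding may vary slightly between Truss versions, but `config.yaml` and `model/model.py` are always present.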
For this tutorial, we will be editing `model/model.py` and `config.yaml`.
Load model weights
Phi-3-mini-4k-instruct is an open source LLM available for download on Hugging Face. We’ll access its model weights via the `transformers` library.
Two functions in the `Model` object, `__init__()` and `load()`, run exactly once when the model server is spun up or patched. Using these functions, we load model weights and anything else the model server needs for inference.
For Phi 3, we need to load the LLM and its tokenizer. After initializing the necessary instance attributes, we load the weights and tokenizer from Hugging Face:
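```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The public Hugging Face checkpoint for this model
CHECKPOINT = "microsoft/Phi-3-mini-4k-instruct"


class Model:
    def __init__(self, **kwargs):
        # Instance attributes are initialized here and populated in load()
        self._model = None
        self._tokenizer = None

    def load(self):
        # Runs once at server startup: download weights and move them to the GPU
        self._model = AutoModelForCausalLM.from_pretrained(
            CHECKPOINT,
            torch_dtype=torch.float16,
            device_map="auto",
        )
        self._tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
```

This is a minimal sketch rather than the exact generated file; the attribute names and the `float16`/`device_map` settings are choices you can adjust.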
Run model inference
The final required function in the `Model` class, `predict()`, runs each time the model endpoint is requested. The `predict()` function handles model inference.
The implementation of `predict()` determines what features your model endpoint supports. You can implement anything from streaming to support for specific input and output specs:
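```python
    def predict(self, request: dict):
        # Assumes a request body like {"messages": [{"role": "user", "content": "..."}]}
        messages = request["messages"]
        inputs = self._tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(self._model.device)
        outputs = self._model.generate(inputs, max_new_tokens=256)
        # Decode only the newly generated tokens, not the prompt
        completion = self._tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        return {"output": completion}
```

The request and response shapes above are assumptions of this sketch; adapt them to whatever your client sends and expects.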
Set Python environment
Now that the model server is implemented, we need to give it an environment to run in. In `model/model.py`, we imported a couple of objects from `transformers`:
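```python
from transformers import AutoModelForCausalLM, AutoTokenizer
```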
To add `transformers`, `torch`, and other required packages to our Python environment, we move to `config.yaml`, the other essential file in every Truss. Here, you can set your Python requirements:
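```yaml
requirements:
  - accelerate==0.30.1
  - torch==2.3.0
  - transformers==4.41.0
```

The pinned versions above are illustrative; pin whichever recent, mutually compatible versions you have tested.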
We strongly recommend pinning versions for every Python requirement. The AI/ML ecosystem moves fast, and breaking changes to unpinned dependencies can cause errors in production.
Select a GPU
Picking the right GPU is a balance between performance and cost. First, consider the size of the model weights. A good rule of thumb is that for `float16` LLM inference, you need 2GB of VRAM on your GPU for every billion parameters in the model, plus overhead for processing requests.
Phi 3 Mini has 3.8 billion parameters, meaning that it needs 7.6GB of VRAM just to load model weights. An NVIDIA T4 GPU, the smallest and least expensive GPU available on Baseten, has 16GB of VRAM, which will be more than enough to run the model.
To use a T4 in your Truss, update the `resources` section in `config.yaml`:
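```yaml
resources:
  accelerator: T4
  use_gpu: true
```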
Here’s a list of supported GPUs.
Create a development deployment
With the implementation finished, it’s time to test the packaged model. With Baseten, you can spin up a development deployment, which replicates a production environment but with a live reload system that lets you patch your running model and test changes in seconds.
Get your API key
Retrieve your Baseten API key or, if necessary, create one from your workspace.
To use your API key for model inference, we recommend storing it as an environment variable:
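```sh
# The variable name is up to you; later examples in this guide assume BASETEN_API_KEY
export BASETEN_API_KEY="<your-api-key>"
```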
Add this line to your `~/.zshrc` or similar shell config file.
The first time you run `truss push`, you’ll be asked to paste in an API key.
Run truss push
To create a development deployment for your model, run the following command in your `phi-3-mini` working directory:
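```sh
truss push
```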
You can monitor your model deployment from your model dashboard on Baseten.
Call the development deployment
Your model deployment will go through three stages:
- Building the model serving environment (creating a Docker container for model serving)
- Deploying the model to the model serving environment (provisioning GPU resources and installing the image)
- Loading the model onto the model server (running the `load()` function)
After deployment is complete, the model will show as “active” in your workspace. You can call the model with:
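```python
import os
import requests

model_id = ""  # Find your model ID on the Baseten model dashboard

resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/development/predict",
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    # The request body matches the predict() sketch above
    json={"messages": [{"role": "user", "content": "What even is AGI?"}]},
)
print(resp.json())
```

The exact endpoint URL and invocation snippet are shown on your model’s page in Baseten; use those if they differ from this sketch.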
Live reload development environment
Even with Baseten’s optimized infrastructure, deploying a model from scratch takes time. If you had to wait for the image to build, the GPU to be provisioned, and the model to load every time you made a change while testing your code, the developer experience would be frustrating and slow.
Instead, the development environment has live reload. This way, when you make changes to your model, you skip the first two steps of deployment and only need to wait for `load()` to run, cutting your dev loop from minutes to seconds.
To activate live reload, in your working directory, run:
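```sh
truss watch
```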
Now, when you make changes to your `model/model.py` or certain parts of your `config.yaml` (such as Python requirements), your changes will be patched onto your running model server.
Implementation: generation configs
Let’s add a few more features to our model object to experience the live reload workflow.
Currently, we only support passing the messages to the model. But LLMs have a number of other parameters like `max_length` and `temperature` that matter during inference.
To set these appropriately, we’ll use the `preprocess()` function in the `Model` object. Truss models have optional `preprocess()` and `postprocess()` functions, which run on the CPU on either side of `predict()`, which runs on the GPU.
Add the following function to your Truss:
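```python
    def preprocess(self, request: dict):
        # Runs on the CPU before predict(); collects generation parameters
        # (the parameter names and defaults here are illustrative)
        generate_args = {
            "max_length": request.get("max_length", 512),
            "temperature": request.get("temperature", 1.0),
            "do_sample": True,
        }
        return {
            "messages": request["messages"],
            "generate_args": generate_args,
        }
```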
To use the generation args, we’ll modify our `predict()` function as follows:
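```python
    def predict(self, request: dict):
        messages = request["messages"]
        generate_args = request["generate_args"]
        inputs = self._tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(self._model.device)
        # Forward the generation args assembled in preprocess()
        outputs = self._model.generate(inputs, **generate_args)
        completion = self._tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        return {"output": completion}
```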
Save your `model/model.py` file and check your `truss watch` logs to see the patch being applied. Once the model status on your model dashboard shows as “active”, you can call the API endpoint again with new parameters:
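```python
# Reuses model_id and the imports from the earlier call example
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/development/predict",
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    json={
        "messages": [{"role": "user", "content": "What even is AGI?"}],
        # These keys match the preprocess() sketch above
        "max_length": 512,
        "temperature": 0.7,
    },
)
print(resp.json())
```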
Implementation: streaming output
Right now, the model works by returning the entire output at once. For many use cases, we’d rather stream model output, receiving the tokens as they are generated to reduce user-facing latency.
This requires updates to the imports at the top of `model/model.py`:
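```python
from threading import Thread

from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
```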
We can implement streaming in `model/model.py`. We’ll define a function to handle streaming:
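```python
    def stream(self, inputs, generate_args: dict):
        # TextIteratorStreamer yields decoded text as tokens are produced
        streamer = TextIteratorStreamer(
            self._tokenizer, skip_prompt=True, skip_special_tokens=True
        )
        generation_kwargs = {"input_ids": inputs, "streamer": streamer, **generate_args}
        # Run generation on a background thread so we can yield from the streamer
        thread = Thread(target=self._model.generate, kwargs=generation_kwargs)
        thread.start()
        for text in streamer:
            yield text
        thread.join()
```

The helper name `stream()` is specific to this sketch, not a required Truss method.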
Then in `predict()`, we enable streaming:
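```python
    def predict(self, request: dict):
        messages = request["messages"]
        generate_args = request["generate_args"]
        inputs = self._tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(self._model.device)

        # Assumes preprocess() also forwards a "stream" flag from the request
        if request.get("stream", False):
            # Returning a generator streams output to the client
            return self.stream(inputs, generate_args)

        outputs = self._model.generate(inputs, **generate_args)
        completion = self._tokenizer.decode(
            outputs[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        return {"output": completion}
```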
To call the streaming endpoint, update your API call to process the streaming output:
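```python
# Reuses model_id and the imports from the earlier call example
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/development/predict",
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    json={
        "messages": [{"role": "user", "content": "What even is AGI?"}],
        "stream": True,
    },
    stream=True,
)
# Print tokens as they arrive instead of waiting for the full response
for chunk in resp.iter_content(chunk_size=None):
    print(chunk.decode("utf-8"), end="", flush=True)
```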
Promote to production
Now that we’re happy with how our model is implemented, we can promote our deployment to production. Production deployments don’t have live reload, but are suitable for real traffic as they have access to full autoscaling settings and can’t be interrupted by patches or other deployment activities.
You can promote your deployment to production through the Baseten UI or by running:
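```sh
truss push --publish
```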
When a development deployment is promoted to production, it gets rebuilt and deployed.
Call the production endpoint
When the deployment is running in production, the API endpoint for calling it changes from `/development/predict` to `/production/predict`. All other inference code remains unchanged:
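```python
# Same call as before, pointed at the production endpoint
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    json={"messages": [{"role": "user", "content": "What even is AGI?"}]},
)
print(resp.json())
```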
Both your development and production deployments will scale to zero when not in use.
Learn more
You’ve completed the quickstart by packaging, deploying, and invoking an AI model with Truss!
From here, you may be interested in:
- Learning more about model serving with Truss.
- Example implementations for dozens of open source models.
- Inference examples and Baseten integrations.
- Using autoscaling settings to spin up and down multiple GPU replicas.