You can deploy an open-source LLM on Baseten with nothing more than a config.yaml file. You point to a model on Hugging Face, choose a GPU, and Baseten builds a TensorRT-optimized container with an OpenAI-compatible API. No Python code, no Dockerfile, no container management.
This tutorial deploys Qwen 2.5 3B Instruct, a small but capable LLM, to a production-ready endpoint on an L4 GPU.
Set up your environment
To use Truss, install a recent Truss version and ensure pydantic is v2.

Truss requires Python >=3.9,<3.15. To set up a fresh development environment, you can create an environment named truss_env using pyenv, with pyenv’s shell integration added to your ~/.bashrc.
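A minimal sketch of that setup, assuming pyenv and pyenv-virtualenv are installed (the Python version shown is illustrative):

```bash
# Shell integration for pyenv + pyenv-virtualenv, added to ~/.bashrc:
#   eval "$(pyenv init -)"
#   eval "$(pyenv virtualenv-init -)"

# Create and activate an isolated environment named truss_env
pyenv install 3.11
pyenv virtualenv 3.11 truss_env
pyenv activate truss_env

# Install Truss and make sure pydantic is v2
pip install --upgrade truss "pydantic>=2"
```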
Log in to Baseten
Generate an API key from Settings > API keys, then authenticate the Truss CLI.
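One way to do this, assuming a recent Truss version that includes a login command (older versions instead prompt for the key on your first truss push):

```bash
# Prompts for your Baseten API key and stores it locally for future pushes
truss login
```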
Create a Truss project
Scaffold a new project with truss init:
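For example (the directory name is illustrative):

```bash
truss init qwen-2-5-3b
cd qwen-2-5-3b
```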
When prompted for a model name, enter Qwen 2.5 3B.
This creates config.yaml, a model/ directory, and supporting files. For engine-based deployments like this one, you only need config.yaml. The model/ directory is for custom Python code, which this deployment doesn’t require.
Write the config
Replace the contents of config.yaml with:
config.yaml
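A sketch of roughly what this file contains, based on the settings described below (verify the exact field names and values against Baseten’s engine builder reference; base_model and max_seq_len in particular are assumptions):

```yaml
model_name: Qwen 2.5 3B Instruct
resources:
  accelerator: L4
  use_gpu: true
trt_llm:
  build:
    base_model: decoder            # assumption: generic decoder-only architecture
    checkpoint_repository:
      source: HF
      repo: Qwen/Qwen2.5-3B-Instruct
    quantization_type: fp8
    max_seq_len: 8192
```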
The resources section selects an L4 GPU, which has 24 GB of VRAM. The trt_llm section tells Baseten to use its Engine Builder, which compiles the model with TensorRT-LLM for optimized inference. The checkpoint_repository points to the model weights on Hugging Face (Qwen 2.5 3B Instruct is ungated, so no access token is needed). Setting quantization_type: fp8 compresses weights to 8-bit floating point, cutting memory usage roughly in half with negligible quality loss.
Deploy
From the project directory, push to Baseten. The CLI prints a link to your deployment; the model ID is the part of that URL after /models/ (e.g. abc1d2ef). You’ll need this to call the model’s API. You can also find it in your Baseten dashboard.
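The push itself is a single command, run from the directory containing config.yaml (the --publish flag, covered under Promote to production below, is not needed yet):

```bash
truss push
```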
Baseten now downloads the model weights, compiles them with TensorRT-LLM, and deploys the resulting container to an L4 GPU. You can watch progress in the logs linked above. When the deployment status shows “Active” in the dashboard, it’s ready for requests.
Call your model
Engine-based deployments serve an OpenAI-compatible API, so any code that works with the OpenAI SDK works with your model. Replace {model_id} with your model ID from the deployment output.
Install the OpenAI SDK if you don’t have it, then create a chat completion:
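A minimal sketch in Python. The base_url pattern and model name below are assumptions; check your model dashboard for the exact OpenAI-compatible endpoint of your deployment:

```python
# pip install openai
from openai import OpenAI

# Assumed endpoint pattern; substitute your model ID and verify the URL in the dashboard.
client = OpenAI(
    api_key="YOUR_BASETEN_API_KEY",
    base_url="https://model-{model_id}.api.baseten.co/environments/production/sync/v1",
)

response = client.chat.completions.create(
    model="qwen2.5-3b-instruct",  # illustrative; some endpoints ignore this field
    messages=[{"role": "user", "content": "Give me three fun facts about GPUs."}],
    max_tokens=256,
)

print(response.choices[0].message.content)
```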
What just happened
With a 12-line config file, you deployed a production-ready LLM endpoint. Here’s what Baseten did:
- Downloaded the Qwen 2.5 3B Instruct weights from Hugging Face.
- Compiled the model with TensorRT-LLM, applying FP8 quantization for faster inference and lower memory usage.
- Packaged everything into a container and deployed it to an L4 GPU.
- Exposed an OpenAI-compatible API that handles tokenization, batching, and KV cache management automatically.
No model.py, no Docker setup, no inference server configuration. This config-only pattern works for most popular open-source LLMs, including Llama, Qwen, Mistral, Gemma, and Phi models.
Next steps
Engine configuration
Tune max sequence length, batch size, quantization, and runtime settings for your deployment.
Custom model code
Add custom Python when you need preprocessing, postprocessing, or unsupported model architectures.
Autoscaling
Configure replicas, concurrency targets, and scale-to-zero for production traffic.
Promote to production
Move from development to production with truss push --publish.