Fast LLMs with TensorRT-LLM
Optimize LLMs for low latency and high throughput
To get the best performance, we recommend using our TensorRT-LLM Engine Builder when deploying LLMs. Models deployed with the Engine Builder are OpenAI compatible, support structured output and function calling, and offer deploy-time post-training quantization to FP8 on Hopper GPUs.
The Engine Builder supports LLMs from the following families, both foundation models and fine-tunes:
- Llama 3.0 and later (including DeepSeek-R1 distills)
- Qwen 2.5 and later (including Math, Coder, and DeepSeek-R1 distills)
- Mistral (all LLMs)
You can download preset Engine Builder configs for common models from the model library.
The Engine Builder does not support vision-language models like Llama 3.2 11B or Pixtral. For these models, we recommend vLLM.
Example: Deploy Qwen 2.5 3B on an A10G
This configuration builds an inference engine to serve Qwen 2.5 3B on an A10G GPU. Running this model is fast and cheap, making it a good example for documentation, but the deployment process is very similar to that for larger models like Llama 3.3 70B.
Setup
Before you deploy a model, you'll need to complete three quick setup steps.
Create an API key for your Baseten account
Create an API key and save it as an environment variable:
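For example, assuming you name the variable `BASETEN_API_KEY` (the name the inference example below expects):

```sh
# Save your Baseten API key as an environment variable
export BASETEN_API_KEY="YOUR_API_KEY"
```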
Add an access token for Hugging Face
Some models require that you accept terms and conditions on Hugging Face before deployment. To prevent issues:
- Accept the license for any gated models you wish to access, like Llama 3.3.
- Create a read-only user access token from your Hugging Face account.
- Add the `hf_access_token` secret to your Baseten workspace.
Install Truss in your local development environment
Install the latest version of Truss, our open-source model packaging framework, as well as OpenAI’s model inference SDK, with:
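```sh
# Install Truss (model packaging) and the OpenAI SDK (used for inference later)
pip install --upgrade truss openai
```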
Configuration
Start with an empty configuration file.
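For example, in a new directory (the directory name here is arbitrary):

```sh
mkdir qwen-2-5-3b && cd qwen-2-5-3b
touch config.yaml
```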
This configuration file specifies model information and Engine Builder arguments. You can find dozens of examples in the model library as well as details on each config option in the engine builder reference.
Below is an example for Qwen 2.5 3B.
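The sketch below follows the general shape of Engine Builder configs from the model library; treat the specific field values (model name, sequence length, checkpoint repo) as illustrative and consult the engine builder reference for the authoritative schema.

```yaml
model_name: Qwen 2.5 3B Instruct
resources:
  accelerator: A10G
  use_gpu: true
secrets:
  hf_access_token: null  # set the real token in your Baseten workspace, not here
trt_llm:
  build:
    base_model: qwen
    checkpoint_repository:
      repo: Qwen/Qwen2.5-3B-Instruct
      source: HF
    max_seq_len: 8192  # illustrative; size to your workload
```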
Deployment
Pushing the model to Baseten kicks off a multi-stage build and deployment process.
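From the directory containing your config.yaml:

```sh
# --publish creates a published deployment, which Engine Builder deployments use
truss push --publish
```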
Upon deployment, check your terminal logs or Baseten account to find the URL for the model server.
Inference
This model is OpenAI compatible and can be called using the OpenAI client.
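A minimal sketch, assuming the `BASETEN_API_KEY` environment variable from setup; the base URL below is a placeholder pattern, so copy the exact model server URL from your Baseten dashboard.

```python
import os

from openai import OpenAI

# Point the OpenAI client at your Baseten model server.
# Replace {model_id} with the ID shown in your Baseten dashboard.
client = OpenAI(
    api_key=os.environ["BASETEN_API_KEY"],
    base_url="https://model-{model_id}.api.baseten.co/environments/production/sync/v1",
)

response = client.chat.completions.create(
    model="qwen2.5-3b",  # informational; the deployment determines the model served
    messages=[{"role": "user", "content": "What is TensorRT-LLM?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```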
That’s it! You have successfully deployed and called an LLM optimized with the TensorRT-LLM Engine Builder. Check the model library for more examples and the engine builder reference for details on each config option.