The config.yaml file defines how your model runs on Baseten: its dependencies,
compute resources, secrets, and runtime behavior. You specify what your model
needs; Baseten handles the infrastructure.
Every Truss includes a config.yaml in its root directory. Configuration is
optional; every value has a sensible default.
Common configuration tasks include:
- Allocate GPU and memory: compute resources for your instance.
- Declare environment variables: environment variables for your model.
- Configure concurrency: parallel request handling.
- Use a custom Docker image: deploy pre-built inference servers.
YAML syntax
If you’re new to YAML, here’s a quick primer.
The default config uses
[] for empty lists and {} for empty dictionaries.
When adding values, the syntax changes to indented lines:
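For instance, a sketch with illustrative keys and values:

```yaml
# Empty defaults
requirements: []
secrets: {}

# The same keys once values are added: indented lines replace the brackets
requirements:
  - torch==2.3.1
secrets:
  hf_access_token: null
```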
Example
The following example shows a config file for a GPU-accelerated text generation model:
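A sketch of what such a config.yaml might look like; the model, pinned versions, and resource sizes are illustrative, not prescriptive:

```yaml
model_name: text-generation-model
description: GPU-accelerated text generation
python_version: py311
requirements:
  - torch==2.3.1
  - transformers==4.43.0
  - accelerate==0.31.0
resources:
  accelerator: A10G
  cpu: "4"
  memory: 16Gi
secrets:
  hf_access_token: null
runtime:
  predict_concurrency: 1
```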
Reference
The name of your model.
This is displayed in the model details page in the Baseten UI.
A description of your model.
The name of the class that defines your Truss model.
This class must implement at least a
predict method.
The folder containing your model class.
The folder for data files in your Truss. Access it from your model code in model/model.py.
The folder for custom packages in your Truss. Place your own code here to reference in model.py. Packages in this folder can be imported in model/model.py like any other local module.
Use external_package_dirs to access custom packages located outside your Truss.
This lets multiple Trusses share the same package. Specify the path in your config.yaml, then import the package in your model.py. For example, for a project structure where shared_utils/ sits outside the Truss, the config.yaml entry looks like the following:
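A minimal sketch of the config.yaml side; the ../shared_utils/ path is illustrative and should point at wherever the shared package lives relative to your Truss:

```yaml
external_package_dirs:
  - ../shared_utils/
```

In model/model.py, the package can then be imported like any module on the Python path (for example, `import shared_utils`).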
Key-value pairs exposed to the environment that the model executes in.
Many Python libraries can be customized with environment variables.
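For example, a sketch setting a couple of illustrative variables:

```yaml
environment_variables:
  HF_HUB_ENABLE_HF_TRANSFER: "1"
  TOKENIZERS_PARALLELISM: "false"
```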
A flexible field for additional metadata.
The entire config file is available to your model at runtime. Reserved keys that Baseten interprets:
example_model_input: Sample input that populates the Baseten playground.
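A sketch combining the reserved example_model_input key with an arbitrary custom key; the custom key is an illustration, since any YAML is accepted here:

```yaml
model_metadata:
  example_model_input:
    prompt: "What is the capital of France?"
  pipeline: text-generation  # custom key, available to your model at runtime
```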
Path to a dependency file. Supports
requirements.txt, pyproject.toml, and uv.lock.
Truss detects the format by filename. Pin versions for reproducibility. When set to a pyproject.toml, Truss installs packages from [project.dependencies]. When set to a uv.lock, a sibling pyproject.toml must exist in the same directory.
A list of Python dependencies in pip requirements file format. Mutually exclusive with requirements_file; only one can be specified. For example, to install pinned versions of the dependencies, use the following:
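The packages and pins below are illustrative:

```yaml
requirements:
  - torch==2.3.1
  - transformers==4.43.0
  - numpy==1.26.4
```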
System packages that you would typically install using apt on a Debian operating system.
The Python version to use. Supported versions: py39, py310, py311, py312, py313, py314.
Declare secrets your model needs at runtime, such as API keys or access tokens.
Store the actual values in your organization settings. For more information, see Secrets.
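A minimal sketch; the secret names are illustrative, and the values stay as placeholders in config.yaml with the real values set in your workspace settings:

```yaml
secrets:
  hf_access_token: null
  openai_api_key: null
```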
The path to a file containing example inputs for your model.
If true, changes to your model code are automatically reloaded without restarting the server. Useful for development.
Whether to apply library patches for improved compatibility.
resources
The resources section specifies the compute resources that your model needs, including CPU, memory, and GPU resources.
You can configure resources in two ways:
Option 1: Specify individual resource fields
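A sketch with illustrative sizes:

```yaml
resources:
  cpu: "4"
  memory: 16Gi
  accelerator: A10G
```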
Option 2: Specify an instance type
instance_type lets you select an exact SKU from the instance type reference. When instance_type is specified, other resource fields are ignored.
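For example:

```yaml
resources:
  instance_type: L4:4x16
```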
CPU resources needed, expressed as either a raw number or “millicpus”.
For example,
1000m and 1 are equivalent.
Fractional CPU amounts can be requested using millicpus.
For example, 500m is half of a CPU core.
CPU RAM needed, expressed as a number with units.
Units include “Gi” (Gibibytes), “G” (Gigabytes), “Mi” (Mebibytes), and “M” (Megabytes).
For example,
1Gi and 1024Mi are equivalent. Gi in resources.memory refers to Gibibytes, which are slightly larger than Gigabytes.
The GPU type for your instance. Available GPUs:
- T4
- L4
- L40S
- A10G
- V100
- A100
- A100_40GB
- H100
- H100_40GB (fractional GPU)
- H200
- B200
For more information, see how to Manage resources.
The full SKU name for the instance type. When specified, the cpu, memory, and accelerator fields are ignored. Use this field to select an exact instance type from the instance type reference. The format is <GPU_TYPE>:<vCPU>x<MEMORY> for GPU instances or CPU:<vCPU>x<MEMORY> for CPU-only instances. Examples:
- L4:4x16: L4 GPU with 4 vCPUs and 16 GiB RAM.
- H100:8x80: H100 GPU with 8 vCPUs and 80 GiB RAM (the exact specs vary by GPU type).
- CPU:4x16: CPU-only instance with 4 vCPUs and 16 GiB RAM.
The number of nodes for multi-node deployments. Each node gets the specified resources.
runtime
Runtime settings for your model instance. For example, to configure a high-throughput inference server with concurrency and health checks, use the following:
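The values below are illustrative, and the nesting of startup_threshold_seconds under health_checks is an assumption based on the fields documented in this section:

```yaml
runtime:
  predict_concurrency: 32
  health_checks:
    startup_threshold_seconds: 600
```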
The number of concurrent requests that can run in your model's predict method. Defaults to 1, meaning predict runs one request at a time. Increase this if your model supports parallelism. See Autoscaling for more detail.
The timeout in seconds for streaming read operations.
If true, enables trace data export with built-in OTEL instrumentation. By default, data is collected internally by Baseten for troubleshooting. You can also export to your own systems. See the tracing guide. May add performance overhead.
If true, sets the Truss server log level to
DEBUG instead of INFO.
The transport protocol for your model. Supports http (default), websocket, and grpc.
Custom health check configuration for your deployments. For details, see health check configuration.
How long the startup phase runs before marking the replica as unhealthy. During startup, readiness and liveness probes don’t run. Values must be between
10 and 3000 seconds. Defaults to 30 minutes (1800 seconds). See health checks for details.
How long health checks must continuously fail before Baseten stops traffic to the replica. Defaults to 30 minutes (1800 seconds).
How long health checks must continuously fail before Baseten restarts the replica. Defaults to 30 minutes (1800 seconds).
How long to wait before running health checks. Deprecated. Use startup_threshold_seconds instead.
base_image
Use base_image to deploy a custom Docker image. This is useful for running scripts at build time or installing complex dependencies.
For more information, see Deploy custom Docker images.
For example, to use the vLLM Docker image as your base, use the following:
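A sketch; the python_executable_path key name is assumed to match the field described below, and you should pin a specific tag or digest rather than :latest, per the note that follows:

```yaml
base_image:
  image: vllm/vllm-openai  # pin a specific tag or digest in practice
  python_executable_path: /usr/bin/python
```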
The path to the Docker image, for example:
- vllm/vllm-openai
- lmsysorg/sglang
- nvcr.io/nvidia/nemo:23.03
When using image tags like
:latest, Baseten uses a cached copy and may not reflect updates to the image. To pull a specific version, use image digests like your-image@sha256:abc123....
A path to the Python executable on the image, for example /usr/bin/python.
Authentication configuration for a private Docker registry. For more information, see Private Docker registries.
The authentication method for the private registry. Supported values:
- GCP_SERVICE_ACCOUNT_JSON: authenticate with a GCP service account. Add your service account JSON blob as a Truss secret.
- AWS_IAM: authenticate with an AWS IAM service account. Add aws_access_key_id and aws_secret_access_key to your Baseten secrets.
- AWS_OIDC: authenticate using AWS OIDC federation. Requires aws_oidc_role_arn and aws_oidc_region.
- GCP_OIDC: authenticate using GCP Workload Identity Federation. Requires gcp_oidc_service_account and gcp_oidc_workload_id_provider.
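For example, a sketch using the GCP service account method; the docker_auth, auth_method, secret_name, and registry key names are assumptions based on the fields described below, so confirm them against the Private Docker registries guide:

```yaml
base_image:
  image: us-east4-docker.pkg.dev/my-project/my-repo/my-image:1.0.0
  docker_auth:
    auth_method: GCP_SERVICE_ACCOUNT_JSON
    secret_name: gcp-service-account-json  # Truss secret holding the JSON blob
    registry: us-east4-docker.pkg.dev
```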
The Truss secret that stores the credential for authentication. Required for GCP_SERVICE_ACCOUNT_JSON. Ensure this secret is added to the secrets section.
The registry to authenticate to, for example us-east4-docker.pkg.dev.
The secret name for the AWS access key ID. Only used with the AWS_IAM auth method.
The secret name for the AWS secret access key. Only used with the AWS_IAM auth method.
docker_server
Use docker_server to deploy a custom Docker image that has its own HTTP server, without writing a Model class. This is useful for deploying inference servers like vLLM or SGLang that provide their own endpoints.
See Deploy custom Docker images for usage details.
For example, to deploy vLLM serving Qwen 2.5 3B, use the following:
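A sketch of such a deployment; the ports, endpoints, and resources are illustrative, and the field names mirror the descriptions below:

```yaml
base_image:
  image: vllm/vllm-openai  # pin a specific tag or digest in practice
docker_server:
  start_command: vllm serve Qwen/Qwen2.5-3B-Instruct --port 8000
  server_port: 8000
  predict_endpoint: /v1/chat/completions
  readiness_endpoint: /health
  liveness_endpoint: /health
resources:
  accelerator: L4
```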
The command to start the server. Required when
no_build is not set or is false. When no_build is true, start_command is optional; if omitted, the image's original ENTRYPOINT runs.
The port where the server runs. Port 8080 is reserved by Baseten's internal reverse proxy and cannot be used.
The endpoint for inference requests. This is mapped to Baseten’s
/predict route.
The endpoint for readiness probes. Determines when the container can accept traffic.
The endpoint for liveness probes. Determines if the container needs to be restarted.
The Linux UID to run the server process as inside the container. Use this when your base image expects a specific non-root user (for example, NVIDIA NIM containers). The specified UID must already exist in the base image. Values 0 (root) and 60000 (platform default) are not allowed. Baseten automatically sets ownership of /app, /workspace, the packages directory, and $HOME to this UID. If your server writes to other directories, ensure they are writable by this UID in your base image or via build_commands.
Skip the build step and deploy the base image as-is. Baseten copies the image to its container registry without running docker build or modifying the image in any way. Only available for custom server deployments that use docker_server. See No-build deployment for usage details.
When no_build is true:
- start_command is optional. If omitted, the image's original ENTRYPOINT runs.
- Environment variables and secrets are available.
- Development mode is not supported. Deploy with truss push (published deployments are the default).
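A sketch of a no-build deployment; the image and endpoints are illustrative, and placing no_build under docker_server is an assumption based on this reference:

```yaml
base_image:
  image: your-registry.example.com/your-prebuilt-server:1.2.3
docker_server:
  no_build: true  # deploy the image as-is; the original ENTRYPOINT starts the server
  server_port: 8000
  predict_endpoint: /v1/chat/completions
  readiness_endpoint: /health
  liveness_endpoint: /health
```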
The /app directory is reserved by Baseten. By default, /app, /workspace, and /tmp are writable in the container. If you need other directories to be writable, use run_as_user_id or build_commands to set permissions.
external_data
Use external_data to download remote files into your image at build time. This reduces cold-start time by making data available without downloading it at runtime. Each entry specifies a URL to fetch and a path relative to the data directory where the file is stored.
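A sketch; the URL and filename are illustrative, and local_data_path as the key for the destination path is an assumption based on the fields described below:

```yaml
external_data:
  - url: https://example.com/my-data.tar.gz
    local_data_path: my-data.tar.gz  # stored at /app/data/my-data.tar.gz
```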
The URL to download data from.
Path relative to the data directory where the downloaded file is stored. For example,
my-data.tar.gz is stored at /app/data/my-data.tar.gz.
An optional name for the download entry.
The download backend to use.
build_commands
A list of shell commands to run during Docker build. These commands execute after system packages and Python requirements are installed. Use them for any setup that can't be handled by requirements or system_packages alone. For example, to clone a GitHub repository into the container, use the following:
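The repository below is illustrative:

```yaml
build_commands:
  - git clone https://github.com/your-org/your-repo.git /app/your-repo
  - pip install -r /app/your-repo/requirements.txt
```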
You can also combine build_commands with docker_server to deploy third-party inference servers, for example installing Ollama at build time and running it as a Docker server. For more information, see Build commands.
build
The build section handles secret access during Docker builds.
Other build-time configuration options are:
- build_commands: shell commands to run during build.
- requirements: Python packages to install.
- system_packages: apt packages to install.
- base_image: custom Docker base image.
Grants access to secrets during the build. Provide a mapping between a secret and a path on the image. You can then access the secret in commands specified in build_commands by running cat on the file.
Under the hood, this option mounts your secret as a build secret. The value of your secret will be secure and will not be exposed in your Docker history or logs.
For example, to install a pip package from a private GitHub repository, use the following:
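A sketch; the secret_to_path_mapping key name is an assumption to confirm against the Build commands guide, and the repository and secret names are illustrative:

```yaml
build:
  secret_to_path_mapping:
    github_token: /run/secrets/github_token  # mounted as a build secret at this path
build_commands:
  - pip install "git+https://$(cat /run/secrets/github_token)@github.com/your-org/private-repo.git"
```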
weights Preview
Use weights to configure Baseten Delivery Network (BDN) for model weight delivery with multi-tier caching. This is the recommended approach for optimizing cold starts.
weights replaces the deprecated model_cache configuration. Use truss migrate to automatically convert your configuration.
URI specifying where to fetch weights from. Supported schemes:
- hf://: Hugging Face Hub, for example hf://meta-llama/Llama-3.1-8B@main
- s3://: AWS S3, for example s3://my-bucket/models/weights
- gs://: Google Cloud Storage, for example gs://my-bucket/models/weights
- r2://: Cloudflare R2, for example r2://account_id.bucket/path
Absolute path where weights will be mounted in your container. Must start with
/.
Name of a Baseten secret containing credentials for private weight sources.
Authentication configuration for accessing private weight sources. Required for OIDC-based authentication. Supported auth_method values:
- CUSTOM_SECRET: use a Baseten secret (specify auth_secret_name).
- AWS_OIDC: use AWS OIDC federation (requires aws_oidc_role_arn and aws_oidc_region).
- GCP_OIDC: use GCP Workload Identity Federation (requires gcp_oidc_service_account and gcp_oidc_workload_id_provider).
File patterns to include. Uses
fnmatch-style wildcards. Patterns like *.safetensors only match at the root level; use **/*.safetensors for recursive matching across subdirectories.
File patterns to exclude. Uses fnmatch-style wildcards. Patterns like *.bin only match at the root level; use **/*.bin for recursive matching across subdirectories.
model_cache Deprecated
Use model_cache to bundle model weights into your image at build time, reducing cold start latency.
For example, to cache Llama 2 7B weights from Hugging Face, use the following:
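The repo below matches the Llama 2 example; the patterns and volume folder name are illustrative:

```yaml
model_cache:
  - repo_id: meta-llama/Llama-2-7b-chat-hf
    revision: main
    use_volume: true
    volume_folder: llama-2-7b  # available under /app/model_cache/llama-2-7b at runtime
    allow_patterns:
      - "*.safetensors"
      - "*.json"
```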
Despite the name
model_cache, there are multiple backends supported, not just Hugging Face.
You can also cache weights stored on GCS, S3, or Azure.
The source path for your model weights. For a Hugging Face repo, this is the repo ID (for example, meta-llama/Llama-2-7b-chat-hf). For buckets like GCS or S3, use the bucket path.
The source kind for the model cache.
Supported values:
hf (Hugging Face), gcs, s3, azure.
The revision of your Hugging Face repo. Required when use_volume is true for Hugging Face repos.
If true, caches model artifacts outside the container image. Recommended: true.
The location of the mounted folder. Required when
use_volume is true.
For example, volume_folder: myrepo makes the model available under /app/model_cache/myrepo at runtime.
File patterns to include in the cache. Uses Unix shell-style wildcards.
By default, all paths are included.
File patterns to ignore, streamlining the caching process. Use Unix shell-style wildcards. Example:
["*.onnx", "Readme.md"]. By default, nothing is ignored.The secret name to use for runtime authentication, for example when accessing private Hugging Face repos.
trt_llm
Configure TensorRT-LLM for optimized LLM inference on Baseten. TRT-LLM supports two inference stacks:
- v1: Best for dense models, small models, and embedding models. Supports lookahead speculative decoding and LoRA adapters.
- v2: Best for MoE models (Qwen3-MoE, DeepSeek, Kimi) and multi-node setups.
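For example, a minimal v1 decoder build might look like the following sketch; the inference_stack key name is an assumption, checkpoint_repository, quantization_type, and tensor_parallel_count follow the fields documented below, and the model and values are illustrative:

```yaml
trt_llm:
  inference_stack: v1  # assumed key name; see the reference below
  build:
    checkpoint_repository:
      source: HF
      repo: meta-llama/Llama-3.1-8B-Instruct
      revision: main
    quantization_type: fp8
    tensor_parallel_count: 1
resources:
  accelerator: H100
```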
The inference stack version to use. Supported values:
- v1: Use for dense models, small models, and embedding/reranking models. Supports lookahead speculative decoding and LoRA adapters.
- v2: Use for MoE models and multi-node setups. The v2 runtime manages build parameters automatically; only checkpoint_repository, quantization_type, and num_builder_gpus can be set under build.
build
Build-time configuration for TRT-LLM engine compilation.
The model architecture type. Supported values:
- decoder: For generative causal LLMs (Llama, Qwen, Mistral, DeepSeek). Auto-detects architecture from the checkpoint.
- encoder: For causal embedding models. Optimized for throughput with models like Qwen3-8B for embeddings.
- encoder_bert: For BERT-based models (classification, reranking, embeddings). Optimized for throughput and cold-start latency of models under 4B parameters.
The model checkpoint to compile. See checkpoint_repository for sub-fields.
The quantization method for the model weights. Use
no_quant for fp16/bf16 (uses the precision from the model's config.json). Supported values:
- no_quant: No quantization (fp16 or bf16).
- fp8: FP8 weights with 16-bit KV cache.
- fp8_kv: FP8 weights with FP8 KV cache. Faster attention with FP8 context FMHA. Not compatible with models that use bias=True (for example, Qwen 2.5).
- fp4: FP4 weights with 16-bit KV cache. Requires B200 or newer GPUs.
- fp4_kv: FP4 weights with FP8 KV cache. Requires B200 or newer GPUs.
- fp4_mlp_only: FP4 quantization applied only to MLP layers, with 16-bit KV cache. Requires B200 or newer GPUs.
Number of GPUs for tensor parallelism. Must equal the number of GPUs in your
resources.accelerator setting for v1.
Maximum sequence length the engine supports. Automatically inferred from the model checkpoint when not set. For encoder models, this is inferred from max_position_embeddings in the model's config.
Maximum number of requests batched together in one forward pass. Range: 1 to 2048.
Maximum number of tokens batched together in one forward pass. For encoder models and generative models without chunked prefill, this limits the max context length. Range: 65 to 1048576.
Number of GPUs to use during engine compilation. Set this higher than the deployment GPU count if quantization causes out-of-memory errors during the build step. If you run out of CPU memory, add more memory in the
resources section instead.
A mapping of LoRA adapter names to checkpoint repositories. Each key becomes the model name in OpenAI-compatible API requests. Only supported on the v1 inference stack.
LoRA configuration. See lora_configuration for sub-fields. Only supported on the v1 inference stack.
Speculative decoding configuration. See speculator for sub-fields. Only supported on the v1 inference stack.
Expert parallelism setting for MoE models. Set to
-1 to let the runtime decide. When set explicitly, must be a positive number less than or equal to tensor_parallel_count, and tensor_parallel_count should be divisible by this value for optimal performance.
checkpoint_repository
The model checkpoint to compile. Specifies the source, repository path, and optional credentials.
Where to fetch the checkpoint from. Supported values:
- HF: Hugging Face Hub.
- S3: AWS S3 bucket (for example, s3://my-bucket/path/to/checkpoint).
- GCS: Google Cloud Storage bucket (for example, gcs://my-bucket/path/to/checkpoint).
- AZURE: Azure Blob Storage.
- REMOTE_URL: HTTP URL to a tar.gzip archive (for example, a presigned URL).
- BASETEN_TRAINING: Deploy from a Baseten training job. Use the training job ID as repo and the run revision as revision.
The repository path. For
HF, this is the Hugging Face repo ID (for example, meta-llama/Llama-3.1-8B-Instruct). For S3/GCS/AZURE, this is the bucket path. The checkpoint must contain config.json and model files in safetensors format.
The revision or version of the checkpoint. For HF sources, this is the branch, tag, or commit hash. Required for BASETEN_TRAINING sources.
The name of the Baseten secret that stores the access credential. Must match a key in your organization's secret settings.
quantization_config
Calibration settings for quantized models. Only relevant when quantization_type is not no_quant.
Size of the calibration dataset. Must be a multiple of 64, between 64 and 16384. Increase for production runs (for example, 1536) or decrease for quick testing (for example, 256).
Hugging Face dataset to use for calibration. Uses the
train split and quantizes based on the text column.
Maximum sequence length for calibration samples. Must be a multiple of 64, between 64 and 16384.
runtime (v1)
Runtime configuration for the v1 inference stack.
Fraction of free GPU memory to allocate for the KV cache. Higher values allow more context but leave less room for other operations.
Bytes of host (CPU) memory to reserve for KV cache offloading. Set to a high value to enable KV cache offload to host memory when GPU memory is constrained.
Whether to process long contexts in chunks. Requires
paged_kv_cache and use_paged_context_fmha to be enabled in the build plugin configuration.
The batch scheduling strategy. Supported values:
- guaranteed_no_evict: Guarantees scheduling with the requested number of tokens. May queue requests if memory is insufficient. Recommended for most use cases.
- max_utilization: Schedules requests without checking available memory. May need to pause requests if memory fills up.
Default maximum number of tokens per request when not specified by the client.
The model name returned in OpenAI-compatible API responses. Only for generative (decoder) models.
Maximum number of tokens scheduled at once to the C++ engine. Only for generative (decoder) models.
Default API route for the model. Auto-detected from the model architecture for encoder models. Supported values:
- /v1/embeddings: For embedding models.
- /rerank: For reranking models.
- /predict: For sequence classification models.
runtime (v2)
Runtime configuration for the v2 inference stack.
Maximum sequence length. Range: 1 to 1048576.
Maximum number of requests batched together in one forward pass. Range: 1 to 2048.
Maximum number of tokens batched together in one forward pass. Range: 65 to 131072.
Number of GPUs for tensor parallelism.
Whether to enable chunked prefill for generative (decoder) models.
The model name returned in OpenAI-compatible API responses. Only for generative (decoder) models.
speculator
Configure speculative decoding to speed up inference by predicting multiple tokens per step. Only supported on the v1 inference stack.
Speculative decoding works best at lower batch sizes (under 64). For high-throughput use cases, tune concurrency settings for more aggressive autoscaling instead.
The speculative decoding strategy. Supported values:
LOOKAHEAD_DECODING: N-gram based speculation built into the runtime. Recommended for most use cases, especially code editing workloads where n-gram patterns are common.
Lookahead window size for the
LOOKAHEAD_DECODING mode. Required when using lookahead decoding. Recommended values: 5 to 8.
N-gram size for the LOOKAHEAD_DECODING mode. Required when using lookahead decoding. Recommended values: 3 to 5.
Verification set size for the LOOKAHEAD_DECODING mode. Required when using lookahead decoding. Recommended values: 3 to 5.
Maximum number of speculative tokens per step. Auto-calculated from the lookahead parameters when using LOOKAHEAD_DECODING. Maximum: 2048.
Enable the Baseten-optimized lookahead algorithm. Requires speculative_decoding_mode to be LOOKAHEAD_DECODING. When enabled with (window_size, 1, 1) settings (for example, (8, 1, 1) or (32, 1, 1)), enables dynamic speculation.
lora_configuration
LoRA adapter settings for the v1 inference stack. Use with lora_adapters to serve multiple fine-tuned models from a single deployment.
Maximum LoRA rank across all adapters.
List of model modules to apply LoRA to.
training_checkpoints
Configuration for deploying models from training checkpoints. For example, to deploy a model using checkpoints from a training job, use the following:
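The key names below (download_folder, artifact_references, training_job_id, paths) mirror the field descriptions that follow but should be treated as assumptions, and the job ID is a placeholder:

```yaml
training_checkpoints:
  download_folder: /tmp/training_checkpoints  # assumed key name
  artifact_references:
    - training_job_id: <your-training-job-id>
      paths:
        - "checkpoint-*"
```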
The folder to download the checkpoints to.
A list of artifact references to download.
The training job ID that the artifact reference belongs to.
The paths of the files to download, which can contain
* or ? wildcards.