Once your `TrainingJob` has produced model checkpoints, you can deploy them as fully operational model endpoints.
This feature works with HuggingFace-compatible LLMs, letting you deploy fine-tuned language models directly from your training checkpoints with a single command.
To deploy checkpoints, first ensure you have a `TrainingJob` running with a `checkpointing_config` enabled.
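For reference, here is a minimal sketch of what enabling checkpointing can look like in a `truss_train` config. The module layout and field names (`Runtime`, `CheckpointingConfig`, `checkpointing_config`) are assumptions based on the SDK and may differ between truss versions, so treat this as illustrative rather than canonical:

```python
from truss_train import definitions

# Illustrative only: enable checkpointing on the training job's runtime.
# The exact attachment point and field names are assumptions; check the
# truss_train SDK in your installed truss version for the real API.
runtime = definitions.Runtime(
    start_commands=["/bin/sh -c './run.sh'"],
    checkpointing_config=definitions.CheckpointingConfig(
        enabled=True,
        # checkpoint_path="/my/checkpoint/dir",  # optional; defaults to /tmp/training_checkpoints
    ),
)
```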
During training, save your checkpoints to the directory specified by the `$BT_CHECKPOINT_DIR` environment variable.
The contents of this directory are uploaded to Baseten’s storage and made immediately available for deployment.
(You can optionally specify a `checkpoint_path` in your `checkpointing_config` if you prefer to write to a specific directory; the default location is `/tmp/training_checkpoints`.)
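Inside your training code, the simplest pattern is to read the environment variable and point your checkpoint writer at it. Below is a minimal sketch using HuggingFace `TrainingArguments`; the save cadence is illustrative, and the fallback to the default path is just a convenience for local runs:

```python
import os

from transformers import TrainingArguments

# BT_CHECKPOINT_DIR is set inside the training job when checkpointing is enabled;
# fall back to the documented default location for local testing.
checkpoint_dir = os.environ.get("BT_CHECKPOINT_DIR", "/tmp/training_checkpoints")

# Anything your trainer saves under checkpoint_dir is uploaded to Baseten's storage.
args = TrainingArguments(
    output_dir=checkpoint_dir,
    save_strategy="steps",  # illustrative cadence; tune for your job
    save_steps=500,
)
```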
To deploy your checkpoint(s) as a Deployment, you can either:
- run `truss train deploy_checkpoints [--job-id <job_id>]` and follow the setup wizard, or
- define an instance of the `DeployCheckpointsConfig` class (helpful for small changes that aren't covered by the wizard; see the sketch after this list) and run `truss train deploy_checkpoints --config <path_to_config_file>`.
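If you take the config-file route, the file is Python that builds a `DeployCheckpointsConfig` instance. The sketch below only illustrates the shape; the constructor argument shown (`model_name`) is an assumption, and the wizard-generated config is the authoritative reference for the real fields:

```python
from truss_train import definitions

# Illustrative only: constructor arguments are assumptions, not the documented
# schema. Run the interactive wizard once and copy its generated config to see
# the exact fields supported by your truss version.
deploy_config = definitions.DeployCheckpointsConfig(
    model_name="my-fine-tuned-llm",
)
```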
Currently, the `deploy_checkpoints` command only supports LoRA and full fine-tune checkpoints from single-node LLM training jobs.

When `deploy_checkpoints` is run, `truss` constructs a deployment `config.yml` and stores it on disk in a temporary directory. If you'd like to preserve or modify the resulting deployment config, copy it into a permanent directory and customize it as needed. This file is the source of truth for the deployment and can be deployed independently via `truss push`. See deployments for more details.
After a successful deployment, your model is live on Baseten, where you can run inference requests and evaluate performance. See Calling Your Model for more details.
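As a quick smoke test after deploying, you can call the model's predict endpoint directly. The URL shape below follows Baseten's standard model endpoints, but the model id placeholder and the request body are assumptions; adapt the payload to whatever input format your deployed model expects:

```python
import os

import requests

# Assumptions: <your-model-id> comes from the Baseten UI or the `truss push` output,
# BASETEN_API_KEY is set in your environment, and the deployed model accepts a
# simple {"prompt": ...} body (adjust to your model's actual input schema).
model_id = "<your-model-id>"
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/environments/production/predict",
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    json={"prompt": "Summarize the plot of Hamlet in one sentence."},
)
resp.raise_for_status()
print(resp.json())
```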
To download the files you saved to the checkpointing directory, or to inspect the file structure, run `truss train get_checkpoint_urls [--job-id=<job_id>]` to get a JSON file of presigned URLs for the training job. Keep the following in mind:
- The presigned URLs expire 7 days after generation.
- These URLs are primarily intended for evaluation and testing purposes, not for long-term inference deployments.
- For production deployments, consider copying the checkpoint files to your Truss model directory and downloading them in the model's `load()` function.
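For evaluation, one straightforward way to use the presigned URLs is to download each file locally. The JSON key names in this sketch (`checkpoint_files`, `name`, `url`) are assumptions for illustration, so inspect the file returned by `get_checkpoint_urls` and adjust the parsing accordingly:

```python
import json
import pathlib

import requests

# Illustrative only: key names are assumptions, not the actual schema returned
# by `truss train get_checkpoint_urls`; inspect your JSON file and adapt.
manifest = json.loads(pathlib.Path("checkpoint_urls.json").read_text())

for entry in manifest.get("checkpoint_files", []):
    target = pathlib.Path("checkpoints") / entry["name"]
    target.parent.mkdir(parents=True, exist_ok=True)
    with requests.get(entry["url"], stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open(target, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```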
Complex and Custom Use Cases
- Custom Model Architectures
- Weights Sharded Across Nodes (contact Baseten for help implementing this)
For these more complex cases, you can still download the checkpoint files by running `truss train get_checkpoint_urls --job-id=<your-training-job-id>`. File names in the listing may contain wildcards, where `*` matches an arbitrary number of characters and `?` matches exactly one. Within the training job, each node writes its checkpoint files to `/tmp/training_checkpoints/rank-[node-rank]/[relative_file_name]`.
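To make the wildcard semantics and the rank layout concrete, here is a small illustration; the file name pattern is made up, and the rank directories it expands against are whatever exists on the node where you run it:

```python
import glob

# Inside the training job, each node writes under /tmp/training_checkpoints/rank-<node-rank>/.
# A listing entry containing "*" (any number of characters) or "?" (exactly one character)
# can be expanded against that layout with glob. The pattern below is a made-up example.
matches = glob.glob("/tmp/training_checkpoints/rank-*/model-0000?-of-*.safetensors")
for path in matches:
    print(path)
```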