Concepts
Understanding the conceptual framework of Baseten Training for effective model development.
Baseten Training is designed to provide a structured yet flexible way to manage your machine learning training workflows. To use it effectively, it helps to understand the main ideas behind its components and how they fit together. This isn’t an API reference, but rather a guide to thinking about how to organize and execute your training tasks.
Organizing Your Work with TrainingProjects
A `TrainingProject` is a lightweight organizational tool that helps you group related `TrainingJob`s together. While there are a few technical details to consider, your team can use `TrainingProject`s to facilitate collaboration and organization.
Running a TrainingJob
Once you have a `TrainingProject`, the actual work of training a model happens within a `TrainingJob`. Each `TrainingJob` represents a single, complete execution of your training script with a specific configuration.
- What it is: A `TrainingJob` is the fundamental unit of execution (see the configuration sketch after this list). It bundles together:
  - Your training code.
  - A base `image`.
  - The `compute` resources needed to run the job.
  - The `runtime` configuration, such as startup commands and environment variables.
- Why use it: Each job is a self-contained, reproducible experiment. If you want to try training your model with a different learning rate, more GPUs, or a slightly modified script, you can create new `TrainingJob`s while knowing that previous ones have been persisted on Baseten.
- Lifecycle: A job goes through various stages, from being created (`TRAINING_JOB_CREATED`), to resources being set up (`TRAINING_JOB_DEPLOYING`), to actively running your script (`TRAINING_JOB_RUNNING`), and finally to a terminal state like `TRAINING_JOB_COMPLETED`. More details on the job lifecycle can be found on the Lifecycle page.
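As a rough illustration, here is what a job definition can look like with the `truss train` Python SDK, including the `TrainingProject` grouping from the previous section. This is a minimal sketch: the import path and exact field names are assumptions, so check the SDK reference for the authoritative signatures.

```python
# A minimal sketch of a TrainingJob bundled into a TrainingProject.
# Import path and field names are assumptions; consult the truss train SDK reference.
from truss_train import definitions

runtime = definitions.Runtime(
    start_commands=["/bin/sh -c 'pip install -r requirements.txt && python train.py'"],
    environment_variables={"EPOCHS": "3"},  # plain environment variables
)

compute = definitions.Compute(
    node_count=1,  # single-node job; GPU/accelerator selection is also configured here
)

job = definitions.TrainingJob(
    image=definitions.Image(base_image="pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime"),
    compute=compute,
    runtime=runtime,
)

# The project is the lightweight grouping described above; each new experiment is a new job.
project = definitions.TrainingProject(name="llm-finetuning-experiments", job=job)
```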
Iterate Faster with the Training Cache
The training cache enables you to persist data between training jobs. This can significantly improve iteration speed by skipping expensive downloads and data transformations.
- How to use it: Set `enable_cache=True` in your `Runtime`.
  - Cache Directories: The cache will be mounted at `/root/.cache/huggingface` and at `$BT_RW_CACHE_DIR`.
  - Seeding Your Data: For multi-GPU training, make sure your data is seeded in the cache before launching multi-process training. You can do this by splitting your workflow into a separate data-loading script and training script (see the sketch after this list).
- Speedup: For a 400 GB Hugging Face dataset, you can expect to save nearly an hour of compute time on each job, since the data download and preparation have already been done.
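As an illustration of the seeding pattern, the sketch below shows a data-loading step that reuses the cache across jobs. It assumes `enable_cache=True` is set on the `Runtime` and uses the Hugging Face `datasets` library as an example; the dataset name and cache subdirectory are placeholders.

```python
import os
from pathlib import Path

from datasets import load_dataset, load_from_disk  # example only; any data pipeline works

# BT_RW_CACHE_DIR is mounted by Baseten when enable_cache=True is set on the Runtime.
cache_dir = Path(os.environ["BT_RW_CACHE_DIR"])
prepared = cache_dir / "prepared-dataset"  # hypothetical subdirectory chosen by this script

if prepared.exists():
    # A previous job already downloaded and transformed the data; skip the expensive work.
    dataset = load_from_disk(str(prepared))
else:
    dataset = load_dataset("imdb", split="train")  # placeholder dataset
    # ...expensive preprocessing would go here...
    dataset.save_to_disk(str(prepared))
```

Running a step like this as a single process before launching the multi-process training command avoids having every worker download and transform the same data.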
Taking Advantage of Automated Checkpointing
Training machine learning models can be lengthy and resource-intensive. Baseten’s automated `Checkpointing` provides seamless storage for checkpoints and a jumping-off point for inference and evaluation.
- What it is: Automated Checkpointing provides a seamless way to save model checkpoints to cloud storage.
- Why use it:
- Fault Tolerance: Resume from the last saved checkpoint if a job fails, saving time and compute.
- Experimentation: Use saved checkpoints as starting points for new training runs with different hyperparameters or for transfer learning.
- Model Evaluation: Deploy intermediate model versions to track progress.
To enable checkpointing, add a `CheckpointingConfig` to the `Runtime` and set `enabled` to `True`. Baseten will automatically export the `$BT_CHECKPOINT_DIR` environment variable in your job’s environment. Ensure your code is writing checkpoints to `$BT_CHECKPOINT_DIR`.
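On the training-script side, the only contract is to write checkpoints under `$BT_CHECKPOINT_DIR`. The snippet below is a minimal PyTorch-flavored sketch of that contract; the helper name and checkpoint layout are arbitrary choices for illustration.

```python
import os

import torch  # assuming a PyTorch training loop for illustration

# Exported by Baseten when checkpointing is enabled on the Runtime.
CHECKPOINT_DIR = os.environ["BT_CHECKPOINT_DIR"]


def save_checkpoint(model: torch.nn.Module, optimizer: torch.optim.Optimizer, step: int) -> None:
    """Write checkpoints under $BT_CHECKPOINT_DIR so Baseten can persist them to cloud storage."""
    path = os.path.join(CHECKPOINT_DIR, f"step-{step}")
    os.makedirs(path, exist_ok=True)
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict(), "step": step},
        os.path.join(path, "checkpoint.pt"),
    )
```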
Multinode Training
Baseten Training supports multinode training via InfiniBand. To deploy a multinode training job:
- Configure the `Compute` resource in your `TrainingJob` by setting `node_count` to the number of nodes you’d like to use (e.g. 2); see the sketch after this list.
- Make sure you’ve properly integrated with the Baseten-provided environment variables.
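For reference, requesting multiple nodes is a one-line change to the `Compute` configuration. As in the earlier sketch, the import path is an assumption; `node_count` is the setting described above.

```python
from truss_train import definitions  # import path assumed; see the SDK reference

# Two InfiniBand-connected nodes for a single training job; accelerators and other
# Compute settings are configured as usual.
multinode_compute = definitions.Compute(
    node_count=2,
)
```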
Securely Integrate with External Services with SecretReference
Successfully training a model often requires integrating with external tools and services. Baseten provides `SecretReference` for secure handling of secrets.
- How to use it: Store your secret (e.g., an API key for Weights & Biases) in your Baseten workspace under a specific name. In your job’s configuration (e.g., environment variables), refer to the secret by that name using `SecretReference` (see the sketch after this list). The actual secret value is never exposed in your code.
- How it works: Baseten injects the secret value at runtime under the environment variable name that you specify.
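A sketch of the pattern, assuming the same SDK shapes as in the earlier example; the secret name `wandb_api_key` is hypothetical and must already exist in your Baseten workspace.

```python
from truss_train import definitions  # import path assumed; see the SDK reference

runtime = definitions.Runtime(
    environment_variables={
        # Resolved by Baseten at runtime; the value never appears in your code or config.
        "WANDB_API_KEY": definitions.SecretReference(name="wandb_api_key"),
        # Non-secret values can be passed as plain strings alongside it.
        "WANDB_PROJECT": "baseten-training-demo",
    },
)
```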
Running Inference on Trained Models
The journey from training to a usable model in Baseten typically follows this path:
- A `TrainingJob` with checkpointing enabled produces one or more model artifacts.
- You run `truss train deploy_checkpoint` to deploy a model from your most recent training job. You can read more about this at Deploying Trained Models.
- Once deployed, your model will be available for inference via API (see the example call below). See more at Calling Your Model.
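Once the checkpoint is deployed, calling it looks like any other Baseten model call. The sketch below assumes the standard Baseten inference endpoint shape and an `Api-Key` authorization header; the model ID, environment, and request payload are placeholders, so check Calling Your Model for your deployment’s exact endpoint and input format.

```python
import os

import requests

model_id = "YOUR_MODEL_ID"  # placeholder; taken from the deployed model's page or API
url = f"https://model-{model_id}.api.baseten.co/environments/production/predict"

resp = requests.post(
    url,
    headers={"Authorization": f"Api-Key {os.environ['BASETEN_API_KEY']}"},
    json={"prompt": "Summarize what Baseten Training does."},  # payload depends on your model
)
print(resp.json())
```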