By the end of this page you will have a checkpoint stored in Baseten that you can list, download, or hand to an inference deployment. The base model throughout is Qwen/Qwen3-8B. Before you start, export two environment variables: BASETEN_API_KEY (a workspace key with org access to Loops) and TRAINERS_PROJECT_ID (the ID of the training project you’re targeting).
Prerequisites
- A Baseten workspace API key with org access to Loops, exported as BASETEN_API_KEY. See API keys.
- A training project ID, exported as TRAINERS_PROJECT_ID.
- Python 3.10+ and uv.
1. Install
The main client package is baseten-loops, on PyPI. The Tinker compatibility package, tinker-loops, ships separately and re-exports the public API under the tinker namespace, so existing import tinker scripts run unchanged. Add both to your project:
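With uv managing the project, installing both packages might look like this (a sketch; the package names come from this page, the command assumes a uv-managed project):

```shell
# Add the Loops client and the Tinker compatibility shim to the project
uv add baseten-loops tinker-loops
```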
2. Provision a trainer
A Loops session pairs a trainer server (forward, backward, and optimizer steps) with a sampling server (generates from current weights). Constructing a ServiceClient and calling create_lora_training_client provisions both in one shot and returns clients you can drive directly. Cold start typically takes about five minutes.
Save this as provision.py:
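A minimal sketch of provision.py, assuming the tinker compatibility package is installed and the client reads BASETEN_API_KEY from the environment. Only ServiceClient and create_lora_training_client are named on this page; the keyword argument and the printed attribute are assumptions to check against your installed version:

```python
# provision.py — provisions a paired trainer + sampling server (sketch)
import tinker

# ServiceClient is assumed to pick up BASETEN_API_KEY from the environment.
service_client = tinker.ServiceClient()

# Provisions both servers in one shot; cold start takes about five minutes.
training_client = service_client.create_lora_training_client(
    base_model="Qwen/Qwen3-8B",
)

# The SDK exposes the run's ID as trainer_server_id; note it for step 4.
print(training_client)
```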
Note the trainer_server_id; you’ll use it in step 4.
3. Run a training round trip
The smallest complete round trip is one forward pass, one backward pass, one optimizer step, and one weight save. The script below mirrors the canonical SFT example: it tokenizes a prompt-and-answer pair, masks the prompt positions from the loss, runs the round trip, and saves a named checkpoint. Save this as train.py:
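A sketch of train.py, assuming the Tinker-style API surface (types.Datum, ModelInput.from_ints, AdamParams, and the future-returning forward_backward/optim_step calls are assumptions; save_weights_and_get_sampling_client is named on this page):

```python
# train.py — one forward/backward/optim round trip plus a weight save (sketch)
import tinker
from tinker import types

service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="Qwen/Qwen3-8B",
)
tokenizer = training_client.get_tokenizer()

# Tokenize a prompt-and-answer pair.
prompt_tokens = tokenizer.encode("What is the capital of France?")
answer_tokens = tokenizer.encode(" Paris.")
tokens = prompt_tokens + answer_tokens

# Mask prompt positions from the loss: weight 0 on the prompt, 1 on the answer.
weights = [0] * len(prompt_tokens) + [1] * len(answer_tokens)

datum = types.Datum(
    model_input=types.ModelInput.from_ints(tokens[:-1]),
    loss_fn_inputs=dict(weights=weights[1:], target_tokens=tokens[1:]),
)

# One forward + backward pass, then one optimizer step.
training_client.forward_backward([datum], loss_fn="cross_entropy").result()
training_client.optim_step(types.AdamParams(learning_rate=1e-4)).result()

# Commit the weights as a named checkpoint and load them into the sampler.
sampling_client = training_client.save_weights_and_get_sampling_client(
    name="quickstart-checkpoint",
)
```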
Once save_weights_and_get_sampling_client returns, the weights are committed as a named checkpoint and the sampling server is loaded with the new version.
4. List checkpoints
Every save_weights_and_get_sampling_client call creates a checkpoint. List them with the SDK to get checkpoint IDs and metadata:
The RUN_ID here is the same value the SDK exposes as trainer_server_id. The response includes a list of checkpoint objects, each with an id, a name (the string you passed to save_weights_and_get_sampling_client), and a creation timestamp. Pass a checkpoint id to get_checkpoint_archive_url to retrieve paginated presigned URLs for the weight files.
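The returned list is ordinary JSON. A sketch with a hypothetical response payload (the field names id, name, and the timestamp are taken from the description above; the exact key spellings are assumptions):

```python
# A hypothetical checkpoint-list response, shaped as described above: each
# entry carries an id, the name passed to save_weights_and_get_sampling_client,
# and a creation timestamp.
response = {
    "checkpoints": [
        {"id": "chk_123", "name": "quickstart-checkpoint",
         "created_at": "2024-01-01T00:00:00Z"},
        {"id": "chk_456", "name": "second-round",
         "created_at": "2024-01-02T00:00:00Z"},
    ]
}

# Pick the newest checkpoint by creation time (ISO 8601 sorts lexicographically).
latest = max(response["checkpoints"], key=lambda c: c["created_at"])
print(latest["id"])  # → chk_456
```

Feed the chosen id to get_checkpoint_archive_url to fetch the weight files.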
Next steps
The Loops concepts page explains the paired-process model in detail: how sessions own trainer and sampling servers, how weight sync works, and how checkpoints land as unzipped folders of paginated presigned URLs rather than single archives. Reading it will make the resource IDs in this quickstart feel less arbitrary. If you’re migrating from Tinker, the Tinker compatibility page documents what carries over exactly (forward, backward, optim step, sampling, data types) and what behaves differently (checkpoint layout, authentication, cluster routing). The import tinker path used here already covers most cookbook recipes; that page names the three places where behavior has changed.
When you’re ready to call the HTTP API directly (for scripting deployments, fetching checkpoint files programmatically, or integrating Loops into a CI pipeline), the Loops API overview covers each route’s path, request body, response shape, and authentication scope in one place.