Loops is a Tinker-compatible training SDK for post-training large models at long sequence lengths. It lets you deploy dedicated training and sampling servers for any supported base model, then run your existing Tinker scripts with minimal changes. Baseten's Loops SDK also includes primitives for async RL, so you can train on long-horizon workloads without the pipeline bubbles that inflate wall-clock time and execution variance. You can then deploy sampler checkpoints directly to the Baseten Inference Stack.
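
Because Loops targets Tinker compatibility, a plain Tinker training loop is the natural starting point. The sketch below uses Tinker's public client API; the base model name, example text, and hyperparameters are illustrative assumptions, and pointing the client at a Loops deployment is configuration covered in the quickstart.

```python
import tinker
from tinker import types

# Assumption: the client resolves its endpoint and credentials from standard
# configuration (API key / base URL); see the Loops quickstart for specifics.
service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.1-8B",  # illustrative; any supported base model
)

# Build one toy supervised example; real workloads batch long sequences.
tokenizer = training_client.get_tokenizer()
tokens = tokenizer.encode("The capital of France is Paris.")
datum = types.Datum(
    model_input=types.ModelInput.from_ints(tokens=tokens[:-1]),
    loss_fn_inputs={
        "target_tokens": tokens[1:],            # next-token targets
        "weights": [1.0] * (len(tokens) - 1),   # per-token loss weights
    },
)

# One training step: forward_backward accumulates gradients on the training
# server, optim_step applies them. Both return futures you can await.
training_client.forward_backward([datum], loss_fn="cross_entropy").result()
training_client.optim_step(types.AdamParams(learning_rate=1e-4)).result()
```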

How Loops works

Loops provides API-driven training infrastructure by deploying training servers that execute the traditional forward and backward passes plus optimizer steps. RL inference is decoupled onto separate sampling servers, so compute-intensive sampling workloads can scale independently of training. The trainer and sampler stay synchronized through weight transfers, which you can await immediately to stay on-policy or defer to run bounded off-policy algorithms.

In Loops, you own your checkpoints: you can download them via presigned URLs or deploy them onto Baseten's Inference Stack via the UI, CLI, or API. If you're not sure Loops is the right path for your team, the Training overview compares Loops with truss train (the bring-your-own-container alternative) side by side.
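
The trainer-to-sampler handoff is easiest to see in code. The following is a hedged sketch using Tinker-style client calls, which Loops aims to run unchanged; the checkpoint name, base model, and prompt are illustrative, and the presigned-URL checkpoint download is omitted here since it is Loops-specific.

```python
import tinker
from tinker import types

service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.1-8B",  # illustrative
)

# Publish the trainer's current weights for sampling. Calling .result()
# right away blocks until the transfer lands, keeping rollouts on-policy;
# deferring it while more optimizer steps run yields bounded off-policy data.
transfer = training_client.save_weights_for_sampler(name="step-000100")  # name is illustrative
sampler_path = transfer.result().path

# Sample from the freshly transferred weights on the sampling server.
sampling_client = service_client.create_sampling_client(model_path=sampler_path)
tokenizer = training_client.get_tokenizer()
prompt = types.ModelInput.from_ints(tokens=tokenizer.encode("Write a haiku about GPUs:"))
samples = sampling_client.sample(
    prompt=prompt,
    sampling_params=types.SamplingParams(max_tokens=64, temperature=0.7),
    num_samples=4,
).result()
print(tokenizer.decode(samples.sequences[0].tokens))
```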

Where to go next

The Loops quickstart gets you to a running training session: by the end you’ll have a checkpoint you can list and query against a Hugging Face base model. The Loops concepts page tells you what actually moves between the trainer and sampling server and how Baseten keeps the two in step through a full training run. The Tinker compatibility page shows exactly which Tinker API calls work unchanged in Loops and which ones you need to swap out before running your first session.