By the end of this guide, you’ll have created a federated user for one of your downstream customers, minted an API key bound to that user, and called your Dedicated deployment through Baseten Frontier Gateway with the key. From here, you can configure additional rate and usage limits, set up billing webhooks, and explore the full federated lifecycle.
Prerequisites
- A Dedicated deployment of your model on Baseten.
- A Baseten workspace API key with management scope, exported as BASETEN_API_KEY.
- Completed Frontier Gateway onboarding with your Baseten team. The /v1/gateway/ endpoints used here return 403 to workspaces that aren’t onboarded.
Step 1: Create a federated user
A federated user is the resource you create per downstream customer. The user owns the customer’s customer_id, the model slugs they’re allowed to call, and the rate and usage limits enforced on every call. API keys are minted under the user in step 2.
Create a user with POST /v1/gateway/users. The request takes a customer_id you choose (a stable identifier from your own user system) and a non-empty list of model configurations. Each entry pairs a model slug with optional rate and usage limits.
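As a sketch, the request can look like the following. The customer_id and the model-slug pairing come from this guide; the models field name, the limit keys, and the my-model-slug value are illustrative assumptions, so check the API reference for the exact schema:

```shell
# Create a federated user for downstream customer "acme-42".
# The "models", "rate_limit", and "usage_limit" field names are
# illustrative assumptions; "my-model-slug" is a placeholder.
body='{
  "customer_id": "acme-42",
  "models": [
    {
      "model": "my-model-slug",
      "rate_limit": {"requests_per_minute": 60},
      "usage_limit": {"tokens_per_day": 1000000}
    }
  ]
}'

curl -s -X POST "https://api.baseten.co/v1/gateway/users" \
  -H "Authorization: Api-Key $BASETEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$body"
```

The api.baseten.co base URL and Api-Key authorization scheme assumed here match Baseten’s other management endpoints.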
The response includes the user’s id, which you’ll use as the path parameter when minting keys. Save it; you’ll need it in step 2.
Step 2: Mint an API key for the user
Issue a new API key under the federated user with POST /v1/gateway/users/{user_id}/api_keys. The key inherits the user’s full model set by default; rate and usage limits live on the user, not the key.
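A minimal sketch of the mint-key call, assuming the same base URL and authorization header as other Baseten management endpoints; user_abc123 is a placeholder for the id returned when you created the user:

```shell
# Mint an API key under the federated user created in step 1.
# "user_abc123" is a placeholder for the id from the create-user response.
user_id="user_abc123"

curl -s -X POST "https://api.baseten.co/v1/gateway/users/${user_id}/api_keys" \
  -H "Authorization: Api-Key $BASETEN_API_KEY"
```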
The response returns the full key once, along with its prefix (here, aBcDeFg). You’ll use the prefix, not the full key, when fetching or revoking the key later.
Step 3: Call your model through the gateway
Use the API key from step 2 to call your model. Frontier Gateway is OpenAI-compatible, so the OpenAI SDK works with the gateway base URL. Replace YOUR_API_KEY in the examples below with the value you saved from the mint-key response.
- Python: install the OpenAI SDK and make a chat completion request (for example, in a chat.py script).
- curl: send the request to the chat completions endpoint directly.
The gateway base URL is https://inference.baseten.co/v1 today. Once white-label routing is provisioned for your workspace, the base URL becomes the branded domain you configure with your Baseten team, and your downstream customers call your domain instead.
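For the curl path, a hedged sketch of an OpenAI-compatible chat completion call against the gateway base URL; YOUR_API_KEY and my-model-slug are placeholders for the federated key from step 2 and one of the model slugs configured on the user:

```shell
# Call the model through Frontier Gateway with the federated key.
# "YOUR_API_KEY" and "my-model-slug" are placeholders.
body='{
  "model": "my-model-slug",
  "messages": [
    {"role": "user", "content": "Hello from the gateway!"}
  ]
}'

curl -s "https://inference.baseten.co/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$body"
```

Because the gateway is OpenAI-compatible, the same request works from the OpenAI SDK by setting the client’s base_url to the gateway URL and its api_key to the federated key.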
Next steps
- Manage API keys: Walk the full federated lifecycle: upsert users, mint and revoke keys, and soft-delete users.
- Rate and usage limits: Tune per-user, per-model token and request thresholds.
- Billing webhooks: Stream signed per-request usage events into your billing pipeline.