Inference 📨
How to call your model
Run inference on deployed models
Once you’ve deployed your model, it’s time to use it! Every model on Baseten is served behind an API endpoint. To call a model, you need:
- The model’s ID.
- An API key for your Baseten account.
- JSON-serializable model input.
You can call a model using the:
- `/predict` endpoint for the production deployment, development deployment, or other published deployments.
- `/async_predict` endpoint for the production deployment, development deployment, or other published deployments.
- Truss CLI command `truss predict`.
- "Call model" button on the model dashboard within your Baseten workspace.
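Whichever method you use, the model input must be JSON-serializable. Here is a minimal sketch of the three pieces listed above; the model ID, API key, and field names like `"prompt"` are placeholders, since the expected input schema depends on how your model was written. Concrete request examples for each method follow in the sections below.

```python
import json

# Placeholder values -- substitute your own model ID and API key.
model_id = "abcd1234"        # hypothetical model ID from the model dashboard
api_key = "YOUR_API_KEY"     # API key from your Baseten workspace
model_input = {              # schema depends on your model's code
    "prompt": "What is the capital of France?",
}

# The input must serialize to JSON; this raises TypeError if it does not.
payload = json.dumps(model_input)
print(payload)
```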
Call by API endpoint
See the inference API reference for more details.
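As a sketch rather than an exact spec, a synchronous call with Python's `requests` library might look like the following. The URL pattern and `Api-Key` authorization header are assumptions based on common Baseten usage; confirm the authoritative endpoint format in the inference API reference.

```python
import requests

model_id = "abcd1234"        # placeholder model ID
api_key = "YOUR_API_KEY"     # placeholder Baseten API key

# Assumed URL pattern for the production deployment's /predict endpoint;
# check the inference API reference for the exact format.
url = f"https://model-{model_id}.api.baseten.co/production/predict"

resp = requests.post(
    url,
    headers={"Authorization": f"Api-Key {api_key}"},
    json={"prompt": "What is the capital of France?"},  # JSON-serializable model input
)
resp.raise_for_status()
print(resp.json())
```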
Call by async API endpoint
See the async inference API reference for API details and the async guide for more information about running async inference.
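A hedged sketch of an async request: `/async_predict` returns immediately, and results are typically delivered to a webhook you provide rather than in the response body. The request body fields shown here (`model_input`, `webhook_endpoint`) and the URL pattern are assumptions; see the async inference API reference for the actual schema.

```python
import requests

model_id = "abcd1234"        # placeholder model ID
api_key = "YOUR_API_KEY"     # placeholder Baseten API key

# Assumed URL pattern for the production deployment's /async_predict endpoint.
url = f"https://model-{model_id}.api.baseten.co/production/async_predict"

resp = requests.post(
    url,
    headers={"Authorization": f"Api-Key {api_key}"},
    json={
        # Field names are assumptions; confirm them in the async API reference.
        "model_input": {"prompt": "What is the capital of France?"},
        "webhook_endpoint": "https://example.com/webhooks/baseten",  # where results are delivered
    },
)
resp.raise_for_status()
print(resp.json())  # typically contains an ID for the queued request
```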
Call with Truss CLI
See the Truss CLI reference for more details.
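For example, assuming the Truss CLI is installed and supports the `-d`/`--data` flag described in its reference, a call from a Truss project directory might look like this (flags and output shape may differ by Truss version):

```sh
# Run from the directory containing your Truss; see the Truss CLI reference for exact flags.
truss predict -d '{"prompt": "What is the capital of France?"}'
```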