Deploy a model built with scikit-learn.
Baseten has great support for deploying and serving your scikit-learn models out of the box. Models built with scikit-learn can be deployed directly from in-memory objects.
All you need to do first is install the Baseten client and create an API key.
Baseten officially supports scikit-learn version 1.0.2 or higher. Especially if you're using an online notebook environment like Google Colab or a bundle of packages like Anaconda, ensure that the version you are using is supported. If it's not, use the --upgrade flag and pip will install the most recent version.
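For a typical pip-based environment, installation (or upgrade) looks like the following; the package name `baseten` matches the client imported in the snippets below:

```shell
# Install the Baseten client, upgrading any older pinned version
pip install --upgrade baseten

# Confirm the installed version meets the minimum (1.0.2+ for scikit-learn support)
pip show baseten
```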


Using this Jupyter notebook, you can build and deploy a scikit-learn model to your Baseten account.

Deploying a scikit-learn model

Deploying a scikit-learn model is as simple as:
import baseten

baseten_model = baseten.deploy(
    my_model,  # your in-memory scikit-learn model object
    model_name='My first model'
)
If you have already saved your model (e.g. pickled it), just load it back into memory, test that it still works, and deploy it as above.
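A minimal sketch of that load-and-check round trip, using Python's built-in pickle. The `TinyModel` class and the `model.pkl` path are stand-ins for illustration; substitute your own pickled estimator:

```python
import pickle

class TinyModel:
    """Stand-in for any picklable scikit-learn estimator."""
    def predict(self, rows):
        return [0 for _ in rows]

# Save the model to disk, then load it back into memory: the same
# round trip as reloading a model you pickled in an earlier session.
with open("model.pkl", "wb") as f:
    pickle.dump(TinyModel(), f)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

# Sanity-check the loaded model before deploying it
assert model.predict([[1.0, 2.0], [3.0, 4.0]]) == [0, 0]
```

Once the loaded model behaves as expected, pass it to `baseten.deploy` exactly as you would a freshly trained one.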

Example deployment

import baseten
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
iris = load_iris()
data_x = iris['data']
data_y = iris['target']
rfc = RandomForestClassifier()
rfc.fit(data_x, data_y)
# => RandomForestClassifier(n_estimators=100, ...)
baseten.login("*** INSERT API KEY ***")
baseten_model = baseten.deploy(
    rfc,
    model_name='Iris RFC'
)
INFO To build this model server locally execute `docker build -f rfc_iris/sklearn-server.Dockerfile rfc_iris -t iris_rfc`
INFO To run this model server locally execute `docker run --rm -p 8080:8080 iris_rfc`
INFO To use the Python shell locally execute `docker run --rm -it iris_rfc python3`
INFO Serializing Iris RFC scaffold.
INFO Making contact with Baseten 👋 👽
INFO 🚀 Uploading model to Baseten 🚀
Upload Progress: 100% |████████████████████████████████████████████████████████████████████████████████████████████████████| 43.5k/43.5k
INFO 🔮 Upload successful!🔮
INFO Successfully created version xxxxxxx for Iris RFC.
INFO Deploying model version.
INFO 🏁 The model is being deployed right now 🏁
INFO ----------------------------------------------------------------------------------------
INFO | Visit for deployment status |
INFO ----------------------------------------------------------------------------------------

Calling your model from the client

Once your model is deployed, you can call it via the client:
baseten_model.predict(data_x[0:5])  # => [0, 0, 0, 0, 0]