While the `deploy` function supports some model frameworks out of the box with no need for customization, you can use the custom model deployment flow to deploy a model written in any framework. This gives you complete control over the Python environment and execution of your model.

The custom model deployment flow is built around a `custom_model` folder. You can update the file `custom_model/model/model.py` to configure your custom model. The model class must provide:

- `load`, a method that will be called upon initialization of the model in the deployment environment.
- `predict`, a method which consumes deserialized JSON input from a web request or from a Baseten worklet. This is the integration point for the underlying model object. It must return data in a JSON-serializable format.
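For concreteness, here is a minimal sketch of `custom_model/model/model.py`. The use of `joblib` and the file name `model.joblib` are assumptions for illustration; any framework can back the `load` and `predict` methods.

```python
# custom_model/model/model.py -- a minimal sketch, not a prescribed template.
from typing import Any, Dict


class Model:
    def __init__(self, **kwargs) -> None:
        self._model = None

    def load(self) -> None:
        # Called once when the model is initialized in the
        # deployment environment: load weights/artifacts here.
        import joblib  # assumed dependency for this sketch

        self._model = joblib.load("model.joblib")

    def predict(self, request: Dict[str, Any]) -> Dict[str, Any]:
        # `request` is deserialized JSON; inputs arrive under the
        # "inputs" key (see below) and the return value must be
        # JSON-serializable.
        inputs = request["inputs"]
        predictions = self._model.predict(inputs)
        return {"predictions": predictions.tolist()}
```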
When you call `model.predict` in Python code, the input is wrapped as the value of the `inputs` key of a dictionary. So, the predict function above should expect input in that form, as in:
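The exact shape of the payload depends on your model; as an assumed example, a two-feature tabular model might receive:

```python
{"inputs": [[0.3, 1.2], [0.7, 0.9]]}
```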
Likewise, the model's output should be returned as the value of a `predictions` key in a dictionary, such as:
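Continuing the assumed example, a classifier's response could look like:

```python
{"predictions": [0, 1]}
```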
Data files can be bundled in `custom_model/data`. The `Path` (`pathlib.Path`) to the data directory is made available to the `Model` constructor as the parameter `data_dir`.
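A sketch of how a bundled file might be consumed via `data_dir`; the file name `weights.joblib` is an assumption for illustration:

```python
from pathlib import Path


class Model:
    def __init__(self, data_dir: Path = None, **kwargs) -> None:
        # `data_dir` points at the contents of custom_model/data.
        self._data_dir = data_dir
        self._model = None

    def load(self) -> None:
        import joblib  # assumed dependency, as above

        self._model = joblib.load(self._data_dir / "weights.joblib")
```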
Supporting Python files can be placed in `custom_model/model` and used in `model.py`.

Python requirements can be listed in `custom_model/config.yaml`. They should be in the pip requirements format, with one line of `requirements.txt` corresponding to one list item in the `requirements` key of the config.
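For example, the `requirements` key in `custom_model/config.yaml` might look like the following (the package pins are illustrative):

```yaml
requirements:
  - joblib==1.3.2
  - scikit-learn==1.3.0
```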
To deploy the model, load the truss with `truss.from_directory` and then deploy the truss using `baseten.deploy_truss`.
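A deployment sketch, assuming the `custom_model` folder sits in the working directory; the `model_name` keyword is an assumption about the API, not a confirmed signature:

```python
import baseten
import truss

# Load the packaged model from the custom_model folder...
my_truss = truss.from_directory("custom_model")

# ...and deploy it to Baseten.
baseten.deploy_truss(my_truss, model_name="my-custom-model")
```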
Alternatively, a custom model can be deployed directly through the Baseten API. This flow requires a `requirements.txt` file specifying the dependencies of your model, and a model class implementing the same `load` and `predict` methods described above.
Do not name the file containing your model class `model.py` or another name that conflicts with the Python namespace.

Deploy the model with the `deploy_custom` method of the Baseten API. You need to provide a name, the model class, the complete set of files supporting the model, and the requirements file.
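A usage sketch; the keyword argument names are assumptions inferred from the description above, not a confirmed signature:

```python
import baseten

# Hypothetical module containing the model class; deliberately
# not named model.py to avoid the namespace conflict noted above.
from my_custom_model import MyCustomModel

baseten.deploy_custom(
    model_name="my-custom-model",          # a name for the deployment
    model_class=MyCustomModel,             # the model class itself
    model_files=["my_custom_model.py"],    # all files supporting the model
    requirements_file="requirements.txt",  # the dependency list
)
```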