When your model isn't a pure object from a standard ML library, you can still deploy your model via our custom models API.
Deploying a custom model
Standard supported framework models work out of the box with no customization required. But when you need complete control over the Python environment and the execution of your model, custom models are the answer. To deploy a custom model, you'll need to provide:
A list of all files to be packaged with your model, including:
Serialized objects such as model binaries, embeddings, and datasets,
Python files defining your model and all supporting files,
Anything else that your model needs to run,
A requirements.txt file specifying the dependencies of your model.
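For example, a requirements.txt for a scikit-learn model might look like the fragment below (the packages and pinned versions are illustrative; list whatever your own model actually depends on, ideally pinned to the versions it was built with):

```text
# requirements.txt -- hypothetical example
scikit-learn==1.3.2
numpy==1.26.4
```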
Your model must be a Python class that implements two methods:
load, a method that will be called when the model is initialized in the deployment environment.
predict, which receives the raw JSON input of the predict call to the model. It must return data in a JSON-serializable format.
To deploy the model, call the deploy_custom method of the BaseTen API, providing a name, the model class, the complete set of files supporting the model, and the requirements file.
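The call might be wrapped as in the sketch below. The keyword argument names are assumptions inferred from the list of required inputs above, not the client's documented signature; check the BaseTen client reference for the exact parameters:

```python
def deploy(client, model_class):
    """Sketch of a deploy_custom call against a BaseTen client.

    The keyword names (model_name, model_class, model_files,
    requirements_file) are assumptions for illustration only.
    """
    return client.deploy_custom(
        model_name="iris-classifier",                 # display name for the model
        model_class=model_class,                      # class implementing load/predict
        model_files=["iris_model.py", "model.pkl"],   # everything to package
        requirements_file="requirements.txt",         # dependency list
    )
```

Note that model_files must include every file the model needs at runtime: the Python file defining the class, serialized binaries, and any supporting data.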