Private Hugging Face Model
Load a model that requires authentication with Hugging Face
In this example, we build a Truss that uses a model that requires Hugging Face authentication. The steps for loading a model from Hugging Face are:
- Create an access token on your Hugging Face account.
- Add the `hf_access_token` key to your `config.yaml` secrets and add its value to your Baseten account.
- Pass `use_auth_token` when creating the actual model.
Setting up the model
In this example, we use a private version of the BERT base model. The model is publicly available, but for the purposes of this example, we copied it into a private model repository at the path `baseten/docs-example-gated-model`.
First, as with other Hugging Face models, start by importing the `pipeline` function from the `transformers` library and defining the `Model` class.

An important step in loading a model that requires authentication is to have access to the secrets defined for this model. We pull these out of the keyword arguments in the `__init__` function.
Ensure that when you define the `pipeline`, you use the `use_auth_token` parameter and pass it the `hf_access_token` secret from your Baseten account.
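Putting these pieces together, the model file might look like the following sketch. The repository path comes from this example; the `fill-mask` task and the import placement inside `load` are assumptions for illustration (the actual example imports `pipeline` at the top of the file):

```python
class Model:
    def __init__(self, **kwargs):
        # Truss passes the secrets defined in config.yaml via keyword arguments
        self._secrets = kwargs["secrets"]
        self._model = None

    def load(self):
        # Imported here so the class can be inspected without transformers installed;
        # the example in the docs imports pipeline at the top of the file
        from transformers import pipeline

        # Authenticate to Hugging Face with the token stored in Baseten secrets
        self._model = pipeline(
            "fill-mask",
            model="baseten/docs-example-gated-model",
            use_auth_token=self._secrets["hf_access_token"],
        )

    def predict(self, model_input):
        return self._model(model_input)
```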
Setting up the config.yaml
The main things that need to be set up in the config are `requirements`, which must include Hugging Face `transformers`, and the secrets.

To make the `hf_access_token` available in the Truss, we need to include it in the config. Setting the value to `null` here means that the actual value will be supplied by the Baseten secrets manager.
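A minimal `config.yaml` fragment along these lines might look as follows. The model name and the `torch` requirement are assumptions (a `transformers` pipeline typically needs a backend framework); exact fields vary by Truss version:

```yaml
model_name: private-model
requirements:
  - transformers
  - torch
secrets:
  hf_access_token: null
```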
Deploying the model
An important note for deploying models with secrets: you must use the `--trusted` flag to give the model access to secrets stored on the remote secrets manager.
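Assuming the Truss lives in the current directory, the deploy command takes this shape (verify the flag against your installed Truss CLI version):

```
truss push --trusted
```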
After the model finishes deploying, you can invoke it with:
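For instance, since this example wraps a BERT model, a plausible fill-mask invocation with the Truss CLI is the following; the input format is an assumption, so adjust it to your model's interface:

```
truss predict -d '"It is a [MASK] day."'
```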