Pre-trained models
Deploy a pre-trained model to kickstart your ML-powered application.
Baseten provides a growing set of pre-trained models that solve common ML tasks. These models are a great way to kickstart your ML application and showcase Baseten's features and functionality: you can deploy a pre-trained model, along with an optional application template, to add ML to your application in minutes.

How to deploy

To deploy a pre-trained model, head to the models page and select "Deploy a model" then "Choose a pre-trained model". Follow the model selection and configuration dialogs to deploy your model.
(Screenshot: Deploying a pre-trained model)
If you select "create sample application," go to the applications page to see an application with relevant resources to support common use cases for the model.
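Once deployed, a model can be invoked over HTTPS. Below is a minimal sketch of assembling such a request in Python; the URL shape, auth header, and model ID are illustrative assumptions, so check your model's page for the exact endpoint and credentials.

```python
import json


def build_predict_request(model_id: str, api_key: str, inputs: dict):
    """Build the URL, headers, and JSON body for calling a deployed model.

    The endpoint shape and header scheme below are assumptions for
    illustration; the model page shows the actual invocation details.
    """
    # Assumed endpoint convention; confirm on your model's page.
    url = f"https://model-{model_id}.api.baseten.co/production/predict"
    headers = {
        "Authorization": f"Api-Key {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(inputs)


# Hypothetical model ID and key, for illustration only:
url, headers, body = build_predict_request(
    "abc123", "YOUR_API_KEY", {"text": "Hello"}
)
```

From here, the request can be sent with any HTTP client (for example, `requests.post(url, headers=headers, data=body)`).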

Supported models and frameworks

Pre-trained models can be applied to common ML tasks. From speech transcription to sentiment analysis, image classification to photo restoration, pre-trained models deliver powerful ML capabilities directly to your applications. Baseten currently offers 22 pre-trained models:
  • CLIP: connecting text and images (Custom): Classify images with zero-shot-like functionality.
  • CodeGen mono 2B (Custom): Generate Python code from natural language or code prompts. This version of the model was fine-tuned on Python code.
  • CodeGen multi 2B (Custom): Generate code from natural language or code prompts. This version of the model was fine-tuned on natural language and a broad set of programming languages.
  • Dall·E Mini (Custom): Generate novel images from a text prompt.
  • ELMo word representations (TensorFlow): Generate embeddings from a language model trained on the 1 Billion Word Benchmark.
  • English to French translation (Hugging Face: Transformer): Translate between English and French.
  • Extractive question answering (Hugging Face: Transformer): Extract an answer from a given text when provided a question.
  • Faster R-CNN Inception V2 (TensorFlow): Object detection model using Faster R-CNN with Inception V2.
  • GFP-GAN (Custom): Restore photos with GFP-GAN.
  • GPT-J (Custom): Generate text with GPT-J. This is an implementation of EleutherAI GPT-J-6B.
  • Image Segmentation (Custom): Identify classes of objects in an image.
  • Iris random forest classifier (scikit-learn): Predict Iris class with a random forest classifier.
  • Masked language modeling (Hugging Face: Transformer): Fill a masked token in sequences based on the context around it.
  • MNIST digit classifier (scikit-learn): Logistic regression model for classifying handwritten digits from the MNIST database.
  • ResNet50 V2 (TensorFlow): Image classification model using the ResNet V2 neural network architecture.
  • Sentiment analyzer (Hugging Face: Transformer): Analyze the sentiment of a text snippet.
  • Style transfer (Custom): Apply one image's style onto another.
  • Summarizer (Hugging Face: Transformer): Summarize a text into a shorter text.
  • Text generation (Hugging Face: Transformer): Generate language given a prompt.
  • Token classification (Hugging Face: Transformer): Classify tokens in strings.
  • Wav2vec 2.0 speech transcription (Custom): Transcribe audio files with wav2vec 2.0.
  • Zero-shot classification (Hugging Face: Transformer): Classify snippets of text into unseen categories.
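For a sense of what one of these models does under the hood, the Iris random forest classifier corresponds to the standard scikit-learn task sketched below. This is a local illustration only, not Baseten's exact model or training setup.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a random forest on the classic Iris dataset
# (150 samples, 4 features, 3 flower classes).
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predict the class of one flower from its four measurements
# (sepal length/width and petal length/width, in cm).
pred = clf.predict([[5.1, 3.5, 1.4, 0.2]])
```

Deploying the pre-trained version on Baseten gives you this classifier behind an API endpoint without training or hosting it yourself.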
If you'd like Baseten to offer a specific pre-trained model, or if you'd like to fine-tune any of these models on your own data, reach out to us.