Deploy Whisper V3 Fastest

Transcribe audio files at up to a 400x real-time factor: 1 hour of audio in under 9 seconds. This setup requires meaningful production traffic to be cost-effective, but at scale it's at least 80% cheaper than OpenAI. Get in touch with us and we'll work with you to deploy a transcription pipeline customized to your needs.

For quick deployments of Whisper suited to shorter audio files and lower traffic volumes, you can deploy Whisper V3 and Whisper V3 Turbo directly from the model library.

Example usage

import requests
import os

# Replace with the ID of your deployed model
model_id = ""
# Read your Baseten API key from an environment variable
baseten_api_key = os.environ["BASETEN_API_KEY"]

# Call model endpoint
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {baseten_api_key}"},
    json={
      "url": "https://www2.cs.uic.edu/~i101/SoundFiles/gettysburg10.wav",
    }
)

# Surface HTTP errors before reading the body
resp.raise_for_status()
print(resp.content.decode("utf-8"))
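To sanity-check the real-time factor on your own deployment, divide the audio duration by the measured wall-clock time of the request. A minimal sketch (pure arithmetic; the function name is illustrative, not part of any API):

```python
def real_time_factor(audio_seconds: float, transcription_seconds: float) -> float:
    """How many seconds of audio are transcribed per second of wall-clock time."""
    return audio_seconds / transcription_seconds

# At a 400x real-time factor, 1 hour of audio takes 3600 / 400 = 9 seconds.
print(real_time_factor(3600, 9))  # 400.0
```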

JSON Output

{
  "segments": [
    {
      "start": 0,
      "end": 9.8,
      "text": "Four score and seven years ago, our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal."
    }
  ],
  "language_code": "en"
}
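Each entry in `segments` carries start and end timestamps in seconds plus the transcribed text, so a full transcript is just the segment texts joined in order. A minimal sketch using the example response above (the response shape is assumed to match this sample):

```python
# Example response, copied from the JSON output above
response = {
    "segments": [
        {
            "start": 0,
            "end": 9.8,
            "text": "Four score and seven years ago, our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal.",
        }
    ],
    "language_code": "en",
}

# Join segment texts into a single transcript string
transcript = " ".join(seg["text"].strip() for seg in response["segments"])

print(response["language_code"])
print(transcript)
```

The timestamps also make it easy to emit subtitle formats such as SRT, since each segment already maps to a start/end window.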