Transcribe Audio
Use this endpoint to call the production environment of your model.
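For example, a minimal Python call to the production endpoint might look like the sketch below. The model ID, API key, and audio URL are placeholders, and the request body here uses only the url field described under Body.

```python
import os
import requests

model_id = "abcd1234"  # placeholder: your model's ID
api_key = os.environ["BASETEN_API_KEY"]  # your Baseten API key

# POST to the model's production environment with the Api-Key auth header.
resp = requests.post(
    f"https://model-{model_id}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {api_key}"},
    json={"url": "https://example.com/audio.wav"},  # placeholder audio URL
)
resp.raise_for_status()
print(resp.json())
```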
If you deploy this model as a chain, you can call it in the following way:
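As a sketch, assuming a deployed chain and Baseten's standard chain endpoint (run_remote), the call might look like this. The chain ID is a placeholder, and the expected payload keys depend on your chain's entrypoint.

```python
import os
import requests

chain_id = "efgh5678"  # placeholder: your chain's ID
api_key = os.environ["BASETEN_API_KEY"]  # your Baseten API key

# POST to the chain's production environment; the payload must match
# your chain entrypoint's signature (the url field here is illustrative).
resp = requests.post(
    f"https://chain-{chain_id}.api.baseten.co/production/run_remote",
    headers={"Authorization": f"Api-Key {api_key}"},
    json={"url": "https://example.com/audio.wav"},
)
resp.raise_for_status()
print(resp.json())
```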
Parameters
- model_id (string): The ID of the model you want to call.
- Authorization (header): Your Baseten API key, formatted with the prefix Api-Key (e.g. {"Authorization": "Api-Key abcd1234.abcd1234"}).
Body
The audio input options. You must provide one of url, audio_b64, or audio_bytes.
- url (string): URL of the audio file.
- audio_b64 (string): Base64-encoded audio content.
- audio_bytes (bytes): Raw audio bytes.
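For example, to send a local file through audio_b64, you can base64-encode it before building the request body (the file path is a placeholder):

```python
import base64

# Read a local audio file and base64-encode it for the audio_b64 field.
with open("speech.wav", "rb") as f:  # placeholder path
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {"audio_b64": audio_b64}
```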
Parameters for controlling Whisper’s behavior.
- prompt (string, optional): Optional transcription prompt.
- audio_language (string, default="en"): Language of the input audio. Set to "auto" for automatic detection.
- language_detection_only (boolean, default=false): If true, only return the automatic language detection result without transcribing.
- language_options (list[string], default=[]): Language codes to consider during language detection, for example ["en", "zh"]. This can improve detection accuracy by scoping detection to the set of languages relevant to your use case. By default, all languages supported by the Whisper model are considered. [Added in v0.5.0]
- use_dynamic_preprocessing (boolean, default=false): Enables dynamic range compression to process audio with variable loudness.
- show_word_timestamps (boolean, default=false): If true, include word-level timestamps in the output. [Added in v0.4.0]
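As an illustrative sketch, the body below first scopes language detection to English and Chinese without transcribing, then transcribes with word-level timestamps. The audio URL is a placeholder, and the exact nesting of these fields may differ for your deployment.

```python
# Step 1: detect the language only, restricting candidates to English and Chinese.
detect_body = {
    "url": "https://example.com/audio.wav",  # placeholder audio URL
    "audio_language": "auto",
    "language_detection_only": True,
    "language_options": ["en", "zh"],
}

# Step 2: transcribe with word-level timestamps once the language is known.
transcribe_body = {
    "url": "https://example.com/audio.wav",
    "audio_language": "en",
    "show_word_timestamps": True,
}
```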
Advanced settings for the automatic speech recognition (ASR) process.
- beam_size (integer, default=1): Beam search size for decoding. Beam sizes up to 5 are supported.
- length_penalty (float, default=2.0): Length penalty applied to ASR output. The length penalty only takes effect when beam_size is greater than 1.
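For example, to enable beam search decoding (remembering that length_penalty only takes effect when beam_size is greater than 1), the body might include:

```python
# Beam search at the maximum supported size; length_penalty now applies.
body = {
    "url": "https://example.com/audio.wav",  # placeholder audio URL
    "beam_size": 5,
    "length_penalty": 2.0,
}
```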
Parameters for controlling the voice activity detection (VAD) process.
- max_speech_duration_s (integer, default=29): Maximum duration, in seconds, of audio to treat as a single speech segment. max_speech_duration_s cannot exceed 30 because the Whisper model accepts at most 30 seconds of audio input. [Added in v0.4.0]
- min_silence_duration_ms (integer, default=3000): At the end of each speech chunk, wait min_silence_duration_ms before separating it. [Added in v0.4.0]
- threshold (float, default=0.5): Speech threshold. VAD outputs a speech probability for each audio chunk; probabilities above this value are treated as speech. It is best to tune this value for each dataset separately, but the default of 0.5 works well for most. [Added in v0.4.0]