Baseten Documentation
Quick start

1. What modality are you working with?

Choose from common modalities like LLMs, transcription, and image generation to get started quickly.

  • LLMs: Build and deploy large language models
  • Transcription: Transcribe audio and video
  • Image generation: Rapidly generate images
  • Text to speech: Build humanlike experiences
  • Compound AI: Build real-time AI-native applications
  • Embeddings: Process millions of data points
  • Custom models: Deploy any model

2. Select a model or guide to get started

Choose a use case or modality above first, then pick a matching model or guide to continue.
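Whichever modality you choose, a deployed model is invoked the same way: an HTTPS POST to the model's endpoint, authenticated with your workspace API key. A minimal sketch, assuming Baseten's standard per-model inference URL pattern; the model ID, API key, and payload shape are placeholders, not real values:

```python
import json
import urllib.request

# Placeholders: substitute your own model ID and API key from your
# Baseten workspace. The URL below assumes Baseten's standard
# per-model production inference endpoint.
MODEL_ID = "abc123"
API_KEY = "YOUR_API_KEY"

URL = f"https://model-{MODEL_ID}.api.baseten.co/environments/production/predict"

def call_model(payload: dict) -> dict:
    """POST a JSON payload to the model's production deployment."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Api-Key {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example invocation (requires a deployed model and a valid API key):
# print(call_model({"prompt": "Hello!"}))
```

The payload schema depends on the model you deploy (for example, an LLM might take a `prompt` or a chat `messages` list), so check the model-specific guide selected above.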
