Integrations

Integrate your models with tools like LangChain, LiteLLM, and more.

Chainlit

Build your own open-source ChatGPT with Baseten and Chainlit.

LangChain

Use your Baseten models in the LangChain ecosystem.

LiteLLM

Use your Baseten models in LiteLLM projects.

Twilio

Build an AI-powered Twilio SMS chatbot with a Baseten-hosted LLM.

Build your own

Want to integrate Baseten with your platform or project? Reach out to support@baseten.co and we'll help you build and market the integration.
