Model APIs support both text and vision inputs, but multimodal capability depends on the underlying model. Vision-capable models accept images alongside text in the same request, using the OpenAI-compatible image_url content type. The model processes both modalities together, so it can answer questions about image content, compare multiple images, or extract structured data from screenshots. Not all models support vision. Check the table below before sending image inputs.

Supported models

Model | Slug
Kimi K2.5 | moonshotai/Kimi-K2.5
Kimi K2.6 | moonshotai/Kimi-K2.6
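
The supported slugs can also be checked programmatically through the same OpenAI-compatible endpoint; a minimal sketch (the "Kimi" substring filter is only for illustration):

from openai import OpenAI
import os

client = OpenAI(
    base_url="https://inference.baseten.co/v1",
    api_key=os.environ["BASETEN_API_KEY"],
)

# /v1/models lists every slug available to your account.
for model in client.models.list():
    if "Kimi" in model.id:
        print(model.id)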

Send a vision request

Use the image_url content type to include images in your messages. Image URLs are fetched server-side by the inference service, so each URL must be reachable over HTTPS from Baseten’s environment (for example, your own object storage, Hugging Face artifact links, or other hosts that allow server-side fetches). Prefer stable, direct HTTPS links. The optional image_url.detail field controls preprocessing resolution: low, high, original, or auto (OpenAI-compatible). When in doubt, use auto.
from openai import OpenAI
import os

# Model APIs expose an OpenAI-compatible endpoint, so the standard client works unchanged.
client = OpenAI(
    base_url="https://inference.baseten.co/v1",
    api_key=os.environ["BASETEN_API_KEY"],
)

# Text and image_url parts are combined in a single user message.
response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.6",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the natural environment in the image.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png",
                        "detail": "auto",
                    },
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
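
Multiple images can go in a single request by adding more image_url parts, which lets the model compare them directly. A sketch reusing the client above (the two URLs are placeholders):

# Compare two images in one request by adding multiple image_url parts.
response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.6",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What differs between these two photos?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo-1.png", "detail": "auto"},
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo-2.png", "detail": "auto"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)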

Image and video limits (Model APIs)

For Kimi models, multimodal limits are defined by the model’s deployment config (b10_vision_config under baseten/mp/baseten_dynamo/deploy/model-apis/) and its encoder template (baseten/mp/baseten_dynamo/cache_aware_routing_trtllm/encoder/template_configs/).
Limit | Kimi K2.5 | Kimi K2.6
Max images per request | 96 | 96
Max videos per request | 12 | 12
Max total media size per request (URL) | 240 MB | 240 MB
Max size per image (URL) | 90 MB | 80 MB
Max request size (base64) | 100 MB | 100 MB
Pass images as URLs or as base64-encoded data.
Other models on Model APIs use their own b10_vision_config values. Confirm the limits for a given slug in the Baseten app, via /v1/models, or by reading that model’s YAML under deploy/model-apis/.
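
As noted above, images can also be sent inline as base64-encoded data URLs instead of HTTPS links. A minimal sketch reusing the client from the earlier example (the local file path is hypothetical, and the encoded request must stay under the 100 MB base64 limit):

import base64

# Encode a local image as a data URL; keep the total request under the base64 limit.
with open("screenshot.png", "rb") as f:  # hypothetical local file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the table in this screenshot as CSV."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)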

Pricing

There is no additional per-image fee. Images are converted to input tokens and priced at the model’s standard input rate. Higher resolution images produce more tokens and cost more to process. The exact conversion from pixels to tokens depends on the model. Kimi K2.5 and Kimi K2.6 divide each image into 14×14 pixel tiles where each tile becomes one input token. The cost table below uses Kimi K2.5’s uncached input rate ($0.60 per million tokens); for Kimi K2.6 and other models, use the rates in the pricing table on the Model APIs overview.
Image resolution | Tiles | Input tokens | Cost at $0.60/M
256×256 | 324 | 324 | $0.0002
512×512 | 1,296 | 1,296 | $0.0008
1024×1024 | 5,329 | 5,329 | $0.0032
1920×1080 | 10,234 | 10,234 | $0.0061
For videos, token count scales with both resolution and the number of sampled frames.
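
To see roughly where the table’s numbers come from, here is a back-of-the-envelope estimator based on the 14×14-pixel tile rule described above. It is only a sketch: the model’s actual preprocessing may resize images, so real token counts can differ slightly (as the 1920×1080 row suggests).

# Rough token and cost estimate from the 14x14-pixel tile rule described above.
def estimate_image_cost(width: int, height: int, usd_per_million_tokens: float = 0.60) -> tuple[int, float]:
    tiles = (width // 14) * (height // 14)  # one input token per 14x14-pixel tile
    cost = tiles * usd_per_million_tokens / 1_000_000
    return tiles, cost

for w, h in [(256, 256), (512, 512), (1024, 1024), (1920, 1080)]:
    tokens, usd = estimate_image_cost(w, h)
    print(f"{w}x{h}: ~{tokens} tokens, ~${usd:.4f}")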