## Quantization options

Quantization type availability depends on the engine and GPU.

### Engine support
| Quantization | BIS-LLM | Engine-Builder-LLM | BEI |
|---|---|---|---|
| FP8 | ✅ | ✅ | ✅ |
| FP8_KV | ✅ | ✅ | ❌ |
| FP4 | ✅ | ✅ | ❌ |
| FP4_KV | ✅ | ✅ | ❌ |
| FP4_MLP_ONLY | ✅ | ✅ | ❌ |
### GPU support
| GPU type | FP8 | FP8_KV | FP4 | FP4_KV | FP4_MLP_ONLY |
|---|---|---|---|---|---|
| L4 | ✅ | ✅ | ❌ | ❌ | ❌ |
| H100 | ✅ | ✅ | ❌ | ❌ | ❌ |
| H200 | ✅ | ✅ | ❌ | ❌ | ❌ |
| B200 | ✅ | ✅ | ✅ | ✅ | ✅ |
## Model recommendations

Some model families have specific quantization requirements that affect accuracy.

### Qwen2 models

Qwen2 models experience quality degradation with FP8_KV, so use regular FP8 instead. Increase calibration size to 1024 or greater for better accuracy.
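For example, a Qwen2 build might set the following. This is a sketch only: the key names come from this page, but the exact nesting and lowercase value spellings are assumptions that may differ in your engine version.

```yaml
# Sketch of a Qwen2 quantization setup; nesting and value spelling assumed
quantization_type: fp8      # avoid fp8_kv, which degrades Qwen2 quality
quantization_config:
  calib_size: 1024          # 1024 or greater for better accuracy
```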
### Llama models

Llama variants work well with FP8_KV and standard calibration sizes (1024-1536). For B200 deployments, use FP4_MLP_ONLY for the best balance of speed and quality.
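On B200, that recommendation translates to something like the following sketch (same caveat: the exact schema is assumed, not verified).

```yaml
# Sketch of a Llama-on-B200 setup; exact schema may differ
quantization_type: fp4_mlp_only  # best speed/quality balance on B200
quantization_config:
  calib_size: 1536               # standard range is 1024-1536
```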
### BEI models (embeddings)

Use FP8 for embedding models built on causal LLMs. Skip quantization for smaller models, since the overhead isn't worth the minimal benefit, and BERT-based models are not supported. BEI doesn't support FP8_KV.
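A BEI embedding deployment would therefore use plain FP8, for example (layout assumed):

```yaml
# Sketch of a BEI embedding setup; plain FP8 only, since BEI has no FP8_KV
quantization_type: fp8
```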
## Calibration

Quantization requires calibration data to determine optimal scaling factors. Larger models generally need more calibration samples.

### Calibration datasets
The default dataset is cnn_dailymail (general news text). For specialized models, or fine-tunes tied to a specific chat template, use domain-specific datasets when available.

To use a custom dataset, reference its Hugging Face name under calib_dataset, and make sure the dataset has a train split with a text column.
### Calibration configuration

Increase calib_size for larger models. Use domain-specific datasets when available for better accuracy on specialized tasks.
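For example, a quantization_config block pointing at a custom Hugging Face dataset might look like this sketch; the dataset name below is a placeholder and the nesting is assumed.

```yaml
quantization_config:
  calib_dataset: your-org/your-domain-dataset  # placeholder; needs a train split with a text column
  calib_size: 1536                             # raise for larger models
```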
## Hardware requirements
FP4 quantization requires B200 GPUs. FP8 runs on L4 and above.
| Quantization | Minimum GPU | Recommended GPU | Memory reduction |
|---|---|---|---|
| FP16/BF16 | A100 | H100 | None |
| FP8 | L4 | H100 | ~50% |
| FP8_KV | L4 | H100 | ~60% |
| FP4 | B200 | B200 | ~75% |
| FP4_KV | B200 | B200 | ~80% |
## Configuration examples

For Engine-Builder-LLM, set quantization_type in the build section and add quantization_config to customize calibration. BIS-LLM uses inference_stack: v2, while Engine-Builder-LLM uses base_model: decoder.
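Putting those pieces together, hedged sketches for both engines might look as follows. The key names (build, base_model, quantization_type, quantization_config, calib_dataset, calib_size, inference_stack) come from this page; the exact nesting and value spellings are assumptions.

```yaml
# Engine-Builder-LLM (dense models) — illustrative sketch, not a verified schema
build:
  base_model: decoder
  quantization_type: fp8_kv        # or fp8, fp4, fp4_kv, fp4_mlp_only
  quantization_config:
    calib_dataset: cnn_dailymail   # default; swap in a domain-specific dataset
    calib_size: 1536
```

```yaml
# BIS-LLM (MoE models) — illustrative sketch, not a verified schema
inference_stack: v2
quantization_type: fp8
quantization_config:
  calib_size: 1024
```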
## Best practices

### When to use quantization

Use FP8 for production deployments to achieve cost-effective scaling. For memory-constrained environments, FP8_KV or FP4 variants provide additional memory reduction. Quantization becomes essential for models over 15B parameters, where memory and cost savings are significant.
### When to avoid quantization

Skip quantization when maximum accuracy is critical; use FP16/BF16 instead. Small models under 8B parameters see minimal benefit from quantization. BEI-Bert models don't support quantization at all. During research and development, FP16 provides faster iteration without calibration overhead.
### Optimization tips

Use calibration datasets that match your domain for best accuracy. Test quantized models with your specific data before production deployment. Monitor the accuracy vs. performance trade-off and consider your hardware constraints when selecting quantization type.

## Further reading
- Engine-Builder-LLM configuration: Dense model configuration.
- BIS-LLM configuration: MoE model configuration.
- BEI configuration: Embedding model configuration.