# mxbai-edge-colbert-v0-32m-onnx

ONNX export of [mixedbread-ai/mxbai-edge-colbert-v0-32m](https://huggingface.co/mixedbread-ai/mxbai-edge-colbert-v0-32m) for fast CPU inference.

## Model Details

### Files

| File | Description |
|------|-------------|
| `model.onnx` | FP32 ONNX model |
| `model_int8.onnx` | INT8 quantized model (faster) |
| `tokenizer.json` | Tokenizer configuration |
| `config_sentence_transformers.json` | Sentence Transformers configuration |
| `onnx_config.json` | ONNX configuration |
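
Both graphs are standard ONNX and can also be opened with a generic runtime. Below is a minimal sketch assuming the community `ort` crate and its 2.x API (`Session::builder()` / `commit_from_file`; names differ in 1.x releases); it only loads the quantized model and lists the inputs it expects:

```rust
use ort::session::Session;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the INT8 graph; use "model.onnx" for the FP32 version.
    let session = Session::builder()?.commit_from_file("model_int8.onnx")?;

    // A BERT-style encoder typically expects input_ids and attention_mask;
    // printing the graph inputs confirms what this export requires.
    for input in &session.inputs {
        println!("input: {}", input.name);
    }
    Ok(())
}
```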

## Usage with colbert-onnx (Rust)

```rust
use colbert_onnx::Colbert;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the exported model directory (model.onnx + tokenizer.json).
    let mut model = Colbert::from_pretrained("path/to/model")?;
    // Produces one per-token embedding matrix per input document.
    let embeddings = model.encode_documents(&["Hello world"])?;
    println!("encoded {} document(s)", embeddings.len());
    Ok(())
}
```
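
ColBERT models are late-interaction retrievers: instead of one vector per text, every token gets an embedding, and relevance is the MaxSim score between the two token matrices. As a minimal illustration (not part of the colbert-onnx API), here is a pure-Rust MaxSim over L2-normalized `Vec<Vec<f32>>` matrices, one row per token, like the per-document matrices produced above:

```rust
/// MaxSim late-interaction score: for each query token, take its best
/// (maximum dot-product) match among the document tokens, then sum.
/// Assumes both matrices hold L2-normalized rows of the same dimension.
fn maxsim(query: &[Vec<f32>], doc: &[Vec<f32>]) -> f32 {
    query
        .iter()
        .map(|q| {
            doc.iter()
                .map(|d| q.iter().zip(d).map(|(a, b)| a * b).sum::<f32>())
                .fold(f32::NEG_INFINITY, f32::max)
        })
        .sum()
}

fn main() {
    // Toy 2-dimensional, already-normalized embeddings.
    let query: Vec<Vec<f32>> = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let doc: Vec<Vec<f32>> = vec![vec![0.6, 0.8], vec![1.0, 0.0]];
    println!("MaxSim: {:.2}", maxsim(&query, &doc)); // 1.0 + 0.8 = 1.80
}
```

At query time, candidate documents are simply ranked by this score.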

## Export Tool

This model was exported using [`pylate-onnx-export`](https://github.com/lightonai/next-plaid):

```bash
pip install "pylate-onnx-export @ git+https://github.com/lightonai/next-plaid.git#subdirectory=onnx/python"
pylate-onnx-export mixedbread-ai/mxbai-edge-colbert-v0-32m --quantize --push-to-hub lightonai/mxbai-edge-colbert-v0-32m-onnx
```