Convert the Mistral Devstral model to MLX format for Apple Silicon.
1. Install project dependencies with `uv sync`.
2. Run the conversion script, `convert_model.py`, to convert the Hugging Face model to MLX format with quantization.
3. Execute the converted MLX model with `run_model.py`.
Remove generated directories and cache files:
- `Devstral-Small-2-24B-Instruct-2512-mlx`
- `.venv`
- `__pycache__`
- `.ruff_cache`
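The full workflow above can be run from a terminal; this sketch assumes a standard uv project layout (`pyproject.toml` at the repo root) and that the scripts live at the top level:

```shell
# Install dependencies from pyproject.toml into .venv
uv sync

# Convert the Hugging Face model to quantized MLX format
uv run python convert_model.py

# Run the converted MLX model
uv run python run_model.py

# Clean up generated directories and cache files
rm -rf Devstral-Small-2-24B-Instruct-2512-mlx .venv __pycache__ .ruff_cache
```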
The `convert_model.py` script converts the Mistral Devstral model from Hugging Face to MLX format optimized for Apple Silicon. It uses the `mlx_lm` library to perform the conversion with quantization enabled, reducing model size while preserving performance.
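A minimal `convert_model.py` might look like the following sketch. It uses `mlx_lm.convert`, whose keyword arguments may vary between `mlx_lm` versions, and it requires Apple Silicon plus enough disk space to download the ~24B-parameter checkpoint:

```python
from mlx_lm import convert

# Convert the Hugging Face checkpoint to a quantized MLX model.
# Paths mirror the Model details section; q_bits=4 matches the
# 4-bit upload target.
convert(
    hf_path="mistralai/Devstral-Small-2-24B-Instruct-2512",
    mlx_path="./Devstral-Small-2-24B-Instruct-2512-mlx",
    quantize=True,
    q_bits=4,
)
```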
Model details:
- Source: `mistralai/Devstral-Small-2-24B-Instruct-2512`
- Output: `./Devstral-Small-2-24B-Instruct-2512-mlx`
- Upload target: `mlx-community/Devstral-Small-2-24B-Instruct-2512-4bit`
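A matching `run_model.py` sketch, assuming `mlx_lm`'s `load`/`generate` API; the prompt text is purely illustrative:

```python
from mlx_lm import load, generate

# Load the locally converted 4-bit model and its tokenizer.
model, tokenizer = load("./Devstral-Small-2-24B-Instruct-2512-mlx")

# Devstral is an instruct model, so format the prompt with the
# chat template before generating.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```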