# MLX Model Conversion

Convert the Mistral Devstral model to MLX format for Apple Silicon.

## Makefile Targets

### install

Install project dependencies using `uv sync`.

### convert

Run the model conversion script (`convert_model.py`) to convert the Hugging Face model to MLX format with quantization.

### run

Execute the converted MLX model (`run_model.py`).

### clean

Remove generated directories and cache files:

- `Devstral-Small-2-24B-Instruct-2512-mlx`
- `.venv`
- `__pycache__`
- `.ruff_cache`
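The targets above could be wired up roughly as follows. This is a minimal sketch, not the repository's actual Makefile; the `uv run` invocations and the `convert: install` dependency are assumptions.

```make
# Sketch of a Makefile matching the targets described above (illustrative only).
MODEL_DIR = Devstral-Small-2-24B-Instruct-2512-mlx

install:
	uv sync

convert: install
	uv run python convert_model.py

run:
	uv run python run_model.py

clean:
	rm -rf $(MODEL_DIR) .venv __pycache__ .ruff_cache

.PHONY: install convert run clean
```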

## convert_model.py

The `convert_model.py` script converts the Mistral Devstral model from Hugging Face to MLX format, optimized for Apple Silicon. It uses the `mlx_lm` library with quantization enabled, reducing model size while largely preserving output quality.
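A sketch of what such a conversion script can look like, using the `mlx_lm.convert` API. The function and constant names here are illustrative, not the repository's actual code, and the heavy `mlx_lm` import is deferred so the module can be inspected without MLX installed.

```python
# Illustrative sketch of a conversion script built on mlx_lm (not the repo's
# actual convert_model.py). Paths match the model details documented below.

HF_REPO = "mistralai/Devstral-Small-2-24B-Instruct-2512"
MLX_DIR = "./Devstral-Small-2-24B-Instruct-2512-mlx"


def convert_model(hf_path: str = HF_REPO, mlx_path: str = MLX_DIR) -> str:
    """Download the Hugging Face model and write a quantized MLX copy."""
    # Deferred import: mlx-lm only runs on Apple Silicon.
    from mlx_lm import convert

    convert(
        hf_path=hf_path,
        mlx_path=mlx_path,
        quantize=True,  # shrink the 24B weights via quantization
        q_bits=4,       # 4-bit, matching the "-4bit" upload target
    )
    return mlx_path


if __name__ == "__main__":
    print(convert_model())
```

Running the actual conversion downloads the full 24B-parameter checkpoint, so expect substantial disk and memory usage.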

Model details:

- Source: `mistralai/Devstral-Small-2-24B-Instruct-2512`
- Output: `./Devstral-Small-2-24B-Instruct-2512-mlx`
- Upload target: `mlx-community/Devstral-Small-2-24B-Instruct-2512-4bit`
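For completeness, a hypothetical sketch of the run step: loading the converted model and generating text with `mlx_lm`. The names are illustrative (this is not the repo's `run_model.py`), and the import is again deferred since it requires MLX on Apple Silicon.

```python
# Illustrative sketch of a run script using mlx_lm's load/generate helpers
# (not the repo's actual run_model.py).

MLX_DIR = "./Devstral-Small-2-24B-Instruct-2512-mlx"


def run_model(prompt: str, max_tokens: int = 256) -> str:
    """Load the converted MLX model and generate a completion for `prompt`."""
    # Deferred import: needs mlx-lm installed and the converted model on disk.
    from mlx_lm import load, generate

    model, tokenizer = load(MLX_DIR)
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)


if __name__ == "__main__":
    print(run_model("Write a short docstring for a quicksort function."))
```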