Description
_prompt_model_selection() in cli/setup.py has no input validation, causing multiple crash/correctness bugs when users enter invalid model selection strings.
Bugs Found
1. Non-numeric input crashes with ValueError
Entering abc at the model selection prompt causes an unhandled ValueError: invalid literal for int() with base 10: 'abc'.
2. Out-of-range index crashes with IndexError
Entering 99 (an index beyond the number of listed models) causes an unhandled IndexError: list index out of range.
3. Input "0" silently selects wrong model
Entering 0 maps to Python index -1 (last element in the list), silently selecting the wrong model.
4. Negative numbers wrap around
Entering -1 maps to index -2, silently selecting the wrong model via Python negative indexing.
5. Invalid quantization accepted without validation
Entering 1:fakeq passes fakeq as the quantization without any validation. This will fail downstream.
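The indexing bugs above all stem from the same unvalidated `int(choice) - 1` pattern. A minimal sketch (hypothetical; it mirrors the suspected logic, not the actual source of `_prompt_model_selection`) demonstrates bugs 1, 3, and 4 directly:

```python
# Hypothetical stand-in for the unvalidated selection logic.
models = ["model-a", "model-b", "model-c"]

# Bug 3: "0" maps to index -1, i.e. the LAST element.
print(models[int("0") - 1])    # prints "model-c", silently wrong

# Bug 4: "-1" maps to index -2 via Python negative indexing.
print(models[int("-1") - 1])   # prints "model-b", silently wrong

# Bug 1: non-numeric input crashes int() outright.
try:
    int("abc")
except ValueError as exc:
    print(exc)                 # invalid literal for int() with base 10: 'abc'
```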
Expected Behavior
- Non-numeric input should show a retry prompt with a clear error message
- Out-of-range indices should show "Please enter a number between 1 and N"
- Zero and negative indices should be rejected
- Quant values should be validated against the allowed set (bf16, int4, int8)
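One way to satisfy all four expectations is a single parse/validate helper that the prompt loop can call and retry on. This is a sketch under assumptions: the helper name `parse_model_selection` and the `N:quant` input format are taken from the report, but the function is illustrative, not the project's actual API.

```python
ALLOWED_QUANTS = {"bf16", "int4", "int8"}

def parse_model_selection(raw: str, num_models: int):
    """Validate a 'N' or 'N:quant' selection string.

    Returns (zero-based index, quant-or-None) on success, or raises
    ValueError with a user-facing retry message. Hypothetical helper,
    not the current _prompt_model_selection implementation.
    """
    idx_part, _, quant = raw.strip().partition(":")
    # str.isdigit() rejects "abc", "", and "-1" (the minus sign is not a digit)
    if not idx_part.isdigit():
        raise ValueError("Please enter a number, optionally followed by :quant")
    n = int(idx_part)
    # rejects 0 and anything past the end of the list
    if not 1 <= n <= num_models:
        raise ValueError(f"Please enter a number between 1 and {num_models}")
    if quant and quant not in ALLOWED_QUANTS:
        raise ValueError(f"Quant must be one of {sorted(ALLOWED_QUANTS)}")
    return n - 1, quant or None
```

Note that `isdigit()` handles the negative-number case for free, since `"-1".isdigit()` is False, so no separate sign check is needed.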
Steps to Reproduce
```python
# Direct test
from mlx_stack.cli.setup import _prompt_model_selection
# ... feed invalid inputs to the prompt
```
Or run mlx-stack setup interactively and enter abc, 99, 0, -1, or 1:fakeq at the model selection prompt.
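For an automated repro without the interactive CLI, the invalid inputs can be fed by patching `builtins.input`. This sketch assumes `_prompt_model_selection` reads user input via `input()`; the `demo_prompt` function below is a stand-in so the snippet runs on its own:

```python
from unittest import mock

def demo_prompt():
    # Stand-in for _prompt_model_selection's read-and-convert step
    # (assumed behavior, based on the tracebacks in this report).
    return int(input("Select a model: "))

# Feed "abc" and observe the crash described in bug 1.
with mock.patch("builtins.input", return_value="abc"):
    try:
        demo_prompt()
    except ValueError as exc:
        print(exc)  # invalid literal for int() with base 10: 'abc'
```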
Impact
High — this is the primary onboarding flow. Any user who makes a typo during interactive setup will see a Python traceback instead of a helpful error message.