Make custom_ops import optional in native runners and improve error message#18857
Conversation
…e runners

Make the custom_ops import optional in native runners since it is not needed for pybindings inference. Also improve the assertion error message in custom_ops.py to give actionable guidance when the library is missing. Fixes the error when running Qwen 3.5 examples without building with EXECUTORCH_BUILD_KERNELS_LLM_AOT=ON.

Agent-Logs-Url: https://github.com/pytorch/executorch/sessions/6d0bbec5-f0cd-4307-9d9e-3703e499b4ab
Co-authored-by: kirklandsign <107070759+kirklandsign@users.noreply.github.com>
Summary
Running native Python runners (e.g., the Qwen 3.5 examples) fails with a cryptic `AssertionError: Expected 1 library but got 0` when `custom_ops_aot_lib` is not built. The custom ops Meta kernels registered by this import are only needed for export tracing, not for pybindings inference.

- `extension/llm/custom_ops/custom_ops.py`: Improve the assertion message to include the search path and actionable guidance (`-DEXECUTORCH_BUILD_KERNELS_LLM_AOT=ON` or the pybind preset).
- `examples/models/llama/runner/native.py`, `examples/models/llama3_2_vision/runner/native.py`: Wrap the `custom_ops` import in `try/except`, following the same pattern as `kernels/portable/__init__.py` and `kernels/quantized/__init__.py`.

Test plan
Verified that the native runner no longer crashes on import when `custom_ops_aot_lib` is not built. When `custom_ops` is needed (e.g., during export) and the library is missing, the error now includes the search path and the build guidance above.