Feature Request: Add support for Qwen 3.5 series models (including Qwen3.5-0.8B/2B/4B/9B) #514
Unanswered
sdcb asked this question in Feature requests
Replies: 4 comments · 1 reply
- Seconding this^^
- +1
- I need this (1 reply)
  - Reply: It seems OpenVINO does not support Qwen 3.5 yet.
Is your feature request related to a problem? Please describe.
Currently, Foundry Local provides a great experience with the Qwen 2.5 family. However, Alibaba has recently released the Qwen 3.5 series, which brings significant improvements in reasoning, coding, and multilingual capabilities. As a developer using Intel Arc GPU (specifically the B580 Battlemage), I'm eager to leverage these new models natively via the OpenVINO backend in Foundry Local for better performance.
Describe the solution you'd like
I would like to see the Qwen 3.5 models named in the title (Qwen3.5-0.8B/2B/4B/9B) added to the Foundry Local model catalog, optimized for OpenVINO (CPU/GPU/NPU).
Describe alternatives you've considered
I have tried running these via Ollama or a manual OpenVINO GenAI implementation, but Foundry Local's seamless integration with Windows and its API-first approach make it the preferred tool for my C#/.NET development workflow.
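To illustrate the "API-first" integration this request is about: Foundry Local exposes an OpenAI-compatible endpoint locally, so a client would call the requested Qwen 3.5 models the same way it calls today's Qwen 2.5 catalog entries. The sketch below only builds the JSON body for such a chat-completion call; the endpoint URL and the model alias are assumptions for illustration, not confirmed catalog names.

```python
import json

# Hypothetical values: Foundry Local serves an OpenAI-compatible endpoint on
# localhost, but the exact port and the Qwen 3.5 alias below are assumptions.
ENDPOINT = "http://localhost:5273/v1/chat/completions"  # assumed local endpoint
MODEL_ALIAS = "qwen3.5-4b"  # hypothetical catalog name

def build_chat_request(prompt: str, model: str = MODEL_ALIAS) -> str:
    """Build the JSON body for an OpenAI-compatible chat-completion call."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return json.dumps(body)

payload = build_chat_request("Summarize the Qwen 3.5 release notes.")
print(payload)
```

POSTing this payload to the local endpoint (with `urllib.request`, `HttpClient` in .NET, or an OpenAI client pointed at the local base URL) is all a caller would need once the models land in the catalog, which is what makes first-class support so attractive.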
Additional context
The Qwen 3.5 series models are already gaining traction in the community (Hugging Face / Unsloth), and having them available as "out-of-the-box" options in Foundry Local would significantly benefit developers on the Windows AI PC ecosystem, especially those using the latest Intel Battlemage hardware.