Professional benchmarking framework for Intel NPU, CPU, and GPU inference using OpenVINO
Updated Jan 25, 2026 - Python
Complete guide and scripts for setting up, running, and testing the performance of AI models (ONNX) on NPU architectures (Qualcomm QNN and Intel OpenVINO) on Windows.
A high-performance Docker container that runs OpenAI's Whisper model. Optimized for CPU, Intel NPU, Intel Arc/iGPU, and NVIDIA CUDA GPUs.