This code is for informational purposes only. It compares the run time of different frameworks.
The benchmarks were run in the following environment:
Experiment Environment: [GitHub Codespaces](https://docs.github.com/en/codespaces/overview)
Python Version: 3.12.1
The versions of the packages used are:
numpy==1.26.4
torch==2.2.2
tensorflow==2.16.2
onnx==1.17.0
onnxruntime==1.17.0
jax==0.4.30
jaxlib==0.4.30
openvino==2024.6.0
matplotlib==3.9.3
Pillow==8.3.2
psutil==5.8.0
- Create a virtual environment and activate it:

  ```shell
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```
Benchmarks for deep learning inference measure the performance of different machine learning frameworks when making predictions (inference) using pre-trained models. These benchmarks help in understanding the efficiency and resource utilization of each framework.
We are testing the following aspects of each framework:
- Latency: The average time taken to make a single prediction.
- CPU Utilization: The average percentage of CPU used during the inference process.
- Memory Utilization: The average amount of memory used during the inference process.
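The latency and memory metrics above can be illustrated with a minimal, standard-library-only sketch. This is not the measurement code from `performance_benchmark.py`: it uses `time.perf_counter` for latency and `tracemalloc` for peak memory (a real harness would use `psutil`, as listed in the requirements, which can also report CPU utilization via `Process.cpu_percent`), and `predict` is a trivial stand-in for a framework's inference call:

```python
import time
import tracemalloc

def predict(batch):
    # Hypothetical stand-in for a framework's inference call.
    return [x * 2 for x in batch]

def benchmark(fn, inputs, runs=100):
    """Return (average latency in ms, peak traced memory in KiB)."""
    tracemalloc.start()
    start = time.perf_counter()
    for _ in range(runs):
        fn(inputs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (elapsed / runs) * 1000.0, peak / 1024.0

latency_ms, peak_kib = benchmark(predict, list(range(1000)))
print(f"avg latency: {latency_ms:.3f} ms, peak memory: {peak_kib:.1f} KiB")
```

Averaging over many runs (after warm-up) is what makes the latency figure stable enough to compare across frameworks.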
The frameworks being tested are:
- TensorFlow
- PyTorch
- ONNX Runtime
- JAX
- OpenVINO
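Because the frameworks expose different APIs, a benchmark like this typically hides them behind a common interface so the timing loop stays framework-agnostic. A minimal sketch of such a harness (the class and names here are hypothetical, not taken from `performance_benchmark.py`):

```python
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Common interface a benchmark harness could use for every framework."""
    name: str

    @abstractmethod
    def predict(self, batch):
        """Run one inference on a batch and return the outputs."""

class DoublingBackend(InferenceBackend):
    # Trivial stand-in; a real backend would wrap e.g. an ONNX Runtime
    # InferenceSession or a torch.nn.Module in eval mode.
    name = "dummy"

    def predict(self, batch):
        return [x * 2 for x in batch]

backends = [DoublingBackend()]
for backend in backends:
    out = backend.predict([1, 2, 3])
    print(backend.name, out)  # dummy [2, 4, 6]
```

With this shape, adding a framework to the comparison means adding one backend class, while the measurement code is written once.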
To run the benchmark and compare the performance of different frameworks, execute the performance_benchmark.py script:
```shell
python performance_benchmark.py
```
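Once the script has collected per-framework numbers, the comparison itself is a simple aggregation. A sketch of how such results might be summarized (the latency values below are illustrative placeholders, not measured results):

```python
# Hypothetical results: framework name -> average latency in ms.
# These values are made up purely to illustrate the summary step.
results = {
    "TensorFlow": 4.2,
    "PyTorch": 3.1,
    "ONNX Runtime": 2.7,
    "JAX": 3.8,
    "OpenVINO": 2.5,
}

fastest = min(results, key=results.get)
for name, ms in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} {ms:6.2f} ms")
print(f"fastest: {fastest}")
```

The same dictionary could feed a `matplotlib` bar chart, which is presumably why `matplotlib` appears in the requirements.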