boomermath/yolov8_obb_quantization

YOLOv8 OBB Quantization/Conversion Tool (WIP)

How to use

Install the dependencies, then run the converter.py script, replacing weights_name with the name of your YOLO weights file in the same directory.

Optionally, you can use the create_uint8_quantized_tflite flag to specify whether to create a uint8 quantized TFLite model.
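For context on what the uint8 option produces: a uint8 quantized model stores tensors as 8-bit integers together with a scale and zero point. A minimal sketch of that affine mapping (NumPy only; the scale and zero-point values below are illustrative, not taken from converter.py):

```python
import numpy as np

def quantize_uint8(x, scale, zero_point):
    """Affine-quantize a float array to uint8: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_uint8(q, scale, zero_point):
    """Recover an approximate float value: x ~ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: map floats in [-1, 1] onto the full uint8 range.
scale, zero_point = 2.0 / 255.0, 128
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q = quantize_uint8(x, scale, zero_point)
x_hat = dequantize_uint8(q, scale, zero_point)
# The round trip loses at most about one quantization step of precision,
# which is one source of the accuracy gap noted below.
```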

Then, to run inference with the TFLite model, run:

python inference.py path/to/tflite_model.tflite path/to/image.jpg

You can specify the --conf and --iou flags to adjust the confidence and IoU thresholds respectively.
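The actual argument handling in inference.py may differ; a minimal argparse sketch of a CLI with the same positional arguments and flags (the default values here are assumptions, not taken from the script):

```python
import argparse

def build_parser():
    # Mirrors the documented CLI: two positional paths plus --conf/--iou flags.
    p = argparse.ArgumentParser(description="Run TFLite OBB inference on an image.")
    p.add_argument("model", help="path to the .tflite model")
    p.add_argument("image", help="path to the input image")
    p.add_argument("--conf", type=float, default=0.25, help="confidence threshold")
    p.add_argument("--iou", type=float, default=0.45, help="IoU threshold for NMS")
    return p

# Parse an example command line equivalent to:
#   python inference.py model.tflite image.jpg --conf 0.5
args = build_parser().parse_args(["model.tflite", "image.jpg", "--conf", "0.5"])
```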

Note that the quantized model is less accurate than the non-quantized one. For example, with the provided image.jpg it needs an NMS threshold of around 0.05 to remove junk detections, whereas the non-quantized version can use a less strict threshold of 0.45.
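To see why a stricter (lower) IoU threshold suppresses more overlapping detections, here is a minimal greedy NMS sketch. It uses axis-aligned boxes for simplicity, whereas the real OBB pipeline computes IoU between rotated boxes; the boxes and scores are made-up data, not outputs of this repo:

```python
import numpy as np

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh):
    """Greedy NMS: keep the highest-scoring box, drop overlaps above iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        order = order[1:][[iou(boxes[i], boxes[j]) <= iou_thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [8, 0, 18, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
# The second box overlaps the first with IoU ~0.11: a threshold of 0.45
# keeps all three boxes, while a strict threshold of 0.05 drops it.
```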
