Install the dependencies, then run the converter.py script, replacing weights_name with the name of your YOLO weights file (which should be in the same directory).
Optionally, use the create_uint8_quantized_tflite flag to specify whether a uint8-quantized TFLite model should also be created.
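For background, uint8 quantization represents each float tensor with a scale and a zero point, mapping values via q = round(x / scale) + zero_point and clamping to [0, 255]. The sketch below illustrates that mapping; the scale, zero point, and helper names are illustrative, not the converter's actual values.

```python
import numpy as np

def quantize_uint8(x, scale, zero_point):
    """Map float values to uint8: q = round(x / scale) + zero_point, clamped to [0, 255]."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_uint8(q, scale, zero_point):
    """Recover approximate float values: x ~ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative parameters covering roughly the float range [-1, 1].
scale = 2.0 / 255.0
zero_point = 128
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q = quantize_uint8(x, scale, zero_point)
x_hat = dequantize_uint8(q, scale, zero_point)
```

Each round trip loses at most about half a quantization step of precision per value; that rounding error is one source of the accuracy gap between the quantized and non-quantized models noted below.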
Then, to run inference with the TFLite model, run

python inference.py path/to/tflite_model.tflite path/to/image.jpg

You can pass the --conf and --iou flags to adjust the confidence and IoU thresholds, respectively.
Note that the quantized version of this model isn't as accurate as the non-quantized version. For example,
with the provided image.jpg it needs an NMS IoU threshold of around 0.05 to suppress junk detections,
whereas the non-quantized version works with a less strict threshold of 0.45.
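To make the threshold comparison concrete: greedy NMS keeps the highest-scoring box and suppresses any remaining box whose IoU with it meets the threshold, so a lower value like 0.05 suppresses overlapping boxes much more aggressively than 0.45. A minimal sketch of the idea (not the actual code in inference.py, whose implementation may differ):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold):
    """Greedy NMS: repeatedly keep the highest-scoring box and
    drop any remaining box whose IoU with it >= iou_threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep

# Two boxes overlapping with IoU of 1/3: a 0.45 threshold keeps both,
# while the stricter 0.05 threshold needed by the quantized model drops one.
boxes = np.array([[0, 0, 10, 10], [5, 0, 15, 10]], dtype=float)
scores = np.array([0.9, 0.6])
```

This is why the quantized model's noisier, overlapping detections demand the much stricter 0.05 setting.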