Hello, I have a couple of questions regarding quantizer options for Larq and LCE.
I am designing a BNN with the DoReFa quantizer; however, when converting the model for ARM64 I noticed a very high number of estimated MACs and ops. Switching the quantizer to "ste_sign" lowered the MAC and op counts dramatically.
I was wondering if there is a way to train with the DoReFa quantizer without incurring this serious operation overhead when converting and running the model for inference in LCE? Or is the "ste_sign" quantizer the only viable option for efficient inference?
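For context, here is my understanding of why the two quantizers differ, as an illustrative NumPy sketch of the forward passes only (the function names are my own, not Larq's implementation; the DoReFa formula follows the standard k-bit activation quantization from the paper): "ste_sign" produces exactly two levels, which maps to XNOR/popcount kernels, while DoReFa with k > 1 produces multiple levels and so still needs real multiply-accumulates.

```python
import numpy as np

def ste_sign(x):
    # Forward pass of a sign binarizer: outputs are in {-1, +1}.
    # (The straight-through estimator only changes the backward pass.)
    return np.where(x >= 0, 1.0, -1.0)

def dorefa_activations(x, k_bit):
    # Forward pass of DoReFa k-bit activation quantization:
    # clip to [0, 1], then round onto 2^k - 1 uniform levels.
    levels = 2 ** k_bit - 1
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * levels) / levels

xs = np.linspace(-1.0, 1.0, 9)
print(sorted(set(ste_sign(xs))))                    # two levels: binary-friendly
print(sorted(set(dorefa_activations(xs, k_bit=2)))) # 2^k levels: needs full MACs
```

If my understanding above is correct, that would explain the op-count difference I'm seeing in the LCE converter, but I'd appreciate confirmation.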
Thank you for the excellent work and for your attention.