Conversation
```python
y = x * mul + add
# Avoid round to even behaviour, friggin pytorch
y = torch.floor((y / div) + 0.5)
# LMACAN: Dory doesn't like the `+ 0.5` fix
```
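For context on the round-to-even comment above: `torch.round` follows IEEE round-half-to-even, so ties like 2.5 round down to 2, while `floor(x + 0.5)` always rounds halves up:

```python
import torch

x = torch.tensor([0.5, 1.5, 2.5, 3.5])
print(torch.round(x))        # tensor([0., 2., 2., 4.])  -> round half to even
print(torch.floor(x + 0.5))  # tensor([1., 2., 3., 4.])  -> round half up
```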
Please motivate this more. Are you sure you are not making a mistake at some other point?
This is critical code for many applications.
It is hard for me to motivate this more with my limited knowledge of quantlib. Do you have an alternative way to handle this?
This creates an extra addition layer in the produced ONNX graph, and DORY expects a precise pattern of layers to recognize requantization.
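To illustrate the pattern-matching concern, here is a minimal export sketch (module names and shapes are made up; this is not the actual QuantLib layer) showing the extra node that the `+ 0.5` introduces before the Floor:

```python
import torch

class RequantPlain(torch.nn.Module):
    def forward(self, x, mul, add, div):
        return torch.floor((x * mul + add) / div)

class RequantHalfUp(torch.nn.Module):
    def forward(self, x, mul, add, div):
        return torch.floor((x * mul + add) / div + 0.5)

args = (torch.randn(1, 8), torch.tensor(3.0), torch.tensor(2.0), torch.tensor(4.0))
torch.onnx.export(RequantPlain(), args, "rqs_plain.onnx")
torch.onnx.export(RequantHalfUp(), args, "rqs_half_up.onnx")
# Printing the node op_types of the two graphs (e.g. via onnx.load) shows that the
# second graph carries an extra Add fed by a 0.5 constant right before the Floor,
# so a matcher looking for Mul -> Add -> Div -> Floor no longer fires.
```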
This will definitely break compatibility with the RQS strategy in Deeploy, where we do rounding by default. I suggest that you make arithmetic rounding in the RequantShift layer configurable and disable it in flows targeting DORY as the backend.
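Something along these lines is what I mean; the class and constructor signature below are only a sketch of the idea, not the actual Deeploy/QuantLib layer:

```python
import torch

class RequantShift(torch.nn.Module):
    """Sketch of a requantization layer with switchable arithmetic rounding."""

    def __init__(self, mul, add, div, rounding: bool = True):
        super().__init__()
        self.register_buffer("mul", mul)
        self.register_buffer("add", add)
        self.register_buffer("div", div)
        # Flows targeting DORY would construct this with rounding=False so the
        # exported graph keeps the plain floor((x * mul + add) / div) pattern.
        self.rounding = rounding

    def forward(self, x):
        y = x * self.mul + self.add
        if self.rounding:
            return torch.floor(y / self.div + 0.5)
        return torch.floor(y / self.div)
```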
I am unfortunately not very familiar with DORY, but for Deeploy we (or at least I) export fused RQS nodes directly.
We should probably talk with @da-gazzi as well - I know that Georg implemented rounding by adding "half the shift" to the bias; it seems to me like adding 0.5 here does pretty much the same thing. We should disentangle this a bit before merging, but if there are multiple places where rounding biases are added, we should fold that into one spot.
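For what it's worth, the two formulations agree: `y / div + 0.5 == (y + div / 2) / div`, so adding half the divisor to the bias and adding 0.5 after the division produce the same floor. A quick check (power-of-two divisor, so the half is exact):

```python
import torch

y, div = torch.arange(-20., 20.), torch.tensor(8.0)
assert torch.equal(torch.floor(y / div + 0.5),        # "+ 0.5" after the division
                   torch.floor((y + div / 2) / div))  # half the shift in the bias
```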
I concur with @Scheremo: the only "good" solution is to fuse the rounding with the bias value and not expose this +0.5 here. I do not know how that is handled in Deeploy, but as this is anyway an addition of 0.5 happening after requantization, it cannot really represent an integer op in a bit-true fashion.
Fusing this inside QuantLib avoids any confusion.
Agree - the idea of the "RequantShift" layer is that it represents the integer operations performed on the device 1:1. The activation rounding is handled by statically adding half an eps to the bias value; adding 0.5 here would achieve the same thing, but it breaks the exported net if you don't use custom nodes. Is there something keeping us from just using the "integer rounding" approach in all cases? It is already configurable, i.e. you can turn it on/off as desired with the rounding flag of the PACTActivation classes.
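For completeness, the bias-fused variant would look roughly like this (names are placeholders, not the actual QuantLib code); everything the exported graph sees stays an integer, so the pattern DORY matches is unchanged and the result stays bit-true with the on-device arithmetic:

```python
import torch

def requant_shift(x, mul, add, div, rounding=True):
    if rounding:
        # Fold "half an eps" into the additive term instead of adding 0.5 after
        # the division; for integer operands this equals floor(y / div + 0.5).
        add = add + div // 2
    return torch.floor((x * mul + add) / div)
```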