
Adapt sym quantizer to ET (#18870)

Open

mgiordy wants to merge 3 commits into pytorch:main from mgiordy:export-D91777784

Conversation

@mgiordy (Contributor) commented Apr 14, 2026

Summary:

# Context

This diff aims to match on-device inference accuracy when running with ExecuTorch.

# Summary

The quantizer of the C++ pipeline needs to be aligned with the ExecuTorch quantizer, which means matching the same quantization arithmetic, as sketched below.
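
A minimal sketch of the symmetric int8 arithmetic being matched, assuming the [-127, 127] range described in the review below (the function name and the observer-free scale computation are illustrative, not the PR's actual code):

```python
import torch

def symmetric_quantize_int8(x: torch.Tensor) -> tuple[torch.Tensor, float]:
    # Symmetric scheme: scale from max |x|, zero_point fixed at 0, and the
    # range clamped to [-127, 127] so -128 is never produced. Matching this
    # arithmetic exactly is what keeps the two quantizers bit-identical.
    qmax = 127
    scale = max(x.abs().max().item(), torch.finfo(torch.float32).eps) / qmax
    q = torch.round(x / scale).clamp(-qmax, qmax).to(torch.int8)
    return q, scale
```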


#hthtemplate

Reviewed By: hsharma35

Differential Revision: D91777784

Copilot AI review requested due to automatic review settings April 14, 2026 10:23
pytorch-bot bot commented Apr 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18870

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ No Failures, 8 Pending

As of commit 325ece4 with merge base c09c713:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label Apr 14, 2026
meta-codesync bot (Contributor) commented Apr 14, 2026

@mgiordy has exported this pull request. If you are a Meta employee, you can view the originating Diff in D91777784.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Copilot AI (Contributor) left a comment

Pull request overview

This PR aligns Cadence mixed W8A32 quantization behavior with ExecuTorch by matching symmetric quant ranges and ensuring GRU bias quantization uses a shared scale/observer across bias terms.

Changes:

  • Introduce a symmetric int8 quantization spec using [-127, 127] and apply it to mixed W8A32 patterns (linear/conv/GRU); a sketch of such a spec follows this list.
  • Update the mixed W8A32 GRU path to use a single bias scale (shared observer) and update the custom op schema accordingly.
  • Adjust GRU reference implementation, meta kernel shape inference, fusion pass wiring, and unit tests to reflect the updated bias scaling and output shaping.
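
As a rough illustration of such a spec (field values inferred from this overview; the actual construction in quantizer.py may differ):

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver
from torch.ao.quantization.quantizer import QuantizationSpec

# A [-127, 127] symmetric int8 spec: dropping -128 keeps the range
# symmetric around zero, matching ExecuTorch's symmetric arithmetic.
sym_int8_qspec = QuantizationSpec(
    dtype=torch.int8,
    quant_min=-127,
    quant_max=127,
    qscheme=torch.per_tensor_symmetric,
    observer_or_fake_quant_ctr=MinMaxObserver,
)
```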

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

Summary per file:

| File | Description |
| --- | --- |
| backends/cadence/aot/tests/test_ref_implementations.py | Updates GRU test inputs/signature usage and expected output shape for the new GRU behavior. |
| backends/cadence/aot/ref_implementations.py | Updates the GRU ref impl to use a single bias scale and changes the output shaping logic. |
| backends/cadence/aot/quantizer/quantizer.py | Adds a [-127, 127] symmetric qspec and switches the mixed W8A32 quantizer to it. |
| backends/cadence/aot/quantizer/patterns.py | Makes conv/GRU pattern metadata checks more robust; shares GRU bias observers via SharedQuantizationSpec (see the sketch after this table). |
| backends/cadence/aot/quantizer/fusion_pass.py | Adjusts mixed W8A32 conv metadata propagation and updates GRU args to pass a single bias scale. |
| backends/cadence/aot/ops_registrations.py | Updates the GRU op schema to a single bias scale and changes meta output shape inference. |
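
The shared-observer wiring presumably looks something like the following sketch (the helper name, node names, and annotation shape are assumptions used only to illustrate SharedQuantizationSpec):

```python
from torch.ao.quantization.quantizer import (
    QuantizationAnnotation,
    SharedQuantizationSpec,
)

def annotate_gru_biases(gru_node, b_i_node, b_h_node, bias_qspec):
    # Hypothetical helper: the input bias gets its own observer, and the
    # hidden bias shares it via the (producer, consumer) edge, so both
    # biases end up with one common scale.
    shared = SharedQuantizationSpec((b_i_node, gru_node))
    gru_node.meta["quantization_annotation"] = QuantizationAnnotation(
        input_qspec_map={
            b_i_node: bias_qspec,  # fresh observer on the input-bias edge
            b_h_node: shared,      # reuse that observer for the hidden bias
        },
        _annotated=True,
    )
```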


Comment on lines +1310 to +1316

```python
batch_size = inputs.shape[0]
input_dim = inputs.shape[1]
hidden_dim = hidden.shape[-1]

new_hidden_expanded = new_hidden.unsqueeze(1).expand(batch_size, input_dim, hidden_dim)

return torch.stack([new_hidden_expanded, new_hidden_expanded], dim=0)
```
Comment on lines +3066 to +3076

```python
seq_len = inputs.shape[1]
assert seq_len == 1
# inputs comes in shape [batch, seq_len, input_size]
# hidden comes in shape [batch, seq_len, hidden_size]
# weights_inputs comes in shape [3 * hidden_size, input_size]
# weights_hidden comes in shape [3 * hidden_size, hidden_size]
# output comes in empty with shape [2, batch, seq_len, hidden_size]
# The first dimension stacks the output and the new hidden state
return hidden.new_empty(
    (2, inputs.shape[0], inputs.shape[1], hidden.shape[-1]), dtype=torch.float32
)
```
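
A toy check of the shape contract spelled out in those comments (shapes are arbitrary; the meta kernel above is what actually ships):

```python
import torch

batch, seq_len, input_size, hidden_size = 4, 1, 8, 16  # seq_len must be 1
inputs = torch.randn(batch, seq_len, input_size)
hidden = torch.randn(batch, seq_len, hidden_size)

# One GRU step yields [batch, hidden]; expanding over seq_len and stacking
# the output with the new hidden state gives [2, batch, seq_len, hidden].
new_hidden = torch.randn(batch, hidden_size)
expanded = new_hidden.unsqueeze(1).expand(batch, seq_len, hidden_size)
stacked = torch.stack([expanded, expanded], dim=0)
assert stacked.shape == (2, batch, seq_len, hidden_size)
```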
Comment on lines +3023 to +3027

```diff
+expected_shape = (2, inputs.shape[0], inputs.shape[1], hidden.shape[-1])
 self.assertEqual(
     output.shape,
-    (2, *hidden.shape),
-    f"Output shape should match {(2, *hidden.shape)} in {name}",
+    expected_shape,
+    f"Output shape should match {expected_shape} in {name}",
```
Comment on lines 524 to 528

```diff
 assert len(dequants_biases) == 2
 w_i_scale = dequants_weights[0].args[1]
 w_h_scale = dequants_weights[1].args[1]
-b_i_scale = dequants_biases[0].args[1]
-b_h_scale = dequants_biases[1].args[1]
+b_scale = dequants_biases[0].args[1]
```
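
With the observers shared, a single scale dequantizes both bias terms identically; roughly (names are illustrative, not the actual ref-impl code):

```python
import torch

def dequant_gru_biases(b_i_q: torch.Tensor, b_h_q: torch.Tensor, b_scale: float):
    # One shared scale covers both the input and hidden biases, which is
    # why the fused op's schema needs only a single b_scale argument.
    return b_i_q.to(torch.float32) * b_scale, b_h_q.to(torch.float32) * b_scale
```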

Marco Giordano added 2 commits April 14, 2026 12:23
Summary:
Pull Request resolved: pytorch#16607

#### Summary

This diff fixes the Conv1d w8a32 operator by adding a transformation to the `val` attribute of the `other_inputs[0].meta` dictionary. Specifically, the `permute` operation is applied to the `original_val` tensor within the `fake_mode` context, and the resulting `transposed_val` is assigned to `transposed_inputs.meta["val"]` (sketched after this commit message).

Differential Revision: D89863750

Reviewed By: mcremon-meta
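
A sketch of that transformation, assuming a (0, 2, 1) permutation for the Conv1d layout swap (the variable names come from the summary above; the surrounding pass code is not shown):

```python
from torch._subclasses.fake_tensor import FakeTensorMode

def fix_transposed_meta_val(transposed_inputs, other_inputs, fake_mode: FakeTensorMode):
    # Permute the fake tensor inside fake_mode so the new node's shape
    # metadata matches the physically transposed input.
    original_val = other_inputs[0].meta["val"]
    with fake_mode:
        transposed_val = original_val.permute(0, 2, 1)  # assumed dim order
    transposed_inputs.meta["val"] = transposed_val
```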
Summary:
# Context
This diff fixes the reference implementation of the w8a32 GRU operator and enhances the operator's pattern matching.

# Mitigation
The reference implementation now has the right output dimension, and the pattern matching now uses a safer check for the operator parameters.

Differential Revision: D90437262

Reviewed By: hsharma35
@meta-codesync meta-codesync bot changed the title Adapt sym quantizer to ET Adapt sym quantizer to ET (#18870) Apr 14, 2026
mgiordy pushed a commit to mgiordy/executorch that referenced this pull request Apr 14, 2026
Summary:
Pull Request resolved: pytorch#18870

# Context

This diff aims to match on-device inference accuracy when running with ExecuTorch.

# Summary

The quantizer of the C++ pipeline needs to be aligned with the ExecuTorch quantizer, which means matching the same quantization arithmetic.

---
#hthtemplate

Reviewed By: hsharma35

Differential Revision: D91777784

Labels

CLA Signed · fb-exported · meta-exported


2 participants