
Add per-layer MLP type support for executorch export (#18856)

Open
navsud wants to merge 1 commit into pytorch:main from navsud:export-D100682545

Conversation

@navsud
Contributor

@navsud navsud commented Apr 13, 2026

Summary:

Add per-layer MLP type support to the ExecuTorch export path. This allows hybrid models to configure FFN blocks per layer (e.g. skip FFN on specified layers), reducing model size and inference latency.

The per-layer config uses an mlp_type list in ModelArgs, where each layer can be set to "default" (standard FFN) or "skip" (no FFN block). This is extensible to future MLP types.
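The described `ModelArgs` extension can be sketched as follows. This is a minimal, hypothetical illustration of the config shape described above, not the actual `model_args.py` code; the field names `n_layers` and `dim` and the validation in `__post_init__` are assumptions.

```python
# Hypothetical sketch of the per-layer mlp_type config described above.
# Field names other than mlp_type are assumed, not taken from model_args.py.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModelArgs:
    n_layers: int = 4
    dim: int = 512
    # One entry per layer: "default" keeps the standard FFN,
    # "skip" omits the FFN block. None means "default" everywhere.
    mlp_type: Optional[List[str]] = None

    def __post_init__(self):
        if self.mlp_type is None:
            self.mlp_type = ["default"] * self.n_layers
        assert len(self.mlp_type) == self.n_layers, "one mlp_type per layer"


args = ModelArgs(n_layers=4, mlp_type=["default", "skip", "default", "skip"])
print(args.mlp_type)  # ['default', 'skip', 'default', 'skip']
```

Because the field is a list of strings rather than a boolean, new MLP variants can later be added as new string values without changing the schema, which is the extensibility point the summary mentions.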

  • Add mlp_type field to ModelArgs (model_args.py) — optional list of strings, one per layer
  • Update TransformerBlock.__init__ to accept mlp_type string and skip FFN/ffn_norm creation when mlp_type == "skip" (llama_transformer.py)
  • Update TransformerBlock.from_type() to read mlp_type from ModelArgs per layer
  • Update TransformerBlock.forward() to pass through attention output directly when mlp_type == "skip"
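The control flow in the bullets above can be sketched like this. It is a torch-free toy, not the real `llama_transformer.py`: the FFN is stood in by a plain callable, and the residual-add structure is assumed from standard transformer blocks.

```python
# Minimal sketch of the FFN-skip behavior: the FFN (and its norm) are only
# created for "default" layers, and forward() passes the attention output
# straight through on "skip" layers. Hypothetical, not the actual code.
class TransformerBlock:
    def __init__(self, mlp_type: str = "default"):
        if mlp_type not in ("default", "skip"):
            raise ValueError(f"unknown mlp_type: {mlp_type}")
        self.mlp_type = mlp_type
        # Skipped layers allocate no FFN at all, which is where the
        # model-size saving comes from.
        self.ffn = (lambda x: x * 2) if mlp_type == "default" else None

    def forward(self, h):
        # h stands in for the post-attention residual stream.
        if self.mlp_type == "skip":
            return h                # attention output passed through directly
        return h + self.ffn(h)      # residual add around the FFN


print(TransformerBlock("default").forward(1.0))  # 3.0
print(TransformerBlock("skip").forward(1.0))     # 1.0
```

Note that skipping at construction time (rather than gating inside `forward`) matters for export: the skipped layer contributes no FFN weights or ops to the exported graph.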

Reviewed By: ifed-ucsd

Differential Revision: D100682545

@navsud navsud requested a review from lucylq as a code owner April 13, 2026 23:23
@pytorch-bot

pytorch-bot bot commented Apr 13, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18856

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 2 Unrelated Failures

As of commit 78d8419 with merge base fe71bd4:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 13, 2026
@meta-codesync
Contributor

meta-codesync bot commented Apr 13, 2026

@navsud has exported this pull request. If you are a Meta employee, you can view the originating Diff in D100682545.

@navsud navsud added the release notes: none Do not include this in the release notes label Apr 13, 2026
@navsud navsud changed the title Add per-layer MLP type support for on-device ANE export Add per-layer MLP type support for executorch export Apr 13, 2026
@meta-codesync meta-codesync bot changed the title Add per-layer MLP type support for executorch export Add per-layer MLP type support for executorch export (#18856) Apr 13, 2026
navsud added a commit to navsud/executorch that referenced this pull request Apr 13, 2026
@navsud navsud force-pushed the export-D100682545 branch from a341ca0 to 04faf26 on April 13, 2026 23:49
@navsud navsud force-pushed the export-D100682545 branch from 04faf26 to 78d8419 on April 13, 2026 23:53
