Refactor module access to use PyTorch get/set_submodule API #1365
scopophobic wants to merge 8 commits into intel:main from scopophobic:refactor/use-torch-submodule-api
Conversation
Hi @scopophobic, you can check the log here: https://dev.azure.com/lpot-inc/neural-compressor/_build/results?buildId=51719&view=logs&j=44c25250-aab3-5e31-d6d7-8ba2147b1266&t=262f41be-8379-5409-f492-e6c716395db9
I'm now seeing test failures that appear to be related to this change:

Root Cause
The new fail-fast behavior is breaking code that copies tied weights:

```python
for key in model._tied_weights_keys:
    weight = get_attr(model, key)
    set_attr(model, key, copy.deepcopy(weight))
```

The issue: `model._tied_weights_keys` may contain keys that don't exist in all model configurations. The old helpers silently returned `None` for missing keys, so this loop never raised.

Question: Which approach should I take?

Option 1: Keep silent None behavior

Option 2: Improve the test case
Update the loop to skip tied weights that don't exist:

```python
for key in model._tied_weights_keys:
    try:
        weight = get_attr(model, key)
        set_attr(model, key, copy.deepcopy(weight))
    except AttributeError:
        # Skip tied weights that don't exist in this model configuration
        pass
```

My recommendation: Option 2 (improve the test case), since fail-fast with clear errors is better for debugging and prevents silent bugs from propagating. What do you think?
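For context on the fail-fast behavior discussed above: PyTorch's `nn.Module.get_submodule` raises `AttributeError` when a dotted path does not resolve, whereas a `getattr`-chain helper with a `None` default fails silently. A minimal sketch of the contrast (the `get_attr_silent` helper and the `decoder.weight` key are illustrative, not the actual auto-round code):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4))

# Old-style traversal with a getattr default: a missing dotted path
# silently yields None instead of raising.
def get_attr_silent(module, name):
    obj = module
    for part in name.split("."):
        obj = getattr(obj, part, None)
        if obj is None:
            return None
    return obj

print(get_attr_silent(model, "decoder.weight"))  # None, no error

# The native get_submodule is fail-fast: an invalid path raises
# AttributeError with a descriptive message.
try:
    model.get_submodule("decoder")
except AttributeError as err:
    print(err)
```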
Summary
Issue: #1362
This PR refactors module access and replacement to rely on PyTorch’s native get_submodule/set_submodule APIs instead of maintaining custom traversal logic.

What changed
- Replaced the custom get_attr/set_attr helpers with get_module/set_module (thin wrappers over get_submodule/set_submodule); see the sketch below
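A minimal sketch of what such thin wrappers could look like, assuming a PyTorch version that exposes both `get_submodule` and `set_submodule` (the actual implementations in the PR may differ):

```python
import torch.nn as nn

def get_module(model: nn.Module, key: str) -> nn.Module:
    # Delegate dotted-path lookup to PyTorch; raises AttributeError
    # (fail-fast) if the path does not resolve to a submodule.
    return model.get_submodule(key)

def set_module(model: nn.Module, key: str, new_module: nn.Module) -> None:
    # Replace the submodule at the dotted path in place.
    model.set_submodule(key, new_module)
```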
Why
PyTorch has provided a robust module-replacement API since 1.x.
Using the native API improves correctness, readability, and long-term maintainability, and avoids subtle issues with manual attribute traversal.
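As an illustration of the native API (the model structure and paths here are made up for the example):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 8),
    nn.Sequential(nn.Linear(8, 4), nn.ReLU()),
)

# Look up a nested layer by its dotted path ("1.0" is the first child
# of the second top-level child).
layer = model.get_submodule("1.0")
print(layer)  # Linear(in_features=8, out_features=4, bias=True)

# Swap it for another module without any manual attribute walking.
model.set_submodule("1.0", nn.Linear(8, 4, bias=False))
```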
Behavior
No functional changes — this is a pure refactor.