
Documentation update#158

Merged
stelladk merged 77 commits into growingnet:main from stelladk:docs
Feb 9, 2026

Conversation

@stelladk
Collaborator

stelladk commented Oct 20, 2025

TODO:

  • Check all documentation
  • Update docs for convolution attributes
  • Create example for GrowingDAG
  • Reduce warnings
  • Check that all tests use named arguments
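As a hedged illustration of the last item (the helper name and parameters here are hypothetical, not gromo's API): calling test helpers with keyword arguments keeps call sites readable and robust to signature changes:

```python
def make_layer(in_features, out_features, bias=True):
    """Toy factory used only for illustration."""
    return {"in": in_features, "out": out_features, "bias": bias}

# Positional call: fragile if the signature is ever reordered.
layer_pos = make_layer(4, 8, False)

# Named-argument call, as the TODO item recommends for tests.
layer_kw = make_layer(in_features=4, out_features=8, bias=False)
```

Both calls build the same layer, but only the second one survives a signature reordering unchanged.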

Sébastian's suggestions:

  • Example with a complete training-growing loop
  • Hyperparameter tuning is not clearly explained

Alex's suggestions:

  • More examples for the containers
  • Conditional experiment setups (lighter for CI)

Reminder: address the linked issues.

@sylvchev
Collaborator

This is a very important task. We also need an example of a growing_container using convolution.

@codecov

codecov Bot commented Oct 24, 2025

Codecov Report

❌ Patch coverage is 92.36641% with 10 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/gromo/containers/growing_graph_network.py | 66.66% | 0 Missing and 4 partials ⚠️ |
| src/gromo/containers/growing_residual_mlp.py | 77.77% | 2 Missing ⚠️ |
| src/gromo/modules/conv2d_growing_module.py | 89.47% | 1 Missing and 1 partial ⚠️ |
| src/gromo/containers/growing_dag.py | 96.87% | 0 Missing and 1 partial ⚠️ |
| src/gromo/utils/utils.py | 80.00% | 0 Missing and 1 partial ⚠️ |
| Flag | Coverage Δ |
| --- | --- |
| unittests | 95.13% <92.36%> (+0.44%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
| --- | --- |
| src/gromo/containers/growing_block.py | 98.85% <100.00%> (-1.15%) ⬇️ |
| src/gromo/containers/growing_container.py | 98.78% <100.00%> (+0.01%) ⬆️ |
| src/gromo/containers/growing_mlp.py | 100.00% <ø> (ø) |
| src/gromo/containers/growing_mlp_mixer.py | 96.62% <100.00%> (-0.11%) ⬇️ |
| src/gromo/containers/resnet.py | 100.00% <ø> (ø) |
| ...c/gromo/containers/sequential_growing_container.py | 100.00% <100.00%> (ø) |
| src/gromo/modules/constant_module.py | 100.00% <100.00%> (ø) |
| src/gromo/modules/growing_module.py | 97.31% <100.00%> (+0.52%) ⬆️ |
| src/gromo/modules/growing_normalisation.py | 90.47% <100.00%> (+0.23%) ⬆️ |
| src/gromo/modules/linear_growing_module.py | 98.59% <100.00%> (+0.04%) ⬆️ |
... and 9 more

stelladk linked an issue Nov 12, 2025 that may be closed by this pull request
stelladk linked an issue Nov 14, 2025 that may be closed by this pull request
Agree to refer to the attributes as attributes of the "block"

Co-authored-by: Théo Rudkiewicz <azertyuiop9216012@gmail.com>
Comment thread pyproject.toml
Comment on lines +136 to +143
```toml
"SIM102", # forces non-nested if statements
"SIM108", # leads to a decrease in actual coverage
"SIM117", # nested with statements
"N802", # function name should be lowercase
"N803", # argument name should be lowercase
"N806", # variable in function should be lowercase
"E731", # do not assign a lambda expression, use a def
"E501", # line too long
```
Collaborator

SIM102 and SIM117 seem like good rules.
E501 is a bit of a pain but may be worth it.

Collaborator Author

The suggestions from SIM102 and SIM117 seem unreadable to me, but I will leave this to the decision of the majority. E501 was clashing with the pre-commit line-length limit.
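For context on the rules debated above, a minimal sketch of what SIM102 (collapsible `if`) and SIM117 (nested `with`) suggest; the function names are hypothetical:

```python
from tempfile import TemporaryFile

# SIM102 flags a nested `if` and suggests combining the conditions with `and`.
def nested_if(a: int, b: int) -> int:
    if a > 0:
        if b > 0:  # SIM102 would flag this nesting
            return a + b
    return 0

def collapsed_if(a: int, b: int) -> int:
    if a > 0 and b > 0:  # the suggested rewrite
        return a + b
    return 0

# SIM117 flags nested `with` statements and suggests a single combined one.
def nested_with() -> bool:
    with TemporaryFile() as f1:  # SIM117 would flag this
        with TemporaryFile() as f2:
            return f1 is not f2

def combined_with() -> bool:
    with TemporaryFile() as f1, TemporaryFile() as f2:
        return f1 is not f2
```

Both variants of each pair behave identically; whether the collapsed forms are more readable is exactly the judgment call under discussion.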

Comment thread tests/torch_unittest.py
Comment on lines +24 to +30
```diff
 def forward(self, x: torch.Tensor) -> torch.Tensor:
     if x.size(1) != self.num_features:
         raise ValueError(
             f"Input size of SizedIdentity must be {self.num_features}, "
-            f"but got {input.size(1)}"
+            f"but got {x.size(1)}"
         )
-    return super().forward(input)
+    return super().forward(x)
```
Collaborator

The problem is that either:

  • we don't use input as a variable name, or
  • we don't properly inherit from nn.Module, which uses input.

Which one is worse is not clear to me. I would go with input since torch already made this choice, but I am not 100% decided.

Collaborator Author

I suggest we use x and suppress any complaints from PyTorch.
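A torch-free sketch of the resolution above, using `x` as the parameter name and validating the feature dimension; the stand-in base class and list-based "tensor" are illustrative assumptions, since the real SizedIdentity presumably subclasses nn.Identity:

```python
class _Identity:
    """Stand-in for torch.nn.Identity: forward returns its argument unchanged."""

    def forward(self, x):
        return x


class SizedIdentity(_Identity):
    """Identity that rejects inputs whose second dimension is wrong."""

    def __init__(self, num_features: int):
        self.num_features = num_features

    def forward(self, x):
        # x is batch-like: x[i] has length num_features (mimics x.size(1)).
        # Named `x`, not `input`, per the thread's resolution.
        if len(x[0]) != self.num_features:
            raise ValueError(
                f"Input size of SizedIdentity must be {self.num_features}, "
                f"but got {len(x[0])}"
            )
        return super().forward(x)
```

With `x` used consistently, the mismatch between the error message and the returned value in the original diff cannot recur.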

Comment thread src/gromo/containers/growing_mlp.py Outdated
stelladk and others added 2 commits January 29, 2026 16:55
Co-authored-by: Théo Rudkiewicz <azertyuiop9216012@gmail.com>
Co-authored-by: Théo Rudkiewicz <azertyuiop9216012@gmail.com>
Comment thread src/gromo/modules/growing_module.py
Comment thread src/gromo/modules/growing_module.py Outdated
Comment thread tests/test_growing_mlp.py
@TheoRudkiewicz (Collaborator) left a comment

Thank you for this huge work. I checked all the code I know (excluding the dag, graph_network, and merge modules) and, apart from the few comments, everything looks good.

Co-authored-by: Théo Rudkiewicz <azertyuiop9216012@gmail.com>
Comment thread tests/test_growing_module.py Outdated
Comment thread tests/test_linear_growing_module.py Outdated
@stelladk
Collaborator Author

stelladk commented Feb 9, 2026

Thank you for contributing everyone! I will now merge this pull request and start a new one to continue this work. 🔥

stelladk merged commit 59c05a0 into growingnet:main Feb 9, 2026
7 of 9 checks passed

Labels

documentation Improvements or additions to documentation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants