
Test out differentiating through a model #486

Merged
joewallwork merged 20 commits into main from 483_autograd-function
Mar 25, 2026

Conversation

Collaborator

@joewallwork joewallwork commented Jan 21, 2026

Closes #483.
Closes #213.

This PR demonstrates that it's already possible to differentiate through calls to torch_model_forward in FTorch.
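
For context, the same idea in plain PyTorch looks roughly like this (a minimal sketch; the model and shapes are illustrative, and the FTorch Fortran call signatures are not reproduced here):

```python
import torch

# Stand-in model (hypothetical); the PR's own example model is not reproduced here.
model = torch.nn.Linear(4, 2)

# Marking the input as requiring gradients lets autograd track the forward call.
x = torch.ones(4, requires_grad=True)

# Forward pass through the model, then backpropagate from a scalar reduction.
y = model(x)
y.sum().backward()

# Gradient of the summed output with respect to the input.
print(x.grad)
```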

Checklist

  • Python example
  • Fortran example
  • Update docs
  • Changelog

@joewallwork joewallwork self-assigned this Jan 21, 2026
@joewallwork joewallwork added the enhancement (New feature or request) label Jan 21, 2026
@joewallwork joewallwork marked this pull request as ready for review January 21, 2026 13:17
Member

@jatkinson1000 jatkinson1000 left a comment

Thanks @joewallwork this generally looks good.

Couple of naming comments, and it may want a merge from main to pick up any changes/conflicts.

My only lingering thought is that this is quite a trivial example, and gives a result of ones. Does PyTorch have a net backpropagation example anywhere? I suppose a better test of this functionality would be to 'train' a net, but that requires optimizers and loss functions, so it will likely be a new example. It'd be worth thinking about how we want to structure these, though. Do we want to introduce backprop through nets here, or would we be better off introducing backprop, optimizers, and loss in separate examples before then applying them to a net in another example? I'm not set on either approach, so I'm interested in your thoughts - either way we can merge this now and then restructure things later.
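
For reference, training a net of that kind looks roughly like the following in plain PyTorch (a minimal sketch; the net, loss function, optimizer, and data are all illustrative rather than taken from this PR or the FTorch examples):

```python
import torch

# Hypothetical two-layer net standing in for SimpleNet; the real example may differ.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

optimiser = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

# Illustrative random data and targets.
x = torch.randn(16, 4)
target = torch.randn(16, 2)

for step in range(100):
    optimiser.zero_grad()             # clear gradients from the previous step
    loss = loss_fn(net(x), target)    # forward pass and loss evaluation
    loss.backward()                   # backpropagate through the net
    optimiser.step()                  # update the weights
```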

Comment thread on examples/9_Autograd/README.md (outdated)
Comment thread on examples/09_Autograd/simplenet_backward.f90
Collaborator Author

> Thanks @joewallwork this generally looks good.

Thanks for the review @jatkinson1000.

> Couple of naming comments, and it may want a merge from main to pick up any changes/conflicts.

I did indeed need to merge in main and make some manual fixes following #528.

> My only lingering thought is that this is quite a trivial example, and gives a result of ones. Does PyTorch have a net backpropagation example anywhere? I suppose a better test of this functionality would be to 'train' a net, but that requires optimizers and loss functions, so it will likely be a new example. It'd be worth thinking about how we want to structure these, though. Do we want to introduce backprop through nets here, or would we be better off introducing backprop, optimizers, and loss in separate examples before then applying them to a net in another example? I'm not set on either approach, so I'm interested in your thoughts - either way we can merge this now and then restructure things later.

My thought was that we could include derivative code in 9_Autograd/ for other key examples. We already have tensor manipulation and SimpleNet but perhaps there are others that we could add later. I plan to add another example later that demonstrates full training in Fortran. That will need to follow the Optimizers example.
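
As a rough illustration of the tensor-manipulation case, autograd through plain tensor arithmetic in PyTorch looks something like this (a generic sketch, not the repository's example code; the values and expression are arbitrary):

```python
import torch

# Autograd through plain tensor arithmetic, with no net involved.
a = torch.tensor([2.0, 3.0], requires_grad=True)
b = torch.tensor([6.0, 4.0], requires_grad=True)

q = 3 * a**3 - b**2

# Backpropagate a vector of ones through the expression graph.
q.backward(gradient=torch.ones_like(q))

print(a.grad)  # 9 * a**2 -> [36., 81.]
print(b.grad)  # -2 * b   -> [-12., -8.]
```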

Something I saw that I liked in a repo I reviewed for JOSS was a "learning path" for the examples. It could be nice to include something like this, where autograd, optimizers, and training are on a learning path that isn't necessarily what the typical user would need. (See https://github.com/youssef-mesri/sofia-mesh/tree/main/examples#learning-path)

Member

@jatkinson1000 jatkinson1000 left a comment

LGTM @joewallwork Thanks!

+1 for the "learning path" idea, perhaps open an issue and label with hackathon?

Collaborator Author

> LGTM @joewallwork Thanks!

Okay great, will merge.

> +1 for the "learning path" idea, perhaps open an issue and label with hackathon?

Opened #568.

@joewallwork joewallwork merged commit 8764efd into main Mar 25, 2026
8 checks passed
@joewallwork joewallwork deleted the 483_autograd-function branch March 25, 2026 17:41

Labels

enhancement (New feature or request)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Could the manually defined function or similar autograd operation be used with FTorch?
Demonstrate differentiating through model output

2 participants