Test out differentiating through a model #486
Conversation
jatkinson1000
left a comment
Thanks @joewallwork, this generally looks good.
A couple of naming comments, and it may want a merge from main to pick up any changes/conflicts.
My only lingering thought is that this is quite a trivial example, and gives a result of ones. Does PyTorch have a net backpropagation example anywhere? I suppose a better test of these functionalities would be to 'train' a net, but that requires optimizers and loss functions, so it will likely be a new example.

It'd be worth thinking about how we want to structure these, though. Do we want to introduce backprop through nets here, or would we be better off introducing backprop, optimizers, and loss functions in separate examples before applying them to a net in another example? I'm not set on either way, so I'm interested in your thoughts. Either way, we can merge this now and restructure things later.
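To illustrate why 'training' a net needs a loss function and an optimizer on top of backpropagation, here is a minimal pure-Python sketch (not FTorch or PyTorch code; the setup is hypothetical) that fits a single weight to data from y = 2x using hand-derived gradients and plain gradient descent:

```python
# Minimal sketch of a training loop: forward pass, loss, gradient, optimizer step.
# Hand-derived gradients stand in for autograd; vanilla SGD stands in for an optimizer.

data = [(x / 10.0, 2.0 * (x / 10.0)) for x in range(10)]  # samples of y = 2x

w = 0.0   # single learnable weight: the "model" is y_hat = w * x
lr = 0.5  # learning rate for the gradient-descent update

for epoch in range(100):
    grad = 0.0
    for x, y in data:
        y_hat = w * x                   # forward pass
        grad += 2.0 * (y_hat - y) * x   # d/dw of the squared-error loss (y_hat - y)**2
    w -= lr * grad / len(data)          # optimizer step on the mean gradient

print(round(w, 3))  # converges towards 2.0
```

A separate FTorch example along these lines would exercise backprop, the loss function, and the optimizer together, which is roughly what the 'train a net' suggestion above entails.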
Thanks for the review @jatkinson1000.
I did indeed need to merge in main.

My thought was that we could include derivative code in…

Something I saw and liked in a repo I reviewed for JOSS was a "learning path" for the examples. It could be nice to include something like this, where autograd, optimizers, and training are on a learning path that isn't necessarily what the typical user would need. (See https://github.com/youssef-mesri/sofia-mesh/tree/main/examples#learning-path)
jatkinson1000
left a comment
LGTM @joewallwork Thanks!
+1 for the "learning path" idea, perhaps open an issue and label with hackathon?
Okay great, will merge.
Opened #568.
Closes #483.
Closes #213.
This PR demonstrates that it's already possible to differentiate through calls to
torch_model_forward in FTorch.

Checklist
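On the "result of ones" mentioned in the review: for a trivial model that returns its inputs unchanged, the derivative of the summed output with respect to each input is exactly 1. A minimal pure-Python sketch with a finite-difference check (not the actual FTorch test, which goes through torch_model_forward):

```python
# Check numerically that d/dx_i [ sum(model(x)) ] = 1 for each input
# when the model is a trivial pass-through, i.e. the "result of ones".

def model(x):
    # Stand-in for a trivial forward pass that returns its inputs unchanged.
    return list(x)

def scalar_output(x):
    # Reduce the model output to a scalar so the gradient is well defined.
    return sum(model(x))

def numeric_grad(f, x, eps=1e-6):
    # Central finite differences: (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

x = [0.3, -1.2, 4.0]
g = numeric_grad(scalar_output, x)
print([round(v, 6) for v in g])  # each entry is 1.0
```

A less trivial example (a net with trained weights) would give non-trivial gradients, which is part of the motivation for the restructuring discussed above.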