
Conversation

@lkdvos
Member

@lkdvos lkdvos commented Jan 20, 2026

This PR ports a number of our ChainRules rules over to their Mooncake equivalents.

In particular, I am trying to identify the core computational routines and write the rules for those, rather than blindly translating the same methods. For example, in ChainRules we overload rules for `*(::Number, ::AbstractTensorMap)`, while in Mooncake we simply define a rule for `scale!(::AbstractTensorMap, ::Number)`.
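As a hedged, language-agnostic illustration of why a single rule on the core scaling routine suffices, here is a minimal Python sketch (plain lists stand in for tensors; none of these names are the actual TensorKit/Mooncake API). The pullback of y = alpha * x yields cotangents for both x and alpha, so any method that lowers to scaling inherits the rule for free:

```python
# Hedged sketch (not TensorKit/Mooncake code): reverse-mode rule for the
# core scaling routine y = alpha * x, using plain Python lists as "tensors".

def scale(alpha, x):
    """Primal: y = alpha * x, elementwise."""
    return [alpha * xi for xi in x]

def scale_pullback(alpha, x, dy):
    """Given the cotangent dy of y = alpha * x:
    dx     = alpha * dy   (y is linear in x)
    dalpha = <x, dy>      (inner product, since dy/dalpha = x)
    """
    dx = [alpha * dyi for dyi in dy]
    dalpha = sum(xi * dyi for xi, dyi in zip(x, dy))
    return dalpha, dx
```

Any out-of-place method such as `*(::Number, ::AbstractTensorMap)` can then be expressed as allocate-then-scale, and its derivative follows from this one rule.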

To do:

  • Index manipulations
  • VectorInterface
  • LinearAlgebra
  • TensorOperations
  • PlanarOperations

@codecov

codecov bot commented Jan 20, 2026

Codecov Report

❌ Patch coverage is 79.22438% with 75 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| ext/TensorKitMooncakeExt/planaroperations.jl | 0.00% | 38 Missing ⚠️ |
| ext/TensorKitMooncakeExt/indexmanipulations.jl | 79.54% | 36 Missing ⚠️ |
| ext/TensorKitMooncakeExt/linalg.jl | 97.50% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| ext/TensorKitMooncakeExt/TensorKitMooncakeExt.jl | 100.00% <ø> (ø) |
| ext/TensorKitMooncakeExt/tangent.jl | 100.00% <100.00%> (ø) |
| ext/TensorKitMooncakeExt/tensoroperations.jl | 100.00% <100.00%> (+1.92%) ⬆️ |
| ext/TensorKitMooncakeExt/utility.jl | 71.42% <100.00%> (+28.57%) ⬆️ |
| ext/TensorKitMooncakeExt/vectorinterface.jl | 100.00% <100.00%> (ø) |
| src/fusiontrees/manipulations.jl | 86.30% <100.00%> (ø) |
| ext/TensorKitMooncakeExt/linalg.jl | 98.00% <97.50%> (+98.00%) ⬆️ |
| ext/TensorKitMooncakeExt/indexmanipulations.jl | 79.54% <79.54%> (ø) |
| ext/TensorKitMooncakeExt/planaroperations.jl | 0.00% <0.00%> (ø) |

... and 2 files with indirect coverage changes


@kshyatt
Member

kshyatt commented Jan 21, 2026

I can likely pick up some of the linalg ones, if you like.

@lkdvos
Member Author

lkdvos commented Jan 21, 2026

I'll keep my progress committed and pushed, so feel free to push if you have something. If not, I'll just gradually keep adding rules whenever I'm waiting on other tests, so it shouldn't be a huge issue either way.

@lkdvos lkdvos marked this pull request as ready for review January 22, 2026 13:26
@lkdvos lkdvos requested review from Jutho and kshyatt January 22, 2026 13:26
Member

@kshyatt kshyatt left a comment


One comment: it might be nice to put some of these pullbacks into shared files, like we did with MAK and TO, so that if/when we add Enzyme support we can do so with a light touch.

@lkdvos
Member Author

lkdvos commented Jan 22, 2026

I definitely agree that it would be nicer to put this in a better form, but would you be okay with leaving that for a follow-up PR?
I have already tried separating out some of the functions, so it should become easier to migrate this in the future.

What is preventing me from actually pulling this through is that I am now also altering the primal computation in some places, specifically for constructions involving alpha.
The idea is that for a computation `f!(out, args..., alpha, beta) = beta * out + alpha * f(args...)`, the pullback with respect to alpha can be derived from `f(args...)` alone. I can therefore change the primal computation to `f!_mod(out, args..., alpha, beta) = add!(out, f(args...), alpha, beta)` and store the intermediate result `f(args...)`. In other words, at the cost of one additional allocation and an in-place `add!`, I avoid having to recompute `f` in the reverse pass; this is only done when dalpha is required. (See e.g. the rule for `mul!`.)
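The caching trick can be sketched numerically as follows. This is a minimal, hedged Python illustration with made-up names (`f`, `f_mod`, `dalpha_from_cache` are hypothetical, and plain lists stand in for tensors); it is not the actual rule code, only the shape of the idea:

```python
# Hedged sketch: cache f(args...) in the primal so the reverse pass can
# obtain d(alpha) without recomputing f.
# Primal contract: out <- beta * out + alpha * f(x)

def f(x):
    """Stand-in for an expensive core computation (elementwise square)."""
    return [xi * xi for xi in x]

def f_mod(out, x, alpha, beta):
    """Modified primal: compute t = f(x) once, perform the update
    out = beta*out + alpha*t, and return t as the cached intermediate."""
    t = f(x)
    new_out = [beta * o + alpha * ti for o, ti in zip(out, t)]
    return new_out, t

def dalpha_from_cache(t, dout):
    """Reverse pass: d(alpha) = <dout, f(x)> = <dout, t>; no call to f."""
    return sum(di * ti for di, ti in zip(dout, t))
```

The trade-off is exactly as described above: one extra allocation for `t` in the forward pass buys a reverse pass for dalpha that never re-runs the expensive computation.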

Without actually having the Enzyme code next to it, it's a bit hard to already come up with the correct abstractions to make sure this works for both engines, and I want to avoid having to do that work twice.

It would also be nice to immediately overload the TensorOperations functions, but these haven't been released yet (and I would like to play a similar trick there, but haven't gotten around to that).

@kshyatt
Member

kshyatt commented Jan 22, 2026

> I definitely agree that it would be nicer to put this in a better form, but would you be okay with leaving that for a follow-up PR?

Yeah, that sounds fine; just separating things into discrete functions is great already.

@lkdvos lkdvos enabled auto-merge (squash) January 22, 2026 15:53
@lkdvos lkdvos requested a review from kshyatt January 23, 2026 16:14
