
Use Adapt.jl for backend array conversion#236

Merged
michel2323 merged 1 commit into main from adapt-backend
Mar 10, 2026

Conversation

@michel2323
Member

@sshin23, this is equivalent to #235, but uses Adapt.jl.

@michel2323 michel2323 requested a review from sshin23 March 2, 2026 15:50
@michel2323
Member Author

michel2323 commented Mar 2, 2026

The macOS test is failing because `findall` is not implemented in the backend. However, it worked before, so I wonder which backend the Mac tests were actually using. One should either use Metal.jl for the GPU or plain CPU without OpenCL; OpenCL is deprecated on macOS.

@michel2323
Member Author

@sshin23 Previously, the macOS runner was set to macos-latest and x64, and it used OpenCL. This has two issues:

  • macos-latest + x64 runs Julia under Rosetta emulation
  • OpenCL is officially deprecated on macOS

I added a macOS runner set to macos-latest and aarch64, so Julia now runs natively on Apple silicon. The backend is set to Metal.jl, which is supported through KernelAbstractions.jl (KA).

@michel2323 michel2323 force-pushed the adapt-backend branch 2 times, most recently from 0547a3a to 4f493ce on March 3, 2026 14:49
@michel2323
Member Author

Yeah, Metal doesn't support Float64 😕.

@klamike
Contributor

klamike commented Mar 3, 2026

I gave Metal support a shot a while ago. I don't think it's complete, but maybe it can be useful for you: main...klamike:ExaModels.jl:mk/metal

@michel2323
Member Author

> I gave metal support a shot a while ago. I don't think it's complete but maybe it can be useful for you: main...klamike:ExaModels.jl:mk/metal

I've reverted that in this PR. Using Metal should work out of the box, but getting the tests to work with Float32 seems hard.

@sshin23
Collaborator

sshin23 commented Mar 3, 2026

We have some NaN values (which are Float64 literals) in our code that will cause issues:
https://github.com/search?q=repo%3Aexanauts%2FExaModels.jl%20NaN&type=code
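For context, here is a minimal illustration of the issue (not code from the PR): the bare literal `NaN` is always Float64, so using it in otherwise-Float32 code promotes results to Float64, which Metal does not support. An eltype-generic `convert` avoids the promotion.

```julia
# The bare literal NaN is Float64; mixing it into Float32 code promotes
# results to Float64, which a Float32-only backend like Metal cannot run.
x = Float32[1.0, 2.0]

bad  = NaN                        # Float64 literal
good = convert(eltype(x), NaN)    # follows the array's eltype, here Float32
```

This is roughly the pattern behind the Float32 casts mentioned below: derive the element type from the data instead of hard-coding a Float64 literal.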

@klamike
Contributor

klamike commented Mar 3, 2026

Nice! Yeah, it looks like most of those changes were just casting to Float32 all over the place.

@sshin23
Collaborator

sshin23 commented Mar 3, 2026

I wonder what the benefit of adapt is compared to using allocate. Given that we need an extremely simple feature, I'm more inclined to just use allocate, as in #235, to avoid one more dependency.

@michel2323
Member Author

michel2323 commented Mar 4, 2026

> I wonder what the benefit of adapt is compared to using allocate. Given that we need an extremely simple feature, I'm more inclined to just using allocate as in #235 to reduce one dependency

ExaModels.convert_array(v, backend) = KernelAbstractions.allocate(backend, eltype(v), length(v))

allocate allocates uninitialized memory, whereas adapt is closer to convert: it performs the appropriate conversion for the target backend and preserves the values. In addition, adapt is supposed to preserve properties of the original array, such as unified, device, or host memory. It is also a no-op when the array is already on the right backend.

Adapt is a dependency of all the GPU packages and of KA. It's a very lightweight package that essentially just defines an API.
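To make the contrast concrete, here is a minimal sketch of the Adapt.jl mechanism. `MyBackend`, `MyBackendArray`, and the `adapt_storage` method are hypothetical stand-ins for a real GPU backend; only `Adapt.adapt` and `Adapt.adapt_storage` are the real API.

```julia
using Adapt

struct MyBackend end                        # hypothetical device backend

struct MyBackendArray{T,N} <: AbstractArray{T,N}
    data::Array{T,N}                        # hypothetical device storage
end
Base.size(a::MyBackendArray) = size(a.data)
Base.getindex(a::MyBackendArray, i::Int...) = a.data[i...]

# Teach Adapt how to move plain Arrays onto MyBackend:
Adapt.adapt_storage(::MyBackend, x::Array) = MyBackendArray(x)

v = [1.0, 2.0, 3.0]
w = adapt(MyBackend(), v)   # conversion to the backend: values are preserved
u = adapt(Array, v)         # already a host Array: adapt is a no-op (u === v)
```

Unlike `KernelAbstractions.allocate`, which only returns uninitialized memory of the right type, `adapt` carries the values across and short-circuits when no conversion is needed.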

@sshin23
Collaborator

sshin23 commented Mar 4, 2026

Great. Thank you, @michel2323!

@michel2323
Member Author

Maybe wait for JuliaGPU/OpenCL.jl#423 so we can remove OpenCL too.

@michel2323
Member Author

The OpenCL PR is taking too long. Merging.

@michel2323 michel2323 merged commit 9059fe8 into main Mar 10, 2026
25 of 27 checks passed