FNO parameters as SciML structures #59

@SCiarella

Description

Describe the bug 🐞

When the parameters of an FNO are converted to a SciML structure using ComponentArray(), type promotion breaks the model.

Expected behavior

I expected to be able to wrap the FNO's parameters in a ComponentArray() so that they can be used with SciMLSensitivity.

Minimal Reproducible Example 👇

using NeuralOperators
using SciMLStructures
using ComponentArrays
using Lux
using Random

rng = Random.default_rng()

fno = FourierNeuralOperator(
    Lux.gelu;               # activation function
    chs=(2, 64, 64, 64, 2), # channel sizes
    modes=(4, 4),           # number of Fourier modes to retain
    permuted=Val(true)      # data layout: columns are observations
)
ps, st = Lux.setup(rng, fno);

@info SciMLStructures.isscimlstructure(ps)

cps = ComponentArray(ps)

@info SciMLStructures.isscimlstructure(cps)

x = rand(Float32, 16, 16, 2, 8)
test_output = Lux.apply(fno, x, ps, st)[1]   # works with the NamedTuple parameters

test_output = Lux.apply(fno, x, cps, st)[1]  # fails with the ComponentArray

Error & Stacktrace ⚠️

ERROR: LoadError: MethodError: no method matching realfloat(::Array{ComplexF32, 4})
The function `realfloat` exists, but no method is defined for this combination of argument types.

Closest candidates are:
  realfloat(::StridedArray{<:Union{Float32, Float64}})
   @ AbstractFFTs ~/.julia/packages/AbstractFFTs/4iQz5/src/definitions.jl:42
  realfloat(::AbstractArray{T}) where T<:Real
   @ AbstractFFTs ~/.julia/packages/AbstractFFTs/4iQz5/src/definitions.jl:49

Stacktrace:
  [1] plan_rfft(x::Array{ComplexF32, 4}, region::UnitRange{Int64}; kws::@Kwargs{})
    @ AbstractFFTs ~/.julia/packages/AbstractFFTs/4iQz5/src/definitions.jl:221
  [2] rfft(x::Array{ComplexF32, 4}, region::UnitRange{Int64})
    @ AbstractFFTs ~/.julia/packages/AbstractFFTs/4iQz5/src/definitions.jl:67
  [3] transform(ft::FourierTransform{ComplexF32, Tuple{Int64, Int64}}, x::Array{ComplexF32, 4})
    @ NeuralOperators ~/.julia/packages/NeuralOperators/WUDL3/src/transform.jl:25
  [4] operator_conv(x::Array{ComplexF32, 4}, tform::FourierTransform{ComplexF32, Tuple{Int64, Int64}}, weights::Base.ReshapedArray{ComplexF32, 3, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{}})
    @ NeuralOperators ~/.julia/packages/NeuralOperators/WUDL3/src/layers.jl:74
  [5] (::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)})(x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)}}}, st::@NamedTuple{})
    @ NeuralOperators ~/.julia/packages/NeuralOperators/WUDL3/src/layers.jl:62
  [6] apply
    @ ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155 [inlined]
  [7] macro expansion
    @ ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:0 [inlined]
  [8] applyparallel(layers::@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, connection::NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:180
  [9] (::Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing})(x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:178
 [10] apply(model::Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}, x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}})
    @ LuxCore ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155
 [11] (::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}})(x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}})
    @ LuxCore ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:269
 [12] apply
    @ ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155 [inlined]
 [13] macro expansion
    @ ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:0 [inlined]
 [14] applychain(layers::@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)}}}, st::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:511
 [15] (::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing})(x::Array{ComplexF32, 4}, ps::ComponentVector{ComplexF32, SubArray{ComplexF32, 1, Vector{ComplexF32}, Tuple{UnitRange{Int64}}, true}, Tuple{Axis{(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)}}}, st::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:509
 [16] apply
    @ ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155 [inlined]
 [17] macro expansion
    @ ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:0 [inlined]
 [18] applychain(layers::@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing}, layer_3::Chain{@NamedTuple{layer_1::Conv{typeof(NNlib.gelu_tanh), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}}, Nothing}}, x::Array{Float32, 4}, ps::ComponentVector{ComplexF32, Vector{ComplexF32}, Tuple{Axis{(layer_1 = ViewAxis(1:192, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 2, 64))), bias = ViewAxis(129:192, Shaped1DAxis((64,))))), layer_2 = ViewAxis(193:69888, Axis(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)), layer_3 = ViewAxis(69889:74178, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:4290, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 64, 2))), bias = ViewAxis(129:130, Shaped1DAxis((2,))))))))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, 
layer_2::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:511
 [19] (::Chain{@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing}, layer_3::Chain{@NamedTuple{layer_1::Conv{typeof(NNlib.gelu_tanh), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}}, Nothing}}, Nothing})(x::Array{Float32, 4}, ps::ComponentVector{ComplexF32, Vector{ComplexF32}, Tuple{Axis{(layer_1 = ViewAxis(1:192, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 2, 64))), bias = ViewAxis(129:192, Shaped1DAxis((64,))))), layer_2 = ViewAxis(193:69888, Axis(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)), layer_3 = ViewAxis(69889:74178, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:4290, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 64, 2))), bias = ViewAxis(129:130, Shaped1DAxis((2,))))))))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, 
layer_2::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ Lux ~/.julia/packages/Lux/lRugP/src/layers/containers.jl:509
 [20] apply(model::Chain{@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing}, layer_3::Chain{@NamedTuple{layer_1::Conv{typeof(NNlib.gelu_tanh), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}}, Nothing}}, Nothing}, x::Array{Float32, 4}, ps::ComponentVector{ComplexF32, Vector{ComplexF32}, Tuple{Axis{(layer_1 = ViewAxis(1:192, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 2, 64))), bias = ViewAxis(129:192, Shaped1DAxis((64,))))), layer_2 = ViewAxis(193:69888, Axis(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)), layer_3 = ViewAxis(69889:74178, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:4290, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 64, 2))), bias = ViewAxis(129:130, Shaped1DAxis((2,))))))))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, 
layer_2::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ LuxCore ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155
 [21] (::FourierNeuralOperator{Chain{@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing}, layer_3::Chain{@NamedTuple{layer_1::Conv{typeof(NNlib.gelu_tanh), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}}, Nothing}}, Nothing}})(x::Array{Float32, 4}, ps::ComponentVector{ComplexF32, Vector{ComplexF32}, Tuple{Axis{(layer_1 = ViewAxis(1:192, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 2, 64))), bias = ViewAxis(129:192, Shaped1DAxis((64,))))), layer_2 = ViewAxis(193:69888, Axis(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)), layer_3 = ViewAxis(69889:74178, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:4290, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 64, 2))), bias = ViewAxis(129:130, Shaped1DAxis((2,))))))))}}}, st::@NamedTuple{layer_1::@NamedTuple{}, 
layer_2::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ LuxCore ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:269
 [22] apply(model::FourierNeuralOperator{Chain{@NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Chain{@NamedTuple{layer_1::OperatorKernel{Parallel{NeuralOperators.Fix1{typeof(NeuralOperators.add_act), typeof(NNlib.gelu_tanh)}, @NamedTuple{layer_1::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::OperatorConv{Static.True, FourierTransform{ComplexF32, Tuple{Int64, Int64}}, typeof(glorot_uniform)}}, Nothing}}}, Nothing}, layer_3::Chain{@NamedTuple{layer_1::Conv{typeof(NNlib.gelu_tanh), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}, layer_2::Conv{typeof(identity), Int64, Int64, Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}, Int64, Nothing, Nothing, Static.True, Static.False}}, Nothing}}, Nothing}}, x::Array{Float32, 4}, ps::ComponentVector{ComplexF32, Vector{ComplexF32}, Tuple{Axis{(layer_1 = ViewAxis(1:192, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 2, 64))), bias = ViewAxis(129:192, Shaped1DAxis((64,))))), layer_2 = ViewAxis(193:69888, Axis(layer_1 = ViewAxis(1:69696, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:69696, Axis(weight = ViewAxis(1:65536, ShapedAxis((64, 64, 16))),)))),)), layer_3 = ViewAxis(69889:74178, Axis(layer_1 = ViewAxis(1:4160, Axis(weight = ViewAxis(1:4096, ShapedAxis((1, 1, 64, 64))), bias = ViewAxis(4097:4160, Shaped1DAxis((64,))))), layer_2 = ViewAxis(4161:4290, Axis(weight = ViewAxis(1:128, ShapedAxis((1, 1, 64, 2))), bias = ViewAxis(129:130, Shaped1DAxis((2,))))))))}}}, 
st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{layer_1::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}})
    @ LuxCore ~/.julia/packages/LuxCore/q0Mrq/src/LuxCore.jl:155
 [23] top-level scope
    @ ~/Dropbox/eScience_projects/DEEPDIEP-repos/test/bug.jl:26
in expression starting at /home/simone/Dropbox/eScience_projects/DEEPDIEP-repos/test/bug.jl:26

Environment (please complete the following information):

  • Output of using Pkg; Pkg.status()
  [b0b7db55] ComponentArrays v0.15.27
  [b2108857] Lux v1.13.3
  [ea5c82af] NeuralOperators v0.5.3
  [53ae85a6] SciMLStructures v1.7.0
  [9a3f8284] Random v1.11.0
  • Output of using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
  [47edcb42] ADTypes v1.14.0
  [621f4979] AbstractFFTs v1.5.0
  [79e6a3ab] Adapt v4.3.0
  [dce04be8] ArgCheck v2.5.0
  [4fba245c] ArrayInterface v7.19.0
  [a9b6321e] Atomix v1.1.1
  [62783981] BitTwiddlingConvenienceFunctions v0.1.6
  [2a0fbf3d] CPUSummary v0.2.6
  [d360d2e6] ChainRulesCore v1.25.1
  [fb6a15b2] CloseOpenIntervals v0.1.13
  [bbf7d656] CommonSubexpressions v0.3.1
  [f70d9fcc] CommonWorldInvalidations v1.0.0
  [34da2185] Compat v4.16.0
  [b0b7db55] ComponentArrays v0.15.27
  [2569d6c7] ConcreteStructs v0.2.3
  [187b0558] ConstructionBase v1.5.8
  [adafc99b] CpuId v0.3.1
  [163ba53b] DiffResults v1.1.0
  [b552c78f] DiffRules v1.15.1
  [8d63f2c5] DispatchDoctor v0.4.19
  [ffbed154] DocStringExtensions v0.9.4
  [f151be2c] EnzymeCore v0.8.9
  [7a1cc6ca] FFTW v1.8.1
  [9aa1b823] FastClosures v0.3.2
  [f6369f11] ForwardDiff v1.0.1
  [d9f16b24] Functors v0.5.2
  [46192b85] GPUArraysCore v0.2.0
  [076d061b] HashArrayMappedTries v0.2.0
  [615f187c] IfElse v0.1.1
  [92d709cd] IrrationalConstants v0.2.4
  [692b3bcd] JLLWrappers v1.7.0
  [63c18a36] KernelAbstractions v0.9.34
  [10f19ff3] LayoutPointers v0.1.17
  [2ab3a3ac] LogExpFunctions v0.3.29
  [b2108857] Lux v1.13.3
  [bb33d45b] LuxCore v1.2.6
  [82251201] LuxLib v1.8.0
  [7e8f7934] MLDataDevices v1.10.0
  [1914dd2f] MacroTools v0.5.16
  [d125e4d3] ManualMemory v0.1.8
  [872c559c] NNlib v0.9.30
  [77ba4419] NaNMath v1.1.3
  [ea5c82af] NeuralOperators v0.5.3
  [3bd65402] Optimisers v0.4.6
  [f517fe37] Polyester v0.7.18
  [1d0040c9] PolyesterWeave v0.2.2
⌅ [aea7be01] PrecompileTools v1.2.1
  [21216c6a] Preferences v1.4.3
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.3.1
  [94e857df] SIMDTypes v0.1.0
  [53ae85a6] SciMLStructures v1.7.0
  [7e506255] ScopedValues v1.3.0
  [efcf1570] Setfield v1.1.2
  [276daf66] SpecialFunctions v2.5.1
  [aedffcd0] Static v1.2.0
  [0d7ed370] StaticArrayInterface v1.8.0
  [90137ffa] StaticArrays v1.9.13
  [1e83bf80] StaticArraysCore v1.4.3
  [10745b16] Statistics v1.11.1
  [7792a7ef] StrideArraysCore v0.5.7
  [8290d209] ThreadingUtilities v0.5.4
  [013be700] UnsafeAtomics v0.3.0
  [d49dbf32] WeightInitializers v1.1.3
  [f5851436] FFTW_jll v3.3.11+0
  [1d5cc7b8] IntelOpenMP_jll v2025.0.4+0
  [856f044c] MKL_jll v2025.0.1+1
  [efe28fd5] OpenSpecFun_jll v0.5.6+0
  [1317d2d5] oneTBB_jll v2022.0.0+0
  [0dad84c5] ArgTools v1.1.2
  [56f22d72] Artifacts v1.11.0
  [2a0f44e3] Base64 v1.11.0
  [ade2ca70] Dates v1.11.0
  [f43a241f] Downloads v1.6.0
  [7b1f6079] FileWatching v1.11.0
  [9fa8497b] Future v1.11.0
  [b77e0a4c] InteractiveUtils v1.11.0
  [4af54fe1] LazyArtifacts v1.11.0
  [b27032c2] LibCURL v0.6.4
  [76f85450] LibGit2 v1.11.0
  [8f399da3] Libdl v1.11.0
  [37e2e46d] LinearAlgebra v1.11.0
  [56ddb016] Logging v1.11.0
  [d6f4376e] Markdown v1.11.0
  [ca575930] NetworkOptions v1.2.0
  [44cfe95a] Pkg v1.11.0
  [de0858da] Printf v1.11.0
  [9a3f8284] Random v1.11.0
  [ea8e919c] SHA v0.7.0
  [fa267f1f] TOML v1.0.3
  [a4e569a6] Tar v1.10.0
  [cf7118a7] UUIDs v1.11.0
  [4ec0a83e] Unicode v1.11.0
  [e66e0078] CompilerSupportLibraries_jll v1.1.1+0
  [deac9b47] LibCURL_jll v8.6.0+0
  [e37daf67] LibGit2_jll v1.7.2+0
  [29816b5a] LibSSH2_jll v1.11.0+1
  [c8ffd9c3] MbedTLS_jll v2.28.6+0
  [14a3606d] MozillaCACerts_jll v2023.12.12
  [4536629a] OpenBLAS_jll v0.3.27+1
  [05823500] OpenLibm_jll v0.8.5+0
  [83775a58] Zlib_jll v1.2.13+1
  [8e850b90] libblastrampoline_jll v5.11.0+0
  [8e850ede] nghttp2_jll v1.59.0+0
  [3f19e933] p7zip_jll v17.4.0+2
Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`
  • Output of versioninfo()
Julia Version 1.11.5
Commit 760b2e5b739 (2025-04-14 06:53 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 20 × 13th Gen Intel(R) Core(TM) i7-13700H
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, goldmont)
Threads: 20 default, 0 interactive, 10 GC (on 20 virtual cores)
Environment:
  LD_LIBRARY_PATH = /usr/local/cuda/lib64:/usr/local/cuda-11.2/lib64:/usr/local/cuda-11.1/lib64:/usr/local/cuda-11.1:
  JULIA_NUM_THREADS = 20

Additional context

If the parameters are not converted, I cannot use the FNO for sensitivity analysis via SciMLSensitivity.jl, and I get the following error:

  `p` is not a SciMLStructure. This is required for adjoint sensitivity analysis. For more information,
  see the documentation on SciMLStructures.jl for the definition of the SciMLStructures interface.
  In particular, adjoint sensitivities only applies to `Tunable`.

Is there another way to obtain a SciMLStructure?
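For reference, the promotion appears to come from ComponentArray flattening all leaves into a single backing vector with one common element type. Since the spectral-convolution weights are ComplexF32, every parameter (including the real Conv weights) gets promoted to ComplexF32, so the input becomes complex by the time it reaches rfft, which only accepts real arrays. A minimal sketch of the promotion, independent of NeuralOperators (the field names here are made up for illustration):

```julia
using ComponentArrays

# Mixed real/complex leaves, analogous to the FNO parameter tree
ps = (conv_weight = Float32[1.0, 2.0],
      spectral_weight = ComplexF32[1.0f0 + 0.0f0im])
cps = ComponentArray(ps)

# The backing vector takes the promoted element type,
# so the originally real weights are now complex too:
eltype(cps)             # ComplexF32
eltype(cps.conv_weight) # ComplexF32
```

This matches the stacktrace above, where ps is a ComponentVector{ComplexF32} even for the plain Conv layers.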
