
Multiple sources for user-provided values #275

@JBludau

We currently have multiple sources for user-provided input values (this came up in the wake of #274).

In https://github.com/ORNL/CabanaPD/blob/main/src/CabanaPD_Input.hpp we parse the JSON file we get from the user and use the values to compute other constants we need for the simulation. For a single material it is reasonable to assume that the values in the input file are the ones used for the models in the simulation. In the case of multiple materials, however, this is not so straightforward.
Examples:

  • Multiple materials that all share the same density (e.g. we want to model impurities, microscopic structure, ...). Only one density might then appear in the input file, but we need to check whether all models in ForceModels actually use this value. Additionally, the user could derive another value from that density for a different material (e.g. due to porosity). See the sketch after this list.
  • For our critical timestep calculation we need to know the min/max of some material constants. If we just assume that the lists given in the JSON file map one-to-one onto the models (e.g. rho[0] and cp[0] are assigned to models[0], etc.), we unnecessarily restrict CabanaPD or make it complicated for the user to always keep the sets consistent.
  • Even with a one-to-one mapping we have a problem with the models we create by averaging two simple models. We would need to know how the models mixing two base models are created, and how many of them there will be. And if we want consistent parameter sets, we should at minimum require the number of parameters to always equal the number of models (which requires the JSON to stay in sync with the C++ implementation at all times).
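As a concrete illustration of the mapping problem, here is a minimal sketch using nlohmann::json; the field names and values are made up and only stand in for the real input schema:

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

int main()
{
    // Hypothetical multi-material input: two bulk moduli but only one
    // density, because both materials happen to share it.
    auto inputs = nlohmann::json::parse( R"({
        "density": [ 3980.0 ],
        "bulk_modulus": [ 250.0e9, 140.0e9 ]
    })" );

    // A strict one-to-one indexing assumption (rho[i] belongs to models[i])
    // already breaks for this perfectly reasonable input.
    if ( inputs["density"].size() != inputs["bulk_modulus"].size() )
        std::cout << "Input arrays do not map one-to-one onto models.\n";
    return 0;
}
```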

What can we do:
I think we should check everything only on the C++ side, essentially based on the ForceModel in the Solver. We can still do input value checking, but we should only verify that the set of classes we end up using for the computation is consistent, not that the construction of those classes maps one-to-one onto the JSON file.
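A minimal sketch of what such a C++-side check could look like; the model types and the density() accessor are hypothetical, but the point is that bounds for, e.g., the critical timestep come from the models the Solver actually holds, not from the raw JSON arrays:

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <tuple>
#include <utility>

// Hypothetical stand-ins for force models; the real classes live in CabanaPD.
struct ModelA { double rho; double density() const { return rho; } };
struct ModelB { double rho; double density() const { return rho; } };

// Min/max density over whatever models the Solver holds, independent of how
// they were constructed from the input file.
template <typename... Models>
std::pair<double, double> densityBounds( const std::tuple<Models...>& models )
{
    double lo = std::numeric_limits<double>::max();
    double hi = std::numeric_limits<double>::lowest();
    std::apply(
        [&]( const auto&... m ) {
            ( ( lo = std::min( lo, m.density() ),
                hi = std::max( hi, m.density() ) ),
              ... );
        },
        models );
    return { lo, hi };
}

int main()
{
    std::tuple models{ ModelA{ 3980.0 }, ModelB{ 2700.0 } };
    auto [lo, hi] = densityBounds( models );
    std::cout << "density range: [" << lo << ", " << hi << "]\n";
}
```

The same pattern extends to consistency checks (e.g. asserting that all models sharing a material use the same density), since the check sees the final set of model objects rather than the input lists.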
Furthermore, we might run into very small values in some cases anyway (e.g. a case with small structures, where the problem size is already around 1e-4, which chews away the precision we have in double). It would therefore be a nice extension to make the solver and all computation independent of the physical scale of the problem. We could introduce something like a UnitConverter that takes a set of input values and scales all of them by the characteristic length, time, temperature, etc. of the problem. All computation is then done in a consistent but dimensionless set of parameters and transformed back to the physical scales at output time.
A UnitConverter would let us solve both the multiple-sources problem and the scaling problem in one change that extends the capability of CabanaPD.
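A rough sketch of what such a UnitConverter could look like; the member names and the choice of base quantities (length, time, mass) are assumptions for illustration, not a design decision:

```cpp
// Scales physical inputs into a dimensionless system and back. With, say,
// length = 1e-4 for the small-structure case above, all positions stay O(1)
// inside the solver and only the output path multiplies the scales back in.
struct UnitConverter
{
    double length; // characteristic length [m]
    double time;   // characteristic time   [s]
    double mass;   // characteristic mass   [kg]

    // Physical -> dimensionless, applied once when inputs are read.
    double scaleLength( double x ) const { return x / length; }
    double scaleDensity( double rho ) const
    {
        return rho * ( length * length * length ) / mass;
    }
    double scaleVelocity( double v ) const { return v * time / length; }

    // Dimensionless -> physical, applied at output time.
    double unscaleLength( double x ) const { return x * length; }
    double unscaleVelocity( double v ) const { return v * length / time; }
};
```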

@streeve @pabloseleson Thoughts?
