62 commits
20ecbb4
Removed ExternalProject_Add for faster test builds
keiwanjamaly Nov 7, 2025
e4d450b
fix: mark JSONValue reference as maybe unused
keiwanjamaly Nov 7, 2025
d87adbb
feat: add ConfigurationMesh and GridAxis classes with associated tests
keiwanjamaly Nov 10, 2025
ecda7f6
restored CMakeLists.txt changes
keiwanjamaly Nov 13, 2025
e4c7946
feat: add default range and JSON configuration for GridAxis
keiwanjamaly Nov 14, 2025
21587fb
Set up the structure creation of a Cached_Data vector
keiwanjamaly Nov 20, 2025
30daa54
feat: enhance Kurganov-Tadmor implementation with ghost layer and bou…
keiwanjamaly Dec 2, 2025
140ae86
feat: enhance Kurganov-Tadmor implementation with ghost layer and bou…
keiwanjamaly Dec 2, 2025
10de82a
Merge branch 'feature/Kurganov_Tadmor' of github.com:satfra/DiFfRG_pr…
keiwanjamaly Dec 2, 2025
9fec166
computed reconstruction of derivatives
keiwanjamaly Dec 2, 2025
333265f
feat: suppress unused parameter warnings in mass, flux, source, and i…
keiwanjamaly Dec 3, 2025
67342e5
I'm just commiting this, because I got an error, and i hope, I don't …
keiwanjamaly Dec 5, 2025
37b44cb
feat: add flux derivative computation in Kurganov-Tadmor implementati…
keiwanjamaly Dec 16, 2025
80aa098
feat: enhance Kurganov-Tadmor with dx computation and update tests
keiwanjamaly Dec 17, 2025
3724873
feat: refactor Kurganov-Tadmor tests into a structured pipeline with …
keiwanjamaly Dec 17, 2025
f080712
feat: add flux and flux derivative computations in Kurganov-Tadmor an…
keiwanjamaly Dec 17, 2025
c49b029
fix: change mesh_config from reference to value in RectangularMesh class
keiwanjamaly Jan 20, 2026
b65f52e
fix: PhysicalParameters initialization
keiwanjamaly Jan 20, 2026
f2a369b
feat: update run function to also support no-adaptivity in explicit E…
keiwanjamaly Jan 20, 2026
7e378e8
fix: mark unused variable to avoid compiler warnings
keiwanjamaly Jan 20, 2026
5da64db
fix/added HDF5 as a depency to the project to fix #27
keiwanjamaly Feb 13, 2026
5a911e9
fix: removed warning, that const qualifyer had no effect
keiwanjamaly Feb 13, 2026
a00bc1f
fixup! fix/added HDF5 as a depency to the project to fix #27
keiwanjamaly Feb 13, 2026
912f39f
feat: added explicit euler for Kurganov-Tadmor computation
keiwanjamaly Feb 13, 2026
722d99c
feat: added explicit euler for Kurganov-Tadmor computation
keiwanjamaly Feb 13, 2026
8073bfb
feat: added diffusion flux and corresponding tests
keiwanjamaly Feb 17, 2026
6d0be90
Merge branch 'feature/Kurganov_Tadmor' of github.com:satfra/DiFfRG_pr…
keiwanjamaly Feb 17, 2026
f4e382f
fix: broken code after merge
keiwanjamaly Feb 17, 2026
4599d43
refactor: deleted unused imports
keiwanjamaly Feb 17, 2026
3500f67
feat: implemented solving multicomponent flow equations with the Kurg…
keiwanjamaly Feb 18, 2026
6c6e094
refactor: remove unnecessary comments in AbstractModel class
keiwanjamaly Feb 18, 2026
9c28e96
refactor: optimize mass matrix sparsity pattern creation in KurganovT…
keiwanjamaly Feb 19, 2026
7f1295c
refactor: remove commented-out mesh loop code in KurganovTadmor
keiwanjamaly Feb 19, 2026
276ef38
refactor: clean up mass function implementation in KurganovTadmor
keiwanjamaly Feb 19, 2026
1dd722a
feat: a simple version of the build_sparcity pattern without amr
keiwanjamaly Feb 24, 2026
48f800b
feat: added jacobian_mass
keiwanjamaly Feb 24, 2026
c408748
feat: implemented jacobian source and mass computation
keiwanjamaly Feb 24, 2026
8eae079
implementation of jacobian
keiwanjamaly Feb 26, 2026
f01bb41
refactor: limiter is now a templated strategy pattern
keiwanjamaly Mar 4, 2026
9d52de9
refactor: gradient reconstruction is a strategy which injects limiter
keiwanjamaly Mar 4, 2026
c4b6947
feat: add tests for MinMod limiter and TVD reconstructor functionality
keiwanjamaly Mar 4, 2026
64f40bc
Add reconstruct_u_derivative and compute_gradient_derivative to FV re…
keiwanjamaly Mar 5, 2026
a854937
docs: note non-locality problem in FV face Jacobian assembly
keiwanjamaly Mar 10, 2026
4288d9e
refactor: introduce CellData/NeighborData structs to expose DOF indices
keiwanjamaly Mar 10, 2026
6980c3c
chore: add .claude and .opencode to .gitignore
keiwanjamaly Mar 10, 2026
35c4c3f
feat: collect neighbor DOF indices into from_dofs for FV face Jacobian
keiwanjamaly Mar 13, 2026
e7500e6
refactor: move CellData and NeighborData to internal namespace in FV …
keiwanjamaly Mar 17, 2026
112679c
feat: implement tag_cell_dofs and make_tagged_neighbors for FV face J…
keiwanjamaly Mar 18, 2026
4134440
feat: add KT numerical flux Jacobian and flux Jacobian/Hessian utilities
keiwanjamaly Mar 20, 2026
facbfda
feat: added implicit advection flux contribution
keiwanjamaly Apr 2, 2026
ddcc1e7
feat: enable tagging for discovered Catch2 tests
keiwanjamaly Apr 2, 2026
c3a841b
refactor: improve test case formatting and logging in assembler Jacob…
keiwanjamaly Apr 3, 2026
a26a345
Move FV reconstructor dimension to class template
keiwanjamaly Apr 3, 2026
7fb48b2
feat: add gradient computation utilities for reconstruction at points
keiwanjamaly Apr 7, 2026
35ba455
feat: enhance diffusion flux Jacobian computation and update related …
keiwanjamaly Apr 7, 2026
d172b85
feat: enhance diffusion flux Jacobian computation and update related …
keiwanjamaly Apr 7, 2026
9b8a722
fix: removed fragile tests
keiwanjamaly Apr 7, 2026
936896c
Add SUNDIALS IDA coverage for KT assembler
keiwanjamaly Apr 9, 2026
06abce5
feat: implement FV boundary strategies and corresponding tests
keiwanjamaly Apr 9, 2026
d049517
Merge branch 'feature/Kurganov_Tadmor' into feature/kt-sundials-ida
keiwanjamaly Apr 9, 2026
5a5fcb4
feat: add .codex/ to .gitignore
keiwanjamaly Apr 9, 2026
d33e693
Merge remote-tracking branch 'origin/main' into feature/Kurganov_Tadmor
keiwanjamaly Apr 10, 2026
8 changes: 8 additions & 0 deletions .clangd
@@ -0,0 +1,8 @@
CompileFlags:
CompilationDatabase: build
---
If:
PathMatch:
- DiFfRG/build/.*
Index:
Background: Skip
3 changes: 3 additions & 0 deletions .gitignore
@@ -10,6 +10,9 @@ profile/
.cache/
.venv/
.vscode/
.claude/
.codex/
.opencode/

## Core latex/pdflatex auxiliary files:
*.aux
14 changes: 14 additions & 0 deletions DiFfRG/include/DiFfRG/common/tuples.hh
@@ -76,15 +76,29 @@ namespace DiFfRG
named_tuple(tuple_type &&t) : tuple(t) {}
named_tuple(tuple_type &t) : tuple(t) {}

/**
@brief Convert from a tuple to a named_tuple.

This is typically used to create a named_tuple with AutoDiff types.
*/
template <typename... T> static constexpr auto as(std::tuple<T...> &&tup)
{
return named_tuple<std::tuple<T...>, tuple_names>(tup);
}

/**
@brief Convert from a tuple to a named_tuple.

This is typically used to create a named_tuple with AutoDiff types.
*/
template <typename... T> static constexpr auto as(std::tuple<T...> &tup)
{
return named_tuple<std::tuple<T...>, tuple_names>(tup);
}
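// Hedged usage sketch (not part of this header): `AD` stands for any AutoDiff
// scalar type and the names "u", "v" are purely illustrative.
//
//   using names = StringSet<"u", "v">;
//   std::tuple<AD, AD> plain_tuple = /* AutoDiff-valued computation */;
//   auto nt = named_tuple<std::tuple<double, double>, names>::as(std::move(plain_tuple));
//   // nt is a named_tuple over std::tuple<AD, AD> that reuses the names of the
//   // double-valued named_tuple type it was called on.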

/**
* @brief Get the index of the element with the given name.
*/
static consteval size_t get_idx(const char *name)
{
size_t running_sum = 0;
2 changes: 1 addition & 1 deletion DiFfRG/include/DiFfRG/discretization/FEM/ldg.hh
@@ -58,7 +58,7 @@ namespace DiFfRG
}
const auto &get_dof_handler(const uint i = 0) const { return *(dof_handler[i]); }
auto &get_dof_handler(const uint i = 0) { return *(dof_handler[i]); }
const auto get_dof_handler_list() const
auto get_dof_handler_list() const
{
std::vector<const DoFHandler<dim> *> ret;
for (uint i = 0; i < Components::count_fe_subsystems(); ++i)
1,107 changes: 995 additions & 112 deletions DiFfRG/include/DiFfRG/discretization/FV/assembler/KurganovTadmor.hh

Large diffs are not rendered by default.

@@ -0,0 +1,99 @@
#pragma once

#include <array>
#include <autodiff/forward/real/real.hpp>
#include <cstddef>
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>
#include <tuple>

#include <DiFfRG/discretization/FV/assembler/flux_ties.hh>

namespace DiFfRG
{
namespace FV
{
namespace KurganovTadmor
{
namespace internal
{
template <typename NumberType, size_t n_components>
using JacobianMatrix = std::array<std::array<NumberType, n_components>, n_components>;

template <typename NumberType, int dim, size_t n_components>
using HessianTensor =
std::array<std::array<std::array<std::array<NumberType, n_components>, n_components>, n_components>, dim>;

/// Compute the Jacobian and full Hessian of the advection flux at a given state u
/// using second-order forward-mode autodiff with the polarization identity for off-diagonal entries.
template <typename Model, typename NumberType, int dim, size_t n_components>
auto compute_flux_jacobian_and_hessian(const std::array<NumberType, n_components> &u,
const dealii::Point<dim> &x_q, const Model &model)
{
using ADNumberType = autodiff::Real<2, NumberType>;
using autodiff::detail::seed;
using autodiff::detail::derivative;

auto unseed = [](ADNumberType &x) { seed<1>(x, NumberType(0)); };

std::array<ADNumberType, n_components> u_AD{};
for (size_t i = 0; i < n_components; ++i)
u_AD[i] = ADNumberType(u[i]);

std::array<dealii::Tensor<1, dim, NumberType>, n_components> F{};
std::array<JacobianMatrix<NumberType, n_components>, dim> J{};
HessianTensor<NumberType, dim, n_components> H{};

std::array<dealii::Tensor<1, dim, ADNumberType>, n_components> F_AD{};

// Pass 0: Evaluate the flux with clean (unseeded) AD variables to get the plain flux values.
model.KurganovTadmor_advection_flux(F_AD, x_q, advection_flux_tie(u_AD));
for (size_t i = 0; i < n_components; ++i)
for (size_t d = 0; d < dim; ++d)
F[i][d] = autodiff::detail::val(F_AD[i][d]);

// Pass 1: Compute Jacobian columns and diagonal Hessian entries H[j][j].
// Seed one variable at a time.
for (size_t j = 0; j < n_components; ++j) {
seed<1>(u_AD[j], NumberType(1));

model.KurganovTadmor_advection_flux(F_AD, x_q, advection_flux_tie(u_AD));

for (size_t d = 0; d < dim; ++d)
for (size_t i = 0; i < n_components; ++i) {
J[d][i][j] = derivative<1>(F_AD[i][d]);
H[d][i][j][j] = derivative<2>(F_AD[i][d]);
}

unseed(u_AD[j]);
}

// Pass 2: Compute off-diagonal Hessian entries (c > j) via the polarization identity:
// H[j][c] = (d²F/(e_j+e_c)² - H[j][j] - H[c][c]) / 2, H[c][j] = H[j][c]
// All diagonal entries H[j][j] and H[c][c] are available from pass 1.
for (size_t j = 0; j < n_components; ++j) {
for (size_t c = j + 1; c < n_components; ++c) {
seed<1>(u_AD[j], NumberType(1));
seed<1>(u_AD[c], NumberType(1));

model.KurganovTadmor_advection_flux(F_AD, x_q, advection_flux_tie(u_AD));

for (size_t d = 0; d < dim; ++d)
for (size_t i = 0; i < n_components; ++i) {
const NumberType cross =
(derivative<2>(F_AD[i][d]) - H[d][i][j][j] - H[d][i][c][c]) / NumberType(2);
H[d][i][j][c] = H[d][i][c][j] = cross;
}

unseed(u_AD[j]);
unseed(u_AD[c]);
}
}

return std::make_tuple(F, J, H);
}

} // namespace internal
} // namespace KurganovTadmor
} // namespace FV
} // namespace DiFfRG
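For reference, the pass-2 "polarization" step is the standard second-directional-derivative argument: seeding both u_j and u_c evaluates the flux along u + ε(e_j + e_c), so for each direction d and component i (writing H_{jc} for H[d][i][j][c])

\[
\left.\frac{d^{2}}{d\varepsilon^{2}}\right|_{\varepsilon=0} F_{i,d}\bigl(u + \varepsilon(e_j + e_c)\bigr)
= H_{jj} + 2\,H_{jc} + H_{cc}
\quad\Longrightarrow\quad
H_{jc} = \tfrac{1}{2}\bigl(D^{2}_{e_j+e_c}F_{i,d} - H_{jj} - H_{cc}\bigr),
\]

which is exactly the `cross` value stored symmetrically into H[d][i][j][c] and H[d][i][c][j] above.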
26 changes: 26 additions & 0 deletions DiFfRG/include/DiFfRG/discretization/FV/assembler/flux_ties.hh
@@ -0,0 +1,26 @@
#pragma once

#include <DiFfRG/common/tuples.hh>
#include <tuple>

namespace DiFfRG
{
namespace FV
{
namespace KurganovTadmor
{
namespace internal
{
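/**
 * @brief Wrap references to the given arguments into a named_tuple exposing them under the
 *        single name "fe_functions"; this is the argument shape passed to the model's
 *        KurganovTadmor_advection_flux calls.
 */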
template <typename... T> auto advection_flux_tie(T &&...t)
{
return named_tuple<std::tuple<T &...>, StringSet<"fe_functions">>(std::tie(t...));
}

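/**
 * @brief Wrap references to the given arguments into a named_tuple exposing them under the
 *        names "fe_functions" and "fe_derivatives" (typically the solution values and their
 *        reconstructed derivatives).
 */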
template <typename... T> auto flux_tie(T &&...t)
{
return named_tuple<std::tuple<T &...>, StringSet<"fe_functions", "fe_derivatives">>(std::tie(t...));
}
} // namespace internal
} // namespace KurganovTadmor
} // namespace FV
} // namespace DiFfRG
@@ -0,0 +1,47 @@
#pragma once

// standard library
#include <concepts>

namespace DiFfRG
{
namespace def
{
/**
* @brief Concept that any slope-limiter mixin must satisfy.
*
* A limiter class must provide a static @c slope_limit method
* accepting two slopes and returning a limited slope value.
* The assembler uses this concept to produce clear compile
* errors when a model does not provide a limiter.
*
* New limiters only need to satisfy this concept:
* @code
* class MyLimiter
* {
* public:
* template <typename NumberType>
* static NumberType slope_limit(const NumberType &du_1,
* const NumberType &du_2)
* {
* // …
* }
* };
* static_assert(HasSlopeLimiter<MyLimiter>);
* @endcode
*/
template <typename T>
concept HasSlopeLimiter = requires(double a, double b) {
{ T::slope_limit(a, b) } -> std::convertible_to<double>;
};

namespace limiter_utils
{
/**
* @brief Signum helper: returns -1, 0 or +1.
*/
template <typename T> int sgn(T val) { return (T{} < val) - (val < T{}); }
} // namespace limiter_utils

} // namespace def
} // namespace DiFfRG
46 changes: 46 additions & 0 deletions DiFfRG/include/DiFfRG/discretization/FV/limiter/minmod_limiter.hh
@@ -0,0 +1,46 @@
#pragma once

// DiFfRG
#include <DiFfRG/discretization/FV/limiter/abstract_limiter.hh>

// standard library
#include <algorithm>
#include <cmath>

namespace DiFfRG
{
namespace def
{
/**
* @brief MinMod slope limiter.
*
* Implements the classic minmod TVD limiter:
* slope_limit(a, b) = 0.5 * (sgn(a) + sgn(b)) * min(|a|, |b|)
*
* When the two one-sided slopes have the same sign, the limiter returns the
* smaller one (in absolute value). When they have opposite signs (indicating
* a local extremum), it returns zero.
*
* Satisfies the @c HasSlopeLimiter concept so it can be passed as the
* @c Limiter template parameter of the TVDReconstructor:
* @code
* using Assembler = FV::KurganovTadmor::Assembler<Discretization, Model,
* def::TVDReconstructor<Discretization::dim, def::MinModLimiter, double>>;
* @endcode
*/
class MinModLimiter
{
public:
template <typename NumberType> static NumberType slope_limit(const NumberType &du_1, const NumberType &du_2)
{
// Use the algebraic identity min(a,b) = 0.5*(a+b-|a-b|) instead of std::min so that
// AD follows the active branch away from ties without introducing a std::min discontinuity.
const auto a1 = abs(du_1), a2 = abs(du_2);
const auto min_a = NumberType(0.5) * (a1 + a2 - abs(a1 - a2));
return NumberType(0.5) * (limiter_utils::sgn(du_1) + limiter_utils::sgn(du_2)) * min_a;
}
};
static_assert(HasSlopeLimiter<MinModLimiter>);

} // namespace def
} // namespace DiFfRG
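A quick standalone sanity check of the behaviour described above (a sketch, not part of this PR; it assumes the DiFfRG include directory is on the include path):

#include <DiFfRG/discretization/FV/limiter/minmod_limiter.hh>

#include <cassert>

int main()
{
  using DiFfRG::def::MinModLimiter;
  assert(MinModLimiter::slope_limit(2.0, 3.0) == 2.0);    // same sign: the smaller magnitude survives
  assert(MinModLimiter::slope_limit(-3.0, -1.0) == -1.0); // same sign, negative slopes
  assert(MinModLimiter::slope_limit(-1.0, 2.0) == 0.0);   // opposite signs: local extremum, slope set to zero
  return 0;
}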
@@ -0,0 +1,104 @@
#pragma once

// deal.II
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>

// autodiff
#include <autodiff/forward/real/real.hpp>

// standard library
#include <array>
#include <concepts>
#include <cstddef>

namespace DiFfRG
{
namespace def
{
template <int dim> inline constexpr std::size_t n_faces = 2 * dim;

/**
* @brief Per-component gradient type: one Tensor<1,dim> per solution component.
*/
template <int dim, typename NumberType, size_t n_components>
using GradientType = std::array<dealii::Tensor<1, dim, NumberType>, n_components>;
/**
* @brief Concept that any gradient-reconstruction strategy must satisfy.
*/
template <typename T>
concept HasReconstructor =
requires {
{
T::dim
} -> std::convertible_to<int>;
} &&
// ---- compute_gradient (n_components=1) ----
requires(const dealii::Point<T::dim> &center, const std::array<double, 1> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<double, 1>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient<1>(center, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 1>>;
} &&
// ---- compute_gradient (n_components=2) ----
requires(const dealii::Point<T::dim> &center, const std::array<double, 2> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<double, 2>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient<2>(center, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 2>>;
} &&
// ---- compute_gradient_at_point (n_components=1) ----
requires(const dealii::Point<T::dim> &center, const dealii::Point<T::dim> &x,
const std::array<double, 1> &u_center, const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<double, 1>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_at_point<1>(center, x, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 1>>;
} &&
// ---- compute_gradient_at_point (n_components=2) ----
requires(const dealii::Point<T::dim> &center, const dealii::Point<T::dim> &x,
const std::array<double, 2> &u_center, const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<double, 2>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_at_point<2>(center, x, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 2>>;
} &&
// ---- compute_gradient_derivative (n_components=1) ----
requires(const dealii::Point<T::dim> &center, const std::array<autodiff::Real<1, double>, 1> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<autodiff::Real<1, double>, 1>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_derivative<1>(center, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 1>>;
} &&
// ---- compute_gradient_derivative (n_components=2) ----
requires(const dealii::Point<T::dim> &center, const std::array<autodiff::Real<1, double>, 2> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<autodiff::Real<1, double>, 2>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_derivative<2>(center, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 2>>;
} &&
// ---- compute_gradient_at_point_derivative (n_components=1) ----
requires(const dealii::Point<T::dim> &center, const dealii::Point<T::dim> &x,
const std::array<autodiff::Real<1, double>, 1> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<autodiff::Real<1, double>, 1>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_at_point_derivative<1>(center, x, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 1>>;
} &&
// ---- compute_gradient_at_point_derivative (n_components=2) ----
requires(const dealii::Point<T::dim> &center, const dealii::Point<T::dim> &x,
const std::array<autodiff::Real<1, double>, 2> &u_center,
const std::array<dealii::Point<T::dim>, n_faces<T::dim>> &x_n,
const std::array<std::array<autodiff::Real<1, double>, 2>, n_faces<T::dim>> &u_n) {
{
T::template compute_gradient_at_point_derivative<2>(center, x, u_center, x_n, u_n)
} -> std::same_as<GradientType<T::dim, double, 2>>;
};

} // namespace def
} // namespace DiFfRG
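For orientation, a minimal strategy satisfying HasReconstructor could look as follows. This is a sketch only, meant to be appended to the header above so that GradientType, n_faces and the deal.II/autodiff includes are in scope; the class name ZeroGradientReconstructor is hypothetical and not part of this PR (the PR's actual strategy is the TVDReconstructor referenced in the MinModLimiter documentation). It returns zero slopes everywhere, i.e. a piecewise-constant reconstruction.

namespace DiFfRG
{
  namespace def
  {
    template <int dim_> class ZeroGradientReconstructor
    {
    public:
      static constexpr int dim = dim_;

      // Cell-average gradient: zero slope in every component and direction.
      template <size_t n_components, typename NumberType>
      static GradientType<dim, double, n_components>
      compute_gradient(const dealii::Point<dim> &, const std::array<NumberType, n_components> &,
                       const std::array<dealii::Point<dim>, n_faces<dim>> &,
                       const std::array<std::array<NumberType, n_components>, n_faces<dim>> &)
      {
        return {};
      }

      // Gradient evaluated at an arbitrary point inside the cell: also zero here.
      template <size_t n_components, typename NumberType>
      static GradientType<dim, double, n_components>
      compute_gradient_at_point(const dealii::Point<dim> &, const dealii::Point<dim> &,
                                const std::array<NumberType, n_components> &,
                                const std::array<dealii::Point<dim>, n_faces<dim>> &,
                                const std::array<std::array<NumberType, n_components>, n_faces<dim>> &)
      {
        return {};
      }

      // The derivative variants accept autodiff::Real-valued states but return
      // double-valued gradients, as the concept above requires.
      template <size_t n_components, typename NumberType>
      static GradientType<dim, double, n_components>
      compute_gradient_derivative(const dealii::Point<dim> &, const std::array<NumberType, n_components> &,
                                  const std::array<dealii::Point<dim>, n_faces<dim>> &,
                                  const std::array<std::array<NumberType, n_components>, n_faces<dim>> &)
      {
        return {};
      }

      template <size_t n_components, typename NumberType>
      static GradientType<dim, double, n_components>
      compute_gradient_at_point_derivative(const dealii::Point<dim> &, const dealii::Point<dim> &,
                                           const std::array<NumberType, n_components> &,
                                           const std::array<dealii::Point<dim>, n_faces<dim>> &,
                                           const std::array<std::array<NumberType, n_components>, n_faces<dim>> &)
      {
        return {};
      }
    };
    static_assert(HasReconstructor<ZeroGradientReconstructor<1>>);
  } // namespace def
} // namespace DiFfRG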