This is the development repository for DiFfRG. For the current stable version, please visit [the main repository](https://github.com/satfra/DiFfRG).

DiFfRG - A Discretization Framework for functional Renormalization Group flows

DiFfRG is a set of tools for the discretization of flow equations arising in the functional Renormalization Group (fRG). It supports the setup and calculation of large systems of flow equations, allowing for complex combinations of vertex and derivative expansions.

For spatial discretizations, i.e. discretizations of field space mostly used for derivative expansions, DiFfRG makes different finite element (FE) methods available. These include:

  • Continuous Galerkin FE
  • Discontinuous Galerkin FE
  • Direct discontinuous Galerkin FE
  • Local discontinuous Galerkin FE (including derived finite volume (FV) schemes)

The FEM methods included in DiFfRG are built upon the deal.II finite element library, which is highly parallelized and allows for great performance and flexibility. Systems consisting of RG-time dependent PDEs as well as stationary equations can be solved together during the flow, which makes techniques like flowing fields easily accessible.

Both explicit and implicit time-stepping methods are available, allowing for efficient RG-time integration in both the symmetric and the symmetry-broken regime.

We also include a set of tools for the evaluation of integrals and discretization of momentum dependencies.
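To illustrate the kind of task these tools address (this is a generic sketch under assumed conventions, not DiFfRG's actual API), a typical one-loop flow involves a radial momentum integral over a regulated propagator, which can be approximated by standard quadrature:

```python
def flow_integrand(p, k, m2):
    """Schematic one-loop flow integrand: a p^3 phase-space factor times
    the square of a propagator with a Litim-type cutoff (hypothetical
    example, not a DiFfRG function)."""
    # Below the cutoff scale k, the regulator replaces p^2 by k^2.
    return p ** 3 / (max(p * p, k * k) + m2) ** 2

def simpson(f, a, b, n=1000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Integrate the flow kernel over momenta up to a UV cutoff.
value = simpson(lambda p: flow_integrand(p, k=1.0, m2=0.5), 0.0, 10.0)
```

In DiFfRG itself such integrals are handled by dedicated (optionally GPU-accelerated) integration routines; the snippet only shows the structure of the computation.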

For an overview, please see the accompanying paper, the tutorial page in the documentation and the examples in Examples/.

This library has been developed within the fQCD Collaboration.

Citation

If you use DiFfRG in your scientific work, please cite the corresponding paper:

@article{Sattler:2024ozv,
    author = "Sattler, Franz R. and Pawlowski, Jan M.",
    title = "{DiFfRG: A Discretisation Framework for functional Renormalisation Group flows}",
    eprint = "2412.13043",
    archivePrefix = "arXiv",
    primaryClass = "hep-ph",
    month = "12",
    year = "2024"
}

Requirements

To compile and run this project, there are very few requirements which you can easily install using your package manager on Linux or MacOS:

  • git for external requirements and to clone this repository.
  • CMake for the build systems of DiFfRG, deal.II and other libraries.
  • GNU Make or another generator of your choice.
  • A compiler supporting at least the C++20 standard, as well as a Fortran compiler (e.g. gfortran). This project is only tested with the GCC compiler suite and AppleClang, but in principle ICC or upstream Clang should also work.
  • LAPACK and BLAS in some form, e.g. OpenBLAS. Alternatively, pass -DBUILD_OpenBLAS=ON to have DiFfRG build OpenBLAS.
  • The GNU Scientific Library GSL.
  • Python is required by the Boost build system and used for visualization.
  • Doxygen and graphviz to build the documentation.

The following requirements are optional:

  • ParaView, a program to visualize and post-process the vtk data saved by DiFfRG when treating FEM discretizations.
  • CUDA for integration routines on the GPU, which gives a huge speedup (10-100x) for the calculation of fully momentum-dependent flow equations. If you wish to use CUDA, make sure a compiler compatible with your version of nvcc is available on your system, e.g. g++ <= 13.2 for CUDA 12.5.

All other requirements are bundled and automatically built with DiFfRG. The framework has been tested with the following systems:

Arch Linux

$ pacman -S git cmake gcc gcc-fortran blas-openblas paraview python doxygen graphviz gsl

For a CUDA-enabled build, additionally

$ pacman -S cuda

Rocky Linux

$ dnf --enablerepo=devel install -y gcc-toolset-12 cmake git openblas-devel doxygen doxygen-latex python3 python3-pip gsl-devel patch
$ scl enable gcc-toolset-12 bash

The second line is necessary to switch into a shell where g++-12 is available.

Ubuntu

$ apt-get update
$ apt-get install git cmake gfortran libopenblas-dev paraview build-essential python3 doxygen graphviz libgsl-dev

For a CUDA-enabled build, additionally

$ apt-get install cuda

macOS

First, install xcode and homebrew, then run

$ brew install cmake gcc doxygen paraview graphviz gsl python3 bash

Note: you have to install a newer GNU version of bash: macOS ships only version 3.2 by default, but the PETSc installation requires a newer version.

Windows

If using Windows, instead of running the project directly, it is recommended to use WSL and then go through the installation as if on Linux (e.g. Arch or Ubuntu).

Installation

As fast as possible

From the shell, run (this requires curl to be available on your system)

bash <(curl -s -L https://github.com/satfra/DiFfRG_current/raw/refs/heads/main/install.sh)

or, if you want to specify either the installation folder or the number of threads used for building the library,

THREADS=6 FOLDER=${HOME}/.local/share/DiFfRG/ bash <(curl -s -L https://github.com/satfra/DiFfRG_current/raw/refs/heads/main/install.sh)

CMake

You can also install DiFfRG locally directly from a CMake file by adding the following lines to your CMakeLists.txt:

file(DOWNLOAD
  https://github.com/satfra/DiFfRG_current/raw/refs/heads/main/DiFfRG/cmake/InstallDiFfRG.cmake
  ${CMAKE_CURRENT_BINARY_DIR}/cmake/InstallDiFfRG.cmake)
include(${CMAKE_CURRENT_BINARY_DIR}/cmake/InstallDiFfRG.cmake)

This will fetch a script, which will automatically download and install DiFfRG and all of its dependencies to $HOME/.local/share/DiFfRG. If you wish to change this directory, or some other default values, you can set the following optional variables:

set(DiFfRG_INSTALL_DIR $ENV{HOME}/.local/share/DiFfRG/)
set(DiFfRG_BUILD_DIR $ENV{HOME}/.local/share/DiFfRG/build/)
set(DiFfRG_SOURCE_DIR $ENV{HOME}/.local/share/DiFfRG/src/)
set(TRY_DiFfRG_VERSION main)
set(PARALLEL_JOBS 8)

Manual installation

You can also manually clone DiFfRG to a directory of your choice

$ git clone https://github.com/satfra/DiFfRG_current.git

Then, create a build directory and run cmake

$ cd DiFfRG_current
$ mkdir build
$ cd build
$ cmake ../ -DCMAKE_INSTALL_PREFIX=~/.local/share/DiFfRG/ -DCMAKE_BUILD_TYPE=Release
$ cmake --build ./ -- -j8

By default, the library will install itself to $HOME/.local/share/DiFfRG, but you can control the destination by pointing CMAKE_INSTALL_PREFIX to a directory of your choice.

Verifying your installation

After installation, you can verify that all dependencies are correctly found:

$ cmake -DBUNDLED_DIR=~/.local/share/DiFfRG/bundled -P ~/.local/share/DiFfRG/cmake/verify_install.cmake

This prints a pass/fail table for each dependency, helping diagnose any issues.

Docker and other container runtime environments

Although a native install should be unproblematic in most cases, the setup with CUDA functionality may be daunting. Especially on high-performance clusters, and depending on the packages available for the chosen distribution, it may be much easier to work with the framework inside a container to avoid conflicting dependencies.

Besides the manual setup described below, we recommend using development containers if you are using VSCode. An appropriate .devcontainers configuration can be adapted from the one found in the DiFfRG top level directory.

The specific choice of container runtime environment is up to the user; however, we provide a small build script to create a Docker container for DiFfRG. For this you will need docker, docker-buildx and, in case you wish to create a CUDA-compatible image, the NVIDIA container toolkit.

To build a Docker image, run the script build-container.sh in the containers/ folder, which will guide you through the process:

$ cd containers
$ bash build-container.sh

If using other environments, e.g. ENROOT, the preferred approach is simply to build an image on top of one of the CUDA images by NVIDIA.

For example, with ENROOT a DiFfRG image can be built on top of rockylinux9 by following these steps:

$ enroot import docker://nvidia/cuda:12.8.1-devel-rockylinux9
$ enroot create --name DiFfRG nvidia+cuda+12.8.1-devel-rockylinux9.sqsh
$ enroot start --root --rw -m ./:/DiFfRG DiFfRG bash

Afterwards, one proceeds with the above Rocky Linux setup.

Getting started with simulating fRG flows

For an overview, please see the tutorial page in the documentation. Local documentation is also built automatically when running the setup script, but it can be built manually by running

$ make documentation

inside the DiFfRG_build directory. You will then find the code reference in the top-level directory.

All backend code is contained in the DiFfRG directory.

Several simulations are defined in the Applications directory, which can be used as a starting point for your own simulations.

Tips and FAQ

Logfiles and install issues

If DiFfRG fails to build on your machine, first check the appropriate logs. If you are using the CMake install, you can find the main log at ~/.local/share/DiFfRG/build/DiFfRG.log. Otherwise, you can redirect the output of the build, e.g.

cmake --build ./ -- -j8 | tee DiFfRG.log

and analyze the result.

If DiFfRG proves to be incompatible with your machine, please open an issue on GitHub or, alternatively, send an email to the authors (see the publication).

Contributing

DiFfRG is a work in progress. If you find some feature missing, a bug, or some other kind of improvement, you can get involved in the further development of DiFfRG.

Thanks to the collaborative nature of GitHub, you can simply fork the project and work on a private copy in your own GitHub account. Also feel encouraged to open an issue, or, if you already have a (partially) ready contribution, a pull request.

Configuration files

A DiFfRG simulation requires a valid parameters.json file in the execution path; alternatively, you can provide another JSON file using the -p flag (see below).

To generate a "stock" parameters.json in the current folder, you can call any DiFfRG application as

$ ./my_simulation --generate-parameter-file

Before usage, don't forget to put in the parameters you defined in your own simulation!

Progress output

To monitor the progress of the simulation, one can set the verbosity parameter either in the parameter file,

{
  "output": {
    "verbosity": 1
  }
}

or from the CLI,

$ ./my_simulation -si /output/verbosity=1

Modifying parameters from the CLI

Any DiFfRG simulation using the DiFfRG::ConfigurationHelper class can be asked to print the syntax for configuration overrides:

$ ./my_simulation --help
This is a DiFfRG simulation. You can pass the following optional parameters:
  --help                      shows this text
  --generate-parameter-file   generates a parameter file with some default values
  -p                          specify a parameter file other than the standard parameters.json
  -sd                         overwrite a double parameter. This should be in the format '-sd physical/T=0.1'
  -si                         overwrite an integer parameter. This should be in the format '-si physical/Nc=1'
  -sb                         overwrite a boolean parameter. This should be in the format '-sb physical/use_sth=true'
  -ss                         overwrite a string parameter. This should be in the format '-ss physical/a=hello'

e.g.

$ ./my_simulation -sd /physical/Lambda=1.0
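The section/key=value convention of these flags can also be mimicked when pre- or post-processing parameter files in Python. The following helper is a hypothetical illustration of that convention, not part of DiFfRG:

```python
def apply_override(params, spec, cast=str):
    """Apply a 'section/key=value' override, as used by the -sd/-si/-sb/-ss
    flags, to a nested parameter dictionary (hypothetical helper)."""
    path, _, raw = spec.partition("=")
    keys = [k for k in path.split("/") if k]  # ignore leading slashes
    node = params
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = cast(raw)
    return params

params = {"output": {"verbosity": 0}}
apply_override(params, "/output/verbosity=1", cast=int)     # like -si
apply_override(params, "/physical/Lambda=1.0", cast=float)  # like -sd
```

Writing the resulting dictionary back with json.dump produces a parameter file equivalent to passing the overrides on the command line.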

Timestepper choice

In general, the IDA timestepper from the SUNDIALS suite has proven to be the optimal choice for any fRG flow with convexity restoration. Additionally, this solver allows for out-of-the-box solving of additional algebraic systems, which is handy for more complicated fRG setups.

If solving purely variable-dependent systems, use one of the Boost time steppers: Boost_RK45, Boost_RK78 or Boost_ABM. The latter is especially well suited to extremely large systems without very fast dynamics, but lacks adaptive timestepping. In practice, choosing Boost_ABM over one of the RK steppers may speed up a Yang-Mills simulation with full momentum dependences by more than a factor of 10.

For systems with both spatial discretisations and variables, consider one of the implicit-explicit mixtures, SUNDIALS_IDA_Boost_RK45, SUNDIALS_IDA_Boost_RK78 or SUNDIALS_IDA_Boost_ABM.

Other Libraries used

The following third-party libraries are utilised by DiFfRG. They are automatically built and installed alongside DiFfRG during the build process.

  • The main backend for field-space discretization is deal.II, which provides the entire FEM-machinery as well as many other utility components.
  • For performant and convenient calculation of Jacobian matrices we use the autodiff library, which implements forward and backward automatic differentiation in C++ and also in CUDA.
  • Kokkos, a performance portability framework for shared-memory parallelization on GPU and CPU. We use it for the integration routines for flow equations.
  • Time integration relies heavily on the SUNDIALS suite, specifically on the IDAs solver.
  • Boost provides explicit time-stepping and various math algorithms.
  • Rapidcsv for quick processing of .csv files.
  • Catch2 for unit testing.
  • spdlog for logging.
  • Doxygen Awesome for a modern doxygen theme.
  • Eigen for some linear-algebra related tasks.
