Use Fortran setup in workflows to access multiple compilers. #164

Closed
jatkinson1000 wants to merge 5 commits into `main` from `intel-workflow`

Conversation


@jatkinson1000 jatkinson1000 commented Aug 15, 2024

Update the test-suite workflow to use the fortran-lang GitHub action to access the Intel compilers.

May close #140

At the moment we can build successfully with the gcc and intel-classic compilers on Ubuntu.
gcc is all good, but intel is failing on the CMakeTest asserts, which #142 should resolve.

We need to decide what combinations of OS, toolchain, and standard to run the tests over - enough for sensible coverage, but not an excessive number of jobs.
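For illustration only (the compilers, versions, and standards listed here are hypothetical placeholders, not a decided matrix), such a combination could be expressed as a workflow matrix:

```yaml
# Sketch of a test matrix over OS, toolchain, and standard.
# All values are illustrative; the actual combinations are still to be decided.
jobs:
  test-suite:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        toolchain:
          - {compiler: gcc, version: 12}
          - {compiler: intel-classic, version: '2021.10'}
        std: [f2008, f2018]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: fortran-lang/setup-fortran@v1
        with:
          compiler: ${{ matrix.toolchain.compiler }}
          version: ${{ matrix.toolchain.version }}
```

Pruning the matrix with `exclude:` entries is one way to keep sensible coverage without an excessive number of jobs.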

@jatkinson1000 jatkinson1000 self-assigned this Aug 15, 2024
@jatkinson1000 jatkinson1000 force-pushed the intel-workflow branch 14 times, most recently from b9b90e4 to 226b09a on August 15, 2024 13:37
@jatkinson1000 jatkinson1000 marked this pull request as ready for review August 15, 2024 13:56
@jatkinson1000 jatkinson1000 force-pushed the intel-workflow branch 3 times, most recently from 5515f9e to 8121ec6 on October 22, 2024 17:24
@jatkinson1000 jatkinson1000 force-pushed the intel-workflow branch 6 times, most recently from 5a34d1e to c6bc489 on December 18, 2024 19:21
@jatkinson1000 jatkinson1000 force-pushed the intel-workflow branch 3 times, most recently from a9da5aa to ccc152d on February 13, 2025 18:57
jfdev001 added a commit to jfdev001/FTorch that referenced this pull request Oct 5, 2025
Co-authored-by: Jack Atkinson <jwa34@cam.ac.uk>

--------------------------------------
**Summary**
--------------------------------------
Building off the work in Cambridge-ICCS#164, this PR
attempts a more atomic contribution to the CI for compiling and testing FTorch
with only the Intel and Intel Classic compilers. In contrast, the
original PR, opened a year ago, makes macOS, nvfortran, and Intel CI
contributions but was failing tests at the time. My contribution follows
the structure of pFUnit's CI file
[pFUnit/.github/workflows/main.yml](https://github.com/Goddard-Fortran-Ecosystem/pFUnit/blob/1a915dd87d1ed17500eb4e5a07c68e5f398377f1/.github/workflows/main.yml);
that is, I split the GNU testing and the Intel testing jobs in order to
more clearly isolate the two environments. I use OpenMPI compiled
from source in order to run the integration tests successfully.
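Schematically (job names and steps here are illustrative, not the exact workflow), the split looks like:

```yaml
# Separate GNU and Intel jobs keep the two environments isolated,
# following the structure of pFUnit's CI. Details are illustrative.
jobs:
  gnu:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: |
          cmake -B build -DCMAKE_Fortran_COMPILER=gfortran
          cmake --build build

  intel:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: fortran-lang/setup-fortran@v1
        with:
          compiler: intel-classic
      - run: |
          cmake -B build -DCMAKE_Fortran_COMPILER=ifort
          cmake --build build
```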

--------------------------------------
**Using OpenMPI instead of oneAPI MPI**
--------------------------------------
The MPI implementation that ships with Intel oneAPI is **not** used,
since the error below occurs when it is:

```
[ 94%] Building Fortran object examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/mpi_infer_fortran.f90.o
/home/runner/work/FTorch/FTorch/examples/7_MPI/mpi_infer_fortran.f90(15): error #6580: Name in only-list does not exist or is not accessible.   [MPI_GATHER]
mpi_gather, mpi_init
-------------------^
compilation aborted for /home/runner/work/FTorch/FTorch/examples/7_MPI/mpi_infer_fortran.f90 (code 1)
gmake[2]: *** [examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/build.make:78: examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/mpi_infer_fortran.f90.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:1717: examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
```

That is, `mpi_gather` is not recognized when compiling with the Intel
oneAPI MPI Fortran compiler. I verified this behavior by checking
which MPI implementation CMake's `FindMPI` module picks up:

```
-- Found MPI_C: /opt/intel/oneapi/mpi/2021.16/lib/libmpifort.so (found version "4.1")
-- Found MPI_CXX: /opt/intel/oneapi/mpi/2021.16/lib/libmpicxx.so (found version "4.1")
-- Found MPI_Fortran: /opt/intel/oneapi/mpi/2021.16/lib/libmpifort.so (found version "4.1")
-- Found MPI: TRUE (found version "4.1")
```

The above error can be found in the logs for [GitHub
job/51782067191](https://github.com/jfdev001/FTorch/actions/runs/18189775089/job/51782067191).

**Instead, I compile OpenMPI from source** using either `ifx` or
`ifort` as the Fortran compiler. The C and C++ compilers for OpenMPI
are `icx` and `icpx`, respectively. I do not use the Intel Classic
C/C++ compilers (`icc` and `icpc`) since they appear to cause the error below:

```
Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
```

The above error can be found in the logs for [GitHub
job/51993943997](https://github.com/jfdev001/FTorch/actions/runs/18263208690/job/51993943997).

I use OpenMPI v4.1.2 because that is what happens to be on the
HPC system I use most (DKRZ's Levante), though a different
version could certainly be used.
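As a rough sketch of the build (the download URL follows the standard open-mpi.org release layout, but the install prefix and matrix variable are assumptions, not the exact CI steps):

```yaml
# Hypothetical workflow step: build OpenMPI 4.1.2 with the Intel LLVM-based
# C/C++ compilers and the selected Fortran compiler (ifx or ifort).
- name: Build OpenMPI from source
  run: |
    wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.2.tar.gz
    tar -xzf openmpi-4.1.2.tar.gz
    cd openmpi-4.1.2
    ./configure CC=icx CXX=icpx FC=${{ matrix.fc }} --prefix="$HOME/openmpi"
    make -j "$(nproc)" install
    # Make mpifort/mpirun visible to later steps
    echo "$HOME/openmpi/bin" >> "$GITHUB_PATH"
```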

--------------------------------------
**Open questions---possibly new issues/new PRs**
--------------------------------------

* Compiling OpenMPI from source takes about 7 minutes. I have tried to
  cache the resulting binaries using the
  [actions/cache](https://github.com/actions/cache) action, but it does not
  seem to speed up this step. I'm probably doing something wrong; any
  ideas what? I can work it out if the maintainers are generally happy
  with the direction of my PR.
* I only use the Intel C/C++ compilers (i.e., `icx` and `icpx` v2023.2)
  for OpenMPI to circumvent an OpenMP issue that arises when using the
Intel Classic C/C++ compilers (i.e., `icc` and `icpc`). Does this seem
like a reasonable approach?
* Unlike the original PR, I only call the
  [fortran-lang/setup-fortran](https://github.com/fortran-lang/setup-fortran)
  action with the `intel-classic` compiler option, since this also gets an
  older version (v2023.2) of the newer Intel Fortran and C/C++ compilers
  (i.e., `ifx`, `icx`, `icpx`). Should I instead try to use a more recent
  version of `ifx` (e.g., v2024.1)?
* The
  [fortran-lang/setup-fortran](https://github.com/fortran-lang/setup-fortran)
  action installs the oneAPI MPI distribution by default. This is not used by
  the current iteration of the CI and is therefore a waste of time.
  Perhaps we should manually install the Intel compilers we need (see
  again
  [pFUnit/.github/workflows/main.yml#L147](https://github.com/Goddard-Fortran-Ecosystem/pFUnit/blob/1a915dd87d1ed17500eb4e5a07c68e5f398377f1/.github/workflows/main.yml#L147))?
  Otherwise, we could follow the approach of
  [dealii/.github/workflows/linux.yml#L251](https://github.com/dealii/dealii/blob/3b7dfc0ff88c7b9f3aa7112f3956e4ade892209b/.github/workflows/linux.yml#L251)
  and have an action that pulls **some** of what we need? Note that the
  [rscohn2/setup-oneapi](https://github.com/rscohn2/setup-oneapi) action
  does not yet seem to implement an easy setup of the classic compilers, but
  rather is tailored to the modern Intel compilers only.
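On the caching question, one common pattern (the path, key, and build-script name below are illustrative assumptions) is to cache the OpenMPI install prefix and gate the build step on a cache hit:

```yaml
# Hypothetical caching of a from-source OpenMPI install.
- name: Cache OpenMPI
  id: cache-openmpi
  uses: actions/cache@v4
  with:
    path: ~/openmpi
    key: openmpi-4.1.2-${{ runner.os }}-${{ matrix.fc }}

- name: Build OpenMPI from source
  if: steps.cache-openmpi.outputs.cache-hit != 'true'
  run: ./scripts/build-openmpi.sh  # hypothetical build script

- name: Add OpenMPI to PATH
  run: echo "$HOME/openmpi/bin" >> "$GITHUB_PATH"
```

A common gotcha is caching the build directory rather than the install prefix, or not skipping the build step on a hit, in which case the build reruns every time regardless of the cache.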
jfdev001 added a commit to jfdev001/FTorch that referenced this pull request Oct 6, 2025
(Same commit message as above.)
@jatkinson1000
Member Author

Closing this as stale/outdated.
We now have ifx and ifort in the CI, and I don't see an immediate need for lfortran or nvhpc unless we find we have users or requests.

cc @joewallwork


Development

Successfully merging this pull request may close these issues.

Add intel ifx and ifort build to CI