Use Fortran setup in workflows to access multiple compilers. #164

Closed

jatkinson1000 wants to merge 5 commits into main from

Conversation
Force-pushed from c036285 to 8bdf384
Force-pushed from b9b90e4 to 226b09a
Force-pushed from 226b09a to 285b9ef
Force-pushed from 5515f9e to 8121ec6
Force-pushed from 8121ec6 to 37740e4
Force-pushed from 5a34d1e to c6bc489
build FTorch and utilise the MPS backend on Apple Silicon. Co-authored-by: Karl Harrison <kh296@users.noreply.github.com>
…ccess intel compilers.
Force-pushed from a9da5aa to ccc152d
Force-pushed from ccc152d to 1a2a8da
jfdev001 added a commit to jfdev001/FTorch that referenced this pull request on Oct 5, 2025
Co-authored-by: Jack Atkinson <jwa34@cam.ac.uk>

**Summary**

Building off work in Cambridge-ICCS#164, this PR attempts a more atomic contribution to the CI, compiling and testing FTorch with only the Intel and Intel Classic compilers. In contrast, the original PR, opened a year ago, adds macOS, nvfortran, and Intel jobs to the CI, but was failing tests at the time. My contribution follows the structure of pFUnit's CI file [pFUnit/.github/workflows/main.yml](https://github.com/Goddard-Fortran-Ecosystem/pFUnit/blob/1a915dd87d1ed17500eb4e5a07c68e5f398377f1/.github/workflows/main.yml); that is, I split the GNU and Intel testing jobs in order to isolate the different environments more clearly. I use OpenMPI compiled from source in order to run the integration tests successfully.

**Using OpenMPI instead of oneAPI MPI**

The MPI implementation that ships with Intel oneAPI is **not** used, since the following error occurs when using it:

```
[ 94%] Building Fortran object examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/mpi_infer_fortran.f90.o
/home/runner/work/FTorch/FTorch/examples/7_MPI/mpi_infer_fortran.f90(15): error #6580: Name in only-list does not exist or is not accessible.   [MPI_GATHER]
  mpi_gather, mpi_init
-------------------^
compilation aborted for /home/runner/work/FTorch/FTorch/examples/7_MPI/mpi_infer_fortran.f90 (code 1)
gmake[2]: *** [examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/build.make:78: examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/mpi_infer_fortran.f90.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:1717: examples/7_MPI/CMakeFiles/mpi_infer_fortran.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
```

That is, `mpi_gather` is not recognized when compiling with the Intel oneAPI MPI Fortran compiler.
I verified this behavior by checking which MPI `FindMPI` picks up:

```
-- Found MPI_C: /opt/intel/oneapi/mpi/2021.16/lib/libmpifort.so (found version "4.1")
-- Found MPI_CXX: /opt/intel/oneapi/mpi/2021.16/lib/libmpicxx.so (found version "4.1")
-- Found MPI_Fortran: /opt/intel/oneapi/mpi/2021.16/lib/libmpifort.so (found version "4.1")
-- Found MPI: TRUE (found version "4.1")
```

The above error can be found in the logs for [github job/51782067191](https://github.com/jfdev001/FTorch/actions/runs/18189775089/job/51782067191).

**Instead, I compile OpenMPI from source**, using either `ifx` or `ifort` as the Fortran compiler, with `icx` and `icpx` as the C and C++ compilers. I do not use the Intel Classic C/C++ compilers (`icc` and `icpc`) since they appear to cause the following error:

```
Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
```

This error can be found in the logs for [github job/51993943997](https://github.com/jfdev001/FTorch/actions/runs/18263208690/job/51993943997). I use OpenMPI v4.1.2 because that happens to be what is on the HPC system I use the most (DKRZ's Levante), though a different version could certainly be used.

**Open questions---possibly new issues/new PRs**

* Compiling OpenMPI from source takes about 7 minutes. I have tried to cache the resulting binary using the [actions/cache](https://github.com/actions/cache) action, but it does not seem to speed up this step. I am probably doing something wrong; any ideas what? I can work it out if the maintainers are generally happy with the direction of my PR.
* I only use the Intel C/C++ compilers (i.e., `icx` and `icpx` v2023.2) for OpenMPI to circumvent an OpenMP issue that arises when using the Intel Classic C/C++ compilers (i.e., `icc` and `icpc`). Does this seem like a reasonable approach?
* Unlike the original PR, I only call the [fortran-lang/setup-fortran](https://github.com/fortran-lang/setup-fortran) action with the `intel-classic` compiler option, since this also provides an older version (v2023.2) of the newer Intel Fortran and C/C++ compilers (i.e., `ifx`, `icx`, `icpx`). Should I instead try to use a more recent version of `ifx` (e.g., v2024.1)?
* The [fortran-lang/setup-fortran](https://github.com/fortran-lang/setup-fortran) action installs the oneAPI MPI distribution by default. This is not used by the current iteration of the CI and is therefore a waste of time. Perhaps we should manually install the Intel compilers we need (see again [pFUnit/.github/workflows/main.yml#L147](https://github.com/Goddard-Fortran-Ecosystem/pFUnit/blob/1a915dd87d1ed17500eb4e5a07c68e5f398377f1/.github/workflows/main.yml#L147))? Otherwise, we could follow the approach of [dealii/.github/workflows/linux.yml#L251](https://github.com/dealii/dealii/blob/3b7dfc0ff88c7b9f3aa7112f3956e4ade892209b/.github/workflows/linux.yml#L251) and have an action that pulls **some** of what we need. Note that the [rscohn2/setup-oneapi](https://github.com/rscohn2/setup-oneapi) action does not yet seem to offer an easy setup of the classic compilers; it is tailored to the modern Intel compilers only.
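The pieces discussed above (the `intel-classic` setup, the cache attempt, and the OpenMPI build) could be combined roughly as in the sketch below. This is an illustrative fragment only: the job name, cache key, install prefix, and download URL are assumptions, not the PR's actual workflow, and the cache key would need to pin both the compiler and the OpenMPI version for restores to be valid.

```yaml
# Sketch of an Intel CI job with a cached from-source OpenMPI.
# All names, versions, and paths here are illustrative assumptions.
jobs:
  test-intel:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # intel-classic v2023.2 also provides the matching ifx/icx/icpx.
      - uses: fortran-lang/setup-fortran@v1
        with:
          compiler: intel-classic
          version: '2023.2'

      # Try to restore a previously built OpenMPI install tree.
      - uses: actions/cache@v4
        id: cache-openmpi
        with:
          path: ~/openmpi
          key: openmpi-4.1.2-intel-2023.2

      # Rebuild only on a cache miss, using icx/icpx (not icc/icpc,
      # which trip the OpenMP_CXX detection failure noted above).
      - name: Build OpenMPI from source
        if: steps.cache-openmpi.outputs.cache-hit != 'true'
        run: |
          wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.2.tar.gz
          tar xf openmpi-4.1.2.tar.gz
          cd openmpi-4.1.2
          ./configure CC=icx CXX=icpx FC=ifx --prefix="$HOME/openmpi"
          make -j "$(nproc)" install

      # Put the freshly built (or restored) MPI first on PATH so that
      # CMake's FindMPI resolves it ahead of any oneAPI MPI.
      - name: Configure and build FTorch
        run: |
          export PATH="$HOME/openmpi/bin:$PATH"
          cmake -B build -DCMAKE_Fortran_COMPILER=ifx
          cmake --build build
```

One detail worth checking for the caching question: `actions/cache` only saves at the end of a successful job, so if earlier runs failed after the build step, the cache may simply never have been populated.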
jfdev001 added a commit to jfdev001/FTorch that referenced this pull request on Oct 6, 2025
jatkinson1000 (Member, Author)

Closing this as stale/outdated. cc @joewallwork
Update the test-suite workflow to use the fortran-lang GitHub action to access the Intel compilers.

May close #140.

At the moment we can build successfully with the gcc and intel-classic compilers on Ubuntu.

gcc is all good, but intel is failing on the CMakeTest asserts, which #142 should resolve.

We need to decide which combinations of OS, toolchain, and standard to run the tests over: enough for sensible coverage, but not an excessive number of jobs.
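One way to frame that decision is as a workflow matrix. The fragment below is a sketch only: the listed OS and toolchain entries are illustrative assumptions, not a proposed final set.

```yaml
# Illustrative matrix only; the actual combinations are still to be decided.
strategy:
  fail-fast: false
  matrix:
    os: [ubuntu-latest]
    toolchain:
      - {compiler: gcc, version: 12}
      - {compiler: intel-classic, version: '2023.2'}
    # macOS runners and further compiler versions/standards could be added
    # once the coverage-vs-job-count trade-off has been agreed.
steps:
  - uses: fortran-lang/setup-fortran@v1
    with:
      compiler: ${{ matrix.toolchain.compiler }}
      version: ${{ matrix.toolchain.version }}
```

Keeping the matrix small but explicit like this makes it easy to grow coverage later by appending entries rather than duplicating jobs.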