UNIVERSITY OF WEST ATTICA
SCHOOL OF ENGINEERING
DEPARTMENT OF COMPUTER ENGINEERING AND INFORMATICS
Introduction to Parallel Computing
Vasileios Evangelos Athanasiou
Student ID: 19390005
Supervision
Supervisor: Vasileios Mamalis, Professor
Supervisor: Grammati Pantziou, Professor
Co-supervisor: Michalis Iordanakis, Special Technical Laboratory Staff
Athens, January 2023
The primary objective of this exercise is to manage and process a vector X of size N across p processes using MPI and collective communication.
| Section | Folder | Description |
|---|---|---|
| 1 | assign/ | Assignment material for the Collective Communication laboratory |
| 1.1 | assign/PAR-LAB-EXER-II-2022-23.pdf | Laboratory exercise description (English) |
| 1.2 | assign/ΠΑΡ-ΕΡΓ-ΑΣΚ-ΙΙ-2022-23.pdf | Laboratory exercise description (Greek) |
| 2 | docs/ | Documentation and theoretical background on collective communication |
| 2.1 | docs/Collective-Communication.pdf | Theory and mechanisms of collective communication (EN) |
| 2.2 | docs/Συλλογική-Επικοινωνία.pdf | Theory of collective communication (EL) |
| 3 | src/ | Source code implementing collective communication operations |
| 3.1 | src/collective_communication.c | C implementation of MPI collective communication primitives |
| 4 | README.md | Repository overview, build, and execution instructions |
The system follows a manager–worker model:
- Process P₀ (Manager):
  - Initializes and owns the full vector
  - Distributes vector segments to all processes (including itself)
  - Coordinates global calculations and gathers results
- Worker Processes (P₁ … Pₚ₋₁):
  - Perform computations on their assigned sub-vectors
  - Participate in collective communication operations
All calculations are executed locally first and then combined using MPI collective routines.
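As a rough illustration of this pattern (a sketch, not the exact contents of `src/collective_communication.c`), the skeleton below lets the manager scatter the vector, has every process reduce its own segment locally, and then combines the partial results with a single collective call; `N` and the vector contents are placeholders.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 16                      /* placeholder size, must be divisible by p */

int main(int argc, char **argv) {
    int rank, p;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int chunk = N / p;
    double *X = NULL;
    double *local = malloc(chunk * sizeof(double));

    if (rank == 0) {              /* manager initializes and owns the full vector */
        X = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) X[i] = (double)(i + 1);
    }

    /* distribute equal segments to all processes, including the manager itself */
    MPI_Scatter(X, chunk, MPI_DOUBLE, local, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* each process computes on its sub-vector first ... */
    double local_sum = 0.0;
    for (int i = 0; i < chunk; i++) local_sum += local[i];

    /* ... and the partial results are then combined collectively */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("global sum = %f\n", global_sum);

    free(local);
    if (rank == 0) free(X);
    MPI_Finalize();
    return 0;
}
```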
The program performs the following operations on the distributed vector X:
- Computes the mean value of the vector
- Counts:
  - Elements greater than the average
  - Elements less than the average
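A sketch of how the mean and the two counts could be combined across ranks using the routines listed later in this README (`MPI_Reduce`, `MPI_Bcast`); the function name, `local`, `chunk`, and `N` are illustrative, not taken from the actual source:

```c
#include <mpi.h>

/* Sketch: compute the global mean and the above-/below-average counts.
   Assumes MPI is initialized and every rank holds `local[chunk]`,
   one segment of the N-element vector X. */
void mean_and_counts(const double *local, int chunk, int N,
                     double *mean, int counts[2]) {
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* local partial sum, reduced on the manager and broadcast back */
    double local_sum = 0.0, global_sum = 0.0;
    for (int i = 0; i < chunk; i++) local_sum += local[i];
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) *mean = global_sum / N;
    MPI_Bcast(mean, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* local counts of elements above/below the mean, summed on the manager */
    int local_cnt[2] = {0, 0};
    for (int i = 0; i < chunk; i++) {
        if (local[i] > *mean)      local_cnt[0]++;
        else if (local[i] < *mean) local_cnt[1]++;
    }
    MPI_Reduce(local_cnt, counts, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
}
```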
The dispersion (variance) is calculated using:

$$\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2$$

where $\bar{x}$ is the mean value of the vector.
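Given the mean broadcast to every rank, the variance reduces to one more local loop and a single `MPI_Reduce`; the sketch below uses hypothetical names and assumes the same block distribution as above:

```c
#include <mpi.h>

/* Sketch: each rank accumulates squared deviations of its chunk from the
   (already broadcast) mean; one reduction yields the variance on rank 0. */
double distributed_variance(const double *local, int chunk, int N, double mean) {
    double local_ss = 0.0, global_ss = 0.0;
    for (int i = 0; i < chunk; i++) {
        double d = local[i] - mean;
        local_ss += d * d;
    }
    MPI_Reduce(&local_ss, &global_ss, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return global_ss / N;   /* meaningful on rank 0 only */
}
```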
A normalized percentage vector $Y$ is also computed:

$$y_i = \frac{x_i - \min(X)}{\max(X) - \min(X)} \times 100$$

This expresses each element's relative position between the minimum and maximum values.
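One possible realization (a sketch only; the repository code may organize this differently) is to find the global extrema with reductions, normalize each chunk locally, and gather the result on the manager. `MPI_Allreduce` is used here for brevity in place of the `MPI_Reduce` + `MPI_Bcast` pair listed later; all names are illustrative:

```c
#include <mpi.h>

/* Sketch: build the normalized percentage chunk y_i = (x_i - min) / (max - min) * 100.
   `local` is this rank's chunk, `y_local` receives the normalized values,
   `Y` (significant on rank 0 only) gathers the full normalized vector.
   Assumes max(X) > min(X). */
void normalized_percentage(const double *local, double *y_local, double *Y,
                           int chunk, MPI_Comm comm) {
    double lmin = local[0], lmax = local[0], gmin, gmax;
    for (int i = 1; i < chunk; i++) {
        if (local[i] < lmin) lmin = local[i];
        if (local[i] > lmax) lmax = local[i];
    }
    /* global extrema, made available to every rank */
    MPI_Allreduce(&lmin, &gmin, 1, MPI_DOUBLE, MPI_MIN, comm);
    MPI_Allreduce(&lmax, &gmax, 1, MPI_DOUBLE, MPI_MAX, comm);

    for (int i = 0; i < chunk; i++)
        y_local[i] = (local[i] - gmin) / (gmax - gmin) * 100.0;

    /* collect the full normalized vector on the manager */
    MPI_Gather(y_local, chunk, MPI_DOUBLE, Y, chunk, MPI_DOUBLE, 0, comm);
}
```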
- Identifies the maximum value in the vector
  - Determines its global index
- Computes the prefix sum vector of X
  - Each element contains the sum of all elements up to that position
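For the maximum and its position, MPI's `MPI_MAXLOC` reduction on value/index pairs is the textbook approach; the sketch below assumes a block distribution, so the global index of local element `i` on rank `r` is `r * chunk + i` (all names are illustrative):

```c
#include <mpi.h>

/* Sketch: value/index pair reduction with MPI_MAXLOC.
   Assumes block distribution: rank r owns elements [r*chunk, (r+1)*chunk). */
void global_maximum(const double *local, int chunk, int rank,
                    double *max_val, int *max_idx) {
    struct { double val; int idx; } in, out;

    in.val = local[0];
    in.idx = rank * chunk;                 /* global index of local[0] */
    for (int i = 1; i < chunk; i++) {
        if (local[i] > in.val) {
            in.val = local[i];
            in.idx = rank * chunk + i;
        }
    }
    MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    *max_val = out.val;                    /* meaningful on rank 0 only */
    *max_idx = out.idx;
}
```

For the prefix sums, a purely collective sketch is shown below: each rank scans its own chunk, then shifts the result by the total of all lower-ranked chunks, which `MPI_Scan` provides once the rank's own total is subtracted. As noted later in this README, the actual implementation also uses point-to-point messages for the prefix sum logic; this is only one way to obtain the same result.

```c
#include <mpi.h>

/* Sketch: distributed inclusive prefix sum over a block-distributed vector.
   Each rank scans its own chunk, then shifts it by the total of all
   lower-ranked chunks (inclusive MPI_Scan minus the rank's own total). */
void distributed_prefix_sum(const double *local, double *prefix, int chunk) {
    double local_total = 0.0, scanned = 0.0;

    for (int i = 0; i < chunk; i++) {
        local_total += local[i];
        prefix[i] = local_total;           /* prefix sums within the chunk */
    }

    /* inclusive scan of chunk totals across ranks */
    MPI_Scan(&local_total, &scanned, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double offset = scanned - local_total; /* total of all lower-ranked chunks */

    for (int i = 0; i < chunk; i++)
        prefix[i] += offset;               /* shift to global prefix sums */
}
```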
Clone the repository and move into the source directory:

```bash
git clone https://github.com/Introduction-to-Parallel-Computing/Collective-Communication.git
cd Collective-Communication/src
```

- Programming Language: C
- Parallel Environment: MPI
- MPI Routines Used: `MPI_Init`, `MPI_Comm_rank`, `MPI_Comm_size`, `MPI_Bcast`, `MPI_Scatter`, `MPI_Gather`, `MPI_Reduce`, `MPI_Scan`, `MPI_Finalize`
- Primary: Collective communication
- Secondary: Point-to-point blocking communication (used specifically for the prefix sum logic)
Compile the source code using the MPI compiler wrapper:
```bash
mpicc -o collective_communication collective_communication.c
```

Run the program with `mpirun`, specifying the number of processes:

```bash
mpirun -np 4 ./collective_communication
```

Important:
The vector size `N` must satisfy `N % p == 0` (i.e., N must be an integer multiple of the number of processes). The current implementation does not support uneven vector sizes across processes: the case `N % p != 0` is not handled, as some processes would otherwise remain idle.
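A sketch of the kind of guard that enforces this constraint at startup (hypothetical function and variable names; the actual check in `src/collective_communication.c` may differ):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: abort early when N is not an integer multiple of the process count. */
int check_divisibility(int N, MPI_Comm comm) {
    int p, rank;
    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &rank);
    if (N % p != 0) {
        if (rank == 0)
            fprintf(stderr, "N = %d must be divisible by the number of processes p = %d\n", N, p);
        MPI_Abort(comm, 1);   /* stop instead of leaving ranks with uneven chunks */
        return 0;
    }
    return 1;
}
```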
This project demonstrates effective use of MPI collective communication for distributed numerical processing. It highlights practical applications of MPI_Bcast, MPI_Scatter, MPI_Reduce, and MPI_Scan, offering a strong foundation for understanding data-parallel computation and process coordination in high-performance computing environments.
- Navigate to the `docs/` directory
- Open the report corresponding to your preferred language:
  - English: `Collective-Communication.pdf`
  - Greek: `Συλλογική-Επικοινωνία.pdf`
