Version 3 (modified by francesca, 4 years ago)
MPI3 collective neighbours communications instead of point to point communications
The PI is responsible for closely following the progress of the action, and in particular for contacting the NEMO project manager if the preview (or review) is delayed beyond the expected 2 weeks.
Summary
| Action | MPI3 collective neighbours communications instead of point to point communications |
|---|---|
| PI(s) | Silvia Mocavero and Italo Epicoco |
| Digest | MPI-3 provides new neighbourhood collective operations that allow a halo exchange to be performed with a single MPI communication call. |
| Dependencies | If any |
| Branch | dev_r13296_HPC-07_mocavero_mpi3 |
| Previewer(s) | Mirek Andrejczuk |
| Reviewer(s) | Mirek Andrejczuk |
| Ticket | #2496 |
Description
This is the continuation of the work started in 2019 (HPC-12_Mocavero_mpi3).
MPI-3 provides new neighbourhood collective operations (e.g. MPI_Neighbor_allgather and MPI_Neighbor_alltoall) that allow a halo exchange to be performed with a single MPI communication call.
These collective communications were integrated and tested in the NEMO code during 2019 in order to compare their performance with the traditional point-to-point halo exchange currently implemented in NEMO. The first version of the implementation uses a Cartesian topology, so it supports neither the 9-point stencil nor land domain exclusion, and the north fold is handled as usual. The new collective communications have been tested on a representative kernel implementing the FCT advection scheme.
Preliminary tests show an improvement of 18-32% on the GYRE_PISCES configuration (with nn_GYRE=200), depending on the number of allocated cores. The output accuracy is preserved.
During 2020 we intend to integrate a graph topology so that the routines using a 9-point stencil, land domain exclusion and the north-fold exchanges are also supported through MPI-3 neighbourhood collective communications.
Implementation
Step 1: alignment of the dev_r11470_HPC_12_mpi3 branch with the new trunk
Step 2: integration of a graph topology to allow each process to exchange halos with diagonal processes (when a 9-point stencil is needed) or with non-neighbouring processes (when land domain exclusion is activated or the north fold has to be handled)
Step 3: replacement of point-to-point communications with collective ones within the NEMO code
Documentation updates
...
Preview
...
Tests
...
Review
...