Ticket Diff – NEMO

Changes between Version 2 and Version 5 of Ticket #2011


Timestamp: 2019-05-15T17:43:26+02:00
Author: francesca
Comment: (none)

  • Ticket #2011

    • Property Summary changed from HPC-04_Mocavero_mpi3 to HPC-04(2018WP)_Mocavero_mpi3
    • Property Priority changed from low to high
  • Ticket #2011 – Description

    v2 → v5

    == Context

    Removed:
    Reducing communications overhead

    Added:
    MPI-3 provides new neighbourhood collective operations (i.e. MPI_Neighbor_allgather and MPI_Neighbor_alltoall) that allow a halo exchange to be performed with a single MPI communication call when a 5-point stencil is used.

    Collective communications will be tested in the NEMO code in order to evaluate their performance compared with the traditional point-to-point halo exchange currently implemented in NEMO.

    The replacement of the point-to-point communications with the new collective ones will be designed and implemented taking care to preserve the accuracy of the results.
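As an illustration of the approach described in the Context above, the following minimal Fortran sketch (an editorial addition, not part of the ticket and not NEMO's lbc_lnk code) performs a 5-point-stencil halo exchange with a single MPI_Neighbor_alltoall call on a 2D cartesian communicator. The subdomain size, buffer layout and non-periodic boundaries are illustrative assumptions.

{{{
! Minimal sketch: halo exchange of the four strips of a 2D subdomain with one
! MPI-3 neighbourhood collective call (illustrative sizes, not NEMO code).
program halo_mpi3_sketch
   use mpi
   implicit none
   integer, parameter :: jpi = 64                        ! illustrative strip length
   integer            :: ierr, nprocs, comm_cart
   integer            :: dims(2)
   logical            :: periods(2)
   double precision   :: sendbuf(jpi,4), recvbuf(jpi,4)  ! one strip per neighbour

   call MPI_Init(ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

   ! Cartesian topology defining the 5-point-stencil neighbours of each process
   dims(:)    = 0
   periods(:) = .false.
   call MPI_Dims_create(nprocs, 2, dims, ierr)
   call MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .true., comm_cart, ierr)

   sendbuf(:,:) = 1.0d0   ! would hold the inner rows/columns bordering each halo

   ! A single call exchanges all four halo strips; for a cartesian communicator
   ! the neighbours are ordered by dimension, negative then positive direction.
   call MPI_Neighbor_alltoall(sendbuf, jpi, MPI_DOUBLE_PRECISION, &
                              recvbuf, jpi, MPI_DOUBLE_PRECISION, comm_cart, ierr)

   call MPI_Finalize(ierr)
end program halo_mpi3_sketch
}}}

Missing neighbours at non-periodic domain boundaries are MPI_PROC_NULL ranks, so the corresponding buffer blocks are simply left untouched by the collective call.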
    == Implementation plan

    Removed:
    Analysis of the scalability improvement obtained by using the new MPI-3 communications (e.g. collective neighbour communications) instead of point-to-point communications.

    Added:
    The work, started in 2018, is described in the following steps:
    Removed:
    Step 1: extraction of a mini-app to be used as a test case (done)

    Added:
    Step 1: extraction of a mini-app to be used as a test case. The advection kernel has been chosen as the test case and a mini-app has been implemented. The parallel application performs the MUSCL advection scheme, and both the subdomain size and the number of parallel processes can be set by the user (done)
    Removed:
    Step 2: implementation and performance comparison between the new MPI-3 collective communications and the standard MPI-2 point-to-point communications (ongoing)

    Added:
    Step 2: integration of the new MPI-3 collective communications in the mini-app and performance comparison with the standard MPI-2 point-to-point communications. The proof of concept will be evaluated by varying the subdomain size. The performance analysis will be run on systems available at CMCC; however, tests on other systems (available at Consortium partner sites) are welcome (ongoing)
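For the comparison mentioned in Step 2, the point-to-point baseline amounts to one non-blocking send/receive pair per neighbour. The sketch below is again editorial, with illustrative buffer names and sizes (matching the sketch above, not the mini-app); it times one exchange with MPI_Wtime so it can be set against the single-collective version.

{{{
! Sketch of the point-to-point baseline: four Irecv/Isend pairs per halo
! exchange, timed with MPI_Wtime (illustrative, not the mini-app code).
subroutine halo_p2p_timed(comm_cart, jpi, sendbuf, recvbuf, elapsed)
   use mpi
   implicit none
   integer,          intent(in)  :: comm_cart, jpi
   double precision, intent(in)  :: sendbuf(jpi,4)
   double precision, intent(out) :: recvbuf(jpi,4)
   double precision, intent(out) :: elapsed
   integer          :: ierr, jd, jn, nbr(4), req(8)
   double precision :: t0

   ! Same neighbour ordering as the neighbourhood collective:
   ! for each dimension, negative direction first, then positive.
   do jd = 1, 2
      call MPI_Cart_shift(comm_cart, jd-1, 1, nbr(2*jd-1), nbr(2*jd), ierr)
   end do

   t0 = MPI_Wtime()
   do jn = 1, 4
      call MPI_Irecv(recvbuf(1,jn), jpi, MPI_DOUBLE_PRECISION, nbr(jn), 1, &
                     comm_cart, req(jn), ierr)
      call MPI_Isend(sendbuf(1,jn), jpi, MPI_DOUBLE_PRECISION, nbr(jn), 1, &
                     comm_cart, req(4+jn), ierr)
   end do
   call MPI_Waitall(8, req, MPI_STATUSES_IGNORE, ierr)
   elapsed = MPI_Wtime() - t0
end subroutine halo_p2p_timed
}}}

Running both variants over a range of subdomain sizes, as proposed in Step 2, yields the point-to-point versus collective timings to be compared.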
    Removed:
    Step 3: integration of the new approach in the NEMO code and evaluation of the improvements on the NEMO kernels compliant with the MPI-3 collective neighbour communication stencil (depending on Step 2 results)

    Added:
    Step 3: the collective communications will be integrated in the NEMO code. The use of collective communications could be made optional, with the choice between point-to-point and collective communications left to the user (through a dedicated namelist parameter), also depending on the architecture where the code will run. The initialisation of the cartesian topology will be integrated in the mppini module, while the new version of lbc_lnk (performing a single MPI-3 collective call) will be added in the lib_mpp module. No changes are required in the NEMO routines where lbc_lnk is called.

    The proposed changes do not impact NEMO usability. The reference manual will not be changed.
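Step 3 foresees a dedicated namelist parameter for choosing between point-to-point and collective exchanges. The sketch below only illustrates what such a user-facing switch could look like; the namelist group nam_mpi3_sketch, the flag ln_mpi3coll and the file name are hypothetical placeholders, not the names that will appear in NEMO.

{{{
! Hypothetical illustration of a namelist switch between the two halo-exchange
! paths; group, variable and file names are placeholders, not actual NEMO names.
program namelist_switch_sketch
   implicit none
   logical :: ln_mpi3coll     ! .true. = use MPI-3 neighbourhood collectives
   integer :: ios, inum
   namelist /nam_mpi3_sketch/ ln_mpi3coll

   ln_mpi3coll = .false.      ! default: keep the point-to-point halo exchange
   open(newunit=inum, file='namelist_sketch', status='old', iostat=ios)
   if( ios == 0 ) then
      read(inum, nml=nam_mpi3_sketch, iostat=ios)
      close(inum)
   end if

   if( ln_mpi3coll ) then
      print *, 'halo exchange: single MPI-3 neighbourhood collective call'
   else
      print *, 'halo exchange: point-to-point sends and receives'
   end if
end program namelist_switch_sketch
}}}

Such a run-time flag would keep the collective path optional and allow the choice to depend on the target architecture, as stated in Step 3.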