{{{
#!html

How to use the IPSL models and tools on the obelix/LSCE cluster

}}}

----

[[PageOutline(1-2,Table of contents,,numbered)]]

The LSCE computing environment is detailed here: [https://w3.lsce.ipsl.fr/informatique/util/index.php]. This web page can only be accessed from the LSCE network. Direct access to the cluster is only possible from LSCE or from the TGCC machines. The cluster can be accessed via the ssh and xdmcp protocols.

 * asterix : interactive use. The network includes a cluster for interactive use, seen as a single machine called '''asterix'''.
 * obelix : computing cluster. The LSCE has a small computing cluster, seen as a single machine called '''obelix'''; see its users' manual here: [https://w3.lsce.ipsl.fr/informatique/util/calcul/batch.php]. Installation, compilation and running are all done from obelix.

## Modipsl and compiling ##

Install and compile from obelix. By default, compilation is done in MPI parallel mode. The compiler is `ifort` (the ''Intel'' compiler). The '''lxiv8''' target in modipsl/util/AA_make.gdef is used on obelix. Currently the components ORCHIDEE, LMDZ, IOIPSL and XIOS can be installed on obelix using the default compiling options in modipsl.
[[BR]]

## libIGCM and environment ##

libIGCM can be used on obelix. The computing, rebuild and time-series steps are performed; no pack, atlas or monitoring is done. The default shell at LSCE is `tcsh`, whose syntax differs from the `ksh` syntax used by libIGCM. No specific configuration of your account is needed to compile and run using libIGCM, but you can use the environment proposed on the shared account by copying the file `/home/users/igcmg/.bashrc` into your `$HOME`.

## Disk space and archive directory ##

The home directory of each login in /home/users/ has very little space. You need write access to another disk, depending on the project you work on. All logins can write to the disk /home/scratch01/login. Note that files older than 30 days can be deleted without warning on this disk.

[[NoteBox(warn,By default the simulation output is stored in /home/scratch01. To keep a permanent archive you need to change this by setting the '''ARCHIVE''' variable in config.card., 600px)]]

The default archive directory at obelix is set to /home/scratch01/yourlogin. To store your simulation on a permanent disk, set '''ARCHIVE=/home/diskXXX/yourlogin''' in the [!UserChoices] section of config.card. Example config.card:
{{{
[UserChoices]
JobName=test01
TagName=OL2
SpaceName=PROD
ExperimentName=clim
ARCHIVE=/home/orchidee02/login
...
}}}
Change login to your personal login. With the above config.card, the simulation will be stored in /home/orchidee02/login/IGCM_OUT/OL2/PROD/clim/test01.

## Example of parallel MPI job ##

Here is an example of a job running the ORCHIDEE offline model on 15 cores with the XIOS server on 1 core. This example can also be used for LMDZ; in that case, change the executable name in the run_file. At a minimum, before launching the script, source the modules that are needed: the same modules as used during compilation should be set. All input files and executables should be available in the folder from which the job is launched.
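A minimal sketch of how such a run directory might be prepared, assuming the executables were built by modipsl (the paths, directory names and parameter files below are only examples; adapt them to your own installation):
{{{
## Sketch only: adapt all paths and names to your installation
mkdir -p /home/scratch01/$USER/MyTest
cd /home/scratch01/$USER/MyTest
cp /path/to/modipsl/bin/orchidee_ol_prod .        # ORCHIDEE offline executable
cp /path/to/modipsl/bin/xios_server_prod.exe .    # XIOS server executable
# Also copy or link the parameter files (e.g. run.def, iodef.xml) and the
# forcing files required by your configuration before submitting the job.
}}}
The job script itself: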
{{{
######################
##   OBELIX LSCE    ##
######################
#PBS -N MyTest
#PBS -m a
#PBS -j oe
#PBS -q medium
#PBS -o Script_Output
#PBS -S /bin/ksh
#PBS -l nodes=1:ppn=16

## Go to the current folder
cd $PBS_O_WORKDIR

## To access the module command
source /usr/share/Modules/init/ksh

## Source the same modules as used during compilation. See the examples below,
## depending on the configuration used.
# For ORCHIDEE_3 and more recent offline versions, or coupled v6.2 and more recent versions:
#source ../modipsl/config/ORCHIDEE_OL/ARCH/arch-ifort_LSCE.env
#source ../modipsl/config/LMDZOR_v6/ARCH/arch-ifort_LSCE.env
# For ORCHIDEE_2_0, 2_1, 2_2 and coupled models v6.1.x:
#source /home/orchideeshare/igcmg/MachineEnvironment/obelix/env_atlas_obelix

## Create a run_file used to launch the model with XIOS in server mode.
# Note: the total number of MPI processes (here 15+1) should equal the number of
# cores requested in the header. For example, if you set ppn=16 in the header,
# set -np 15 for orchidee and keep -np 1 for xios.
# If you want to run LMDZ, change orchidee_ol_prod into the name of the lmdz executable.
rm -f run_file
echo "-np 15 ./orchidee_ol_prod" > run_file
echo "-np 1 ./xios_server_prod.exe" >> run_file
chmod +x run_file

## Launch the executables
time mpirun --app ./run_file
}}}

To submit the job, use the command '''qsub''', and follow your simulation with the command '''qstat -u login''' (a short sketch of these commands is given at the end of this page).

## Specific installation of LMDZ ##

 * Download a configuration in the standard way. All LMDZOR_v6 configurations can be used on obelix. The fully coupled model IPSLCM has not been adapted for use on obelix.
 * Change the default compilation from the hybrid mpi_omp mode to mpi only. Do this in config/LMDZOR_v6/AA_make on the '''makelmdz_fcm''' line by changing '''-parallel mpi_omp''' into '''-parallel mpi'''. Recreate the makefile by executing modipsl/util/ins_make, then compile as usual. (The whole sequence is recapped in the sketch after this list.)
 * Before launching:
   * LMDZ has an option in the run.def parameter file to use a different filter close to the north and south poles. The default filter in the v6 configurations is the FFT filter because it is faster. No specific FFT library has been added in the obelix compile options, so this filter cannot be used and must be changed before running LMDZ. This is done by setting '''use_filtre_fft=n''' in the LMDZOR_v6/GENERAL/PARAM/run.def file. This directory is copied at each creation of a submit directory by ins_job.
   * Adapt config.card to use only MPI, for example as follows:
{{{
[Executable]
#D- For each component, Real name of executable, Name of executable for oasis
ATM= (gcm.e.mpi.prod, lmdz.x, 7MPI, 1OMP)
SRF= ("", "")
SBG= ("", "")
IOS= (xios_server.exe, xios.x, 1MPI)
}}}
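As a recap of the LMDZ-specific steps above, here is a minimal sketch of the sequence, assuming a standard modipsl tree (the `sed` one-liner is only one possible way to edit AA_make, and the exact compilation command depends on your configuration):
{{{
cd modipsl/config/LMDZOR_v6
# Switch the makelmdz_fcm line from hybrid MPI/OpenMP to pure MPI
sed -i 's/-parallel mpi_omp/-parallel mpi/' AA_make
# Recreate the makefile and compile as usual
../../util/ins_make
gmake                    # or the make command/target used by your configuration
# Disable the FFT polar filter before running:
# edit GENERAL/PARAM/run.def and set use_filtre_fft=n
}}}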
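Finally, a short sketch of submitting and monitoring a job with the standard PBS commands (Job_MyTest is a placeholder for the name of your own job script, and login stands for your own login):
{{{
qsub Job_MyTest        # submit the job script
qstat -u login         # follow your jobs in the queue
qdel <job_id>          # cancel a job if needed
}}}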