{{{ #!html

How to use the IPSL models and tools at LSCE

}}}

----

[[PageOutline(1-2,Table of contents,,numbered)]]

The LSCE computing environment is detailed here: [https://w3.lsce.ipsl.fr/informatique/util/index.php]. This webpage can only be accessed from the LSCE network.

 * '''Interactive use''' [[BR]] The network includes a cluster for interactive work. This cluster is seen as a single machine called '''asterix.lscelb.extra.cea.fr''', which can be shortened to '''asterix.lscelb'''. Direct access to this cluster is only possible from the LSCE network or from the CCRT machines. The cluster can be accessed via the ssh and xdmcp protocols.
 * '''The computing cluster''' [[BR]] The LSCE has a small computing cluster; see its users' manual here: [https://w3.lsce.ipsl.fr/informatique/util/calcul/batch.php]. This cluster is seen as a single machine called '''obelix.lscelb.extra.cea.fr''', which can be shortened to '''obelix.lscelb'''.

## Modipsl and compiling ##
By default, compilation is done for MPI parallel mode. The compiler is `ifort` (the ''Intel'' compiler). The '''lxiv8''' target in modipsl/util/AA_make.gdef is used on obelix. Currently the components ORCHIDEE, LMDZ, IOIPSL and XIOS can be installed on obelix using the compiling options provided in modipsl. [[BR]]

## libIGCM and environment ##
libIGCM can be used on obelix. The computing, rebuild and time-series steps are performed; no atlas or monitoring is done. The default shell at LSCE is `tcsh`, whose syntax differs from the `ksh` syntax used by libIGCM. No specific configuration of your account is needed to compile and run using libIGCM, but you can adopt the environment proposed on the shared account by copying the file `/home/users/igcmg/.bashrc` into your `$HOME`.

## Disk space and archive directory ##
The home directory of each login, /home/user/, has very little space. You need write access to another disk, depending on the project you work on. Every login can write to the disk /home/scratch01/login; files older than 30 days can be deleted without warning on this disk. The default archive directory on obelix is set to /home/scratch01/yourlogin. To store your simulation on a permanent disk, set ARCHIVE=/home/disk/yourlogin in the [!UserChoices] section of config.card. Example config.card:
{{{
[UserChoices]
JobName=test01
TagName=OL2
SpaceName=PROD
ExperimentName=clim
ARCHIVE=/home/orchidee01/login
...
}}}
With the above config.card, the simulation will be stored in /home/orchidee01/login/IGCM_OUT/OL2/PROD/clim/test01.

## Example of parallel MPI job ##
Here is an example of a simple job to run the orchidee_ol executable. All input files and the executable must be in the working directory before the job is submitted.
{{{
######################
##   OBELIX LSCE    ##
######################
#PBS -N MyTest
#PBS -m a
#PBS -j oe
#PBS -q medium
#PBS -o Script_Output_SECHSTOM.000001
#PBS -S /bin/ksh
#PBS -v BATCH_NUM_PROC_TOT=4
#PBS -l nodes=1:ppn=4

cd $PBS_O_WORKDIR
mpirun -np ${BATCH_NUM_PROC_TOT} orchidee_ol
}}}
Submit the job with the command '''qsub''' and follow your simulation with the command '''qstat -u login'''.
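For example, assuming the job script above is saved in a file named Job_MyTest (the file name is only an illustration), submission and monitoring could look like this:
{{{
# Submit the job script to the batch system (Job_MyTest is a placeholder name)
qsub Job_MyTest

# List your jobs in the queue (replace login with your LSCE login)
qstat -u login
}}}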