
How to run a simple test case with ORCHIDEE offline with a batch job

First prepare a run directory following the how-to How to run a simple test case with ORCHIDEE.

Then prepare a job according to the computer environment and the choice of parallelization, and submit it.

curie/TGCC

MPI mode

  • First prepare a run directory following the how-to How to run a simple test case with ORCHIDEE. Instead of using wget to copy the files, they can be found in the directory /ccc/work/cont003/dsm/p86ipsl/IGCM and can be linked or copied directly; the path under this directory is the same as in the wget commands (see the sketch after this list).
  • Then create a text file as follows and save it as Job_curie :
    #!/bin/ksh
    ######################
    ## CURIE   TGCC/CEA ##
    ######################
    #MSUB -r test             # name of the job
    #MSUB -o Script_Output    # name of output file for standard messages
    #MSUB -e Script_Output    # name of output file for error messages
    #MSUB -eo
    #MSUB -n 4                # Requested number of cores
    #MSUB -T 1800             # Time limit in seconds
    #MSUB -Q test             # Queue test, priority access
    #MSUB -q standard
    #MSUB -A gen6328          # Set your project id
    BATCH_NUM_PROC_TOT=$BRIDGE_MSUB_NPROC
    set +x
    
    cd ${BRIDGE_MSUB_PWD}
    /usr/bin/time ccc_mprun -n 4 ./orchidee_ol
    
  • Submit the job :
    ccc_msub Job_curie

  • Survey the job while it is still in queue :
    ccc_mstat -u

  • Kill the job if needed, take the jobid from the ccc_mstat command :
    ccc_mdel jobid
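
A minimal sketch of the linking step mentioned in the first bullet, assuming the run directory has already been prepared. The run directory and file name below are placeholders; the real ones are listed in the basic how-to :

    # Sketch only: link an input file from the shared TGCC directory instead of
    # downloading it with wget. Replace the placeholder run directory and file
    # name with the ones given in How to run a simple test case with ORCHIDEE.
    IGCM_DIR=/ccc/work/cont003/dsm/p86ipsl/IGCM
    RUN_DIR=$HOME/MyTestRun              # placeholder: your prepared run directory
    FILE=path/to/forcing_file.nc         # placeholder: same relative path as in the wget command
    cd $RUN_DIR
    ln -s $IGCM_DIR/$FILE .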

Hybrid MPI/OpenMP mode

Note that the hybrid mode is only used for the coupled mode with LMDZ (the job below launches lmdz.x, not orchidee_ol).

#!/bin/bash 
#MSUB -r test                 # Request name 
#MSUB -n 32                   # Total number of tasks to use (MPI)
#MSUB -c 2                    # Number of threads per task to use (OMP)
#MSUB -N 4                    # Number of nodes = tasks*threads/cores per node = 32*2/16 = 4
#MSUB -T 1800                 # Elapsed time limit in seconds 
#MSUB -q standard             # Choosing thin nodes
#MSUB -Q test                 # Choosing test Queue
#MSUB -o Script_output        # Standard output
#MSUB -e Script_output        # Error output
#MSUB -A gen****              # submission group number


set -x 
cd ${BRIDGE_MSUB_PWD}
export KMP_STACKSIZE=2g
export KMP_LIBRARY=turnaround
export MKL_SERIAL=YES
export OMP_NUM_THREADS=2         # = number of threads asked for in the header (2, 4 or 8)


ccc_mprun ./lmdz.x
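
The hybrid job is submitted, surveyed and killed with the same TGCC commands as in MPI mode; assuming the header above has been saved in a file named Job_curie_hybrid (the name is only an example) :

    ccc_msub Job_curie_hybrid     # submit the job
    ccc_mstat -u                  # survey the job while it is still in queue
    ccc_mdel jobid                # kill the job if needed (jobid from ccc_mstat)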

ada/IDRIS MPI mode

Create a text file as follows and save it as Job :

#!/bin/ksh
# ######################
# ## ADA IDRIS ##
# ######################
# Job name
# @ job_name = SECHSTOM
# Job type
# @ job_type = parallel
# Standard output file name
# @ output = Script_Output_SECHSTOM.000001
# Error output file name
# @ error = Script_Output_SECHSTOM.000001
# Total number of tasks
# @ total_tasks = 4
# @ environment = "BATCH_NUM_PROC_TOT=4"
# Maximum CPU time per task hh:mm:ss
# @ wall_clock_limit = 1:00:00
# End of the header options
# @ queue
# Temporary workaround due to memory problems with the MPI version
export MP_EUILIBPATH=/smplocal/lib/ibmhpc/pe12012/ppe.pami/gnu/lib64/pami64
date
/usr/bin/time poe ./orchidee_ol
date
  • Submit the job :
    llsubmit Job
    
  • Survey the job while it is still in queue (replace login below with your user name) :
    llq | grep login
    
  • Kill the job if needed, take the jobid from the llq command :
    llcancel jobid
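
To follow the run while the job is executing, one possibility (not described in the original page) is to tail the output file declared in the job header :

    tail -f Script_Output_SECHSTOM.000001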
    

obelix/LSCE MPI mode

Create a text file as follows and save it as Job :

######################
## OBELIX      LSCE ##
######################
#PBS -N MyTest
#PBS -m a
#PBS -j oe
#PBS -q medium
#PBS -o Script_Output
#PBS -S /bin/ksh
#PBS -v BATCH_NUM_PROC_TOT=4
#PBS -l nodes=1:ppn=4

## Load same netcdf as used in the compilation
. /usr/share/Modules/init/ksh
module unload netcdf
module load netcdf/4p

## Go to current directory
cd $PBS_O_WORKDIR

## Launch the model
mpirun -np ${BATCH_NUM_PROC_TOT} orchidee_ol
  • Submit the job :
    qsub Job
    
  • Survey the job while it is still in queue (replace login below with your user name) :
    qstat -u login
    
  • Kill the job if needed, take the jobid from the qstat command :
    qdel jobid
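
To check interactively that the job will pick up the same netcdf module as the one used for compilation, a quick sketch (not part of the original page, netcdf/4p assumed as in the job above) :

    . /usr/share/Modules/init/ksh    # same module initialisation as in the job
    module unload netcdf
    module load netcdf/4p
    module list                      # netcdf/4p should appear in the loaded modules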