Changes between Version 38 and Version 39 of Doc/ComputingCenters/TGCC/Irene
- Timestamp: 08/16/18 09:50:50
 * On-line users manual: https://www-tgcc.ccc.cea.fr/docs/irene.info.pdf (you will need a TGCC login and password)
 * Irene computing nodes: the nodes of partition '''skylake''' have 48 cores each, which is 3 times more than the computing nodes of the standard partition of Curie.
  * Skylake nodes for regular computation (from irene.info)
   * Partition name: skylake
   * CPUs: 2x24-core Intel Skylake @ 2.7 GHz (AVX512)
   * Cores/Node: 48
   * Nodes: 1656
   * Total cores: 79488
   * RAM/Node: 192 GB
   * RAM/Core: 4 GB
 * When submitting a job through ccc_msub or ccc_mprun, you must specify -m work, -m scratch, -m store, or combine them as in -m work,scratch. This constraint has the advantage that your jobs won't be suspended if a file system you don't need becomes unavailable. This is done in all libIGCM jobs.
 * Compute nodes are diskless: /tmp is no longer hosted on a local hard drive but in system memory, offering up to 16 GB (compared to 64 GB on Curie). Please note that any data written to it reduces the memory that remains available for computations. For libIGCM post-processing jobs you might need to use more cores than were used on Curie or, preferably, use the xlarge nodes.
 * The default time limit for a job submission is 2 hours (7200 s), compared to 24 h (86400 s) on Curie.
 * Irene post-processing nodes: the xlarge nodes are free and useful for post-processing operations. Since 2018/08/16 it has been possible to use them in libIGCM.
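The submission rules above (the mandatory -m file-system flag, the skylake partition, the 7200 s default time limit) can be sketched in a job script. This is a minimal illustration, not an official template: the job name, task count, and executable are placeholders, and only the directives discussed here are shown.

```shell
#!/bin/bash
# Hypothetical ccc_msub job script for the Irene skylake partition.
# Submit with: ccc_msub job.sh
#MSUB -r my_simulation     # job name (placeholder)
#MSUB -q skylake           # skylake partition: 48 cores per node
#MSUB -n 48                # number of tasks, here one full node (placeholder)
#MSUB -T 7200              # wall time in seconds; 7200 s (2 h) is the default limit
#MSUB -m work,scratch      # declare only the file systems the job needs
ccc_mprun ./my_exe         # placeholder executable launched on the compute nodes
```

Declaring only the file systems you need (here work and scratch) is what keeps the job from being suspended when an unrelated file system becomes unavailable.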
  * Fat nodes for computations requiring a lot of shared memory (from irene.info)
   * Partition name: xlarge
   * CPUs: 4x28-core Intel Skylake @ 2.1 GHz
   * GPUs: 1x Nvidia Pascal P100
   * Cores/Node: 112
   * Nodes: 5
   * Total cores: 560
   * RAM/Node: 3 TB
   * RAM/Core: 5.3 GB
   * IO: 2 HDD of 1 TB + 1 SSD 1600 GB NVMe

# Job manager commands #
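As a hedged sketch, a post-processing job could target the xlarge partition described above; the job name, task count, and script path are hypothetical, and only directives already introduced on this page are used.

```shell
#!/bin/bash
# Hypothetical post-processing job on the xlarge (fat) nodes.
# Submit with: ccc_msub post_job.sh
#MSUB -r post_processing   # job name (placeholder)
#MSUB -q xlarge            # fat-node partition: 112 cores and 3 TB RAM per node
#MSUB -n 1                 # a single task, relying on the large shared memory (placeholder)
#MSUB -T 7200              # wall time in seconds (the default 2 h limit)
#MSUB -m work,store        # file systems actually used by the post-processing
ccc_mprun ./post_script.sh # placeholder post-processing script
```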