MareNostrum 5
EESSI, the base of the EFP Federated Software Catalog,
has been available system-wide on MareNostrum 5 since early 2025. It is provided via the GPFS parallel
filesystem rather than directly via CernVM-FS: /cvmfs/software.eessi.io is a symbolic link that
points to a directory that is automatically synchronised with the EESSI repository.
Synchronisation
Synchronisation runs daily, so once new software has been integrated into EESSI, it appears on the system within at most 24 hours.
Access
Standard access instructions apply, even though EESSI is not provided directly via CernVM-FS.
Usage
EESSI is available by default on both login and compute nodes, but requires loading a gateway module before it can be used:
# Unload any loaded modules
module purge
# Keep bsc_* commands available
module load bsc/1.0
# Load the gateway module to EESSI
module load eessi_mn5/1
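After loading the gateway module, you can check that the EESSI software stack is visible. The sketch below is a hedged example: the exact version numbers and module names reported by `module avail` depend on what is currently deployed on the system.

```shell
# List the EESSI modules exposed by the gateway
# (available versions depend on the current deployment)
module avail EESSI

# Load a specific EESSI version, then browse the software it provides
module load EESSI/2023.06
module avail
```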
Known problems and limitations
- EESSI is available via GPFS, a file system typically optimised for larger files. You may perceive some additional latency compared to systems that use native CernVM-FS (which is optimised for delivering software stacks).
- With srun, you can avoid a harmless warning from PMIx with:
  # Silence a harmless warning from PMIx (only required for EESSI/2023.06)
  export PMIX_MCA_psec=native
- GPU drivers for EESSI version 2025.06 are currently not available, but will be added soon.
System-specific best practices
Multi-node jobs
We provide an example of running a latency test between nodes from the OSU Micro-Benchmarks suite. The test runs on 2 nodes, each using a GPU, and measures the device-to-device (node-to-node) latency:
# Allocate resources for the test
# (2 nodes, 1 task per node, 1 GPU per task, site policy requires a minimum 20 CPUs per GPU)
salloc --time=00:30:00 --account="<SLURM_ACCOUNT>" --qos=acc_ehpc --nodes 2 --ntasks-per-node 1 --cpus-per-task 20 --gres=gpu:1
# Configure the EESSI environment and load the modules we need for our test
module purge
module load bsc/1.0 # Keep bsc_* commands available
module load eessi_mn5/1 # Load the gateway module to EESSI
module load EESSI/2023.06 # Load the EESSI module
module load OSU-Micro-Benchmarks/7.5-gompi-2023b-CUDA-12.4.0 # Load our required software
# Silence a harmless warning from PMIx (only required for EESSI/2023.06)
export PMIX_MCA_psec=native
# Check we have two different nodes
srun --mpi=pmix -n 2 hostname
# Launch our latency test
srun --mpi=pmix -n 2 osu_latency --accelerator cuda
We must include --mpi=pmix above to tell Slurm how to launch our MPI processes.
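The interactive workflow above can also be submitted as a batch job. The job script below is a sketch that reuses the same resource request, QOS, placeholder account, and module names as the interactive example; adjust them to your project before submitting with sbatch.

```shell
#!/bin/bash
# Same resources as the interactive example:
# 2 nodes, 1 task per node, 1 GPU per task, 20 CPUs per GPU (site minimum)
#SBATCH --time=00:30:00
#SBATCH --account=<SLURM_ACCOUNT>
#SBATCH --qos=acc_ehpc
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=20
#SBATCH --gres=gpu:1

# Configure the EESSI environment and load the test software
module purge
module load bsc/1.0
module load eessi_mn5/1
module load EESSI/2023.06
module load OSU-Micro-Benchmarks/7.5-gompi-2023b-CUDA-12.4.0

# Silence a harmless warning from PMIx (only required for EESSI/2023.06)
export PMIX_MCA_psec=native

# Launch the latency test; --mpi=pmix tells Slurm how to start the MPI processes
srun --mpi=pmix -n 2 osu_latency --accelerator cuda
```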
Getting help with using FSC on MareNostrum 5
If you need help with using EESSI on MareNostrum 5, please contact the EFP Helpdesk.