Leonardo
Warning
As of 30 March 2026, there are known issues with non-interactive and multi-node usage of EESSI on Leonardo. The documentation below assumes that these issues have been resolved.
EESSI, the base of the EFP Federated Software Catalog, has been available system-wide on Leonardo via a native CernVM-FS deployment since early 2026. The mount is only made available on-demand; see Access below.
Synchronisation
Leonardo is synchronised daily with the public EESSI infrastructure. This means that once new software has been integrated into EESSI, the maximum wait time for it to appear on the system is 24 hours.
Access
EESSI is only available on compute nodes, not on the login nodes.
To enable access to EESSI within your job, you must include _CVMFS_ (including the underscores) in the name of your job. This marker triggers making the EESSI mount point available within the job context.
In Slurm, you can control the job name at submission time via the --job-name or -J option.
Other than this requirement, the standard access instructions apply.
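For example, you can set the name at submission time (the script and job names below are placeholders). Since a job whose name lacks the marker will simply run without the EESSI mount, a quick sanity check of the name before submitting can save a failed run:

```shell
#!/bin/sh
# Placeholder job name; it must contain _CVMFS_ for EESSI to be mounted
job_name="analysis_CVMFS_"

# The actual submission would be:
#   sbatch --job-name="$job_name" job.sh
# (or set "#SBATCH --job-name=analysis_CVMFS_" inside the batch script)

# Sanity-check the marker before submitting:
case "$job_name" in
  *_CVMFS_*) echo "ok: EESSI will be mounted in this job" ;;
  *)         echo "error: job name must contain _CVMFS_" >&2 ;;
esac
```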
Usage
EESSI is available on-demand (see Access above) on compute nodes but requires initialisation. See the initialisation instructions for the Federated Software Catalog and then continue with loading your required software.
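A typical sequence inside a job looks like the following, assuming the 2025.06 EESSI version used elsewhere on this page (the module shown is only an illustration; run `module avail` to see what is actually provided):

```shell
# Initialise the EESSI environment (bash)
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash

# Browse the available software, then load what you need
module avail
module load OSU-Micro-Benchmarks/7.5-gompi-2024a-CUDA-12.6.0
```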
Known problems and limitations
- EESSI is not available on login nodes;
- _CVMFS_ must be included somewhere in your job name (see Access above).
System-specific best practices
Multi-node jobs
We provide an example of running a latency test between nodes from the OSU Micro-Benchmarks suite. This runs on 2 nodes, with one task per node, and measures the node-to-node latency:
# Allocate resources for the test (2 nodes, 1 task per node)
salloc --time=00:30:00 --account="<SLURM_ACCOUNT>" --job-name EFP_CVMFS_ --partition dcgp_usr_prod --nodes 2 --tasks-per-node 1
# Configure the EESSI environment and load the modules we need for our test
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash
module load OSU-Micro-Benchmarks/7.5-gompi-2024a-CUDA-12.6.0
# Check we have two different nodes
srun --mpi=pmix_v3 -n 2 hostname
# Launch our latency test
srun --mpi=pmix_v3 -n 2 osu_latency
The --mpi=pmix_v3 option is not strictly necessary: it is already set by default in the environment via the environment variable $SLURM_MPI_TYPE. We include it above for completeness.
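If you want to confirm this yourself from within a job, Slurm can report the supported MPI plugin types directly, and the site default can be read from the environment:

```shell
# List the MPI plugin types supported by this Slurm installation
srun --mpi=list

# Show the site default, which Leonardo sets via this environment variable
echo "$SLURM_MPI_TYPE"
```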
Getting help with using FSC on Leonardo
If you need help with using EESSI on Leonardo, please contact the EFP Helpdesk.