LUMI
Warning
As of 30 March 2026, multi-node usage of EESSI is still being tuned on LUMI. The documentation below assumes that this tuning has been completed.
EESSI, the base of the EFP Federated Software Catalog, has been available system-wide on LUMI since early 2026 via a SquashFS export of the software.eessi.io repository (which is only mounted on demand; see Access below).
Synchronisation
LUMI is synchronised daily with the public EESSI infrastructure. This means that once new software has been integrated into EESSI, the maximum wait time for it to appear on the system is 24 hours.
Access
Access on compute nodes
EESSI is only available on compute nodes when the eessi job constraint is used.
This is done by using the option --constraint=eessi when submitting your Slurm job.
Equivalently, you can set the $SBATCH_CONSTRAINT or $SALLOC_CONSTRAINT environment variable prior to starting a job:
export SBATCH_CONSTRAINT="eessi" # For the sbatch command
export SALLOC_CONSTRAINT="eessi" # For salloc
Other than this, the standard access instructions apply.
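Instead of setting the environment variables, the constraint can also be passed directly on the command line. A minimal sketch (the job script name and allocation parameters are placeholders, not LUMI defaults):

```shell
# Submit a batch job with EESSI available on the allocated nodes
# (myjob.sh is a placeholder for your own job script)
sbatch --constraint=eessi myjob.sh

# Or request an interactive allocation with EESSI available
salloc --constraint=eessi --nodes=1 --time=00:30:00 --account=<SLURM_ACCOUNT>
```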
Access on login nodes
EESSI is not available by default on the LUMI login nodes, but it is possible to browse and use EESSI there via a container. The necessary commands to start this are shown below:
# Specify bind mounts to have access to all storage locations
export SINGULARITY_BIND="/pfs/lustref1/eessicvmfs/software.eessi.io.latest.sqsh:/cvmfs:image-src=/,/pfs,/users,/projappl,/project,/scratch,/flash"
# Start a container with these bind mounts
singularity shell docker://ubuntu:24.04
Other than this, the standard access instructions apply.
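Once inside the container shell, the repository is mounted at /cvmfs and EESSI can be initialised in the usual way. A minimal sketch for browsing the available software (2025.06 is shown as an example version; pick the EESSI version you need):

```shell
# Inside the container: initialise the EESSI environment ...
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash
# ... and list the modules EESSI provides
module avail
```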
Usage
EESSI is available on-demand on compute nodes (see Access above), and requires initialisation. See the initialisation instructions for the Federated Software Catalog and then continue with loading your required software.
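As a minimal sketch of these two steps, initialisation followed by loading a module (the version directory and module name match the example job script on this page; substitute your own software as needed):

```shell
# Initialise the EESSI environment (example version 2025.06)
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash
# Load the software you need, e.g. the OSU Micro-Benchmarks used later on this page
module load OSU-Micro-Benchmarks/7.5-gompi-2025a
```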
Known problems and limitations
- EESSI is not available on login nodes (but can be used via singularity, see above);
- You must use eessi as a constraint in your Slurm job configuration, see Access above;
- MPI jobs currently require launching with mpirun (i.e., not srun), as there is no PMI server running;
- Multi-node MPI jobs may have slightly poorer communication performance than Cray MPICH (though still respectable);
- EESSI does not yet provide software installations that support AMD GPUs; this is actively being developed.
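The mpirun requirement above can be illustrated with a short sketch (my_mpi_app is a placeholder for your own MPI binary):

```shell
# Inside a job allocation submitted with --constraint=eessi:

# This works: mpirun is the MPI installation's own launcher
mpirun -n 4 ./my_mpi_app

# This will NOT currently work, as srun relies on a PMI server
# that is not available for EESSI's MPI:
# srun -n 4 ./my_mpi_app
```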
System-specific best practices
Multi-node jobs
We provide an example job script which runs the node-to-node latency test from the OSU Micro-Benchmarks suite on 2 nodes:
#!/usr/bin/bash
#SBATCH --time=00:30:00
#SBATCH --account="<SLURM_ACCOUNT>"
#SBATCH --partition small
#SBATCH --nodes 2
#SBATCH --ntasks-per-node 1
#SBATCH --cpus-per-task 1
#SBATCH --network=single_node_vni,job_vni,def_tles=0
#SBATCH --constraint=eessi
# Configure the EESSI environment and load the modules we need for our test
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash
module load OSU-Micro-Benchmarks/7.5-gompi-2025a
# Check we have two different nodes
mpirun -n 2 hostname
# Launch our latency test
mpirun -n 2 osu_latency
The --constraint=eessi option is critical for making EESSI available on the allocated nodes (see also Access above).
Getting help with using FSC on LUMI
If you need help with using EESSI on LUMI, please contact the EFP Helpdesk.