Deucalion

EESSI, the base of the EFP Federated Software Catalog, has been available system-wide on Deucalion via a native CernVM-FS deployment since early 2026.

Synchronisation

Deucalion is synchronised daily with the public EESSI infrastructure. This means that once new software has been integrated into EESSI, the maximum wait time for it to appear on the system is 24 hours.
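Whether a given snapshot has already reached the system can be checked via the extended attributes that CernVM-FS exposes on the mounted repository. A minimal sketch, assuming the attr utility is installed on the node:

```shell
# The repository revision increases with every publication/synchronisation;
# comparing it over time shows whether new content has arrived on Deucalion.
attr -q -g revision /cvmfs/software.eessi.io
```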

Access

Standard access instructions apply.

Usage

EESSI is available by default on both login and compute nodes but requires initialisation. See the initialisation instructions for the Federated Software Catalog and then continue with loading your required software.

Known problems and limitations

  • GPU drivers for EESSI version 2025.06 are currently not available; they require site configuration and will be added soon.
  • An InfiniBand-related warning about fork safety may be seen for the libibverbs library:
    [1774165437.998684] [gnx531:3813653:0]           ib_md.c:1232 UCX  WARN  IB: ibv_fork_init() was disabled or failed, yet a fork() has been issued.
    [1774165437.998691] [gnx531:3813653:0]           ib_md.c:1233 UCX  WARN  IB: data corruption might occur when using registered memory.
    
    This warning can be resolved by setting the $IBV_FORK_SAFE environment variable:
    export IBV_FORK_SAFE=1
    
    This enforces fork safety, at a (usually insignificant) performance cost due to the additional overhead it implies.
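In job scripts, the variable can be set defensively so that an existing site or user setting is not overridden. A minimal sketch (the guard is our addition, not something EESSI requires):

```shell
# Set IBV_FORK_SAFE only if it is not already defined, then export it so
# UCX/libibverbs in child processes (e.g. srun tasks) pick it up.
: "${IBV_FORK_SAFE:=1}"
export IBV_FORK_SAFE
echo "IBV_FORK_SAFE=$IBV_FORK_SAFE"
```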

System-specific best practices

Multi-node jobs

We provide an example of running a latency test between nodes from the OSU Micro-Benchmarks suite. The test runs on 2 nodes, each with a GPU, and measures the device-to-device (node-to-node) latency:

# Allocate resources for the test
# (2 nodes, 1 task per node, when asking for GPUs site policy requires 32 CPUs per task)
salloc --time=00:30:00 --account="<SLURM_ACCOUNT>" --partition=normal-a100-40 --nodes 2 --ntasks-per-node 1 --cpus-per-task 32 --gpus-per-task 1
# Configure the EESSI environment and load the modules we need for our test
source /cvmfs/software.eessi.io/versions/2025.06/init/lmod/bash
module load OSU-Micro-Benchmarks/7.5-gompi-2025a
# Set an environment variable to avoid an Infiniband-related warning
export IBV_FORK_SAFE=1
# Check we have two different nodes
srun --mpi=pmix -n 2 hostname
# Launch our latency test
srun --mpi=pmix -n 2 osu_latency --accelerator cuda

The --mpi=pmix option is not strictly necessary, as it is already set by default via the $SLURM_MPI_TYPE environment variable; we include it above for completeness.
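This default can be inspected from within an allocation. A small sketch (the fallback text is our addition, so the command also behaves sensibly outside an allocation, where the variable is typically unset):

```shell
# Print the MPI plugin Slurm will use by default for srun;
# outside an allocation the variable is usually not set.
echo "Default MPI plugin: ${SLURM_MPI_TYPE:-not set}"
```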


Getting help with using FSC on Deucalion

If you need help with using EESSI on Deucalion, please contact the EFP Helpdesk.