!!! danger "Work in progress"

    *(30 April 2025)*

    The contents of this tutorial are currently being reworked to bring them up to date with
    recent developments in CernVM-FS, and to integrate them well into the EESSI documentation.

    It is based on the *"Best Practices for CernVM-FS in HPC"* tutorial that was held on
    4 Dec 2023, see also <https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices>.

# Alternative ways to access CernVM-FS repositories

While a [native installation of CernVM-FS on the client system](client.md),
along with a [proxy server](proxy.md) and/or [Stratum 1 replica server](stratum1.md) for large-scale production setups,
is the recommended setup, there are alternative ways of getting access to CernVM-FS repositories.

We briefly cover some of these here, mostly to clarify that alternatives are available,
including some that do not require system administrator permissions.
## `cvmfsexec`

[`cvmfsexec`](https://github.com/cvmfs/cvmfsexec) makes it possible to mount CernVM-FS repositories
as an unprivileged user, without having CernVM-FS installed system-wide.

`cvmfsexec` supports multiple ways of doing this, depending on the OS version and system configuration,
and more specifically on whether particular features are enabled, such as:

* [FUSE mounting](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) with `fusermount`;
* unprivileged user namespaces;
* unprivileged namespace fuse mounts;
* a `setuid` installation of Singularity 3.4+ (via `singcvmfs`, which uses the `--fusemount` feature),
  or an unprivileged installation of Singularity 3.6+.

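To get a first impression of which of these features are available on your system, you can run a couple of
quick checks; a minimal sketch (the exact commands and paths may differ across Linux distributions):

```shell
# Check for fusermount (used for FUSE mounting as an unprivileged user);
# some distributions ship fusermount3 instead.
command -v fusermount >/dev/null 2>&1 && echo "fusermount: found" || echo "fusermount: not found"

# Check whether unprivileged user namespaces can be created.
if unshare --user true 2>/dev/null; then
    echo "unprivileged user namespaces: usable"
else
    echo "unprivileged user namespaces: not usable"
fi
```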
Start by cloning the `cvmfsexec` repository from GitHub, and change to the `cvmfsexec` directory:

```
git clone https://github.com/cvmfs/cvmfsexec.git
cd cvmfsexec
```

Before using `cvmfsexec`, you first need to make a `dist` directory that includes CernVM-FS, configuration files,
and scripts. For this, you can run the `makedist` script that comes with `cvmfsexec`:

```
./makedist default
```
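If `makedist` succeeds, it creates the `dist` subdirectory in the current directory
(`makedist` also supports configuration distributions other than `default`, such as `osg` and `egi`;
see the `cvmfsexec` README for details). As a quick sanity check:

```shell
# Check that makedist created the dist directory (run from the cvmfsexec directory)
if [ -d dist ]; then
    echo "dist directory present"
else
    echo "dist directory missing - (re)run ./makedist"
fi
```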
| 48 | + |
| 49 | +With the `dist` directory in place, you can use `cvmfsexec` to run commands in an environment |
| 50 | +where a CernVM-FS repository is mounted. |
| 51 | + |
| 52 | +For example, we can run a script named `test_eessi.sh` that contains: |
| 53 | + |
| 54 | +```shell |
| 55 | +#!/bin/bash |
| 56 | + |
| 57 | +source /cvmfs/software.eessi.io/versions/2023.06/init/bash |
| 58 | + |
| 59 | +module load TensorFlow/2.13.0-foss-2023a |
| 60 | + |
| 61 | +python -V |
| 62 | +python3 -c 'import tensorflow as tf; print(tf.__version__)' |
| 63 | +``` |
| 64 | + |
| 65 | +which gives: |
| 66 | +``` |
| 67 | +$ ./cvmfsexec software.eessi.io -- ./test_eessi.sh |
| 68 | +
|
| 69 | +CernVM-FS: loading Fuse module... done |
| 70 | +CernVM-FS: mounted cvmfs on /home/rocky/cvmfsexec/dist/cvmfs/cvmfs-config.cern.ch |
| 71 | +CernVM-FS: loading Fuse module... done |
| 72 | +CernVM-FS: mounted cvmfs on /home/rocky/cvmfsexec/dist/cvmfs/software.eessi.io |
| 73 | +
|
| 74 | +Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06! |
| 75 | +archdetect says x86_64/amd/zen2 |
| 76 | +Using x86_64/amd/zen2 as software subdirectory. |
| 77 | +Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH. |
| 78 | +Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua |
| 79 | +Initializing Lmod... |
| 80 | +Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH... |
| 81 | +Environment set up to use EESSI (2023.06), have fun! |
| 82 | +
|
| 83 | +Python 3.11.3 |
| 84 | +2.13.0 |
| 85 | +``` |
| 86 | + |
| 87 | +By default, the CernVM-FS client cache directory will be located in `dist/var/lib/cvmfs`. |
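Since the cache lives inside the `dist` directory, it is easy to check how much disk space it uses,
or to remove it entirely to reclaim space; for example (a sketch, assuming you run it from the
`cvmfsexec` directory):

```shell
# Show the size of the CernVM-FS client cache used by cvmfsexec, if any exists yet
du -sh dist/var/lib/cvmfs 2>/dev/null || echo "no cache yet"
```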

For more information on `cvmfsexec`, see <https://github.com/cvmfs/cvmfsexec>.


## Apptainer with `--fusemount`

If [Apptainer](https://apptainer.org) is available, you can get access to a CernVM-FS repository
by using a container image that includes the CernVM-FS client component (see for example the Docker recipe
for the client container used in EESSI, which is available [here](https://github.com/EESSI/filesystem-layer/blob/main/containers/Dockerfile.EESSI-client-centos7)).

Using the `--fusemount` option, you can specify that a CernVM-FS repository should be mounted
when starting the container. For example, for [EESSI](../eessi/high-level-design.md#filesystem_layer),
you can use:

```bash
apptainer ... --fusemount "container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io" ...
```

There are a couple of caveats here:

* If the configuration for the CernVM-FS repository is provided via the `cvmfs-config` repository,
  you need to instruct Apptainer to also mount that, by using the `--fusemount` option twice: once for
  the `cvmfs-config` repository, and once for the target repository itself:

  ```bash
  FUSEMOUNT_CVMFS_CONFIG="container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch"
  FUSEMOUNT_EESSI="container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io"
  apptainer ... --fusemount "${FUSEMOUNT_CVMFS_CONFIG}" --fusemount "${FUSEMOUNT_EESSI}" ...
  ```

* In addition to mounting CernVM-FS repositories, you also need to *bind mount* local writable directories
  to `/var/run/cvmfs` and `/var/lib/cvmfs`, since CernVM-FS needs write access to those locations
  (for runtime files and the CernVM-FS client cache, respectively):

  ```bash
  mkdir -p /tmp/$USER/{var-lib-cvmfs,var-run-cvmfs}
  export APPTAINER_BIND="/tmp/$USER/var-run-cvmfs:/var/run/cvmfs,/tmp/$USER/var-lib-cvmfs:/var/lib/cvmfs"
  apptainer ... --fusemount ...
  ```

To try this, you can use the EESSI client container that is available in the GitHub container registry (`ghcr.io`)
to start an interactive shell in which EESSI is available, as follows:

```{ .bash .copy }
mkdir -p /tmp/$USER/{var-lib-cvmfs,var-run-cvmfs}
export APPTAINER_BIND="/tmp/$USER/var-run-cvmfs:/var/run/cvmfs,/tmp/$USER/var-lib-cvmfs:/var/lib/cvmfs"
FUSEMOUNT_CVMFS_CONFIG="container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch"
FUSEMOUNT_EESSI="container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io"
apptainer shell --fusemount "${FUSEMOUNT_CVMFS_CONFIG}" --fusemount "${FUSEMOUNT_EESSI}" docker://ghcr.io/eessi/client-pilot:centos7
```

## Alien cache

Another alternative is using an *alien cache*, optionally combined with cache preloading,
typically together with a container image or unprivileged user namespaces.

For more information, see the [Alien cache subsection](../configuration_hpc.md#alien-cache) in the next part of the
tutorial.
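To give a rough idea of what this involves: an alien cache is enabled via the CernVM-FS client
configuration, for example in `/etc/cvmfs/default.local` (a sketch only; the cache path shown is a
placeholder, and the details are covered in the next part of the tutorial):

```
# enable an alien cache (placeholder path for a shared, writable location)
CVMFS_ALIEN_CACHE=/path/to/shared/cvmfs-alien-cache
# an alien cache is not managed by the CernVM-FS client, so the cache quota must be disabled
CVMFS_QUOTA_LIMIT=-1
# ... and the shared local cache must be turned off
CVMFS_SHARED_CACHE=no
```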

---

*(next: [Configuration on HPC systems](../configuration_hpc.md))*