1. Spack

  • Spack is package management software for supercomputer systems. For details, see the official website https://spack.io/.

  • On the supercomputer Fugaku, we manage and provide open source software (OSS) through Spack.

  • For frequently used OSS, we provide pre-built OSS in a Spack instance on the system side. We call this the “public instance”.

  • Users can keep their own Spack instance in their home directory and build OSS by themselves. We call this a “private instance”.

  • From a private instance, the pre-built OSS served in the public instance is available through Spack's "chaining" functionality. This can save users the cost of building many prerequisite packages.

  • A single Spack instance can handle multiple environments. Therefore, for example, two builds of one package can coexist:

    • A build for the x86 login nodes by GCC

    • A build for the A64FX compute nodes by the Fujitsu compiler

2. Using Public Instance

To use the pre-built OSS in the public instance, all you have to do is source the environment script.

Note

To use the public instance in a job on compute nodes, specify /vol0004 in the environment variable PJM_LLIO_GFSCACHE. Please refer to 8.8. Selecting a usage file system (volume) in the Supercomputer Fugaku Users Guide - Use and job execution - for more details.

2.1. Sourcing environment script

Type the following in the command line. For bash:

$ . /vol0004/apps/oss/spack/share/spack/setup-env.sh

and for csh/tcsh:

$ setenv SPACK_ROOT /vol0004/apps/oss/spack
$ source /vol0004/apps/oss/spack/share/spack/setup-env.csh

For batch jobs, insert that line into the job script.
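For example, a minimal batch job script might look like the following sketch. The resource values are placeholders to adjust for your job; passing PJM_LLIO_GFSCACHE via the -x directive is one way to set the environment variable mentioned in the note above.

```shell
#!/bin/bash
#PJM -L "node=1"                    # placeholder: number of nodes
#PJM -L "elapse=0:10:00"            # placeholder: elapsed time limit
#PJM -x PJM_LLIO_GFSCACHE=/vol0004  # make /vol0004 visible on compute nodes

# Source the Spack environment, then load the packages the job needs.
. /vol0004/apps/oss/spack/share/spack/setup-env.sh
spack load tmux
```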

Note

  • Currently, we do not recommend putting this line in your login script (.bashrc, etc.). If the filesystem is unstable, there is a risk that login will fail.

  • Spack puts its work files in the directory specified by the environment variable TMPDIR. Since TMPDIR is set to your home directory as of April 25, 2024, you might encounter a "Disk quota exceeded" error. In such a case, please set TMPDIR as described in:

    https://www.fugaku.r-ccs.riken.jp/en/operation/20220408_01

2.2. Checking pre-built packages

Type in the command line:

$ spack find -x

to show the list of available OSS. As of 2024-04-25, it is as follows:

login4$ spack find -x
-- linux-rhel8-a64fx / fj@4.8.0 ---------------------------------
fds@6.7.7

-- linux-rhel8-a64fx / fj@4.8.1 ---------------------------------
fds@6.7.9
ffvhc-ace@0.1

-- linux-rhel8-a64fx / fj@4.10.0 --------------------------------
adios2@2.9.2
akaikkr@2002v010
akaikkr@2021v001
akaikkr@2021v002
alamode@1.3.0
alamode@1.4.2
alamode@1.5.0
assimp@5.3.1
batchedblas@1.0
bcftools@1.12
bedtools2@2.31.0
biobambam2@2.0.177
blitz@1.0.2
boost@1.83.0
bwa@0.7.17
cairo@1.16.0
cblas@2015-06-06
cmake@3.17.1
cmake@3.21.4
cmake@3.27.7
cp2k@2023.1
cp2k@2023.1
cp2k@2023.1
cp2k@2023.1
cpmd@4.3
cppunit@1.14.0
cppunit@1.14.0
darshan-runtime@3.4.0
double-conversion@3.3.0
dssp@3.1.4
eigen@3.4.0
eigenexa@2.6
ermod@0.3.6
ffb@9.0
ffx@03.01
fribidi@1.0.12
frontistr@5.4
frontistr@5.5
fugaku-frontistr@master
fujitsu-fftw@1.1.0
fujitsu-mpi@head
fujitsu-ssl2@head
genesis@2.1.1
genesis@2.1.1
genesis@2.1.2
genesis@2.1.2
glib@2.74.1
glib@2.74.7
glib@2.74.7
glm@0.9.9.8
gmt@6.2.0
gobject-introspection@1.56.1
grads@2.2.3
gromacs@2020.6
gromacs@2021.5
gromacs@2022.4
gromacs@2023.4
gromacs@2024
gromacs@2024.1
hdf5@1.14.3
hphi@3.5.1
htslib@1.12
icu4c@67.1
improved-rdock@main
kiertaa@1.0.0b
kokkos@3.7.00
kokkos@4.2.01
lammps@20201029
lammps@20220623.2
lammps@20230802.3
libmmtf-cpp@1.1.0
libxc@6.2.2
libxrandr@1.5.3
libxscrnsaver@1.2.2
libxscrnsaver@1.2.2
libxt@1.1.5
lis@2.1.1
mesa@23.0.3
meson@1.2.1
meson@1.2.2
modylas-new@1.1.0
modylas-new@1.1.0
modylas-new@1.1.0
mptensor@0.3.0
mvmc@1.2.0
n2p2@2.1.4
nemo@4.2.0
netcdf-c@4.9.2
netcdf-cxx@4.2
netcdf-cxx4@4.3.1
netcdf-fortran@4.6.1
netlib-lapack@3.10.1
netlib-scalapack@2.2.0
ninja@1.11.1
nwchem@master
octa@8.4
onednn@3.0
openblas@0.3.21
openblas@0.3.21
openfdtd@3.1.1
openfoam@2012
openfoam@2106
openfoam@2112
openfoam@2206
openfoam@2212
openfoam@2306
openfoam@2312
openfoam-org@8
openfoam-org@9
openfoam-org@10
openfoam-org@11
openjdk@11.0.20.1_1
openmx@3.9.9
parallel-netcdf@1.12.3
paraview@5.11.2
parmetis@4.0.3
perl-test-needs@0.002010
perl-uri@5.08
petsc@3.19.6
pfapack@2014-09-17
phase0@2021.02
phase0@2021.02
phase0@2023.01
phase0@2023.01
picard@3.0.0
pixman@0.42.2
pixman@0.42.2
povray@3.7.0.8
py-ase@3.21.1
py-bottleneck@1.3.7
py-cython@3.0.4
py-dask@2022.10.2
py-devito@4.8.1
py-flit-core@3.9.0
py-h5py@3.8.0
py-hypothesis@6.23.1
py-jupyterhub@0.9.4
py-matplotlib@3.3.4
py-meson-python@0.13.1
py-mpi4py@3.1.4
py-msgpack@1.0.3
py-netcdf4@1.6.2
py-numexpr@2.8.4
py-numpy@1.22.4
py-numpy@1.25.2
py-packaging@23.1
py-packaging@23.1
py-pandas@2.1.2
py-pip@23.0
py-pip@23.0
py-pip@23.0
py-pip@23.1.2
py-pip@23.1.2
py-ply@3.11
py-pmw@2.0.1
py-pmw-patched@02-10-2020
py-pydmd@0.3
py-pygps@1.3.5
py-pyproject-metadata@0.7.1
py-pyproject-metadata@0.7.1
py-pyproject-metadata@0.7.1
py-pyqt5-sip@12.12.1
py-pytest@7.3.2
py-python-dateutil@2.8.2
py-pytoml@0.1.21
py-pytz@2023.3
py-scikit-learn@1.3.2
py-scipy@1.8.1
py-seaborn@0.12.2
py-setuptools@68.0.0
py-setuptools-scm@7.1.0
py-sip@6.7.9
py-spglib@2.0.2
py-toml@0.10.2
py-tomli@2.0.1
py-tomli@2.0.1
py-typing-extensions@4.8.0
py-versioneer@0.29
py-wheel@0.41.2
py-xarray@2023.7.0
python@3.10.8
python@3.10.8
python@3.11.6
qt@5.15.5
qt@5.15.5
qt@5.15.12
quantum-espresso@6.5
quantum-espresso@6.6
quantum-espresso@6.7
quantum-espresso@6.8
quantum-espresso@7.0
quantum-espresso@7.1
quantum-espresso@7.2
quantum-espresso@7.3
r@4.3.0
raja@2022.10.4
rapidjson@1.2.0-2022-03-09
rdkit@2023_03_1
ruby@3.1.0
salmon-tddft@2.0.2
salmon-tddft@2.1.0
salmon-tddft@2.2.0
samtools@1.12
screen@4.9.1
scsumma25d@1.0a
siesta-relmax3@rel-MaX-3
smash@3.0.0
smash@3.0.2
star@2.7.10b
suite-sparse@5.13.0
superlu-dist@8.1.2
tk@8.6.11
tk@8.6.11
tmux@3.3a
xcb-util@0.4.1
xcb-util-image@0.4.1
xcb-util-keysyms@0.4.1
xcb-util-renderutil@0.3.10
xcb-util-wm@0.4.2
xios@develop-2612
zpares@0.9.6a

-- linux-rhel8-a64fx / gcc@8.5.0 --------------------------------
blitz@1.0.2
fujitsu-mpi@head
gcc@10.5.0
gcc@11.4.0
gcc@12.2.0
gcc@13.2.0
gmt@6.2.0
libint@2.6.0
llvm@17.0.4
mpich-tofu@1.0
mpich-tofu@1.0
mpich-tofu@master
mpich-tofu@master

-- linux-rhel8-cascadelake / gcc@13.2.0 -------------------------
boost@1.83.0
cmake@3.27.7
darshan-util@3.4.4
global@6.6.7
gmt@6.2.0
gnuplot@5.4.3
imagemagick@7.1.1-11
libxml2@2.9.7
llvm@17.0.4
mercurial@6.4.5
ncview@2.1.9
netcdf-c@4.9.2
netcdf-fortran@4.6.1
openfoam@2306
openfoam@2312
openfoam-org@10
openfoam-org@11
openjdk@11.0.20.1_1
py-numpy@1.26.1
py-pip@23.1.2
python@3.11.6
screen@4.9.1
tmux@3.3a
xterm@353
zsh@5.8

-- linux-rhel8-skylake_avx512 / gcc@8.5.0 -----------------------
gcc@13.2.0
hdf5@1.12.2
omni-compiler@1.3.3
openfoam@2012
openfoam@2106
openfoam@2112
openfoam@2206
openfoam@2212
openfoam-org@8
openfoam-org@9
openmpi@3.1.6
py-mpi4py@3.1.4
py-phonopy@2.12.0
py-phonopy@2.20.0

With the option -x, it only shows explicitly installed packages, though many other dependent packages are installed. The packages under linux-rhel8-cascadelake are for the login nodes, and those under linux-rhel8-a64fx are for the compute nodes.

2.3. Loading OSS

As an example, if you want tmux ready to use, type:

$ spack load tmux

Then it becomes available; Spack sets up the environment (e.g. the PATH variable) for you.

Similarly, to unload the package, type:

$ spack unload tmux

Note

You can use the module command to load and unload packages as follows:

$ module load tmux
$ module unload tmux
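After loading, you can quickly verify the effect; spack find --loaded is standard Spack functionality that lists the packages loaded in the current shell:

```shell
$ spack load tmux
$ which tmux          # should now point under /vol0004/apps/oss/spack/...
$ spack find --loaded # list all packages loaded in this shell
```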

3. Using Private Instance

The information below is only for the users who build OSS by themselves, and not necessary for users who just use the pre-built OSS by the system side.

Users can create their own Spack instance under their home directory and build OSS by themselves.

3.1. Cloning the repository

Although the installation path is arbitrary, we assume it is $TMPDIR for simplicity. For setting the environment variable TMPDIR, please refer to the note in the section Sourcing environment script.

Clone the GitHub repository with the following commands:

$ cd $TMPDIR
$ git clone https://github.com/RIKEN-RCCS/spack.git
$ cd spack
$ git checkout fugaku-v0.21.0

Just after cloning, the branch is set to the default, develop. In this example, we switch to the fugaku-v0.21.0 branch.

3.2. Compiler setup

First, source the Spack environment of your private instance on the login node (adjust the path if you cloned Spack elsewhere):

$ . $TMPDIR/spack/share/spack/setup-env.sh

After these settings, type:

$ spack compilers

If you see output like the following,

==> Available compilers
-- fj rhel8-aarch64 ---------------------------------------------
fj@4.8.1

-- gcc rhel8-x86_64 ---------------------------------------------
gcc@8.5.0

the compiler setup is successful.
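If the list is empty or a compiler is missing, you can ask Spack to scan your PATH for compilers; spack compiler find is standard Spack functionality:

```shell
$ spack compiler find
$ spack compilers      # confirm the compilers are now registered
```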

3.3. Using the packages provided in the public instance

Spack has a "chaining" functionality whereby an instance can refer to packages installed in other instances. Here, we set the public instance as an upstream of each user's private instance so that the packages installed in the public instance are available from the private instance. To enable this, create and edit the file ~/.spack/upstreams.yaml as follows:

upstreams:
  spack-public-instance:
    install_tree: /vol0004/apps/oss/spack/opt/spack

Then, type

$ spack repo add /vol0004/apps/oss/spack/var/spack/repos/local

to register the local repository.

After that, type

$ spack find

to confirm that you can see the OSS provided by the public instance.

3.4. Registering external packages

To use external packages such as Fujitsu MPI, copy packages.yaml from the system:

$ cp /vol0004/apps/oss/spack/etc/spack/packages.yaml ~/.spack/linux/
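For reference, an external package entry in packages.yaml has roughly the following shape. The prefix below is a hypothetical path, not the actual Fujitsu MPI location; the copied file already contains the correct entries, so you do not need to write this yourself.

```yaml
packages:
  fujitsu-mpi:
    externals:
    - spec: fujitsu-mpi@head
      prefix: /opt/example/fujitsu-mpi   # hypothetical path
    buildable: false
```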

3.5. Sourcing the environments

Spack becomes available (or its environment is refreshed) after sourcing the setup script as follows.

For bash:

$ . $TMPDIR/spack/share/spack/setup-env.sh

For csh/tcsh:

% source $TMPDIR/spack/share/spack/setup-env.csh

3.6. Installation and management of packages

By typing

$ spack list

all packages available in Spack (more than 7,000) are listed. You can narrow the list with a case-insensitive search string, e.g.:

$ spack list mpi

This results in:

==> 33 packages.
compiz                          intel-oneapi-mpi  mpileaks           pbmpi          rkt-compiler-lib
cray-mpich                      mpi-bash          mpip               phylobayesmpi  spectrum-mpi
exempi                          mpi-serial        mpir               pnmpi          spiral-package-mpi
fujitsu-mpi                     mpi-test-suite    mpitrampoline      py-dask-mpi    sst-dumpi
hpcx-mpi                        mpibind           mpiwrapper         py-mpi4jax     umpire
intel-mpi                       mpich             mpix-launch-swift  py-mpi4py      vampirtrace
intel-mpi-benchmarks            mpich-tofu        msmpi              py-tempita     wi4mpi
intel-oneapi-compilers          mpifileutils      omni-compiler      r-rmpi
intel-oneapi-compilers-classic  mpilander         openmpi

To install openmpi, for example:

$ spack install openmpi

The version can be explicitly indicated, as in openmpi@4.1.1. To inquire about available versions and variants (details omitted here), type:

$ spack info openmpi
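Spec elements can be combined on one command line. For example, to request a specific version built with a specific compiler (the version numbers here are illustrative):

```shell
$ spack spec openmpi@4.1.1 %gcc@8.5.0     # preview the concretized spec
$ spack install openmpi@4.1.1 %gcc@8.5.0
```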

Note

In order to build packages for the compute nodes, users need to log in to a compute node with an interactive job or to submit a job script.

Similarly, to uninstall it, type:

$ spack uninstall openmpi

When multiple packages with the same name are installed, Spack cannot identify the target package, which causes an error. To resolve this situation, refer to Resolving multiple packages with the same name.

3.7. Resolving multiple packages with the same name

Sometimes multiple packages with the same name are installed in a Spack instance or among chained instances. It happens when different versions of a package or multiple builds for different architectures (e.g. login nodes and compute nodes) are installed. In such a case, just passing the package name to spack commands such as spack load causes an error because Spack cannot identify the target package uniquely.

For example, in the public instance:

$ spack load screen

will result in an error:

==> Error: screen matches multiple packages.
  Matching packages:
    5jzxnkf screen@4.9.1%fj@4.10.0 arch=linux-rhel8-a64fx
    ejmqzvl screen@4.9.1%gcc@13.2.0 arch=linux-rhel8-cascadelake
  Use a more specific spec.

Similar information is also available via the command:

$ spack find -lv screen

In the following, we will introduce several ways to explicitly identify a package.

  • Specifying the hash: Spack defines a unique hash for each build from its detailed conditions (its spec). You can uniquely specify a package build by its 7-character short hash after / (slash). For example:

    $ spack load /5jzxnkf
    $ spack load /ejmqzvl
    

    In case multiple packages with the same name exist in the public instance (e.g. fftw; sometimes multiple builds with different dependent packages can exist), load the package shown by:

    $ spack find -lx
    

    The option -l shows the 7-character short hash, and -x shows only explicitly installed packages.

  • Specifying the version: After the package name, put @ (at) followed by its version, e.g.:

    $ spack load screen@4.9.1
    

    (This still results in an error in our example case, since both builds have version 4.9.1.)

  • Specifying the compiler that built the package: You can indicate the compiler name after % (percent). When screen for the login nodes is compiled by gcc and that for the compute nodes by fj, you can distinguish them by:

    $ spack load screen%gcc
    $ spack load screen%fj
    

    In more detail, you can specify the version of the compilers as in screen%gcc@13.2.0 or screen%fj@4.10.0.

  • Specifying the target architecture: Following the package name, you can specify the architecture name of the package after arch=. The builds for the login and compute nodes can be distinguished as follows:

    $ spack load screen arch=linux-rhel8-cascadelake
    $ spack load screen arch=linux-rhel8-a64fx
    
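When a single constraint is not enough to make the package unique, these can be combined into one spec, for example:

```shell
$ spack load screen@4.9.1 %gcc@13.2.0 arch=linux-rhel8-cascadelake
```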

4. Known issues and remedies

4.2. Performance Degradation in Multi-node Jobs

Packages provided via Spack are stored on the second-layer storage. Therefore, if you use the packages in a multi-node job, performance may degrade due to concentrated access to a particular storage I/O node.

In such cases, you can avoid the performance degradation by distributing all of the referenced files to the cache area of the second-layer storage, which resides on the first-layer storage, by running the dir_transfer command on the paths in LD_LIBRARY_PATH and PATH.

An example of the dir_transfer command to distribute shared libraries and executable files is as follows:

spack load xxx
echo $LD_LIBRARY_PATH | sed -e 's/:/\n/g' | grep '^/vol0004/apps/oss/spack' | xargs /home/system/tool/dir_transfer
echo $PATH | sed -e 's/:/\n/g' | grep '^/vol0004/apps/oss/spack' | xargs /home/system/tool/dir_transfer
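The pipeline splits the colon-separated variable into one path per line, keeps only the paths under the public instance, and passes them to dir_transfer. The filtering step can be illustrated with a dummy value (dir_transfer itself only exists on Fugaku):

```shell
# Dummy PATH-like value; only the Spack path survives the filter.
SAMPLE="/usr/bin:/vol0004/apps/oss/spack/opt/spack/demo/bin:/opt/local/bin"
echo "$SAMPLE" | sed -e 's/:/\n/g' | grep '^/vol0004/apps/oss/spack'
```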

4.3. Error in using Python

When you use some Python modules installed with Spack, you might encounter an error as follows.

$ spack load py-ase %fj
$ python3
Python 3.8.12 (default, Nov 30 2021, 04:44:05)
[Clang 7.1.0 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib
...
>>> plt.savefig('fig.png',format='png')
jwe0020i-u An error was detected during an abnormal termination process.
jwe0020i-u An error was detected during an abnormal termination process.
...

In such cases, you may be able to avoid the error by loading the explicitly installed python, which is shown by spack find -lx python, after loading all of the modules you need. Note that abcdefg in the example below stands for the hash of the explicitly installed python.

$ spack load py-ase
...
$ spack load /abcdefg

4.4. "matches multiple packages" error

If you get the "matches multiple packages" error, you can usually resolve it by specifying the hash displayed by spack find -lx.

4.5. Rust compilation takes a long time

If Rust installed with Spack takes a long time to compile, try setting the cn-read-cache parameter to off in the --llio option of the pjsub command.

$ pjsub --llio cn-read-cache=off job.sh