NOTICE:
This documentation pertains to the Red Hat 7 cluster, which is currently being deprecated.
For updated information, please refer to the new documentation for the Red Hat 9 cluster at:
https://www.uvm.edu/vacc/docs/start_guide/modules/
The VACC uses package managers, modules, and containers to load the software you need. You control what software is available in your environment by loading the correct package, module, or container.
There are three ways to find and load the software you need: Spack, modules, and UVM-tested containers. Each is described in its own section below.
Spack is a "package manager." We use Spack whenever possible because it automates installing, upgrading, configuring, and removing software in a consistent manner.
Can’t Find the Software You Need?
If you discover that the software you need isn’t already available, please send us a request at vacc@uvm.edu with the name of the software package you’d like installed.
Spack
With Spack, you must load your software each time you log into the cluster (though there is a simple workaround; see the “How to Load It” section, below).
Search for Your Software
To see the list of software packages available, type spack find at your command prompt:
[your-UVM-netid@vacc-user1 ~]$ spack find
You can refine your search by adding a term, but this is not recommended: Spack matches only the exact name you enter, not parts of it. For example, the command spack find numpy will not find the NumPy software that is, in fact, available, because its name in the system is "py-numpy," not "numpy."
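One workaround, assuming a standard Linux shell on the login node, is to pipe the full spack find output through grep, which does match substrings:

```shell
# List every available package, then filter by substring.
# This finds "py-numpy" even though "spack find numpy" would not.
spack find | grep -i numpy
```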
How to Load It
To load a package, type spack load <exact name of package>. For example, to load the package "python@3.7.7":
[your-UVM-netid@vacc-user1 ~]$ spack load python@3.7.7
In a few seconds, your software is loaded and ready to use.
To load the same package every session: instead of typing the spack load <package name> command every time you log in, simply add the spack load <package name> statement at the beginning of the job script that needs the software. For an example, see Write a Job Script.
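A minimal sketch of a Slurm job script that loads a package with Spack before running it (the partition name, resource requests, and script name here are illustrative assumptions, not requirements from this page):

```shell
#!/bin/bash
#SBATCH --partition=bluemoon      # illustrative partition name
#SBATCH --time=01:00:00           # illustrative time limit
#SBATCH --mem=4G                  # illustrative memory request
# Load the software first, just as you would at the prompt.
spack load python@3.7.7
python my_script.py               # hypothetical script name
```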
Modules
When you use a software module, you must load it each time you log into the cluster (though there is a simple workaround; see the “How to Load It” section, below).
Search for Your Software
To see the list of software modules available, type module avail at your command prompt:
[your-UVM-netid@vacc-user1 ~]$ module avail
How to Load It
To load a module, type module load <exact name of module>. For example, to load the module "mpi/openmpi-3.1.4-ib":
[your-UVM-netid@vacc-user1 ~]$ module load mpi/openmpi-3.1.4-ib
NOTE: You must include the folder name where the module resides. For example:
mpi/openmpi-3.1.4-ib
NOT
openmpi-3.1.4-ib
In a few seconds, your software is loaded and ready to use.
To load the same module every session: instead of typing the module load <exact name of module> command every time you log in, simply add the module load <exact name of module> statement at the beginning of the job script that needs the software. For an example, see Write a Job Script.
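A minimal sketch of a Slurm job script that loads a module before running an MPI program (the resource requests and program name here are illustrative assumptions):

```shell
#!/bin/bash
#SBATCH --time=00:30:00           # illustrative time limit
#SBATCH --ntasks=4                # illustrative MPI task count
# Include the folder name, exactly as shown by "module avail".
module load mpi/openmpi-3.1.4-ib
mpirun ./my_mpi_program           # hypothetical program name
```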
UVM-Tested Containers
We provide select NVIDIA GPU Cloud (NGC) containers on DeepGreen. An NVIDIA container wraps a GPU-accelerated application along with its dependencies into a single package that is guaranteed to deliver the best performance on NVIDIA GPUs.
We use Singularity for launching our provided containers, as well as your own containers. Singularity works from a container image file: a single file that holds the entire container environment.
TensorFlow-NVIDIA 19.04-py3
Path: /gpfs3/cont/uvm-ngc/nv-tensorflow-19.04-py3.simg
Docs: TensorFlow User Guide (NVIDIA)
NOTE: You may see an error message that GPU drivers cannot be found; please ignore this. TensorFlow will find the GPUs requested. We are working to resolve this issue.
TensorFlow-NVIDIA 19.08-py3
Path: /gpfs3/cont/uvm-ngc/nv-tensorflow-19.08-py3.simg
Docs: TensorFlow User Guide (NVIDIA)
NOTE: You may see an error message that GPU drivers cannot be found; please ignore this. TensorFlow will find the GPUs requested. We are working to resolve this issue.
GROMACS 2018.2
Path: /gpfs3/cont/uvm-ngc/gromacs-gromacs2018.2.simg
Docs: GROMACS
NOTE: You may see an error message about application clocks; please ignore this. The functionality of the software is not affected. However, we are working to resolve this issue.
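A minimal sketch of launching one of the provided containers with Singularity, using the TensorFlow image path listed above (the Python command is an illustrative assumption; run this inside a GPU job, not on a login node):

```shell
# "singularity exec" runs a command inside the container image.
# The --nv flag exposes the host's NVIDIA GPU drivers to the container.
singularity exec --nv /gpfs3/cont/uvm-ngc/nv-tensorflow-19.04-py3.simg \
    python -c "import tensorflow as tf; print(tf.__version__)"
```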