University of Vermont

Vermont Advanced Computing Core

Cluster Specs


UVM's Enterprise Technology Services (ETS) has completed the third major upgrade to the high-performance computing (HPC) cluster.

The upgrade more than doubles the total number of computing cores available to faculty and student users. It also showcases UVM's continued investment in "green computing": the next-generation IBM hardware chosen for the upgrade is up to 30% more energy efficient than equivalent HPC systems.

Multi-year grants from NASA, championed by US Senator Patrick Leahy, give strong support to the VACC.

In total, the IBM Bluemoon cluster comprises 380 compute nodes and 3,144 x86_64 compute cores.

As of September 2013, the Bluemoon cluster consists of:

  • 36 dual-processor, 6-core (Intel X5650) IBM dx360m3 nodes, with 24GB each, InfiniBand 4X FDR (56 Gbit/s)-connected. (Reserved for jobs that use InfiniBand.)
  • 92 dual-processor, 6-core (Intel X5650) IBM dx360m3 nodes, with 24GB each, Ethernet-connected.
  • 22 dual-processor, 6-core (Intel E5-2630) IBM dx360m4 nodes, with 32GB each, Ethernet-connected.
  • 98 dual-processor, quad-core (Opteron 2356) IBM x3455s, with 12GB each, Ethernet-connected.
  • 128 dual-processor, dual-core (Opteron 2220) IBM x3455s, with 6GB each, Ethernet-connected.
  • 1 dual-processor, 8-core (Intel E7-8837) IBM x3690 x5, with 512GB.
  • 1 4-way dual-core (Opteron 8220) shared memory machine with 128GB.
  • 2 GPU nodes, each with 2 NVIDIA Tesla M2090 GPUs
    • Each GPU has 512 CUDA cores and 5GB RAM
  • 6 I/O nodes (IBM x3655s, 10G Ethernet-connected)
    • connected to an IBM DS4800 and an IBM DCS3860, providing roughly 500 terabytes of raw storage to GPFS.
    • GPFS metadata is stored on Flash/SSD storage to improve performance.
  • 4 user nodes (2 x IBM x3455s, 2 x IBM dx360m3s)

System Software

  • Red Hat Enterprise Linux 5
  • OSCAR 4.2 (Open Source Cluster Application Resource) cluster management
  • TORQUE 4.2.4 (Terascale Open-Source Resource and QUEue Manager)
  • Cluster Resources' MOAB scheduler
  • Python 2 and Python 3
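
Jobs are submitted through TORQUE and dispatched by the MOAB scheduler. A minimal submission script might look like the following sketch; the resource requests, job name, and program name are illustrative, not site defaults:

```shell
#!/bin/bash
# Request one node with 12 processor cores for four hours (illustrative values)
#PBS -l nodes=1:ppn=12
#PBS -l walltime=04:00:00
#PBS -N example_job
#PBS -j oe

# TORQUE starts jobs in the home directory; change to the directory the job
# was submitted from (falls back to the current directory outside the scheduler)
cd "${PBS_O_WORKDIR:-$PWD}"

echo "Job running on $(hostname)"
# Replace this line with the actual computation, e.g. ./my_program
# (my_program is a hypothetical executable name)
```

The script would be submitted with `qsub example_job.sh` and monitored with `qstat`; the `#PBS` lines are comments to the shell but directives to TORQUE.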

Application Software

In addition to the list below, we offer a variety of application software, community codes, and tools on the VACC system. Science areas include chemistry, molecular dynamics, and image processing, supported by various libraries and packages for building high-performance applications. An assortment of compilers, debuggers, and performance tools is also available.

MPI Libraries

  • LAM/MPI 7.1.4
  • MVAPICH and MVAPICH2 for the InfiniBand nodes
  • IB-compliant OpenMPI 1.6.4
  • Ethernet-only OpenMPI 1.6.5
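
Which MPI stack a program uses is fixed at compile and launch time. As a sketch, assuming the chosen stack's compiler wrapper is on your PATH and with `hello_mpi.c` standing in for your own source file:

```shell
# Compile with the wrapper from the MPI stack you intend to run under,
# e.g. the IB-compliant OpenMPI 1.6.4 on the InfiniBand nodes
mpicc -O2 -o hello hello_mpi.c

# Launch 12 ranks with the matching mpirun from the same installation
mpirun -np 12 ./hello
```

A binary built against one MPI library generally cannot be launched with another stack's `mpirun`, so the compile-time and run-time stacks should match.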


Math Libraries

  • ACML 3.1.0


FFT Libraries

  • FFTW 2.1
  • FFTW 3.0.1
  • FFTW 3.1


Computational Chemistry

  • G09 (Gaussian 09; access by request only)

Other Software

  • GSL 1.9
  • Ruby 1.8
  • R 2.15