University of Vermont

Vermont Advanced Computing Core

Cluster Specs

Hardware


As of May 2016, the Bluemoon cluster consists of:

  • 24 dual-processor, 10-core (Intel E5-2650 v3) Dell PowerEdge R630 nodes, with 64 GB each, InfiniBand 4X FDR (56 Gbit/s)-connected. (Reserved for jobs that use IB.)
  • 16 dual-processor, 10-core (Intel E5-2650 v3) Dell PowerEdge R630 nodes, with 64 GB each, Ethernet-connected.
  • 3 dual-processor, 10-core (Intel E5-2650 v3) Dell PowerEdge R630 nodes, with 256 GB each, Ethernet-connected.
  • 36 dual-processor, 6-core (Intel X5650) IBM dx360m3 nodes, with 24 GB each, InfiniBand 4X FDR (56 Gbit/s)-connected. (Reserved for jobs that use IB.)
  • 92 dual-processor, 6-core (Intel X5650) IBM dx360m3 nodes, with 24 GB each, Ethernet-connected.
  • 22 dual-processor, 6-core (Intel E5-2630) IBM dx360m4 nodes, with 32 GB each, Ethernet-connected.
  • 1 dual-processor, 8-core (Intel E7-8837) IBM x3690 X5, with 512 GB.
  • 1 4-way, dual-core (Opteron 8220) shared-memory machine with 128 GB.
  • 2 user nodes (2 x IBM dx360m3s)
  • 2 GPU nodes, each with 2 NVIDIA Tesla M2090 GPUs
    • Each GPU has 512 CUDA cores and 5 GB RAM
  • 4 I/O nodes (IBM x3655s, 10G Ethernet-connected), attached to:
    • An IBM DS4800 providing 260 terabytes of raw storage to GPFS (roughly 197 TB usable).
    • An IBM DS4700 providing 104 terabytes of raw storage (roughly 76 TB usable).
    • An IBM DCS3850 providing 240 terabytes of raw storage to GPFS (roughly 164 TB usable).
    • An IBM V3700 providing 10 terabytes of solid-state disk to GPFS (for fast random-access data and metadata).
  • 2 flash-storage GPFS metadata nodes (IBM x3655s, 10G Ethernet-connected)
  • Cluster Resources' Moab scheduler (ver. 9.0.2)
  • TORQUE (Terascale Open-source Resource and QUEue Manager), ver. 6.2.2; see the sample batch script below
  • Red Hat Enterprise Linux 7


Total compute node count: 199. Total x86_64 compute core count: 2728.
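
Jobs are submitted to TORQUE/Moab with qsub. As a minimal sketch of a batch script (the job name, resource requests, and program name are placeholders, and any site-specific queue names or node properties for requesting the IB-connected nodes are omitted):

    #!/bin/bash
    # Request 2 of the 12-core dx360m3 nodes, all cores on each, for one hour.
    #PBS -N mpi_test
    #PBS -l nodes=2:ppn=12
    #PBS -l walltime=01:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR
    mpirun ./my_mpi_program

Submit with "qsub myscript.sh" and check status with "qstat" (or Moab's "showq").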


Application Software

In addition to the list below, we offer a variety of application software, community codes, and tools for use on the VACC system. Science areas include chemistry, molecular dynamics, and image processing. Various libraries and support packages for building high-performance software applications are available, as are an assortment of compilers, debuggers, and performance tools.

MPI Libraries

  • LAM/MPI 7.1.4
  • MVAPICH and MVAPICH2 for the InfiniBand nodes
  • InfiniBand-capable Open MPI 1.6.4
  • Ethernet-only Open MPI 1.6.5
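
As a quick sanity check that an MPI stack is working, here is a minimal MPI program in C (the file and executable names are placeholders; the compiler and launcher wrappers, e.g. mpicc and mpirun, depend on which MPI library is in your environment):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Build and run with, for example, "mpicc hello.c -o hello" followed by "mpirun -np 24 ./hello".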

BLAS, LAPACK

  • ACML 3.1.0
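
ACML exposes the standard Fortran BLAS/LAPACK interfaces, which C code calls with an underscore-suffixed name, all arguments passed by reference, and column-major storage. A minimal sketch of a DGEMM call follows (the link flags, e.g. -lacml, and library paths are assumptions that depend on the installed module):

    #include <stdio.h>

    /* Fortran-style BLAS prototype (column-major storage, arguments by
       reference), as provided by ACML and other reference-BLAS builds. */
    extern void dgemm_(const char *transa, const char *transb,
                       const int *m, const int *n, const int *k,
                       const double *alpha, const double *a, const int *lda,
                       const double *b, const int *ldb,
                       const double *beta, double *c, const int *ldc);

    int main(void)
    {
        /* 2x2 matrices in column-major order:
           A = [[1,2],[3,4]], B = [[5,6],[7,8]] */
        double a[] = { 1.0, 3.0, 2.0, 4.0 };
        double b[] = { 5.0, 7.0, 6.0, 8.0 };
        double c[4] = { 0.0 };
        int n = 2;
        double one = 1.0, zero = 0.0;

        /* C = 1.0 * A * B + 0.0 * C */
        dgemm_("N", "N", &n, &n, &n, &one, a, &n, b, &n, &zero, c, &n);

        /* Expect [[19, 22], [43, 50]] */
        printf("C = [[%g, %g], [%g, %g]]\n", c[0], c[2], c[1], c[3]);
        return 0;
    }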

FFTW

  • FFTW 2.1
  • FFTW 3.0.1
  • FFTW 3.1
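
Note that FFTW 2.x and 3.x have incompatible APIs. Against the FFTW 3 headers, a minimal one-dimensional complex transform looks like the following sketch (link with something like -lfftw3 -lm; exact paths depend on the installed version):

    #include <stdio.h>
    #include <fftw3.h>

    int main(void)
    {
        const int N = 8;
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

        /* Plan first (planning may touch the arrays), then fill the input. */
        fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

        for (int i = 0; i < N; i++) {
            in[i][0] = i;     /* real part */
            in[i][1] = 0.0;   /* imaginary part */
        }

        fftw_execute(p);

        for (int i = 0; i < N; i++)
            printf("out[%d] = %g + %gi\n", i, out[i][0], out[i][1]);

        fftw_destroy_plan(p);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }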

Gaussian

  • G09 (access by request only)
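
A G09 job is driven by a plain-text input file. As a minimal, purely illustrative example (a single-point B3LYP/6-31G(d) calculation on water; the resource directives and geometry are placeholders):

    %nprocshared=4
    %mem=2gb
    #p b3lyp/6-31g(d) sp

    Water single point

    0 1
    O   0.000000   0.000000   0.117300
    H   0.000000   0.757200  -0.469200
    H   0.000000  -0.757200  -0.469200

G09 expects a blank line at the end of the input; the job would typically be run inside a batch script like the one shown under Hardware above.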

GSL 1.9
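
As a minimal GSL usage sketch in C (this is the library manual's canonical Bessel-function example; link with -lgsl -lgslcblas -lm):

    #include <stdio.h>
    #include <gsl/gsl_sf_bessel.h>

    int main(void)
    {
        double x = 5.0;
        /* Regular cylindrical Bessel function of order zero */
        double y = gsl_sf_bessel_J0(x);
        printf("J0(%g) = %.10g\n", x, y);   /* approx -0.1775967713 */
        return 0;
    }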

Ruby 1.8

R 2.15