The Numerical Intensive Computing Cluster (NIC Cluster)

The Numerical Intensive Computing Cluster is made up of several virtualized clusters that have been optimized for different research needs.  The cluster consists of contributed nodes funded by individual research groups on campus, and general nodes funded by general IT funds to augment hardware availability for S&T students.  One benefit of contributing nodes to the shared cluster is that a research group is guaranteed use of its contributed nodes.  Other benefits provided to research groups when they join the shared cluster include:

  • Complete system administration for contributed nodes.
  • Use of a high performance network interconnect.
  • Home and scratch storage space.
  • A dedicated data center facility for housing the cluster.  This eliminates the need for expensive space, cooling, and electrical modifications to existing office or lab space.
  • Participation in hardware selection.

Research groups who have contributed nodes to the NIC cluster also gain access to general cluster resources.  This gives them:
  • Access to pooled licenses, allowing researchers to run larger commercial applications without the cost of buying additional licenses.
  • Access to additional commercial and open source applications.
  • Web access to the NIC cluster documentation.

Base and Contributed Equipment Standards and Policies

All contributed hardware must be compatible with the base node architecture, processor type, memory, disk space, and interconnect.  This maximizes the effective management of the NIC cluster and provides the highest level of service to the shared cluster users.  ITRSS provides full support in helping researchers specify hardware at an optimal price/performance ratio that meets the current standards.

Once contributed, these nodes become part of the entire NIC cluster.  Because cycles are pooled across all general and contributed nodes, a node may be in use by others when its contributor needs it.  In theory the contributor could have to wait up to a week for their specific hardware; in practice, the number of cores a research group has contributed is generally available much sooner.  Jobs that run on the NIC cluster have a 192 hour limit; longer runs can be made by special arrangement.

NIC Cluster Hosting Costs

Research groups that contribute nodes to the NIC cluster agree to contribute their unused cycles to other researchers.

Research groups and users of the general cluster have the option of paying a one-time, per-terabyte charge for storage on the cluster file system.  This is particularly important for those who need more than the 50GB of directory space available to each cluster user.

Base and Contributed Equipment Renewals

After a period of three years, all hardware within the shared cluster is evaluated for retention based on the condition of the equipment, the cost to maintain it, its relative compute power, and the ability to backfill with new systems.  This is done to maintain a high performance, low maintenance system while maximizing the utilization of the data center space.

If the contributed nodes can still be effectively maintained, those nodes will remain inside the NIC cluster and continue to be reevaluated on an annual basis.  If the contributed nodes can no longer be effectively maintained, they will, upon mutual agreement, be redeployed for other uses or decommissioned.

The CUDA Cluster

A large section of the NIC cluster is dedicated to GPU processing and is referred to as the CUDA cluster because, until a few months ago, it was a separate system; it has since been merged into the main cluster.  It consists of five nodes with seven Tesla C2050 GPUs and 14 nodes with two Tesla C2050 GPUs.

  • Its primary use is by students in the computer science department experimenting with GPU accelerators and game theory applications.
  • Low utilization of the resource has opened it up to the rest of the S&T student body.
  • Recently we have started sharing this resource with other campuses within the University of Missouri system.

The Shared NIC Cluster Hardware and Software

The NIC cluster consists of 64-bit nodes with an Ethernet network, many with an InfiniBand interconnect, running the following standard software suite:
  • The Torque/PBS scheduler.
  • Compilers:  GCC, Intel-9, Intel-10, and Intel-11 Compiler Suites.
  • Several commercial applications such as ANSYS Fluent, MATLAB, VASP, and Maple.
  • Several open source applications such as Casino QMC, Vulcan, and ParaView.
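
As a sketch of how work might be submitted to the Torque/PBS scheduler listed above, a batch script could look like the following.  The job name, queue behavior, node and core counts, and application command are all hypothetical illustrations, not documented NIC cluster settings; only the 192 hour walltime ceiling comes from the policy described earlier.

```shell
#!/bin/bash
# Hypothetical Torque/PBS batch script.  Resource names and counts
# below are illustrative assumptions, not actual NIC cluster settings.
#PBS -N example_job
#PBS -l nodes=2:ppn=8          # request 2 nodes with 8 cores each
#PBS -l walltime=192:00:00     # jobs are limited to 192 hours
#PBS -j oe                     # merge stdout and stderr into one file

cd $PBS_O_WORKDIR              # run from the directory of submission
mpirun ./my_solver input.dat   # hypothetical MPI application
```

A script like this would typically be submitted with `qsub example.pbs`; runs longer than the 192 hour limit would require the special arrangements noted above.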