NICADD Compute Cluster

As of 2018, the NICADD cluster provides 700 processor slots (1.8-2.6 GHz) under the HTCondor batch system and 200 TB of shared disk space, with direct access to CERN, Fermilab, and OSG software libraries. The system also serves NICADD's data acquisition servers and desktops for collecting and analyzing test data for the MU2E, DUNE, and beam experiments. A detailed hardware description is available separately.

Node configuration

  • Interactive login nodes (1 Gbit/s public uplink, accessible via ssh)
    • t3int0.nicadd.niu.edu (SL7) - ATLAS T3 login node
    • hpcm.nicadd.niu.edu (SL7) - Beam group login node
    • cms1.nicadd.niu.edu (SL7) - CMS login/CUDA developer node

Note: for wireless access within NIU, please use the "NIUwireless" network (the "NIUguest" network blocks ssh ports).
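
For example, to log in to one of the nodes above from a terminal (replace "username" with your NICADD account name):

    # log in to the ATLAS T3 node
    ssh username@t3int0.nicadd.niu.edu

    # enable X11 forwarding for graphical applications
    ssh -Y username@cms1.nicadd.niu.edu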

  • HTCondor batch system (1-4 Gbit/s internal network); see the example submit file below
    • Scientific Linux 7.X, 1.5-2.5 GB of RAM per batch slot, up to 128 GB for special tasks
    • Nodes: pt3wrk0-pt3wrk4 (T3 ATLAS), phpc0-phpc2 (beam), pcms0-pcms6 (CMS), pnc0-pnc3 (shared)
  • InfiniBand subcluster (SL7, ibhpc0-ibhpc2)
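
A minimal sketch of running a job in the HTCondor pool is shown below; the script name myjob.sh, the file names, and the resource request are placeholders, and any site-specific submit requirements are not shown.

    # myjob.sub - example HTCondor submit description file
    universe       = vanilla
    executable     = myjob.sh
    arguments      = $(Process)
    output         = myjob.$(Cluster).$(Process).out
    error          = myjob.$(Cluster).$(Process).err
    log            = myjob.$(Cluster).log
    request_memory = 2 GB
    queue 4

Submit and monitor the jobs from one of the login nodes:

    condor_submit myjob.sub
    condor_q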

For a closer look at NICADD computing, see the slides.

Disk space configuration

  • /home/$USER - user's home area, 5 GB initial quota
  • /bdata, /xdata - shared data partitions (80 TB each), for code development and analysis projects
  • /nfs/work/$nodename - node-local scratch areas (1-4 TB), for use by batch jobs
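
A typical batch job stages its input to the node-local scratch area, runs there, and copies the results back to a shared partition. The sketch below assumes $nodename is the node's short hostname; the program and file names are placeholders.

    #!/bin/bash
    # run inside a batch job: work in the local scratch, keep results on shared disk
    WORKDIR=/nfs/work/$(hostname -s)/$USER/job_$$
    mkdir -p "$WORKDIR" && cd "$WORKDIR"

    cp /bdata/$USER/input.dat .                     # stage input from shared disk
    /home/$USER/bin/analyze input.dat > result.out  # placeholder analysis program
    cp result.out /bdata/$USER/                     # copy results back to shared disk

    cd / && rm -rf "$WORKDIR"                       # clean up the scratch area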

Backup policy

We provide only a previous-day backup of the /home area. We strongly encourage users to keep any code development under the Git version control system and to make frequent backups of important data to remote locations.
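
For example, code can be kept in a Git repository with a remote copy, and important data can be pushed off-site with rsync; the repository URL, remote host, and paths below are placeholders.

    # keep code under Git and mirror it to a remote repository
    cd /home/$USER/myanalysis
    git init
    git add .
    git commit -m "Initial import"
    git remote add origin git@github.com:username/myanalysis.git
    git push -u origin master

    # back up important data to a remote location
    rsync -av /xdata/$USER/results/ username@remote.host.edu:/backup/results/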

