As of 2018 the NICADD cluster provides 576 processor slots under the CONDOR batch system and 200 TB of shared disk space, with instant access to CERN, FERMILAB, and OSG software collections. The system also serves NICADD's data acquisition servers and desktops for collecting and analysing test data for MU2E, DUNE, and beam experiments.

Node configuration

  • Interactive login nodes (1 Gbit/s public uplink, accessible via ssh)
    • t3int0.nicadd.niu.edu (SL6) - ATLAS T3 login node
    • hpcm.nicadd.niu.edu (SL7) - Beam group login node
    • cms1.nicadd.niu.edu (SL7) - CMS login/CUDA developer node

Note: for wireless access within NIU, please use the "NIUwireless" network (the "NIUguest" network blocks ssh ports; see the reachability check below).
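
If a connection attempt hangs, a quick way to distinguish a network that blocks ssh from a node that is down is to probe the ssh port directly. The Python sketch below does only that; the hostnames come from the login-node list above, and nothing else about the site is assumed.

    import socket

    # Login nodes listed above; port 22 is the standard ssh port.
    LOGIN_NODES = [
        "t3int0.nicadd.niu.edu",
        "hpcm.nicadd.niu.edu",
        "cms1.nicadd.niu.edu",
    ]

    def ssh_port_open(host, port=22, timeout=5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for node in LOGIN_NODES:
            status = "reachable" if ssh_port_open(node) else "blocked or down"
            print(f"{node}: {status}")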

  • HTCONDOR batch system (1-4 Gbit/s internal network); a minimal submit sketch follows this list
    • 1.5-2.5 GB of RAM per batch slot, up to 128 GB for special tasks
    • pnc0-pnc3, pt3wrk0-pt3wrk4 (SL6)
    • phpc0-phpc2, pcms1-pcms6 (SL7)
  • InfiniBand subcluster (SL7, ibhpc0 - ibhpc2)
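
Batch work is handed to HTCondor through a submit description. The Python sketch below shows one minimal way to prepare and submit such a job from a login node; the executable name my_analysis.sh is a placeholder, and the memory request is sized to fit the ordinary per-slot limits quoted above (site defaults may differ).

    import subprocess
    from pathlib import Path

    # Hypothetical job description: the executable and memory request are
    # placeholders. request_memory defaults to MB, so 2048 stays within the
    # 1.5-2.5 GB per-slot range quoted above; larger requests are for
    # special tasks only.
    submit_description = """\
    universe       = vanilla
    executable     = my_analysis.sh
    arguments      = $(Process)
    request_memory = 2048
    output         = job_$(Process).out
    error          = job_$(Process).err
    log            = job.log
    queue 4
    """

    def submit_job(workdir="condor_test"):
        """Write a submit file and hand it to condor_submit."""
        wd = Path(workdir)
        wd.mkdir(exist_ok=True)
        submit_file = wd / "analysis.sub"
        submit_file.write_text(submit_description)
        # condor_submit is the standard HTCondor submission command.
        subprocess.run(["condor_submit", submit_file.name], cwd=wd, check=True)

    if __name__ == "__main__":
        submit_job()

Submitted jobs can then be followed with the standard condor_q command.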

Disk space configuration

  • /home/$USER - user's home area, 5 GB initial quota
  • /bdata, /xdata - shared data partitions (80 TB each), for code development and analysis projects
  • /nfs/work/$nodename - node-local scratch areas (1-4 TB), for use by batch jobs (see the sketch below)
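
Since the scratch areas are local to each worker node, a common pattern is to do a job's heavy I/O under /nfs/work/$nodename and copy only the final output back to a shared partition. The Python sketch below illustrates the idea; the example_user destination under /xdata and the per-job directory naming are illustrative assumptions, not site policy.

    import os
    import shutil
    import socket
    from pathlib import Path

    def run_in_scratch(final_dest="/xdata/example_user/results"):
        """Do the heavy I/O in the node-local scratch area, then copy
        only the final output back to shared storage."""
        # Node-local scratch, per the layout described above; the exact
        # per-job subdirectory naming is an assumption.
        scratch = Path("/nfs/work") / socket.gethostname() / f"job_{os.getpid()}"
        scratch.mkdir(parents=True, exist_ok=True)

        # Placeholder for the real workload: write a result file in scratch.
        result = scratch / "result.txt"
        result.write_text("job output goes here\n")

        # Copy the final product to the shared partition and clean up scratch.
        dest = Path(final_dest)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(result, dest / result.name)
        shutil.rmtree(scratch)

    if __name__ == "__main__":
        run_in_scratch()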

