NICADD Compute Cluster
As of 2018, the NICADD cluster provides 700 processor slots (1.8-2.6 GHz) under the HTCondor batch system and 200 TB of shared disk space, with direct access to CERN, Fermilab, and OSG software libraries. The system also serves NICADD's data-acquisition servers and desktops for collecting and analyzing test data for Mu2e, DUNE, and beam experiments. A detailed hardware description is available separately.

Alma Linux 8 upgrade (2023)

We are preparing the NICADD cluster to run Alma Linux 8, to stay in sync with the soon-expected NIU Metis system. The upgrade is scheduled for the last week of May 2023. Currently, two cluster nodes, pcms5 and pcms6, have been upgraded to allow tests of critical applications.

Access to the upgraded nodes
Both the pcms5 and pcms6 nodes run Alma Linux 8.7 with CVMFS (and thus the CVMFS-based tools of ATLAS, CMS, and DUNE), and provide a local ROOT build and custom compiler/linker tools via environment modules.
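As a sketch of the module workflow on the upgraded nodes (the module name "root" is an assumption; check the real list with `module avail`):

```shell
# List the modules provided on pcms5/pcms6, then load the local ROOT
# build; the module name "root" is an assumption -- check `module avail`.
module avail
module load root
root-config --version   # verify that ROOT is now on the PATH
```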
Both nodes are included in a mini-cluster under the HTCondor 10.0.2 batch system; test jobs are welcome.

Nodes configuration
Note: for wireless access within NIU, please use the "NIUwireless" network (the "NIUguest" network blocks ssh ports).
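Once on an allowed network, the test nodes are reached with plain ssh; a minimal sketch ("username" is a placeholder for your cluster account, and the bare hostname assumes you are inside the NIU network):

```shell
# Log in to one of the upgraded test nodes from inside the NIU network;
# "username" is a placeholder for your cluster account.
ssh username@pcms5
```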
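A test job for the HTCondor mini-cluster needs only a short submit description; a minimal sketch, with placeholder file names:

```
# hello.sub - minimal HTCondor submit description (file names are placeholders)
executable = /bin/hostname
output     = hello.out
error      = hello.err
log        = hello.log
queue 1
```

Submit it with `condor_submit hello.sub` and watch its progress with `condor_q`.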
Disk space configuration
- /home/$USER - user's home area, 5 GB initial quota
- /bdata, /xdata - shared data partitions (80 TB each), for code development and analysis projects
- /nfs/work/$nodename - node-local scratch areas (1-4 TB), for batch job use
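To check usage against the 5 GB home quota, something like the following works (`quota` only reports where quotas are enforced, so `du` is shown as a fallback):

```shell
# Show enforced quota limits where available, then total the home area.
quota -s 2>/dev/null || true
du -sh "$HOME"
```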
Backup policy

We only keep a backup of the previous day's state of the /home area. We strongly encourage users to keep all code development under Git and to back up essential data frequently to remote locations.
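As a sketch of the recommended Git workflow (the project directory and remote URL are placeholders, not cluster-provided services):

```shell
# Put an analysis directory under version control and snapshot it;
# "my-analysis" is a placeholder project directory.
cd "$HOME/my-analysis"
git init
git add -A
git commit -m "daily snapshot"
# Add an off-site remote once, then push each snapshot:
# git remote add origin <your-remote-url>
# git push origin HEAD
```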