How to run Athena code on the NICADD T3 farm
Introduction
An NIU computer account is required.
Set up the T3 environment (details are here).
Nicadd T3 cluster
The nicadd cluster currently provides the following nodes, used primarily for Atlas-related
analysis:
pt3int0 (t3int0.nicadd.niu.edu) - interactive public
node for code development, batch submission, and data
sample downloads
pt3nfs - NFS data server (hosts the 19 TB
/xdata disk on a 10 Gbit network link)
pt3nfs1 - NFS data server (hosts the 18 TB
/bdata disk on a 10 Gbit network link)
pt3wrk0-pt3wrk4 - T3 worker
nodes (176 batch slots total)
Additionally, the node42-56, pncX, and phpcX nodes can run Atlas batch jobs (420 more slots) over data
from the /xdata or /bdata disks.
Each node has local disk space (/disk) that should be used as scratch space for batch jobs.
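The scratch-disk convention above can be sketched as a small job wrapper. This is a minimal illustration, not the site's atlasT3_jobtemplate.sh: the SCRATCH_BASE and OUTPUT_DIR defaults here are placeholders so the script runs anywhere; on the T3 worker nodes SCRATCH_BASE would be /disk and OUTPUT_DIR a directory under /xdata or /bdata.

```shell
#!/bin/sh
# Sketch of a batch-job wrapper that uses node-local scratch space.
# Hypothetical defaults: on the cluster, set SCRATCH_BASE=/disk and
# OUTPUT_DIR to your area on /xdata or /bdata.
SCRATCH_BASE=${SCRATCH_BASE:-/tmp}
OUTPUT_DIR=${OUTPUT_DIR:-$HOME/t3results}

# Create a unique per-job scratch directory so concurrent jobs do not collide.
WORKDIR=$(mktemp -d "$SCRATCH_BASE/job.XXXXXX") || exit 1
cd "$WORKDIR" || exit 1

# ... the real analysis would run here; this stand-in just writes a file ...
echo "analysis output" > result.txt

# Copy results back to shared storage, then remove the scratch directory.
mkdir -p "$OUTPUT_DIR"
cp result.txt "$OUTPUT_DIR/"
cd / && rm -rf "$WORKDIR"
```

Keeping all intermediate files on /disk and copying only the final results back avoids hammering the NFS servers from many worker slots at once.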
The batch system
We use the Condor job management system.
The user documentation is here; it is worth reading at least the "submitting a job" and "managing a job" chapters
to understand the condor_status, condor_q, condor_submit, condor_hold, and condor_rm commands.
Example job description file (atlasT3_CondorCmd.template) and shell script
(atlasT3_jobtemplate.sh) that use the Atlas environment are available.
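For orientation, a Condor job description file generally looks like the following. This is a generic sketch, not the contents of atlasT3_CondorCmd.template; the executable name and file names are hypothetical.

```
universe                = vanilla
executable              = atlas_job.sh
output                  = job.$(Cluster).$(Process).out
error                   = job.$(Cluster).$(Process).err
log                     = job.$(Cluster).log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 1
```

You would submit it with condor_submit <file>, watch it with condor_q, and remove it if needed with condor_rm <job id>.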
Next
dq2-get example :: DQ2 tutorial
Sergey A. Uzunyan
Last modified: Tue Aug 26 14:18:31 CDT 2014