Eos System Status
520 of 2228 Cores Active (23.34%)
63 of 255 Nodes Active (24.71%)
24 running jobs, 2 queued jobs
Last updated: 10:40 AM Feb 8, 2016
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Owner      Queue        Walltime  Status  CPUs  Job Name
1641591  eisa       long         72:12:50  R          8  Airfield_performance_3D_R_2
1642064  haixia     long         0:38:36   R          8  FeFeh2_h2o_h2o_ts_h2_anion_doublet_m06_freq.job
1642074  jfrank     long         0:02:36   R          8  AgStudy_Solv_NO_DKH_bppn_PF6_12.job
1642075  jfrank     long         0:02:02   R          8  AgStudy_Solv_NO_DKH_bppn_SbF6_12.job
1642073  jfrank     long         0:02:19   R          8  AgStudy_Solv_NO_DKH_bppn_BF4_11.job
1642019  nmatula    long         18:32:22  R         40  ssclowtl
1642018  nmatula    long         18:33:24  R         40  sscnew2
1642047  org1syn    long         12:03:31  R          8  Rct_S2_trim.job
1642046  org1syn    long         12:08:35  R          8  Rct_S6_Ipr.job
1642071  alucchese  lucchese     0:12:38   R         64  H2OCO5Driver
1642020  hiroaki    medium       18:29:13  R          8  p8
1642070  jangle     medium       0:16:20   R        160  relion
1642063  naren      medium       0:46:02   R         12  ec5_Li.job
1642062  naren      medium       0:46:39   R         12  ec8_PF6_B.job
1642061  naren      medium       0:48:30   R         12  ec8_PF6_A.job
1642060  naren      medium       0:50:19   R         12  ec6_PF6_A.job
1641824  haixia     science_lms  45:06:32  R          8  FeFe_h2o_h2o_h_h_ts_h2_anion_doublet_m06_2th.job
1641755  haixia     science_lms  61:07:23  R          8  Co4_21cisPisomer_wb97xd_def2tzvp_bindCo_bpin_ccpvdz_ultra_opt.j
1641574  haixia     science_lms  72:57:05  R          8  FeFe_h2o_h2o_h_h_ts_h2_anion_doublet_m06.job
1642055  jhenny     tamug        1:16:03   R         16  protoMb.job
1642072  jhenny     tamug        0:10:32   R         16  mesoMb.job
1642067  jhenny     tamug        0:29:23   R         16  mesoMb3.job
1642059  jhenny     tamug        0:58:57   R         16  protoMb2.job
1642056  jhenny     tamug        1:08:04   R         16  protoMb3.job

Idle Jobs

Job ID   Owner  Queue  Status  CPUs  Job Name
1633408  mosi   long   Q          4  cfx
1632519  mosi   long   Q          4  cfx

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
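
For illustration, a minimal job script for the public queues might look like the sketch below. This assumes a PBS-style batch system; the #PBS directives, job name, resource request, and executable shown here are illustrative placeholders, not taken from this report. No queue is named, so the job starts in the regular queue and is routed to short, medium, or long based on the requested resources, and the walltime line overrides the 1-hour default.

    #!/bin/bash
    #PBS -N example_job          # placeholder job name
    #PBS -l nodes=1:ppn=8        # request 1 node with 8 cores
    #PBS -l walltime=02:00:00    # request 2 hours; without this line the 1-hour default applies
    #PBS -j oe                   # merge stdout and stderr into a single output file
    # No "#PBS -q" line: the job enters the regular queue and the batch
    # system assigns it to one of the public execution queues.

    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    ./my_program                 # placeholder for the actual executable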

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  Cpus     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |     6   250     0   500 |   216   216  1600/2048 |  8/50   100
long              Yes      Yes |     9   350     2   350 |   136   136   800/1024 | 8/200   200
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0   300 |     0     0          0 |    32   300
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     1     0     0     0 |    64    64         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     3     0     0     0 |    24    24        160 |     0   100
tamug             Yes      Yes |     5     0     0     0 |    80    80        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent, based on requested resources (e.g., memory).

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.

    24 Active Jobs     520 of 2228 Processors Active (23.34%)
                        63 of  255 Nodes Active      (24.71%)