Eos System Status
296 of 2340 Cores Active (12.65%)
37 of 269 Nodes Active (13.75%)
25 running jobs, 8 queued jobs
Last updated: 3:10 PM, May 30, 2016
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                                          Owner       Queue        Walltime   Status  CPUs
1730813  CH3-S.25-Me-26-Me-12-Me.R.ts4.3                                   yanfeilong               41:18:19   R          8
1730885  CH3-S.25-Me-26-Me-12-Me.S.ts4.3                                   yanfeilong               5:35:15    R          8
1730884  original_less.25-Me-7-OMe-56-H-57-H.R.ts2.Cf3.2                   yanfeilong               6:05:29    R          8
1730877  original_less.25-Me-7-OMe-56-H-57-H.R.ts4.Cf1.2                   yanfeilong               9:06:24    R          8
1730874  original_less.25-Me-7-OMe-56-H-57-H.R.ts4.Cf3.3                   yanfeilong               11:22:31   R          8
1730862  original_less.25-Me-12-OMe-56-H-57-H.R.ts2.Cf3.3                  yanfeilong               16:15:48   R          8
1730859  original_less.25-Me-26-Me-56-H-57-H.R.ts2.3                       yanfeilong               16:59:39   R          8
1730812  original_less.25-Me-7-OMe-56-Me-57-Me.R.ts2.Cf1.3                 yanfeilong               41:20:53   R          8
1730803  original_less.25-Me-26-Me-56-Me-57-Me.S.ts1.3                     yanfeilong               43:08:50   R          8
1730886  original_less.25-Me-26-Me-56-H-57-H.S.ts1.3                       yanfeilong               4:33:05    R          8
1730892  original_less.25-Me-12-OMe-56-Me-57-Me.R.ts2.Cf3.3                yanfeilong               3:25:27    R          8
1730899  original_less.25-Me-7-OMe-56-H-57-H.R.ts2.Cf1.3                   yanfeilong               0:37:39    R          8
1730827  original_less.25-Me-26-Me-56-Me-57-Me.S.ts3.3                     yanfeilong               34:23:22   R          8
1730898  original_less.25-Me-26-Me-56-Me-57-Me.R.ts2.3                     yanfeilong               0:44:05    R          8
1730897  CH3-S.mapping.R.ts2.2                                             yanfeilong               0:50:38    R          8
1730896  original_less.25-Me-12-OMe-56-H-57-H.R.ts2.Cf1.3                  yanfeilong               1:11:18    R          8
1730895  original_less.25-Me-26-Me-56-H-57-H.R.ts3.2                       yanfeilong               1:42:20    R          8
1730894  CH3-S.mapping.S.ts5.2                                             yanfeilong               2:34:43    R          8
1730893  wrf_mpi_run_be                                                    mbenoit     medium       3:01:51    R        104
1730854  exo_2_r_1_eflip.job                                               tseguin     medium       18:50:51   R          8
1730855  exo_2_r_10_eflip.job                                              tseguin     medium       18:50:06   R          8
1730857  exo_2_r_30_eflip.job                                              tseguin     medium       18:50:17   R          8
1730858  exo_2_r_302_eflip.job                                             tseguin     medium       18:50:03   R          8
1727679  Ni_dppe_TS2_outside_cation2_tpss_2th.job                          haixia      science_lms  137:37:31  R          8
1725747  Co4_15_bh_transP_isomer1_wb97xd_def2tzvp_bindCo_bpin_ccpvdz_ult   haixia      science_lms  189:39:56  R          8

Idle Jobs

Job ID   Job Name        Owner     Queue     Status  CPUs
1643377  microscaledmg_  kballard  long      Q          8
1656743  myjob_loo       kim15041  long      Q         16
1656247  myjob_loo       kim15041  long      Q         16
1632519  cfx             mosi      long      Q          4
1633408  cfx             mosi      long      Q          4
1643391  abaqus          yuanzhi   long      Q          1
1729390  ErrorD0         rivera    lucchese  Q         64
1726037  cfx             farzam67  medium    Q          8

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to here as the public queues.

If no walltime limit is specified in the job script, a default limit of 1 hour applies to all public queues.
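
For illustration, here is a minimal sketch of a job script in the PBS/Torque
style that matches the Moab-style output on this page (the job name, resource
values, and executable are placeholders, not taken from this listing):

    #!/bin/bash
    #  No "#PBS -q" directive: the job enters the regular queue and is
    #  routed to short, medium, or long based on the requests below.
    #PBS -N example_job            #  job name (placeholder)
    #PBS -l nodes=1:ppn=8          #  1 node, 8 cores per node
    #PBS -l walltime=24:00:00      #  omit this line and the 1-hour default applies

    cd $PBS_O_WORKDIR              #  run from the directory the job was submitted from
    ./my_program                   #  placeholder executable

With these requests, the job fits within the medium queue's 24-hour walltime
cap (see the per-job limits below).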

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
             Nodes Nodes  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
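
To make the routing concrete, here are a few hypothetical requests mapped
against the public-queue limits above (this mapping is inferred from the
limits table, not read from the scheduler's configuration):

    walltime=00:45:00, nodes=1    ->  short   (fits the 1-hour cap)
    walltime=20:00:00, nodes=16   ->  medium  (exceeds short's 1-hour cap)
    walltime=72:00:00, nodes=4    ->  long    (exceeds medium's 24-hour cap)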

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |     5   250     1   500 |   136   136  1600/2048 | 8/200   200
long              Yes      Yes |    18   350     6   350 |   144   144   800/1024 | 8/200   200
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0   300 |     0     0          0 |    32   300
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     1     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     2     0     0     0 |    16    16        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = running jobs in the queue
QJob  = queued jobs in the queue
UserR = running jobs per user
UserQ = queued jobs per user
PE    = processor equivalent, based on requested resources (i.e., memory);
        see the worked example below
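
As a worked example of the PE idea (the node configuration here is assumed
for illustration; Moab-style schedulers compute PEs roughly as follows):

    PE = max( cores requested, memory requested / memory per core )

    On a hypothetical 8-core node with 24 GB of memory (3 GB per core),
    a job requesting 4 cores and 21 GB counts as max(4, 21/3) = 7 PEs:
    its memory request ties up the equivalent of 7 cores against the
    PE limits above.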

Any queued job that exceeds a queued-job limit (queue-wide or per-user)
is ineligible for scheduling consideration.
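
The counts and limits above can also be checked from a login node; on a
Torque/Moab system the standard commands are (assuming they are enabled here):

    qstat -q             #  per-queue limits and running/queued job totals
    qstat -u $USER       #  your own running and queued jobs
    checkjob <jobid>     #  Moab: shows why a specific job is still idle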
