Eos System Status
1396 of 2504 CPUs Active (55.75%)
176 of 290 Nodes Active (60.69%)
80 running jobs, 1 queued job
Last updated: 4:40PM Jul 28, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name  Owner  Queue  Walltime  Status  CPUs
1559158  p1  biswas85  long  56:28:18  R  8
1559065  Crop_tpphz_Fe2Fe_vertextpphzopt.job  jfrank  long  72:40:51  R  8
1559363  pentvertexconstrained_3.job  jfrank  long  31:02:15  R  8
1559777  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30-1.job  ljszatko  long  7:53:18  R  8
1559776  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_18-6.job  ljszatko  long  7:53:50  R  8
1559871  Ni-BC_AN_wB-6-311_GAS_SP-B3.job  ljszatko  long  1:39:36  R  8
1559376  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30.job  ljszatko  long  30:39:16  R  8
1559842  model.job  moga  long  4:17:23  R  12
1559799  model.job  moga  long  6:52:14  R  12
1559811  model.job  moga  long  6:20:15  R  12
1559351  model.job  moga  long  31:47:17  R  12
1559352  ADD2  sberto  long  31:35:00  R  64
1559703  additive  sberto  long  22:50:34  R  64
1559173  ts-migration-k2-24-bmk-singlet.job  stewart  long  52:54:43  R  8
1559367  ts-activation-k2-ortho-bmk-singlet-new-cont-3-redo-cont-redo-co  stewart  long  30:50:59  R  8
1558955  ts-migration-24-bmk-singlet-cont-redo-arenz.job  stewart  long  76:02:45  R  8
1559700  ts-migration-13-bmk-singlet-redo-arenz-cont-cont.job  stewart  long  23:04:16  R  8
1559701  ts-migration-13-bmk-singlet-redo-arenz-redo-cont.job  stewart  long  23:02:59  R  8
1558959  ts-migration-23-bmk-singlet-1-redo-cont.job  stewart  long  75:53:42  R  8
1559172  ts-migration-12-bmk-singlet-1-2-cont-redo-arenz.job  stewart  long  52:57:27  R  8
1558958  ts-migration-23-bmk-singlet-cont-cont.job  stewart  long  75:54:48  R  8
1559818  ts-migration-34-bmk-singlet-2-redo-arenz-cont.job  stewart  long  5:24:11  R  8
1559817  ts-migration-k2-13-bmk-singlet-cont-cont.job  stewart  long  5:31:15  R  8
1559166  ts-activation-c1-k2-bmk-singlet-2-redo-cont-arenz-cont-cont-con  tanghao  long  53:39:59  R  8
1559167  ts-activation-c1-k2-bmk-singlet-2-redo-cont-arenz-redo-cont-con  tanghao  long  53:39:40  R  8
1558953  ts-activation-c-a-bmk-singlet-cont-redo-v--cont.job  tanghao  long  76:46:38  R  8
1559169  ts-activation-c4-k2-bmk-singlet-3-1-redo-redo-cont-redo-arenz-r  tanghao  long  53:35:50  R  8
1559808  H2OCO5D_NOCP  rivera  lucchese  6:22:42  R  64
1559800  extend_80ns  ashutosh  medium  6:47:35  R  48
1559888  BTO-T-PBE_T2  bakgenc  medium  0:38:40  R  16
1559899  BTO-T-PBE_T7  bakgenc  medium  0:28:07  R  16
1559898  BTO-T-PBE_T1  bakgenc  medium  0:28:14  R  16
1559897  BTO-T-PBE_T4  bakgenc  medium  0:29:06  R  16
1559896  BTO-T-PBE_T3  bakgenc  medium  0:29:51  R  16
1559895  BTO-T-PBE_T2  bakgenc  medium  0:30:07  R  16
1559893  BTO-T-PBE_T1  bakgenc  medium  0:31:16  R  16
1559892  BTO-T-PBE_T2  bakgenc  medium  0:34:57  R  16
1559889  BTO-T-PBE_T2  bakgenc  medium  0:38:34  R  16
1559890  BTO-T-PBE_T2  bakgenc  medium  0:38:13  R  16
1559904  2N-graph-CH4-11.job  dahiyana  medium  0:06:34  R  8
1559914  2N-graph-CH4-9.job  dahiyana  medium  0:05:45  R  8
1559903  2N-graph-CH4-10.job  dahiyana  medium  0:06:54  R  8
1559905  2N-graph-CH4-12.job  dahiyana  medium  0:06:40  R  8
1559906  2N-graph-CH4-1.job  dahiyana  medium  0:06:49  R  8
1559907  2N-graph-CH4-2.job  dahiyana  medium  0:06:35  R  8
1559908  2N-graph-CH4-3.job  dahiyana  medium  0:06:44  R  8
1559909  2N-graph-CH4-4.job  dahiyana  medium  0:06:53  R  8
1559910  2N-graph-CH4-5.job  dahiyana  medium  0:06:52  R  8
1559913  2N-graph-CH4-8.job  dahiyana  medium  0:05:46  R  8
1559912  2N-graph-CH4-7.job  dahiyana  medium  0:06:41  R  8
1559911  2N-graph-CH4-6.job  dahiyana  medium  0:06:51  R  8
1559827  Steady  dellis  medium  5:09:17  R  8
1559902  Steady  dellis  medium  0:17:39  R  8
1559677  pre  emmi  medium  23:50:38  R  32
1559683  pre  emmi  medium  23:41:46  R  32
1559742  FeFe_h2o_h2o_neutral_8th_tpss.job  haixia  medium  19:18:33  R  8
1559812  FeFe_h2o_h2o_neutral_9th_m06.job  haixia  medium  5:58:23  R  8
1559795  new_pi  jh11ae  medium  6:53:01  R  256
1559822  China_soe_s1  liang  medium  5:19:43  R  40
1559820  China_meic_1  liang  medium  5:21:35  R  40
1559868  run_castnet  qying  medium  1:44:34  R  1
1559749  master_surf_grid13  sdutta  medium  19:02:05  R  1
1559793  jobs  t2sn72  medium  7:00:36  R  32
1559886  Eddy  xueliu  medium  0:42:40  R  1
1559853  mymatlab  xueliu  medium  2:54:53  R  1
1559863  wrf_run  ylin  medium  2:03:42  R  80
1559831  FeFe_h2o_h2o_anion_6th_m06_freq.job  haixia  science_lms  4:42:53  R  8
1559834  FeFe_h2o_h2o_h_neutral_6th_m06_freq.job  haixia  science_lms  4:39:00  R  8
1559860  Co4_ch2Sime3_T_m06_6311_631.job  haixia  science_lms  2:21:34  R  8
1559405  molpro  rivera  science_lms  29:16:55  R  8
1553172  ts-migration-12-ae-bmk-singlet-cont.job  stewart  science_lms  386:24:21  R  8
1555364  ts-activation-c-e-k2-bmk-singlet-cont-cont-cont.job  stewart  science_lms  245:53:17  R  8
1555601  ts-migration-12-ae-k2-bmk-singlet-cont-cont-cont-cont.job  stewart  science_lms  215:40:27  R  8
1555496  ts-activation-c-a-bmk-singlet-cont-cont-cont.job  stewart  science_lms  222:18:54  R  8
1555603  ts-migration-12-ae-k2-bmk-singlet-cont-cont-cont-cont-redo.job  tanghao  science_lms  215:36:31  R  8
1553171  ts-migration-12-ae-bmk-singlet-redo.job  tanghao  science_lms  386:25:56  R  8
1557721  ts-activation-c-a-k2-bmk-singlet-cont-cont.job  tanghao  science_lms  144:27:31  R  8
1559787  submit.job  jhenny  tamug  7:15:56  R  16
1559775  mesoMb_job1  jhenny  tamug  8:07:16  R  16
1559774  deuteroMb2.job  jhenny  tamug  8:08:35  R  16

Idle Jobs

Job ID   Job Name  Owner  Queue  Status  CPUs
1559814  H2OCO5D_MP2  rivera  lucchese  Q  64

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is first directed to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to below as the public queues.

If no walltime limit is specified in the job script, a default of 1 hour applies in all public queues.
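
As an illustration, the following is a minimal sketch of such a job script, assuming a PBS/Torque-style batch system (suggested by the R/Q job states and hh:mm:ss walltime limits on this page); the exact directive syntax on Eos may differ, and the script contents are hypothetical. No queue is named, so the job would pass through the regular queue and, with 1 node, 8 CPUs, and a 12-hour walltime request, be assigned to the medium execution queue under the limits below:

    #!/bin/bash
    #PBS -N example_job           # job name shown in the queue listing
    #PBS -l nodes=1:ppn=8         # 1 node, 8 processors per node
    #PBS -l walltime=12:00:00     # request explicitly; otherwise the 1-hour default applies
    #PBS -j oe                    # merge stdout and stderr into one output file

    cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
    ./my_program                  # hypothetical executable

The script would then be submitted with "qsub example_job.sh" (or similar), without naming a queue.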

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
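
Jobs aimed at one of the restricted queues above (gpu, atmo, tamug, and so on) name the queue explicitly instead of relying on routing. A sketch under the same PBS/Torque-style assumption; access to these queues is typically limited to authorized groups, and the program name is hypothetical:

    #!/bin/bash
    #PBS -N gpu_example
    #PBS -q gpu                   # request the gpu queue directly
    #PBS -l nodes=1:ppn=12        # within the gpu queue's 4-node / 48-CPU per-job cap
    #PBS -l walltime=08:00:00     # within the gpu queue's 24-hour limit

    cd $PBS_O_WORKDIR
    ./my_gpu_program              # hypothetical executable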

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    38   250     0   500 |   852   873  1600/2048 |  8/50   100
long              Yes      Yes |    27   200     0   250 |   344   344   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     1     0     1     0 |    64    64         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    11     0     0     0 |    88    88        160 |     0   100
tamug             Yes      Yes |     3     0     0     0 |    48    48        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory).
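
As a rough illustration of how a PE count can exceed the CPU count (the per-core memory figure here is an assumption, not an Eos specification): on a node with 8 cores and 24 GB of memory (3 GB per core), a job requesting 1 core but 12 GB of memory effectively ties up 12 / 3 = 4 cores' worth of the node, so it would count as roughly 4 PEs against a queue's PE limit while using only 1 CPU.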

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is ineligible for scheduling consideration.
