Eos System Status
693 of 2476 CPUs Active (27.99%)
91 of 286 Nodes Active (31.82%)
47 running jobs, 11 queued jobs
Last updated: 2:10 PM, May 24, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                                         Owner     Queue         Walltime  Status  CPUs
1522618  IR_Ret_mod_1                                                     chiapang  long          21:33:45  R          1
1522617  IR_Ret_mod                                                       chiapang  long          21:45:41  R          1
1522417  elect_NO_Hopt_2Br6Me.job                                         jfrank    long          45:41:16  R          8
1521302  Ni-OCTA-Pi_AN-MeCl_P_wB-6-311_DMF.job                            ljszatko  long          71:59:25  R          8
1521303  Ni-OCTA-Pi-Me_wB-6-311_DMF.job                                   ljszatko  long          71:59:15  R          8
1522021  Ni-Pi_AN-MeCl_TS_M06L-6_DMF_3.job                                ljszatko  long          48:17:35  R          8
1521982  Ni-OCTA-Pi-Me_wB-6-311_DMF_1.job                                 ljszatko  long          49:18:12  R          8
1521972  Ni-Pi_AN-MeCl_S_wB-6-311_DMF-1.job                               ljszatko  long          49:57:32  R          8
1521975  Ni-OCTA-Pi-Me_M06L-6_DMF-2.job                                   ljszatko  long          49:41:37  R          8
1521979  Ni-OCTA-Pi_AN-MeCl_P_wB-6-311_DMF-1.job                          ljszatko  long          49:20:58  R          8
1521301  Ni-OCTA-Pi_AN-MeCl_TS_wB-6-311_DMF_1.job                         ljszatko  long          71:59:15  R          8
1521300  Ni-OCTA-Pi_AN-MeCl_TS_wB-6-311_DMF.job                           ljszatko  long          71:59:18  R          8
1520647  Ni-Pi_AN-MeCl_TS_B3-6311_DMF_1-1.job                             ljszatko  long          94:29:25  R          8
1521266  Ni-OCTA-Pi_AN-MeCl_TS_B3-6311_DMF.job                            ljszatko  long          72:07:14  R          8
1521267  Ni-OCTA-Pi_AN-MeCl_TS_B3-6311_DMF_1.job                          ljszatko  long          72:07:10  R          8
1521279  Ni-OCTA-Pi_AN-MeCl_TS_M06L-6_DMF.job                             ljszatko  long          72:03:00  R          8
1521280  Ni-OCTA-Pi_AN-MeCl_TS_M06L-6_DMF_1.job                           ljszatko  long          72:03:15  R          8
1521299  Ni-OCTA-Pi_AN-MeCl_S_wB-6-311_DMF.job                            ljszatko  long          71:58:47  R          8
1521980  Ni-OCTA-Pi_AN-MeCl_P_wB-6-311_DMF_1.job                          ljszatko  long          49:21:03  R          8
1522410  dppn_mpw1pw91.job                                                mandy     long          45:47:50  R          8
1522437  C_1_HS_HYPE_LowMass                                              mmh1874   long          44:30:57  R          1
1522707  C_1_C_1_LL_NoBase_512_LS                                         mmh1874   long          16:01:34  R          1
1522710  C_1_1_C_1_LL_512_LS                                              mmh1874   long          15:49:15  R          1
1522436  C_1_HS_HYPE                                                      mmh1874   long          44:31:06  R          1
1521654  M146-05n325Kterm                                                 panp      long          62:26:01  R          5
1521653  M146-05n325Kterm                                                 panp      long          62:25:14  R         12
1522658  manu5-5-5w9.job                                                  zhuyj     long          18:31:37  R          8
1522659  manu5-5-5w10.job                                                 zhuyj     long          18:31:22  R          8
1522660  manu5-5-5w11.job                                                 zhuyj     long          18:31:29  R          8
1522023  H2OCO5Driver                                                     a         lucchese      48:07:21  R         24
1522734  Gromacs_mdrun                                                    ashutosh  medium         3:23:31  R         48
1522754  lammps                                                           chendi    medium         1:05:45  R        128
1522748  case.out                                                         mei209    medium         1:46:22  R          3
1522749  case.out                                                         mei209    medium         1:46:38  R          3
1522738  kpoint_CdS.sh                                                    sevil     medium         2:44:51  R         32
1522761  alatt_ZnO_PW91.sh                                                sevil     medium         0:01:25  R         32
1522760  ecut_ZnO_PW91.sh                                                 sevil     medium         0:32:05  R         32
1522744  ecut_CdTe.sh                                                     sevil     medium         1:49:43  R         32
1522756  cmaq_12km_jly                                                    tcsdcn    medium         0:40:52  R         48
1522758  cmaq_12km_aug                                                    tcsdcn    medium         0:35:00  R         48
1519414  C_CpiPr_H2dp_TS2_wB-6-SDD_85-1r.job                              ljszatko  science_lms  143:35:26  R          8
1521970  C_CpiPr_H2dp_TS2_wB-6-SDD_90.job                                 ljszatko  science_lms   50:13:43  R          8
1519413  C_CpiPr_H2dp_TS2_wB-6-SDD_84-1r.job                              ljszatko  science_lms  143:35:23  R          8
1521971  C_CpiPr_H2dp_TS2_wB-6-SDD_91.job                                 ljszatko  science_lms   50:13:37  R          8
1519834  p3nicfcf3-poipr3-singlet-cont-opt-wb97xd+solvent-redo-cont-cont  stewart   science_lms  117:09:30  R          8
1521441  protoMb_Nbound.job                                               jhenny    tamug         70:54:39  R         16
1521442  protoMb_Nbound2.job                                              jhenny    tamug         70:43:31  R         16

Idle Jobs

Job ID   Job Name           Owner     Queue   Status  CPUs
1488396  vivtest1           chetna    long    Q          1
1488413  vivtest1           chetna    long    Q          1
1488441  vivtest1           chetna    long    Q          1
1508931  TL4_NCHRP_22_20_2  lsmaddah  long    Q         16
1507571  starccm+           nnied     medium  Q          8
1507536  starccm+           nnied     medium  Q          8
1516595  R_81_9000          paulod    medium  Q         88
1516596  R_81_9000_striped  paulod    medium  Q         88
1507624  yohascript.job     yohannes  medium  Q          1
1507626  yohascript.job     yohannes  short   Q         12
1491284  seq1               zjh08177  short   Q          1

Batch Queues

Normally, a job script does not need to specify a particular queue. A
submitted job is directed initially to the regular routing queue, and the
batch system then assigns it to one of the execution queues (short, medium,
or long) based on the job's requested resources. These execution queues are
referred to as the public queues.

If no walltime limit is specified in the job script, every public queue
applies a default limit of 1 hour.
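
For example, a minimal PBS-style job script might look like the sketch below
(the module name, executable, and resource values are illustrative
assumptions, not site defaults). Because no queue is named, the job enters
regular and is routed by what it requests:

    #!/bin/bash
    #PBS -N example_job              # job name, as shown in the tables above
    #PBS -l nodes=1:ppn=8            # 1 node, 8 cores per node
    #PBS -l walltime=36:00:00        # 36 hours of walltime
    #PBS -j oe                       # merge stdout and stderr into one file
    # Note: no "#PBS -q" line; the batch system picks the execution queue.

    cd $PBS_O_WORKDIR                # run from the directory qsub was invoked in
    module load mycode               # hypothetical module name
    ./mycode input.dat > output.log

With 8 CPUs and a 36-hour walltime, this job exceeds medium's 24:00:00 cap
and would be routed to long (within its 96:00:00 and 64-CPU limits below).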

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
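
As a rough illustration of how a request maps onto these limits (assumed
routing behavior; the exact rules are defined by the batch system):

    qsub -l nodes=1:ppn=8,walltime=00:30:00 job.sh    # fits short  (<= 01:00:00)
    qsub -l nodes=64:ppn=8,walltime=12:00:00 job.sh   # fits medium (<= 24h, <= 128 nodes)
    qsub -l nodes=4:ppn=8,walltime=72:00:00 job.sh    # fits long   (<= 96h, <= 8 nodes)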

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     2     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    10   250     5   500 |   406   410  1600/2048 |  8/50   100
long              Yes      Yes |    29   200     4   250 |   191   197   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     1     0     0     0 |    24    24         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     5     0     0     0 |    40    40        160 |     0   100
tamug             Yes      Yes |     2     0     0     0 |    32    32        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs in the queue.
QJob  = Queued jobs in the queue.
UserR = Running jobs for a single user.
UserQ = Queued jobs for a single user.
PE    = Processor equivalents based on requested resources (i.e., memory);
        see the sketch below.
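
The PE charge matters mainly for memory-heavy jobs. A minimal sketch of the
usual accounting rule, assuming a per-core memory share of 2 GB (an
illustrative figure, not a published Eos value):

    # PE = max(requested CPUs, requested memory / per-core memory share)
    cpus=1
    mem_gb=16
    mem_per_core_gb=2                     # assumed per-core share
    pe=$(( mem_gb / mem_per_core_gb ))    # memory expressed as core-equivalents
    [ "$pe" -lt "$cpus" ] && pe=$cpus     # charge at least the CPUs requested
    echo "PEs charged: $pe"               # -> PEs charged: 8

Under this rule, a 1-CPU job requesting 16 GB counts as 8 PEs against the
soft/hard PE limits above, even though it occupies a single core.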

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
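
To check these limits and your own jobs against them, the standard PBS
client commands apply (a sketch; exact output formats vary):

    qstat -q            # per-queue summary: limits and running/queued counts
    qstat -u $USER      # only your jobs, with queue and state (R or Q)
    qstat -f <jobid>    # full detail for one job, including requested resources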
