Eos System Status
1029 of 2484 CPUs Active (41.43%)
140 of 287 Nodes Active (48.78%)
100 running jobs, 37 queued jobs
Last updated: 11:10 AM, May 5, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name  Owner  Queue  Walltime  Status  CPUs
1513004  TCPD_CPexo_exoDCPD.job  adi  long  42:12:18  R  8
1513002  TCPD_CPendo_exoDCPD.job  adi  long  42:12:27  R  8
1513003  TCPD_CPexo_endoDCPD.job  adi  long  42:12:27  R  8
1513001  TCPD_CPendo_endoDCPD.job  adi  long  42:13:27  R  8
1514021  p1  biswas85  long  3:32:17  R  8
1513990  05Rx323e5  bodetoks  long  12:24:14  R  8
1514062  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_16-r.job  ljszatko  long  0:26:41  R  8
1514061  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_17-r.job  ljszatko  long  0:27:17  R  8
1513296  C_CpiPr_H2dp_TS2_wB-6-SDD_82-r3.job  ljszatko  long  24:21:51  R  8
1512415  C_CpiPr_H2dp_TS2_wB-6-SDD_82-r2.job  ljszatko  long  95:05:30  R  8
1513301  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_15-r.job  ljszatko  long  24:09:31  R  8
1513299  C_CpiPr_H2dp_TS2_wB-6-SDD_83.job  ljszatko  long  24:19:26  R  8
1513498  mar_hardening_40_Xmax4  marouen  long  21:00:21  R  1
1513512  mar_E_1_Xmax4_Speed1  marouen  long  20:38:01  R  1
1513521  mar_softening_150_Xmax4  marouen  long  20:33:15  R  1
1513511  mar_E_1_Xmax4_Speed20  marouen  long  20:38:30  R  1
1511640  converge-udf.txt  mashikhe  long  1:20:35  R  24
1512521  C_1_1_C_1_LL  mmh1874  long  90:37:44  R  1
1512719  p3nicfcf3-poipr3-singlet-redo.job  stewart  long  70:12:21  R  8
1514054  ts-cyclation-pme3-singlet.job  stewart  long  0:47:56  R  8
1512722  p3nicf2-poipr3-singlet-redo.job  stewart  long  70:11:34  R  8
1512560  p2nicfcf3cf2cf2-poipr3-triplet.job  stewart  long  89:05:11  R  8
1512685  p3nicfcf3-poipr3-singlet-cont.job  stewart  long  71:37:51  R  8
1512345  M46-05n400Kterm  panp  long  30:10:08  R  5
1512333  M46-05n375Kterm  panp  long  32:46:06  R  5
1512302  M46-05n350Kterm  panp  long  0:29:43  R  4
1512335  M46-05n375Kterm  panp  long  32:46:48  R  5
1512349  M46-05n400Kterm  panp  long  27:55:24  R  5
1512343  M46-05n400Kterm  panp  long  31:28:52  R  5
1512347  M46-05n400Kterm  panp  long  29:02:38  R  5
1512341  M46-05n400Kterm  panp  long  32:46:52  R  5
1512274  M46-05n300Kterm  panp  long  17:05:17  R  4
1512331  M46-05n375Kterm  panp  long  32:47:38  R  5
1512329  M46-05n375Kterm  panp  long  32:46:52  R  5
1512284  M46-05n325Kterm  panp  long  11:17:37  R  4
1512282  M46-05n325Kterm  panp  long  11:22:03  R  4
1512280  M46-05n325Kterm  panp  long  12:00:25  R  4
1512278  M46-05n300Kterm  panp  long  12:23:24  R  4
1512276  M46-05n300Kterm  panp  long  15:52:49  R  4
1512272  M46-05n300Kterm  panp  long  20:27:21  R  4
1512270  M46-05n300Kterm  panp  long  20:47:45  R  4
1512268  M46-05n300Kterm  panp  long  23:26:47  R  4
1512266  M46-05n300Kterm  panp  long  23:46:07  R  4
1512264  M46-05n300Kterm  panp  long  26:59:55  R  4
1512288  M46-05n325Kterm  panp  long  10:33:30  R  4
1512290  M46-05n325Kterm  panp  long  9:28:02  R  4
1512292  M46-05n325Kterm  panp  long  5:00:19  R  4
1512327  M46-05n375Kterm  panp  long  58:18:03  R  5
1512339  M46-05n400Kterm  panp  long  58:20:16  R  5
1512325  M46-05n375Kterm  panp  long  58:21:41  R  5
1512321  M46-05n375Kterm  panp  long  62:41:31  R  5
1512319  M46-05n375Kterm  panp  long  62:48:55  R  5
1512317  M46-05n350Kterm  panp  long  64:31:49  R  5
1512300  M46-05n350Kterm  panp  long  0:41:56  R  4
1512298  M46-05n325Kterm  panp  long  0:57:17  R  4
1512296  M46-05n325Kterm  panp  long  2:24:10  R  4
1512294  M46-05n325Kterm  panp  long  4:49:08  R  4
1512323  M46-05n375Kterm  panp  long  59:57:03  R  5
1512351  M46-05n400Kterm  panp  long  27:51:08  R  5
1512353  M46-05n400Kterm  panp  long  60:04:53  R  5
1512355  M46-05n400Kterm  panp  long  27:31:40  R  5
1512357  M46-05n400Kterm  panp  long  27:25:30  R  5
1512286  M46-05n325Kterm  panp  long  10:38:13  R  4
1513142  cluster_8-2.job  zhuyj  long  38:18:52  R  8
1513867  YSZ_GGA_Final  bakgenc  medium  18:40:22  R  32
1513866  YSZ_GGA  bakgenc  medium  18:41:32  R  32
1514042  IV_vo_MN12SX_DGDZVP2.job  haixia  medium  1:29:02  R  8
1514036  IV_T_MN12SX_DGDZVP2.job  haixia  medium  1:39:07  R  8
1514088  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan10.job  haixia  medium  0:03:03  R  8
1514035  IV_T_M06_DGDZVP2.job  haixia  medium  1:39:25  R  8
1514086  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan9.job  haixia  medium  0:04:39  R  8
1514030  I_o_MN12SX_ccpVTZ_DZ_DGDZVP2.job  haixia  medium  1:55:52  R  8
1514080  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan3.job  haixia  medium  0:09:32  R  8
1514082  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan5.job  haixia  medium  0:07:44  R  8
1514083  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan6.job  haixia  medium  0:07:25  R  8
1514084  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan7.job  haixia  medium  0:06:10  R  8
1514085  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan8.job  haixia  medium  0:05:40  R  8
1514081  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan4.job  haixia  medium  0:07:44  R  8
1514077  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan1.job  haixia  medium  0:10:07  R  8
1514026  I_MN12SX_ccpVTZ_DZ_DGDZVP2_stability.job  haixia  medium  2:02:36  R  8
1514079  Co3_cis_c6h5_B_wb97xd_ccpvdz_lanl2dz_scan2.job  haixia  medium  0:09:45  R  8
1514022  Co3_cis_c6h5_B_ts_elimination_wb97xd_ccpvdz_lanl2dz_5th.job  haixia  medium  2:15:43  R  8
1514029  I_o_M06_ccpVTZ_DZ_DGDZVP2.job  haixia  medium  1:56:14  R  8
1514068  daily_pi  jh11ae  medium  0:21:35  R  256
1514069  thien1D_matlab_NiTi_unnecessary_comments_deleted1  kubra87  medium  0:20:16  R  8
1514071  sssp_delta_64  nkj255  medium  0:14:24  R  64
1514058  cmaq_12km_apr  tcsdcn  medium  0:31:49  R  36
1514057  cmaq_12km_may  tcsdcn  medium  0:32:57  R  16
1514008  fluentjob  ycwang  medium  11:16:50  R  4
1514007  fluentjob  ycwang  medium  11:17:39  R  4
1514024  Co3_cis_c6h5_B_ts_elimination_wb97xd_ccpvdz_lanl2dz_6th.job  haixia  science_lms  2:09:28  R  8
1514031  I_T_M06_DGDZVP2.job  haixia  science_lms  1:47:21  R  8
1514032  I_T_MN12SX_DGDZVP2.job  haixia  science_lms  1:47:54  R  8
1514034  I_o_Q_MN12SX_DGDZVP2.job  haixia  science_lms  1:45:52  R  8
1514033  I_o_Q_M06_DGDZVP2.job  haixia  science_lms  1:45:53  R  8
1514067  pinfish  fenghuo  short  0:00:54  R  16
1514038  q_3_5_small.job  tseguin  wheeler  1:34:01  R  8
1514039  q_3_4_small.job  tseguin  wheeler  1:33:55  R  8
1514044  q_5_r_small.job  tseguin  wheeler  1:20:16  R  8
1514046  q_6_r_small.job  tseguin  wheeler  1:17:15  R  8

Idle Jobs

Job ID   Job Name  Owner  Queue  Status  CPUs
1488396  vivtest1  chetna  long  Q  1
1488413  vivtest1  chetna  long  Q  1
1488441  vivtest1  chetna  long  Q  1
1508931  TL4_NCHRP_22_20_2  lsmaddah  long  Q  16
1512324  M46-05n375Kterm  panp  long  Q  4
1512322  M46-05n375Kterm  panp  long  Q  4
1512320  M46-05n375Kterm  panp  long  Q  4
1512318  M46-05n350Kterm  panp  long  Q  4
1512316  M46-05n350Kterm  panp  long  Q  4
1512314  M46-05n350Kterm  panp  long  Q  4
1512312  M46-05n350Kterm  panp  long  Q  4
1512310  M46-05n350Kterm  panp  long  Q  4
1512308  M46-05n350Kterm  panp  long  Q  4
1512306  M46-05n350Kterm  panp  long  Q  4
1512304  M46-05n350Kterm  panp  long  Q  4
1512326  M46-05n375Kterm  panp  long  Q  4
1512328  M46-05n375Kterm  panp  long  Q  4
1512356  M46-05n400Kterm  panp  long  Q  4
1512354  M46-05n400Kterm  panp  long  Q  4
1512352  M46-05n400Kterm  panp  long  Q  4
1512350  M46-05n400Kterm  panp  long  Q  4
1512348  M46-05n400Kterm  panp  long  Q  4
1512346  M46-05n400Kterm  panp  long  Q  4
1512344  M46-05n400Kterm  panp  long  Q  4
1512342  M46-05n400Kterm  panp  long  Q  4
1512340  M46-05n400Kterm  panp  long  Q  4
1512330  M46-05n375Kterm  panp  long  Q  4
1512358  M46-05n400Kterm  panp  long  Q  4
1512332  M46-05n375Kterm  panp  long  Q  4
1512334  M46-05n375Kterm  panp  long  Q  4
1512336  M46-05n375Kterm  panp  long  Q  4
1512338  M46-05n375Kterm  panp  long  Q  4
1507571  starccm+  nnied  medium  Q  8
1507536  starccm+  nnied  medium  Q  8
1507624  yohascript.job  yohannes  medium  Q  1
1507626  yohascript.job  yohannes  short  Q  12
1491284  seq1  zjh08177  short  Q  1

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue; the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

If no walltime limit is specified in the job script, a default limit of 1 hour applies in all public queues.
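For example, a minimal job script for this setup might look as follows. This is a sketch assuming the standard PBS/TORQUE directive syntax; the job name, resource values, and executable are illustrative, not taken from an actual Eos job:

```shell
#!/bin/bash
## Job name (illustrative).
#PBS -N example_job
## One node, 8 cores per node (illustrative request).
#PBS -l nodes=1:ppn=8
## Explicit walltime; without this line the 1-hour default applies.
#PBS -l walltime=24:00:00

## Note there is no "#PBS -q" directive: the job enters the regular
## queue and is routed to a public execution queue (short, medium,
## or long) based on the requests above.

cd "$PBS_O_WORKDIR"
./my_program   ## placeholder executable
```

Submitted with `qsub script.sh`; `qstat -u $USER` then shows which execution queue the scheduler chose.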

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     1     1     2     4 |    16    16       1024 |     0    10
medium            Yes      Yes |    26   250     3   500 |   588   588  1600/2048 |  8/50   100
long              Yes      Yes |    64   200    32   250 |   353   453   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     5     0     0     0 |    40    40        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     4     0     0     0 |    32    32        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs per user.
UserQ = Queued jobs per user.
PE    = Processor equivalent, based on requested resources (i.e., memory).

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
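To make the PE definition concrete, here is a small sketch of the common Moab-style accounting, in which a job's memory request is converted into an equivalent number of cores and the job is charged for whichever resource dominates. The node_cores and node_mem_gb values are illustrative assumptions, not the actual Eos node configuration:

```python
import math

def processor_equivalents(cpus_requested, mem_requested_gb,
                          node_cores=8, node_mem_gb=22):
    """Estimate PEs for a job: the dominant resource request,
    expressed in units of processors.

    node_cores and node_mem_gb are illustrative values, not the
    actual Eos node configuration.
    """
    mem_per_core = node_mem_gb / node_cores  # GB available per core
    # Memory demand converted into an equivalent number of cores.
    mem_as_cores = math.ceil(mem_requested_gb / mem_per_core)
    # The job is charged for whichever request is larger.
    return max(cpus_requested, mem_as_cores)
```

Under these assumed node values (2.75 GB per core), a job requesting 4 cores and 16 GB of memory is charged max(4, ceil(16 / 2.75)) = 6 PEs, which is why a queue's PE count can exceed its CPU count in the table above.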
