Eos System Status
1359 of 2760 CPUs Active (49.24%)
171 of 321 Nodes Active (53.27%)
98 running jobs, 6 queued jobs
Last updated: 3:00 PM, Sep 21, 2014
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                    Owner     Queue         Walltime  Status  CPUs
1321557  TCPD_2.job                                  adi       long          23:24:00  R          8
1321556  TCPD_1.job                                  adi       long          23:24:45  R          8
1321558  TCPD_3.job                                  adi       long          23:23:18  R          8
1319834  RVE4p2_analysis_100                         aec0780   long          91:50:18  R          2
1319835  RVE4p41_analysis_100                        aec0780   long          91:46:32  R          2
1321855  tmtaaCrCl-EO-CO2-TS.job                     ayeung    long           6:02:45  R          8
1321839  tmtaaCoCl-14DhNO-TS.job                     ayeung    long          14:44:08  R          8
1321840  tmtaaCrCl-EO-CO2-open.job                   ayeung    long          14:42:46  R          8
1321185  tmtaaCoCl-CHO-TS-2.job                      ayeung    long          53:38:23  R          8
1321524  parallel                                    jinbikai  long          27:29:56  R          5
1321294  C_CpiPr_H2dp_TS2_wB-6-SDD_60-2.job          ljszatko  long          48:46:05  R          8
1320195  C_CpiPr_H2dp_TS2_wB-6-SDD_MeCN_55.job       ljszatko  long          77:24:15  R          8
1321293  C_CpiPr_H2dp_TS2_wB-6-SDD_50-11.job         ljszatko  long          48:46:17  R          8
1321300  CpiPr_HH10-3_SC_B3-6-SDD_5.job              ljszatko  long          48:26:54  R          8
1320120  6FR.mode2.job                               mj8741    long          77:41:26  R          8
1321193  model.job                                   moga      long          52:48:41  R         12
1321869  lammps                                      naren     long           4:45:39  R          8
1321968  sigma-complex-wb97xd-redo.job               stewart   long           0:07:19  R          8
1321965  sigma-complex-BMK-redo.job                  stewart   long           0:11:21  R          8
1321903  1^diradical-Re-b3lyp-opt-2.job              tanghao   long           2:32:46  R          8
1321901  1^diradical-Ru-cation-wB97XD-opt.job        tanghao   long           2:39:18  R          8
1321897  1^diradical-Re-wB97XD-opt.job               tanghao   long           2:56:53  R          8
1321907  1^diradical-Ru-cation-wB97XD-opt-2.job      tanghao   long           2:28:59  R          8
1321906  1^diradical-Ru-cation-m06-opt-2.job         tanghao   long           2:29:31  R          8
1321895  1^diradical-Re-m06-opt.job                  tanghao   long           2:59:58  R          8
1321896  1^diradical-Re-m06-opt-1.job                tanghao   long           2:59:52  R          8
1321899  1^diradical-Ru-cation-m06-opt.job           tanghao   long           2:45:20  R          8
1321902  1^diradical-Ru-cation-wB97XD-opt-1.job      tanghao   long           2:39:04  R          8
1321898  1^diradical-Re-wB97XD-opt-1.job             tanghao   long           2:56:46  R          8
1321904  1^diradical-Re-m06-opt-2.job                tanghao   long           2:31:42  R          8
1321900  1^diradical-Ru-cation-m06-opt-1.job         tanghao   long           2:45:37  R          8
1319935  2_ts3_dist4m.job                            tseguin   long          91:51:27  R          8
1319936  2_ts3_dist4m_7.job                          tseguin   long          91:51:31  R          8
1320209  2_ts3_pm_3.job                              tseguin   long          76:41:38  R          8
1320208  2_ts3_pm.job                                tseguin   long          76:44:09  R          8
1320210  2_ts3_pm_4.job                              tseguin   long          76:41:06  R          8
1321204  1_ts3_pm.job                                tseguin   long          52:29:33  R          8
1320806  2_ts3_pm_2.job                              tseguin   long          70:29:17  R          8
1320048  2_ts3_dist4m_4.job                          tseguin   long          81:03:48  R          8
1321826  run_adda_mpi                                xgl1989   long          17:03:29  R         40
1320334  Si001_4x2_ads                               y0m4156   long          74:25:49  R         64
1320056  H2OCO                                       rivera    lucchese      46:00:46  R          8
1320058  H2OCO                                       rivera    lucchese      46:00:51  R          8
1320059  H2OCO                                       rivera    lucchese      46:01:31  R          8
1321823  pyrimidine.1.2.120.job                      anailly   medium         5:36:30  R          8
1321789  pyrimidin-2-one.0.1.300.job                 anailly   medium         5:58:34  R          8
1321786  pyrimidin-2-one.0.0.60.job                  anailly   medium         6:03:32  R          8
1321792  pyrimidin-2-one.0.2.240.job                 anailly   medium         6:07:32  R          8
1321787  pyrimidin-2-one.0.1.0.job                   anailly   medium         6:11:44  R          8
1321781  pyridine.0.2.240.job                        anailly   medium         5:05:49  R          8
1321799  pyrimidin-2-one.1.1.300.job                 anailly   medium         5:03:37  R          8
1321800  pyrimidin-2-one.1.1.60.job                  anailly   medium         4:20:28  R          8
1321704  naphthyridine.0.2.120.job                   anailly   medium        11:35:37  R          8
1321802  pyrimidin-2-one.1.2.240.job                 anailly   medium         4:11:18  R          8
1321788  pyrimidin-2-one.0.1.180.job                 anailly   medium         6:10:53  R          8
1321793  pyrimidin-2-one.1.0.180.job                 anailly   medium         5:52:00  R          8
1321794  pyrimidin-2-one.1.0.240.job                 anailly   medium         5:39:04  R          8
1321795  pyrimidin-2-one.1.0.60.job                  anailly   medium         5:39:06  R          8
1321790  pyrimidin-2-one.0.1.360.job                 anailly   medium         5:49:08  R          8
1321822  pyrimidine.1.1.360.job                      anailly   medium         5:39:35  R          8
1321791  pyrimidin-2-one.0.2.120.job                 anailly   medium         5:47:00  R          8
1321824  pyrimidine.1.2.60.job                       anailly   medium         5:33:31  R          8
1321853  diatom                                      bqsun     medium         6:38:17  R         16
1321845  LiF_LI2EDC_EC                               fso002    medium        13:03:38  R         32
1321850  slab_LiSI_LiEDC_LiCO3_LiEDC                 fso002    medium        12:50:49  R         32
1321848  Li2EDC_LiCO3_Li2EDC                         fso002    medium        12:55:46  R         32
1321847  slab_LiSi_LiCO3_Li2EDC                      fso002    medium        12:58:36  R         32
1321846  LiSi_LiEDC_LiF_LiEDC_EC                     fso002    medium        13:00:41  R         32
1321851  LiFLi2EDCg1                                 fso002    medium        12:47:22  R         32
1321849  slab_LiSi_LiCO3_Li2EDC_EC                   fso002    medium        12:52:55  R         32
1321952  ch4_ROA_test2jpp0_0                         jal805d   medium         0:27:11  R          2
1321879  ccsm3-YD                                    link      medium         3:46:35  R        208
1321910  case.out                                    mei209    medium         1:34:55  R          6
1321912  case.out                                    mei209    medium         1:33:29  R          6
1321913  case.out                                    mei209    medium         1:32:59  R          6
1321911  case.out                                    mei209    medium         1:34:01  R          6
1321920  6piles-C1                                   mojdeh84  medium         1:11:10  R         32
1321921  6piles-C1                                   mojdeh84  medium         1:04:10  R         32
1321941  6piles-C1                                   mojdeh84  medium         0:45:06  R         32
1321945  6piles-C1                                   mojdeh84  medium         0:41:08  R         32
1321925  6piles-C1                                   mojdeh84  medium         0:58:22  R         32
1321909  pucl4w7_aq.job                              naren     medium         1:40:15  R         12
1321868  pucl4w10.job                                naren     medium         4:52:10  R         12
1321878  th4w10cl2G.job                              naren     medium         3:56:56  R         12
1321876  pucl4w10_aq.job                             naren     medium         4:32:32  R         12
1321509  Ni_6ch2cl2_dianion_m06_ccpVTZ_2th_freq.job  haixia    science_lms   30:12:36  R          8
1311254  C_CpiPr_H2dp_TS2_wB-6-SDD_MeCN_51-2.job     ljszatko  science_lms  219:49:29  R          8
1318599  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_SC_3.job      ljszatko  science_lms  124:14:34  R          8
1317320  C_CpiPr_H2dp_TS2_wB-6-SDD_MeCN_52-11.job    ljszatko  science_lms  149:08:44  R          8
1321595  2_ts3_dist4m_10.job                         tseguin   wheeler       21:02:19  R          8
1321951  2_ts3_dist4m_11.job                         tseguin   wheeler        0:31:10  R          8
1321950  2_ts3_dist4m_9.job                          tseguin   wheeler        0:32:17  R          8
1321888  2_ts3_pm3.job                               tseguin   wheeler        3:20:18  R          8
1321877  2_ts3_pm_5.job                              tseguin   wheeler        4:06:57  R          8
1321880  2_ts3_pm_6.job                              tseguin   wheeler        3:40:55  R          8
1321883  2_ts3_dist4m3_3.job                         tseguin   wheeler        3:32:23  R          8
1321886  2_ts3_dist4m3.job                           tseguin   wheeler        3:31:23  R          8
1310167  NOR.job                                     adi       xlong        254:47:21  R          8

Idle Jobs

Job ID   Job Name                       Owner     Queue        Status  CPUs
1312098  honey_comb                     amirsmol  long         Q          1
1317745  honey_comb                     amirsmol  long         Q          1
1318724  ansystest                      linqi     medium       Q          8
1320488  saul_MD                        sperez    medium       Q          8
1298660  Scan_Bra_CP1_S_amide.job       pqxing    science_lms  Q         12
1298661  Scan_Bra_CP1_S_amide_Liph.job  pqxing    science_lms  Q         12

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is first directed to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to below as the public queues.

If no walltime limit is specified in the job script, a default limit of 1 hour applies in all public queues.
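
For reference, here is a minimal job script sketch. The script name, job name, and resource values are illustrative only, assuming the PBS-style directives accepted by the Eos batch system:

#!/bin/bash
#PBS -N example_job          # job name (hypothetical)
#PBS -l nodes=1:ppn=8        # one node, eight cores on that node
#PBS -l walltime=04:00:00    # explicit request; otherwise the 1-hour default applies

cd $PBS_O_WORKDIR            # start in the directory qsub was invoked from
./my_program                 # hypothetical executable

Submitted with "qsub example.job" and no queue option, a job like this would enter the regular queue and, given its 4-hour walltime request, be routed to the medium execution queue under the limits below.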

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
             Nodes Nodes  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0     0
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0     1
medium            Yes      Yes |    41   250     2   500 |   826   844  1600/2048 |  8/50   120
long              Yes      Yes |    41   150     2   250 |   405   411   800/1024 |  8/40    40
xlong             Yes      Yes |     1     2     0    10 |     8     8         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   200
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0     0
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     2     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0     0
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0     0
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0     0
lucchese          Yes      Yes |     3     0     0     0 |    24    24         64 |     0     0
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0     0
science_lms       Yes      Yes |     4     0     2     0 |    32    32        160 |     0     0
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0     0
wheeler           Yes      Yes |     8     0     0     0 |    64    64        256 |     0     0
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16     0

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs per user.
UserQ = Queued jobs per user.
PE    = Processor equivalents based on requested resources (i.e., memory).
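
For illustration (the node figures here are assumed, not Eos's actual
configuration): on a node with 8 cores and 24 GB of memory, a job that
requests 1 core but 12 GB of memory ties up half the node's memory, so
it would be charged max(1, 8 x 12/24) = 4 PEs rather than 1.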

Any queued job that exceeds a queued-job limit (queue-wide or per-user)
is ineligible for scheduling consideration.
