Eos System Status
721 of 2608 CPUs Active (27.65%)
91 of 302 Nodes Active (30.13%)
90 running jobs, 2 queued jobs
Last updated: 8:30 AM, Jul 4, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Owner     Queue         Walltime  Status  CPUs  Job Name
1549772  adi       long          24:54:59  R          8  STDIN
1549279  ljszatko  long          69:55:29  R          8  Ni-iBC-Me_wB-6311_DMF_15-4.job
1549807  ljszatko  long          22:33:36  R          8  Ni-Pi-HS_AN-MeCl_TS_wB-6-311_DMF_22.job
1549814  ljszatko  long          22:30:58  R          8  Ni-Pi-HS_AN-MeCl_TS_wB-6-311_DMF_19-F_r.job
1549789  ljszatko  long          22:37:51  R          8  Ni-iBC-Me-HS_wB-6311_DMF_10-F.job
1549830  ljszatko  long          22:00:25  R          8  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_14-F.job
1549278  ljszatko  long          69:55:43  R          8  Ni-iBC-Me_wB-6311_DMF_15-3.job
1549835  ljszatko  long          21:56:05  R          8  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_15.job
1549829  ljszatko  long          21:59:45  R          8  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_14.job
1549806  ljszatko  long          22:34:19  R          8  Ni-Pi-HS_AN-MeCl_TS_wB-6-311_DMF_21.job
1549782  ljszatko  long          22:46:44  R          8  Ni-iBC-Me-HS_wB-6311_DMF_10.job
1549670  stewart   long          41:28:05  R          8  ts-activation-c3-k2-bmk-singlet-4-redo-redo-cont.job
1549544  stewart   long          47:32:55  R          8  ts-activation-k2-ortho-bmk-singlet-new-cont-3.job
1549420  stewart   long          65:01:18  R          8  ts-activation-c1-k2-bmk-singlet-2-redo-cont-arenz-cont.job
1550013  stewart   long          14:30:01  R          8  ts-activation-c3-bmk-singlet-redo-redo-cont-v-.job
1550012  stewart   long          14:32:57  R          8  ts-activation-c3-bmk-singlet-redo-redo-cont-redo-arenz.job
1548930  stewart   long          95:30:37  R          8  ts-activation-k2-meta-bmk-singlet-1-2.job
1549043  stewart   long          93:17:33  R          8  ts-activation-ortho-bmk-singlet-new-4-2-2-cont.job
1548932  stewart   long          95:26:03  R          8  ts-activation-k2-para-bmk-singlet-1-2.job
1549130  stewart   long          89:35:11  R          8  ts-activation-c1-bmk-singlet-2-redo-arenz-cont.job
1549132  stewart   long          89:35:08  R          8  ts-activation-c2-bmk-singlet-redo-arenz-cont.job
1549133  stewart   long          89:34:15  R          8  ts-activation-c4-bmk-singlet-cont-2-redo-arenz-cont.job
1549289  stewart   long          69:41:04  R          8  ts-activation-c2-k2-bmk-singlet-6-redo-cont-cont-v-.job
1550014  stewart   long          14:29:13  R          8  ts-activation-c3-bmk-singlet-redo-redo-cont-v+.job
1549288  stewart   long          69:43:52  R          8  ts-activation-c2-k2-bmk-singlet-6-redo-cont-cont-redo-arenz.job
1549816  tanghao   long          22:30:36  R          8  4^Re-cation-m06-sdd-f-opt-freq.job
1549905  tanghao   long          19:00:35  R          8  3^Re-neutral-b3lyp-sdd-f-opt-freq.job
1549903  tanghao   long          19:00:26  R          8  2^Re-cation-b3lyp-sdd-f-opt-freq.job
1549778  tanghao   long          23:23:02  R          8  3^Re0_c2h4_wb97xd_sdd_f_scan-ts-4.job
1549811  tanghao   long          22:31:14  R          8  1^Re-neutral-m06-sdd-f-opt-freq.job
1549888  tanghao   long          19:17:34  R          8  1^Re-dication-wb97xd-sdd-f-opt-freq.job
1549812  tanghao   long          22:30:50  R          8  2^Re-cation-m06-sdd-f-opt-freq.job
1549893  tanghao   long          19:15:59  R          8  4^Re-cation-wb97xd-sdd-f-opt-freq.job
1549892  tanghao   long          19:16:04  R          8  3^Re-neutral-wb97xd-sdd-f-opt-freq.job
1549891  tanghao   long          19:15:51  R          8  3^Re-dication-wb97xd-sdd-f-opt-freq.job
1549890  tanghao   long          19:16:12  R          8  2^Re-cation-wb97xd-sdd-f-opt-freq.job
1549813  tanghao   long          22:30:43  R          8  3^Re-dication-m06-sdd-f-opt-freq.job
1549810  tanghao   long          22:30:39  R          8  1^Re-dication-m06-sdd-f-opt-freq.job
1549889  tanghao   long          19:17:02  R          8  1^Re-neutral-wb97xd-sdd-f-opt-freq.job
1549906  tanghao   long          19:00:36  R          8  4^Re-cation-b3lyp-sdd-f-opt-freq.job
1549912  tanghao   long          18:19:29  R          8  3^Re-dication-B97D-sdd-f-opt-freq.job
1549917  tanghao   long          18:02:53  R          8  2^Re-cation-BHandHLYP-sdd-f-opt-freq.job
1549981  tanghao   long           2:21:23  R          8  3^Re-dication-tpssh-sdd-f-opt-freq.job
1549980  tanghao   long           2:39:44  R          8  2^Re-cation-tpssh-sdd-f-opt-freq.job
1549777  tanghao   long          23:33:40  R          8  3^Re0_c2h4_m06_sdd_f_ts.job
1549978  tanghao   long           6:41:38  R          8  1^Re-dication-tpssh-sdd-f-opt-freq.job
1549982  tanghao   long           1:54:42  R          8  3^Re-neutral-tpssh-sdd-f-opt-freq.job
1549983  tanghao   long           1:54:03  R          8  4^Re-cation-tpssh-sdd-f-opt-freq.job
1550000  tanghao   long           0:25:58  R          8  3^Re-dication-b3lyp-sdd-f-opt-freq.job
1549411  tanghao   long          66:58:27  R          8  3^ru1-m06-C2H4-SDD-F-ts.job
1549815  tanghao   long          22:30:58  R          8  3^Re-neutral-m06-sdd-f-opt-freq.job
1549776  tanghao   long          23:43:15  R          8  1^ru1-m06-C2H4-SDD-F-P-from-ts.job
1549945  tanghao   long           7:33:12  R          8  1^Re-dication-mn12sx-sdd-f-opt-freq.job
1549935  tanghao   long          12:51:50  R          8  4^Re-cation-BMK-sdd-f-opt-freq.job
1549918  tanghao   long          18:03:00  R          8  3^Re-dication-BHandHLYP-sdd-f-opt-freq.job
1549919  tanghao   long          18:02:47  R          8  3^Re-neutral-BHandHLYP-sdd-f-opt-freq.job
1549920  tanghao   long          18:02:33  R          8  4^Re-cation-BHandHLYP-sdd-f-opt-freq.job
1549921  tanghao   long          18:01:22  R          8  1^Re-dication-BHandHLYP-sdd-f-opt-freq.job
1549930  tanghao   long          17:51:40  R          8  1^Re-dication-BMK-sdd-f-opt-freq.job
1549933  tanghao   long          17:51:19  R          8  3^Re-dication-BMK-sdd-f-opt-freq.job
1549932  tanghao   long          17:51:24  R          8  2^Re-cation-BMK-sdd-f-opt-freq.job
1549931  tanghao   long          17:51:30  R          8  1^Re-neutral-BMK-sdd-f-opt-freq.job
1549934  tanghao   long          16:23:37  R          8  3^Re-neutral-BMK-sdd-f-opt-freq.job
1549979  tanghao   long           6:14:12  R          8  1^Re-neutral-tpssh-sdd-f-opt-freq.job
1550061  marouen   medium         0:41:03  R          8  sh1_multilayer65
1550062  marouen   medium         0:34:03  R          8  sh1_multilayer15
1550151  mrajabi   medium         0:00:13  R          8  case88
1550159  mrajabi   medium         0:00:00  R          8  case96
1550152  mrajabi   medium         0:00:09  R          8  case89
1550153  mrajabi   medium         0:00:00  R          8  case90
1550158  mrajabi   medium         0:00:00  R          8  case95
1550154  mrajabi   medium         0:00:00  R          8  case91
1550155  mrajabi   medium         0:00:00  R          8  case92
1550156  mrajabi   medium         0:00:00  R          8  case93
1550157  mrajabi   medium         0:00:00  R          8  case94
1550150  mrajabi   medium         0:00:31  R          8  case87
1549871  naren     medium        20:47:13  R         12  Mo16S32_DBT_A.job
1549869  naren     medium        20:52:03  R         12  Mo16S35_H2_B.job
1549276  ljszatko  science_lms   70:03:12  R          8  C_CpiPr_H2dp_TS2_wB-6-SDD_92-r3.job
1546244  stewart   science_lms  285:09:04  R          8  ts-activation-c-a-bmk-singlet-cont-redo.job
1546245  stewart   science_lms  285:05:18  R          8  ts-activation-c-e-bmk-singlet-cont-redo.job
1547800  stewart   science_lms  161:36:13  R          8  ts-migration-12-ae-bmk-singlet.job
1548330  stewart   science_lms  140:52:33  R          8  ts-activation-c-a-bmk-singlet-cont-cont.job
1548331  stewart   science_lms  140:49:43  R          8  ts-activation-c-e-bmk-singlet-cont-cont.job
1549946  tanghao   science_lms   17:32:16  R          8  1^Re-neutral-mn12sx-sdd-f-opt-freq.job
1549948  tanghao   science_lms   17:31:53  R          8  3^Re-dication-mn12sx-sdd-f-opt-freq.job
1549947  tanghao   science_lms   17:31:45  R          8  2^Re-cation-mn12sx-sdd-f-opt-freq.job
1549950  tanghao   science_lms   17:31:39  R          8  4^Re-cation-mn12sx-sdd-f-opt-freq.job
1549949  tanghao   science_lms   17:31:20  R          8  3^Re-neutral-mn12sx-sdd-f-opt-freq.job
1550063  marouen   vnc            0:32:07  R          1  portal-vnc

Idle Jobs

Job ID   Owner     Queue        Status  CPUs  Job Name
1549658  mira0501  long         Q          8  test
1550149  mrajabi   medium       C          8  case86

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue; the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
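
As an illustration, a minimal PBS-style job script might look like the sketch
below; the job name, resource values, and executable are placeholders, not
taken from this page:

    #!/bin/bash
    #PBS -N example_job           # job name as it appears in the job listings above
    #PBS -l nodes=1:ppn=8         # 1 node, 8 cores per node
    #PBS -l walltime=12:00:00     # 12 hours; omitting this line leaves the 1-hour default
    #PBS -l mem=20gb              # memory request; feeds the PE (processor equivalent) accounting below
    #PBS -j oe                    # merge stdout and stderr into one output file

    cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
    ./my_calculation              # placeholder for the actual executable

Because no -q option is given, the job enters the regular queue and, with a
12-hour walltime request, would be routed to medium under the limits below.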

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    15   250     0   500 |   128   128  1600/2048 |  8/50   100
long              Yes      Yes |    64   200     1   250 |   512   512   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     1     0     0    16 |     1     1         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    11     0     0     0 |    88    88        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = running jobs.
QJob  = queued jobs.
UserR = running jobs for a user.
UserQ = queued jobs for a user.
PE    = processor equivalent, based on requested resources (i.e., memory); for
        example, a job requesting all of a node's memory is charged for all of
        that node's processors, even if it asked for fewer cores.

Any queued job exceeding a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
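
The current counts and limits above can be checked from a login node with the
stock Torque client tools (a sketch, assuming no site-specific wrappers):

    qstat -q          # per-queue summary: running/queued totals and limits
    qstat -u $USER    # your own jobs, with state (R = running, Q = queued, C = completed)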

    91 Active Jobs     729 of 2608 Processors Active (27.95%)
                        91 of  302 Nodes Active      (30.13%)