Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
1308 of 2612 CPUs Active (50.08%)
201 of 303 Nodes Active (66.34%)
91 running jobs, 8 queued jobs
Last updated: 4:00 AM, Feb 1, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Owner     Queue         Walltime  Status  CPUs  Job Name
1438778  flc2625   long          35:51:15  R         64  R67-L4F01_SST
1438861  jfrank    long          34:07:42  R          8  elect_NO_Set1B_Cl2.job
1437915  jfrank    long          59:15:49  R          8  elect_NO_Max_Cl2.job
1437432  ljszatko  long          88:37:28  R          8  C_CpiPr_H2dp_P_wB-6-SDD_MeCN_12.job
1437778  ljszatko  long          65:08:28  R          8  CpiPr_parH3_TS_THF-wB-6-SDD_2.job
1438178  ljszatko  long          39:57:37  R          8  C_CptBu_per_TS2_THF-wB-6-SDD_3.job
1438180  ljszatko  long          39:57:33  R          8  C_CptBu_per_TS2_MeCN-wB-6-SDD_3.job
1438185  ljszatko  long          39:45:38  R          8  Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF_14.job
1438186  ljszatko  long          39:45:25  R          8  Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF_15.job
1438177  ljszatko  long          39:57:27  R          8  C_CptBu_per_TS2_THF-B3-6-SDD_3.job
1438176  ljszatko  long          39:57:51  R          8  C_CptBu_per_TS2_wB-6-SDD_4.job
1438179  ljszatko  long          39:57:46  R          8  C_CptBu_per_TS2_MeCN-B3-6-SDD_3.job
1437464  ljszatko  long          74:18:40  R          8  C_CptBu_per_TS2_wB-6-SDD_2.job
1437779  ljszatko  long          65:08:30  R          8  CpiPr_parH3_TS_THF-wB-6-SDD_3.job
1437785  ljszatko  long          65:05:28  R          8  CpiPr_parH3_TS_MeCN-wB-6-SDD_2.job
1437783  ljszatko  long          65:08:33  R          8  CpiPr_parH3_TS_MeCN-wB-6-SDD_3.job
1438153  ljszatko  long          40:42:11  R          8  C_CptBu_m1_B3-6-SDD_MeCN_3.job
1438980  navdeep   long          18:57:50  R         16  Fd1o2
1438974  navdeep   long          19:21:01  R         16  Fd0o2
1438973  navdeep   long          19:27:03  R         16  Ad0o2
1438976  navdeep   long          19:18:15  R         16  Fd0o6
1438975  navdeep   long          19:19:35  R         16  Fd0o4
1438977  navdeep   long          19:16:24  R         16  Fd0o8
1438978  navdeep   long          19:10:57  R         16  Fd1o0
1438971  navdeep   long          19:32:26  R         16  Ad1o8
1438972  navdeep   long          19:31:37  R         16  Ad2o0
1438970  navdeep   long          19:33:22  R         16  Ad1o6
1438969  navdeep   long          19:34:41  R         16  Ad1o4
1438967  navdeep   long          19:38:46  R         16  Ad0o8
1438966  navdeep   long          19:44:55  R         16  Ad0o4
1438090  navdeep   long          46:31:46  R          1  Jd0o6
1437717  navdeep   long          68:59:02  R          1  Jd1o0
1438968  navdeep   long          19:36:05  R         16  Ad1o2
1438102  qlf1582   long          42:55:59  R          1  lsgg_0_mesh3
1438137  qlf1582   long          41:34:14  R          1  lsgg_3_mesh3
1438990  stewart   long          14:10:31  R          8  ts-activation-c-a-k2-bmk-singlet-1-1-redo+gd3bj-cont.job
1438923  stewart   long          32:23:19  R          8  ts-activation-c-e-k2-bmk-singlet-1-1-redo+gd3bj-v-.job
1438205  stewart   long          38:27:20  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1438200  stewart   long          38:32:31  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1438198  stewart   long          38:33:58  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1438195  stewart   long          38:35:57  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1437529  stewart   long          73:34:57  R          8  ts-activation-c2h4-k2-bmk-singlet-2-v+.job
1439020  stewart   long          11:54:24  R          8  ts-activation-c-a-bmk-singlet-redo-redo+gd3bj-v--redo.job
1437419  stewart   long          88:18:40  R          8  ts-activation-c1h1-k2-bmk-singlet-2-v+.job
1438202  stewart   long          38:30:59  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1439002  stewart   long          13:34:21  R          8  ts-migration-12-BMK-freq-redo+gd3bj-redo-redo.job
1438998  stewart   long          13:44:18  R          8  ts-migration-12-iii-k2-bmk-c4c5-singlet-new-cont.job
1438203  stewart   long          38:29:09  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.
1438993  stewart   long          14:05:18  R          8  ts-migration-12-ee-bmk-singlet-redo+gd3bj-cont.job
1437443  tanghao   long          89:54:38  R          8  re1-wb97xd-styrene-S3-down-isomer-ts-aa.job
1439156  rivera    lucchese       6:12:21  R          8  H2OCO
1439159  rivera    lucchese       6:11:47  R          8  H2OCO
1439160  rivera    lucchese       6:12:03  R          8  H2OCO
1439157  rivera    lucchese       6:11:54  R          8  H2OCO
1439158  rivera    lucchese       6:11:49  R          8  H2OCO
1439017  flc2625   medium        11:56:20  R          8  R67-10I-GG-O1
1439019  flc2625   medium        11:54:52  R          8  R67-10I-WENO-O1
1439128  flc2625   medium         7:42:42  R          4  R67-9I-WENO-O2
1439129  flc2625   medium         7:41:51  R          4  R67-9I-GG-O2
1439126  flc2625   medium         7:43:32  R          4  R67-9I-LSQR-O2
1439018  flc2625   medium        11:55:44  R          8  R67-10I-LSQR-O1
1439012  jal805d   medium        12:23:36  R         48  ch4_ROA_E4.35
1439013  jal805d   medium        12:22:17  R         48  ch4_ROA_E15.25
1439032  qying     medium        11:25:18  R         40  CHSOA_7_1
1439034  qying     medium        11:21:26  R         40  CHSOA_8_1
1439036  qying     medium        11:17:47  R         40  CHSOA_9_1
1439029  qying     medium        11:33:29  R         40  CHSOA_6_1
1439038  qying     medium        11:14:40  R         40  CHSOA_10_1
1439040  qying     medium        11:10:59  R         40  CHSOA_11_1
1439042  qying     medium        11:08:23  R         40  CHSOA_12_1
1439203  robbin89  medium         3:38:15  R        160  LVFZtest
1439186  tcsdcn    medium         5:03:44  R         16  cmaq_test
1439193  tcsdcn    medium         4:31:28  R         16  cmaq_may1
1439131  tcsdcn    medium         7:24:01  R          1  smk_rdn_2
1439130  tcsdcn    medium         7:24:12  R          1  smk_rdn1
1439132  tcsdcn    medium         7:23:50  R          1  smk_rdn3
1439167  tcsdcn    medium         5:54:14  R          1  smk_rdr
1436804  ljszatko  science_lms  136:23:48  R          8  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-12.job
1437436  ljszatko  science_lms   90:22:27  R          8  CpiPr_parH3_TS_wB-6-SDD.job
1436803  ljszatko  science_lms  136:23:56  R          8  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-11.job
1438158  rivera    science_lms   40:40:30  R          8  H2OCO
1438157  rivera    science_lms   40:40:48  R          8  H2OCO
1438156  rivera    science_lms   40:40:41  R          8  H2OCO
1438154  rivera    science_lms   40:41:10  R          8  H2OCO
1438155  rivera    science_lms   40:40:38  R          8  H2OCO
1434657  stewart   science_lms  323:11:39  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-2.job
1435268  stewart   science_lms  258:08:32  R          8  ts-migration-12-iii-bmk-c4c5-singlet-redo-cont-cont.job
1436801  stewart   science_lms  136:27:33  R          8  ts-activation-c5h9-k2-bmk-singlet-2-redo-3.job
1438092  stewart   science_lms   43:22:50  R          8  ts-migration-13-k2-bmk-c4c4-singlet-redo-redo-cont-redo.job
1436417  stewart   science_lms  179:55:23  R          8  ts-activation-c3h6-bmk-singlet-1-1-redo-redo-cont-cont.job
1438174  jhenny    tamug         40:02:07  R         16  deuteroMb2.job

Idle Jobs

Job ID   Owner     Queue        Status  CPUs  Job Name
1438057  colinhjx  medium       Q          1  3D_job
1439043  qying     medium       H         40  CHSOA_12_2
1439041  qying     medium       H         40  CHSOA_11_2
1439039  qying     medium       H         40  CHSOA_10_2
1439037  qying     medium       H         40  CHSOA_9_2
1439035  qying     medium       H         40  CHSOA_8_2
1439033  qying     medium       H         40  CHSOA_7_2
1439031  qying     medium       H         40  CHSOA_6_2

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
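
A job script normally requests only resources, not a queue. The sketch below is a generic PBS/Torque-style script for illustration; the job name, resource values, and application command are placeholders, not Eos-specific recommendations:

    #!/bin/bash
    #PBS -N example_job             # job name shown in the queue listing
    #PBS -l nodes=1:ppn=8           # one node, eight cores
    #PBS -l walltime=04:00:00       # without this line the 1-hour default applies
    #PBS -l mem=16gb                # memory request (factors into the PE count)
    #PBS -j oe                      # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR               # run from the directory the job was submitted from
    ./my_application input.dat      # placeholder executable and input file

Submitted with qsub, a script like this is routed to short, medium, or long according to the nodes, CPUs, and walltime it requests (see the per-job limits below).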

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    22   250     1   500 |   608   904  1600/2048 | 8/120   100
long              Yes      Yes |    50   200     0   250 |   540   542  1024/2048 | 8/120   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    13     0     0     0 |   104   104        160 |     0   100
tamug             Yes      Yes |     1     0     0     0 |    16    16        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory); see the example below.
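
As an illustration of how PE is computed (the 3 GB-per-core figure is an assumed
node characteristic, not an Eos specification), a job requesting 8 CPUs and 96 GB
of memory is charged by whichever request is larger in processor terms:

    PE = max(8 CPUs, 96 GB / 3 GB per core) = max(8, 32) = 32 PEs

so a memory-heavy job counts toward the PE limits as if it used more processors
than it actually requested.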

Any queued job that exceeds a queued-job limit (queue-wide or per-user) will be
ineligible for scheduling consideration.
