Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
1236 of 2612 CPUs Active (47.32%)
154 of 303 Nodes Active (50.83%)
67 running jobs, 1 queued job
Last updated: 1:20PM Dec 21, 2014
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                          Owner     Queue         Walltime  Status  CPUs
1429980  Khoa                                              khoa      gpu            2:20:31  R         12
1429966  C3_y_perm                                         chi       long          13:24:35  R         32
1429965  C3_x_perm                                         chi       long          13:31:31  R         32
1429799  lammps                                            krm       long          42:12:45  R          8
1429015  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-6q.job           ljszatko  long          47:02:44  R          8
1429030  Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF_6q.job           ljszatko  long          47:02:50  R          8
1429029  Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF_6.job            ljszatko  long          47:03:17  R          8
1429014  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-6.job            ljszatko  long          47:02:30  R          8
1429951  3q_synsyn_basic.job                               org1syn   long          20:43:02  R          8
1429978  3f_basic_synanti1.job                             org1syn   long           2:55:29  R          8
1429979  3f_trial2_basic_synanti1.job                      org1syn   long           2:53:06  R          8
1429939  ts-migration-12-ae-bmk-singlet.job                stewart   long          21:17:53  R          8
1429948  ts-activation-scan-c3h5-bmk-singlet.job           stewart   long          20:56:12  R          8
1429926  ts-migration-13-bmk-singlet.job                   stewart   long          22:05:24  R          8
1429925  ts-migration-12-ee-bmk-singlet.job                stewart   long          22:09:11  R          8
1429937  ts-migration-12-ie-mn12sx-c1-bmk.job              stewart   long          21:34:01  R          8
1429915  ts-activation-c-a-k2-bmk-singlet-1-1.job          stewart   long          22:55:59  R          8
1429916  ts-activation-c-e-k2-bmk-singlet-1-1.job          stewart   long          22:51:28  R          8
1419454  ts-migration-11-k2-bmk-singlet.job                stewart   long          74:48:47  R          8
1429016  ts-migration-12-iii-bmk-c2c3.job                  stewart   long          47:02:31  R          8
1429560  ts-migration-13-bmk-c2c2.job                      stewart   long          47:00:48  R          8
1429722  ts-migration-13-bmk-c3c5.job                      stewart   long          46:52:47  R          8
1429743  ts-activation-c-e-bmk-singlet-redo-2-v+-redo.job  stewart   long          45:45:34  R          8
1429892  ts-activation-c1h1-bmk-singlet-1-1.job            stewart   long          24:03:51  R          8
1429908  ts-migration-12-ie-bmk-c2c3.job                   stewart   long          23:26:44  R          8
1429909  ts-migration-12-ie-scan-bmk-c4c5-redo.job         stewart   long          23:22:18  R          8
1429924  ts-migration-11-bmk-singlet.job                   stewart   long          22:13:07  R          8
1419451  ts-migration-11-bmk-singlet.job                   stewart   long          74:51:49  R          8
1429906  ts-activation-bmk-c4h7-v--redo.job                stewart   long          23:39:56  R          8
1429997  ts-migration-12-ii-bmk-c2-redo-cont.job           stewart   long           0:01:52  R          8
1429996  ts-migration-12-ii-bmk-c2-cont.job                stewart   long           0:04:50  R          8
1429998  ts-migration-12-ii-bmk-c2-redo-1-cont.job         stewart   long           0:00:00  R          8
1429723  ts-migration-13-bmk-c4c4.job                      stewart   long          46:48:43  R          8
1429981  re0_ch2_PBE0_ts_sp.job                            tanghao   long           0:27:59  R          8
1429891  re2_ch2_B97D_ts_sp-1.job                          tanghao   long          24:09:29  R          8
1429898  re1_ch2_PBE0_ts_opt_freq.job                      tanghao   long          23:57:24  R          8
1429949  re1-C2H4-wb97xd-styrene-S2-a-ts.job               tanghao   long          20:53:04  R          8
1429738  re1-C2H4-wb97xd-styrene-S3-scan-ts-1.job          tanghao   long          46:04:57  R          8
1429984  re2_ch2_B3PW91_ts_sp.job                          tanghao   long           0:23:09  R          8
1429913  re1_ch2_B3PW91_ts_opt_freq.job                    tanghao   long          23:04:21  R          8
1429983  re0_ch2_B3PW91_ts_sp.job                          tanghao   long           0:23:25  R          8
1429982  re2_ch2_PBE0_ts_sp.job                            tanghao   long           0:27:39  R          8
1429967  run_fractal_80.0                                  xgl1989   long          13:23:09  R         40
1429968  run_fractal_80.0_1.0                              xgl1989   long          13:22:10  R         40
1429922  run_fractal_70.0_1.0                              xgl1989   long          22:24:55  R         40
1429920  run_fractal_70.0                                  xgl1989   long          22:26:46  R         40
1429992  H2OCO                                             rivera    lucchese       0:14:03  R          8
1429995  H2OCO                                             rivera    lucchese       0:13:59  R          8
1429994  H2OCO                                             rivera    lucchese       0:13:58  R          8
1429993  H2OCO                                             rivera    lucchese       0:14:03  R          8
1429991  H2OCO                                             rivera    lucchese       0:14:13  R          8
1429976  PHN-relion2                                       junjiez   medium         4:42:07  R        512
1421823  CptBu_TS-rev_per_B3-6-SDD_QST3.job                ljszatko  science_lms   70:04:32  R          8
1403029  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-3.job            ljszatko  science_lms  146:29:44  R          8
1429720  CpiPr_H2-rot_SC_B3-6-SDD_4.job                    ljszatko  science_lms   47:00:07  R          8
1403030  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-4.job            ljszatko  science_lms  146:28:53  R          8
1429990  H2OCO                                             rivera    science_lms    0:14:28  R          8
1429986  H2OCO                                             rivera    science_lms    0:14:37  R          8
1429988  H2OCO                                             rivera    science_lms    0:14:36  R          8
1429987  H2OCO                                             rivera    science_lms    0:14:22  R          8
1429989  H2OCO                                             rivera    science_lms    0:14:22  R          8
1429894  ts-activation-c1h1-bmk-singlet-1-redo.job         stewart   science_lms   24:01:45  R          8
1401178  tp-rhcnneo-mn12sx-triplet-correct-redo.job        stewart   science_lms  291:49:16  R          8
1411217  tp-rhcnneo-bmk-singlet-triplet-correct-redo.job   stewart   science_lms   94:47:12  R          8
1411218  tp-rhcnneo-mn12sx-triplet-correct-redo-1.job      stewart   science_lms   94:44:12  R          8
1429806  protoMb_Nbound.job                                jhenny    tamug         40:16:05  R         16
1429975  protoMb_Nbound.job                                jhenny    tamug          9:51:10  R         16

Idle Jobs

Job ID   Job Name   Owner  Queue   Status  CPUs
1411091  CHSOA_5_2  qying  medium  H         40

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue; the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

If no walltime limit is specified in the job script, all public queues apply a default limit of 1 hour.
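
For example, a public-queue job script states only its resource requirements and omits any queue directive. Below is a minimal sketch assuming standard PBS/TORQUE directives; the job name, program, and file names are placeholders:

    #!/bin/bash
    ## Minimal public-queue submission (sketch; names are placeholders).
    ## No "#PBS -q" line is needed: the job enters the regular queue and
    ## is routed to short, medium, or long by its requested resources.
    #PBS -N example_job
    #PBS -l nodes=1:ppn=8         # one node, eight cores
    #PBS -l walltime=12:00:00     # explicit request; otherwise the 1-hour default applies
    #PBS -j oe                    # merge stdout and stderr

    cd $PBS_O_WORKDIR             # run from the directory where qsub was invoked
    ./my_program < input.dat > output.dat

With these requests the job would fall within the medium queue's per-job limits (see the tables below).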

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
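
The same limits show where a queue-less submission can land. Two hypothetical examples (myjob.sh is a placeholder; the scheduler applies the actual routing policy, but the per-job limits above bound the outcome):

    # 20-hour, 4-node job: over short's 1-hour walltime cap, fits medium.
    qsub -l nodes=4:ppn=8,walltime=20:00:00 myjob.sh

    # 72-hour, 1-node job: over medium's 24-hour cap, fits long.
    qsub -l nodes=1:ppn=8,walltime=72:00:00 myjob.sh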

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |     1   250     0   500 |   512   516  1600/2048 | 8/120   100
long              Yes      Yes |    45   200     0   250 |   536   536  1024/2048 | 8/120   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     1     0     0    48 |    12    12         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    13     0     0     0 |   104   104        160 |     0   100
tamug             Yes      Yes |     2     0     0     0 |    32    32        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent, based on requested resources (e.g., memory); see the example below.
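
A worked example of the PE calculation (the 3 GB-per-core figure is assumed for illustration only, not Eos's actual value): on a node providing 3 GB of memory per core, a job requesting 8 CPUs and 48 GB of memory is charged PE = max(8, 48/3) = 16, so it counts against the PE limits above as if it used 16 CPUs.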

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.

    67 Active Jobs    1236 of 2612 Processors Active (47.32%)
                       154 of  303 Nodes Active      (50.83%)