Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
1600 of 2612 CPUs Active (61.26%)
200 of 303 Nodes Active (66.01%)
91 running jobs, 1 queued job
Last updated: 12:20PM Dec 22, 2014
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                           Owner     Queue         Walltime  Status  CPUs
1430162  C3_z_perm                                          chi       long           0:40:04  R         32
1430161  C1_x_perm                                          chi       long           0:40:32  R         32
1430163  Off_track_nopile_layered                           eastern   long           0:39:32  R          4
1430146  CpiPr_H2-rot_TS_B3-6-SDD-ff_2.job                  ljszatko  long           2:27:12  R          8
1430145  CpiPr_H2-rot_TS_B3-6-SDD-ff_1.job                  ljszatko  long           2:27:11  R          8
1429029  Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF_6.job             ljszatko  long          70:03:17  R          8
1429015  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-6q.job            ljszatko  long          70:02:45  R          8
1429030  Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF_6q.job            ljszatko  long          70:02:50  R          8
1429014  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-6.job             ljszatko  long          70:02:30  R          8
1429979  3f_trial2_basic_synanti1.job                       org1syn   long          25:53:06  R          8
1429978  3f_basic_synanti1.job                              org1syn   long          25:55:29  R          8
1429951  3q_synsyn_basic.job                                org1syn   long          43:43:03  R          8
1430119  basic_trail4_3f_synsyn.job                         org1syn   long          13:07:07  R          8
1430120  basic_trail3_3f_synsyn.job                         org1syn   long          13:07:08  R          8
1430122  basic_trail1_3f_synsyn.job                         org1syn   long          13:06:37  R          8
1430121  basic_trail2_3f_synsyn.job                         org1syn   long          13:06:41  R          8
1430134  ts-migration-13-bmk-c4c4-redo.job                  stewart   long           3:20:22  R          8
1430138  ts-migration-12-ii-bmk-c2-redo-cont-redo.job       stewart   long           3:01:48  R          8
1430141  ts-activation-scan-c2h4-bmk-singlet-1-1.job        stewart   long           2:47:10  R          8
1430142  ts-activation-scan-c2h4-bmk-singlet-1-2.job        stewart   long           2:45:02  R          8
1430143  ts-activation-c-a-k2-bmk-singlet-1-1-v-.job        stewart   long           2:35:00  R          8
1430144  ts-activation-c-a-k2-bmk-singlet-1-1-v+.job        stewart   long           2:33:20  R          8
1430133  ts-migration-13-bmk-c3c5-redo.job                  stewart   long           3:22:57  R          8
1430132  ts-migration-13-bmk-c2c2-redo.job                  stewart   long           3:26:52  R          8
1429908  ts-migration-12-ie-bmk-c2c3.job                    stewart   long          46:26:45  R          8
1430012  ts-activation-c2h3-bmk-singlet.job                 stewart   long          22:19:19  R          8
1429906  ts-activation-bmk-c4h7-v--redo.job                 stewart   long          46:39:56  R          8
1430003  ts-migration-12-ie-bmk-c1-cont.job                 stewart   long          22:31:20  R          8
1430131  ts-migration-12-iii-bmk-c2c3-redo.job              stewart   long           3:29:37  R          8
1430084  ts-activation-scan-c4h7-bmk-singlet.job            stewart   long          19:55:27  R          8
1430064  ts-activation-scan-c3h5-bmk-singlet-new.job        stewart   long          20:19:53  R          8
1429999  ts-activation-c-a-mn12sx-singlet-v+-redo-redo.job  stewart   long          22:54:11  R          8
1429998  ts-migration-12-ii-bmk-c2-redo-1-cont.job          stewart   long          22:59:38  R          8
1430152  ts-activation-scan-c4h8-bmk-singlet-2.job          stewart   long           1:51:38  R          8
1430156  ts-migration-11-k2-bmk-singlet-redo.job            stewart   long           1:42:28  R          8
1430147  ts-activation-bmk-c5h9-v+-redo-1-redo-1.job        stewart   long           2:26:34  R          8
1430148  ts-activation-scan-c1-k2-bmk-singlet-1-1.job       stewart   long           2:22:04  R          8
1430149  ts-activation-scan-c3h6-bmk-singlet-new-1.job      stewart   long           2:13:48  R          8
1430150  ts-activation-scan-c3h6-bmk-singlet-new-2.job      stewart   long           2:11:25  R          8
1430160  ts-activation-scan-c1h2-bmk-singlet.job            stewart   long           0:54:04  R          8
1429892  ts-activation-c1h1-bmk-singlet-1-1.job             stewart   long          47:03:51  R          8
1429743  ts-activation-c-e-bmk-singlet-redo-2-v+-redo.job   stewart   long          68:45:34  R          8
1429996  ts-migration-12-ii-bmk-c2-cont.job                 stewart   long          23:04:51  R          8
1429916  ts-activation-c-e-k2-bmk-singlet-1-1.job           stewart   long          45:51:29  R          8
1429924  ts-migration-11-bmk-singlet.job                    stewart   long          45:13:07  R          8
1429926  ts-migration-13-bmk-singlet.job                    stewart   long          45:05:24  R          8
1430155  ts-migration-11-bmk-singlet-redo.job               stewart   long           1:44:48  R          8
1430151  ts-activation-scan-c4h8-bmk-singlet-1.job          stewart   long           1:54:08  R          8
1429939  ts-migration-12-ae-bmk-singlet.job                 stewart   long          44:17:53  R          8
1429948  ts-activation-scan-c3h5-bmk-singlet.job            stewart   long          43:56:57  R          8
1430007  Re1-wb97xd-styrene-S3-a-sp.job                     tanghao   long          22:30:03  R          8
1430006  Re1-wb97xd-styrene-S2-sp.job                       tanghao   long          22:30:09  R          8
1430130  re2_ch2_B3PW91_ts_opt_freq.job                     tanghao   long           3:35:16  R          8
1430005  Re1-wb97xd-styrene-S2-a-sp.job                     tanghao   long          22:30:27  R          8
1430068  re1-wb97xd-styrene-S2-a-ts-sp.job                  tanghao   long          20:13:45  R          8
1430002  re1-wb97xd-styrene-S3-a-ts.job                     tanghao   long          22:43:42  R          8
1430001  re1-wb97xd-styrene-S2-ts.job                       tanghao   long          22:45:55  R          8
1429738  re1-C2H4-wb97xd-styrene-S3-scan-ts-1.job           tanghao   long          69:04:58  R          8
1430140  re0_ch2_B3PW91_ts_opt_freq.job                     tanghao   long           2:56:23  R          8
1430008  Re1-wb97xd-styrene-S3-sp.job                       tanghao   long          22:30:31  R          8
1430166  he.job                                             timik10f  long           0:27:23  R          8
1429967  run_fractal_80.0                                   xgl1989   long          36:23:09  R         40
1429968  run_fractal_80.0_1.0                               xgl1989   long          36:22:10  R         40
1429991  H2OCO                                              rivera    lucchese      23:14:13  R          8
1429994  H2OCO                                              rivera    lucchese      23:13:58  R          8
1429993  H2OCO                                              rivera    lucchese      23:14:03  R          8
1429992  H2OCO                                              rivera    lucchese      23:14:03  R          8
1429995  H2OCO                                              rivera    lucchese      23:13:59  R          8
1430153  R67-L3F01                                          flc2625   medium         1:49:19  R          8
1430116  PHN-relion2                                        junjiez   medium        13:47:03  R        512
1430158  cfx                                                mosi      medium         1:17:51  R          4
1430117  case1                                              nmatula   medium        13:40:04  R          8
1430113  case6                                              nmatula   medium        14:18:37  R          8
1430115  case7                                              nmatula   medium        14:06:15  R          8
1430096  cylinder                                           nmatula   medium        18:59:01  R          8
1430118  case2                                              nmatula   medium        13:20:03  R          8
1430123  case3                                              nmatula   medium        12:50:55  R          8
1430154  BizRep                                             payner    medium         1:46:55  R        256
1403029  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-3.job             ljszatko  science_lms  169:28:59  R          8
1421823  CptBu_TS-rev_per_B3-6-SDD_QST3.job                 ljszatko  science_lms   93:04:32  R          8
1429986  H2OCO                                              rivera    science_lms   23:14:38  R          8
1429987  H2OCO                                              rivera    science_lms   23:14:22  R          8
1429988  H2OCO                                              rivera    science_lms   23:14:36  R          8
1429989  H2OCO                                              rivera    science_lms   23:14:22  R          8
1429990  H2OCO                                              rivera    science_lms   23:14:29  R          8
1411218  tp-rhcnneo-mn12sx-triplet-correct-redo-1.job       stewart   science_lms  117:44:12  R          8
1401178  tp-rhcnneo-mn12sx-triplet-correct-redo.job         stewart   science_lms  314:49:16  R          8
1429894  ts-activation-c1h1-bmk-singlet-1-redo.job          stewart   science_lms   47:01:45  R          8
1411217  tp-rhcnneo-bmk-singlet-triplet-correct-redo.job    stewart   science_lms  117:47:12  R          8
1429975  protoMb_Nbound.job                                 jhenny    tamug         32:51:11  R         16
1429806  protoMb_Nbound.job                                 jhenny    tamug         63:16:05  R         16

Idle Jobs

Job ID   Job Name   Owner  Queue   Status  CPUs
1411091  CHSOA_5_2  qying  medium  H         40

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue; the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
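As a concrete illustration (a minimal sketch assuming the PBS-style job scripts this batch system accepts; the job name and executable are placeholders, not site examples), a script that requests 16 CPUs for 12 hours enters regular at submission and is then assigned to medium, since 12 hours exceeds the short queue's 1-hour walltime cap but is within medium's 24-hour limit:

    #!/bin/bash
    #PBS -N example_job            # job name (placeholder)
    #PBS -l nodes=2:ppn=8          # 2 nodes x 8 cores per node = 16 CPUs
    #PBS -l walltime=12:00:00      # explicit walltime; omitting it gives the 1-hour default
    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
    ./my_program                   # placeholder executable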

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
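To read the table with a concrete case (an illustrative sketch only; the script name is hypothetical), the request below sits exactly at the long queue's per-job ceiling of 8 nodes, 64 CPUs, and 96 hours, so it is the largest single job that queue accepts; asking for one more node would push the job outside long's limits:

    qsub -l nodes=8:ppn=8,walltime=96:00:00 myscript.sh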

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    10   250     0   500 |   828   832  1600/2048 | 8/120   100
long              Yes      Yes |    63   200     0   250 |   612   612  1024/2048 | 8/120   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    11     0     0     0 |    88    88        160 |     0   100
tamug             Yes      Yes |     2     0     0     0 |    32    32        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = running jobs.
QJob  = queued jobs.
UserR = running jobs for a user.
UserQ = queued jobs for a user.
PE    = processor equivalent based on requested resources (i.e., memory).
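As a worked example of PE accounting (assuming the usual Moab-style definition, where a job is charged the larger of its requested CPUs and the CPU share implied by its memory request; the node size here is hypothetical): on an 8-core node with 24 GB of memory, a job requesting 4 CPUs but 21 GB ties up 21/24 of the node's memory, the equivalent of 7 cores, so it counts as max(4, 7) = 7 PEs.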

Any queued job that exceeds a queued-job limit (queue-wide or per-user)
is ineligible for scheduling consideration.
