Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
937 of 2628 CPUs Active (35.65%)
119 of 305 Nodes Active (39.02%)
79 running jobs, 0 queued jobs
Last updated: 9:40 AM, Nov 28, 2014
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID    Job Name                                        Owner     Queue        Walltime   Status  CPUs
1397618   OrgShale_3                                      chi       long         44:23:17   R       1
1397834   OrgShale_mod                                    chi       long         18:46:55   R       8
1397739   LANL_42                                         edougher  long         38:57:54   R       1
1397785   LANL_44                                         edougher  long         24:36:38   R       1
1397796   AnnulusC02                                      flc2625   long         23:01:29   R       1
1397797   AnnulusB01                                      flc2625   long         22:58:47   R       1
1397795   AnnulusC01                                      flc2625   long         23:02:35   R       1
1397798   AnnulusB02                                      flc2625   long         22:58:23   R       1
1394550   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_53.job            ljszatko  long         68:31:06   R       8
1393543   Ni-Chl-HS_AN_wB-6-311_DMF.job                   ljszatko  long         91:50:42   R       8
1394542   C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-1q.job         ljszatko  long         68:43:10   R       8
1393539   Ni-Chl-HS_AN-MeCl_S_M06L-6_DMF.job              ljszatko  long         93:29:37   R       8
1393473   C_CptBu_H2dp_TS2_wB-6-SDD_71q.job               ljszatko  long         95:18:34   R       8
1393545   Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF.job           ljszatko  long         91:42:52   R       8
1393544   Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF.job            ljszatko  long         91:44:29   R       8
1393546   Ni-Chl-HS_AN-MeCl_P_wB-6-311_DMF.job            ljszatko  long         91:40:08   R       8
1394544   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_3G_11-1.job       ljszatko  long         68:32:05   R       8
1394534   Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF-1.job         ljszatko  long         69:36:42   R       8
1394533   Ni-Chl-HS_AN-MeCl_S_wB-6-311_DMF-1.job          ljszatko  long         69:38:29   R       8
1394488   Ni-Chl-HS_AN_wB-6-311_DMF-1.job                 ljszatko  long         71:01:33   R       8
1394541   C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-1.job          ljszatko  long         68:42:54   R       8
1394545   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_3G_12-1.job       ljszatko  long         68:32:08   R       8
1394546   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_50.job            ljszatko  long         68:31:19   R       8
1394547   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51.job            ljszatko  long         68:31:06   R       8
1394543   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_3G_10-1.job       ljszatko  long         68:31:52   R       8
1394548   Fe-iBC_AN-MeCl_TS_wB-6311_DMF_52.job            ljszatko  long         68:31:07   R       8
1393470   C_CptBu_H2dp_TS2_wB-6-SDD_70q.job               ljszatko  long         95:19:11   R       8
1393468   C_CptBu_H2dp_TS2_wB-6-SDD_70.job                ljszatko  long         95:18:56   R       8
1397621   Ni-Chl-HS_AN-MeCl_P_M06L-6_DMF-1.job            ljszatko  long         44:23:03   R       8
1393472   C_CptBu_H2dp_TS2_wB-6-SDD_71.job                ljszatko  long         95:18:50   R       8
1397870   Pickup_truck_60_mph                             mojdeh84  long         12:31:57   R       32
1397868   Pickup_truck_60_mph                             mojdeh84  long         12:33:17   R       32
1397867   Pickup_truck_60_mph                             mojdeh84  long         12:33:43   R       32
1397869   Pickup_truck_60_mph                             mojdeh84  long         12:31:59   R       32
1385655   xPSTD_L20                                       muning    long         58:02:16   R       12
1385653   xPSTD_L20                                       muning    long         58:02:46   R       12
1385656   xPSTD_L20                                       muning    long         58:01:18   R       12
1385657   xPSTD_L20                                       muning    long         52:26:46   R       12
1385658   xPSTD_L20                                       muning    long         52:26:39   R       12
1385661   xPSTD_L20                                       muning    long         52:26:44   R       12
1385660   xPSTD_L20                                       muning    long         52:27:14   R       12
1385659   xPSTD_L20                                       muning    long         52:27:02   R       12
1385651   xPSTD_L20                                       muning    long         59:51:01   R       12
1397687   550um_vis_0Pa                                   rlb3511   long         41:40:33   R       8
1397648   550um_vis_0Pa                                   rlb3511   long         43:16:00   R       8
1397643   550um_vis_0Pa_2                                 rlb3511   long         43:16:02   R       8
1397740   LANL_43                                         roozbehd  long         38:50:26   R       1
1394572   tp-rhcnneo-mn12l-triplet-bmk.job                stewart   long         66:17:53   R       8
1394571   tp-rhcnneo-tpss-triplet-bmk.job                 stewart   long         66:20:12   R       8
1394650   ts-activation-mn12sx-singlet.job                stewart   long         63:24:56   R       8
1394430   R10_H128_08P                                    sunracer  long         79:26:41   R       1
1394432   R20_H098_08P                                    sunracer  long         73:05:04   R       1
1394433   R20_H128_08P                                    sunracer  long         73:00:38   R       1
1394434   R30_H068_08P                                    sunracer  long         72:34:24   R       1
1394435   R30_H098_08P                                    sunracer  long         71:40:27   R       1
1397408   1^Ru-cation-wb97xd-1SCH3-scan-ts-8.job          tanghao   long         60:59:48   R       8
1397410   1^Ru-cation-wb97xd-1SCH3-scan-ts-6.job          tanghao   long         60:58:39   R       8
1397758   1^Ru-cation-wb97xd-3SCH3-p-front-scan-ts-2.job  tanghao   long         36:10:34   R       8
1397444   1^Ru-cation-wb97xd-3SCH3-p-front-scan-ts-1.job  tanghao   long         59:59:00   R       8
1397757   1^Ru-cation-wb97xd-2SCH3-p-front-scan-ts-2.job  tanghao   long         36:13:30   R       8
1397755   3^Ru-1-BMK-r.job                                tanghao   long         37:00:32   R       8
1397528   H2OCO                                           rivera    lucchese     47:32:10   R       8
1397526   H2OCO                                           rivera    lucchese     47:32:27   R       8
1397525   H2OCO                                           rivera    lucchese     47:32:21   R       8
1397527   H2OCO                                           rivera    lucchese     47:32:25   R       8
1397524   H2OCO                                           rivera    lucchese     47:32:57   R       8
1397894   eu2o3_G_B.job                                   naren     medium       1:03:43    R       12
1397895   Li2O_EC.job                                     naren     medium       1:01:46    R       8
1397893   eu2o3_GO1_Au.job                                naren     medium       1:04:13    R       12
1397892   BizRep                                          payner    medium       1:53:08    R       256
1388276   C_CpiPr_H2dp_TS2_wB-6-SDD_THF_81.job            ljszatko  science_lms  165:36:53  R       8
1397529   H2OCO                                           rivera    science_lms  47:31:56   R       8
1397530   H2OCO                                           rivera    science_lms  47:31:28   R       8
1397534   H2OCO                                           rivera    science_lms  47:30:55   R       8
1397533   H2OCO                                           rivera    science_lms  47:31:08   R       8
1397532   H2OCO                                           rivera    science_lms  47:31:16   R       8
1379506   tp-rhcnneo-bmk-singlet-triplet-1.job            stewart   science_lms  283:48:47  R       8
1393433   tp-rhcnneo-mn12sx-triplet-1.job                 stewart   science_lms  96:22:47   R       8
1397471   protoMb_conf3.job                               jhenny    tamug        55:49:49   R       16

Idle Jobs

Job ID    Job Name    Owner    Queue    Status    CPUs
No matching jobs.

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
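
As an illustration of the two points above, a minimal job script for a PBS/Torque-style batch system (consistent with the queue and walltime terminology on this page) might look like the sketch below. The job name, resource request, and executable are placeholders, not an official Eos template; the key points are that no queue is named, so the job enters the regular queue and is routed to a public queue, and that the walltime is set explicitly to override the 1-hour default.

    #!/bin/bash
    #PBS -N example_job           # job name as it appears in the job listings above
    #PBS -l nodes=1:ppn=8         # placeholder request: 1 node, 8 cores
    #PBS -l walltime=12:00:00     # explicit walltime; omit it and the 1-hour default applies
    #PBS -j oe                    # merge stdout and stderr into one output file
    # No "#PBS -q" line: the job starts in the regular queue and the batch
    # system routes it to short, medium, or long based on the request above.

    cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
    ./my_program                  # placeholder executable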

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0     0
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0     1
medium            Yes      Yes |     4   250     0   500 |   288   288  1600/2048 | 8/120   120
long              Yes      Yes |    61   200     0   250 |   529   550  1024/2048 | 8/120   120
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   200
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0     0
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     2     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0     0
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0     0
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0     0
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0     0
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0     0
science_lms       Yes      Yes |     8     0     0     0 |    64    64        160 |     0     0
tamug             Yes      Yes |     1     0     0     0 |    16    16        128 |     0     0
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0     0
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16     0

RJob = Running jobs.
QJob = Queued jobs.
UserR = Running jobs for a single user.
UserQ = Queued jobs for a single user.
PE = Processor equivalent, based on requested resources (e.g., memory).
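
As a rough illustration of how PEs can exceed CPUs (using made-up numbers, not
Eos's actual node configuration): on a node with 8 cores and 24 GB of memory
(3 GB per core), a job that requests 1 core but 12 GB of memory ties up the
memory of 4 cores, so it would count as roughly

    PE = requested memory / memory per core = 12 GB / 3 GB = 4

processor equivalents rather than 1. This is why the Curr PEs column above can
be larger than the Curr CPUs column.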

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
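
To check the current per-queue limits and running/queued job counts from a
login node, the standard Torque/PBS client can be used (assuming the qstat
client is available on Eos; its output columns differ somewhat from the tables
above):

    qstat -q          # one-line summary per queue: limits, run/queue counts, state
    qstat -u $USER    # list your own jobs and the queues they occupy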

    79 Active Jobs     937 of 2628 Processors Active (35.65%)
                       119 of  305 Nodes Active      (39.02%)