Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
2160 of 2556 CPUs Active (84.51%)
267 of 296 Nodes Active (90.20%)
172 running jobs, 8 queued jobs
Last updated: 10:00 AM, Mar 5, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name   Owner   Queue   Walltime   Status   CPUs
1461720  job22.job  abdokotb  long  18:35:54  R  4
1461673  job15.job  abdokotb  long  19:56:34  R  4
1461674  job16.job  abdokotb  long  19:57:06  R  4
1461709  job17.job  abdokotb  long  19:09:33  R  4
1461710  job18.job  abdokotb  long  19:09:36  R  4
1461711  job19.job  abdokotb  long  18:53:23  R  4
1461718  job20.job  abdokotb  long  18:53:50  R  4
1461719  job21.job  abdokotb  long  18:36:17  R  4
1462076  J_FM_d0  navdeep  long  1:26:07  R  1
1462077  J_AFM_d0  navdeep  long  1:22:59  R  1
1461822  flat/grid3  nmatula  long  15:46:18  R  1
1461294  LANL_4  roozbehd  long  37:20:37  R  1
1461295  LANL_4  roozbehd  long  37:20:37  R  1
1461296  LANL_4  roozbehd  long  37:20:37  R  1
1461364  Pentyne1HMM1CPPhige  rrl581a  long  1:42:43  R  32
1461363  Pentyne1HOMOCPPhige  rrl581a  long  10:31:01  R  32
1461351  Butyne1HMM1CPPhige  rrl581a  long  34:57:23  R  32
1461353  Hexyne1HOMOCPPhige  rrl581a  long  34:57:45  R  32
1461352  Hexyne1HOMOCPPhige  rrl581a  long  34:57:34  R  32
1461358  Hexyne1HMM1CPPhige  rrl581a  long  29:59:26  R  32
1461357  Hexyne1HMM1CPPhige  rrl581a  long  34:57:57  R  32
1461356  Hexyne1HOMOCPPhige  rrl581a  long  34:57:34  R  32
1461355  Hexyne1HOMOCPPhige  rrl581a  long  34:57:44  R  32
1461362  Pentyne1HOMOCPPhige  rrl581a  long  22:37:07  R  32
1461359  Hexyne1HMM1CPPhige  rrl581a  long  28:23:12  R  32
1461360  Pentyne1HOMOCPPhige  rrl581a  long  27:20:24  R  32
1461361  Pentyne1HOMOCPPhige  rrl581a  long  24:30:55  R  32
1461354  Hexyne1HOMOCPPhige  rrl581a  long  34:58:04  R  32
1456148  Hexyne3HOMOCPPhig  rrl581a  long  86:44:12  R  32
1456149  Hexyne3HMM1CPPhig  rrl581a  long  86:32:57  R  32
1456147  Hexyne3HOMOCPPhig  rrl581a  long  87:00:43  R  32
1456102  Pentyne2HMM1CPPhig  rrl581a  long  87:02:13  R  32
1456101  Pentyne2HMM1CPPhig  rrl581a  long  87:04:19  R  32
1456099  Pentyne2HOMOCPPhig  rrl581a  long  88:07:05  R  32
1456150  Hexyne3HMM1CPPhig  rrl581a  long  86:02:59  R  32
1456100  Pentyne2HOMOCPPhig  rrl581a  long  87:57:43  R  32
1461821  Job1  srirvs  long  3:23:41  R  32
1461348  ts-activation-c2h4-k2-bmk-singlet-new-2-redo-arenz-v+-cont.job  stewart  long  34:59:41  R  8
1461931  ts-activation-c2h3-k2-bmk-singlet-2-cont-redo-arenz-redo-cont-c  stewart  long  9:38:33  R  8
1461937  ts-activation-c4h7-bmk-singlet-1-freq-redo-arenz-redo-cont-cont  stewart  long  8:37:49  R  8
1461939  ts-activation-c4h7-bmk-singlet-1-freq-redo-arenz-redo-cont-cont  stewart  long  7:19:44  R  8
1461945  ts-activation-c4h8-bmk-singlet-new-new-2-cont-freq-redo-arenz-r  stewart  long  6:40:52  R  8
1461942  ts-activation-c4h8-k2-bmk-singlet-new-2-redo-arenz-redo-cont-co  stewart  long  7:14:58  R  8
1461944  ts-activation-c4h8-bmk-singlet-new-new-2-cont-freq-redo-arenz-r  stewart  long  7:13:10  R  8
1461930  ts-activation-c1h2-bmk-singlet-new-new-1-2-freq-redo-arenz-redo  stewart  long  9:56:15  R  8
1461910  irc_ts2fsl.job  vangals  long  13:29:33  R  8
1461318  vofls  zhaoyucc  long  36:58:44  R  1
1461319  vofls  zhaoyucc  long  36:37:00  R  1
1461613  cluster_48.job  zhuyj  long  16:38:56  R  8
1461614  cluster_52.job  zhuyj  long  14:32:12  R  8
1460076  cluster_39.job  zhuyj  long  61:52:13  R  8
1460068  cluster_25.job  zhuyj  long  64:17:13  R  8
1460592  cluster_75.job  zhuyj  long  47:05:18  R  8
1461820  cluster_42.job  zhuyj  long  15:50:31  R  8
1461612  cluster_38.job  zhuyj  long  18:09:02  R  8
1461611  cluster_36.job  zhuyj  long  21:27:12  R  8
1461610  cluster_12.job  zhuyj  long  21:43:20  R  8
1460591  cluster_74.job  zhuyj  long  47:05:34  R  8
1460590  cluster_71.job  zhuyj  long  47:05:14  R  8
1460593  cluster_78.job  zhuyj  long  47:04:52  R  8
1460589  cluster_70.job  zhuyj  long  47:05:34  R  8
1460587  cluster_62.job  zhuyj  long  47:05:36  R  8
1460585  cluster_58.job  zhuyj  long  47:06:12  R  8
1460507  cluster_45.job  zhuyj  long  57:50:19  R  8
1460506  cluster_44.job  zhuyj  long  57:50:33  R  8
1460066  cluster_20.job  zhuyj  long  64:17:20  R  8
1460065  cluster_16.job  zhuyj  long  64:17:19  R  8
1461608  cluster_10.job  zhuyj  long  22:03:09  R  8
1461609  cluster_11.job  zhuyj  long  22:03:26  R  8
1460717  H2OCO  rivera  lucchese  45:11:19  R  8
1460715  H2OCO  rivera  lucchese  45:11:49  R  8
1460714  H2OCO  rivera  lucchese  45:11:26  R  8
1460713  H2OCO  rivera  lucchese  45:12:52  R  8
1460716  H2OCO  rivera  lucchese  45:11:23  R  8
1462089  m062x3diss.job  biswas85  medium  0:25:38  R  8
1462070  wrf2.sub  chen05  medium  2:26:34  R  32
1462078  wrf3.sub  chen05  medium  0:40:35  R  32
1461729  Laminarjet43  divya249  medium  19:33:24  R  32
1462075  LVFZtest  dunyuliu  medium  1:28:56  R  128
1462083  volRelax_1.2  emmi  medium  0:29:10  R  8
1462030  volRelax_1.2  emmi  medium  4:37:50  R  8
1462053  volRelax_1.2  emmi  medium  4:02:37  R  8
1462057  volRelax_1.2  emmi  medium  3:59:42  R  8
1462058  volRelax_1.2  emmi  medium  3:59:44  R  8
1462061  volRelax_1.2  emmi  medium  3:36:09  R  8
1462063  volRelax_1.2  emmi  medium  3:26:19  R  8
1461872  volRelax_1.2  emmi  medium  14:56:55  R  8
1462079  volRelax_1.2  emmi  medium  0:29:45  R  8
1462080  volRelax_1.2  emmi  medium  0:29:45  R  8
1462081  volRelax_1.2  emmi  medium  0:29:48  R  8
1462051  volRelax_1.2  emmi  medium  4:03:48  R  8
1462029  volRelax_1.2  emmi  medium  4:37:50  R  8
1462050  volRelax_1.2  emmi  medium  4:05:07  R  8
1462023  volRelax_1.2  emmi  medium  5:58:04  R  8
1462012  volRelax_1.2  emmi  medium  6:38:02  R  8
1462022  volRelax_1.2  emmi  medium  6:01:42  R  8
1462008  volRelax_1.2  emmi  medium  6:38:19  R  8
1462010  volRelax_1.2  emmi  medium  6:38:23  R  8
1462005  volRelax_1.2  emmi  medium  6:39:05  R  8
1462031  volRelax_1.2  emmi  medium  4:37:41  R  8
1462032  volRelax_1.2  emmi  medium  4:37:31  R  8
1462048  volRelax_1.2  emmi  medium  4:11:42  R  8
1461883  volRelax_1.2  emmi  medium  14:56:36  R  8
1461885  volRelax_1.2  emmi  medium  14:55:55  R  8
1462082  volRelax_1.2  emmi  medium  0:28:56  R  8
1462095  volRelax_1.2  emmi  medium  0:12:48  R  8
1462103  volRelax_1.2  emmi  medium  0:11:23  R  8
1462102  volRelax_1.2  emmi  medium  0:11:32  R  8
1462101  volRelax_1.2  emmi  medium  0:12:22  R  8
1462100  volRelax_1.2  emmi  medium  0:12:24  R  8
1462099  volRelax_1.2  emmi  medium  0:12:26  R  8
1462098  volRelax_1.2  emmi  medium  0:12:11  R  8
1462097  volRelax_1.2  emmi  medium  0:12:24  R  8
1462096  volRelax_1.2  emmi  medium  0:12:37  R  8
1462104  volRelax_1.2  emmi  medium  0:11:05  R  8
1462094  volRelax_1.2  emmi  medium  0:12:56  R  8
1462084  volRelax_1.2  emmi  medium  0:27:33  R  8
1462085  volRelax_1.2  emmi  medium  0:27:48  R  8
1462086  volRelax_1.2  emmi  medium  0:27:21  R  8
1462087  volRelax_1.2  emmi  medium  0:27:03  R  8
1462088  volRelax_1.2  emmi  medium  0:25:31  R  8
1462090  volRelax_1.2  emmi  medium  0:25:23  R  8
1462091  volRelax_1.2  emmi  medium  0:14:04  R  8
1462092  volRelax_1.2  emmi  medium  0:13:50  R  8
1462093  volRelax_1.2  emmi  medium  0:13:25  R  8
1461892  volRelax_1.2  emmi  medium  14:55:55  R  8
1461796  R67-L5-F02  flc2625  medium  17:08:19  R  32
1460884  mpi_s_8_1  hhammer  medium  0:06:18  R  8
1461925  d00m12_910E-6m0  k6927  medium  10:42:17  R  32
1461924  d00m12_210E-5m0  k6927  medium  10:47:11  R  32
1461923  d00m12_10E-6m0  k6927  medium  10:52:46  R  32
1461771  cap4fine  nmatula  medium  18:40:00  R  8
1461913  Ti3Si0.5Al0.5C2_qh  pson536  medium  12:49:12  R  32
1461927  multi_phase  tcd8922  medium  10:39:08  R  8
1461918  multi_phase  tcd8922  medium  12:16:51  R  8
1461926  multi_phase  tcd8922  medium  10:39:51  R  8
1461601  multi_phase  tcd8922  medium  22:25:11  R  8
1461577  cmaq_12km_apr  tcsdcn  medium  22:54:25  R  16
1454107  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-17c.job  ljszatko  science_lms  213:44:19  R  8
1460722  H2OCO  rivera  science_lms  45:10:59  R  8
1460721  H2OCO  rivera  science_lms  45:11:05  R  8
1460720  H2OCO  rivera  science_lms  45:11:06  R  8
1460719  H2OCO  rivera  science_lms  45:10:42  R  8
1460718  H2OCO  rivera  science_lms  45:11:07  R  8
1451868  tp-rhcnneo-bmk-singlet-triplet-correct-redo-arenz.job  stewart  science_lms  328:59:52  R  8
1454203  ts-activation-c-a-bmk-singlet-redo-arenz-modred-rcfc.job  stewart  science_lms  212:07:45  R  8
1461786  25_c1_r.job  tseguin  wheeler  17:43:03  R  8
1461789  21_c1_2.job  tseguin  wheeler  17:22:12  R  8
1461788  21_c1_1.job  tseguin  wheeler  17:22:48  R  8
1461835  c1_q_2.job  tseguin  wheeler  14:15:15  R  8
1461790  21_c1_r_2.job  tseguin  wheeler  17:22:09  R  8
1461791  21_c1_r_1.job  tseguin  wheeler  17:22:14  R  8
1461824  c2_q_2.job  tseguin  wheeler  16:16:49  R  8
1461793  21_c2_r_2.job  tseguin  wheeler  17:22:11  R  8
1461794  21_c2_2.job  tseguin  wheeler  17:21:56  R  8
1461795  21_c2_1.job  tseguin  wheeler  17:21:48  R  8
1461785  25_c2.job  tseguin  wheeler  17:48:07  R  8
1461784  25_c2_r.job  tseguin  wheeler  17:53:20  R  8
1461783  25_c1.job  tseguin  wheeler  17:54:52  R  8
1461831  c2_r_q_2.job  tseguin  wheeler  16:16:25  R  8
1461836  c1_q_3.job  tseguin  wheeler  12:17:48  R  8
1461834  c1_q_1.job  tseguin  wheeler  15:31:06  R  8
1461663  c1_r.job  tseguin  wheeler  21:05:13  R  8
1461664  c2_r.job  tseguin  wheeler  21:05:08  R  8
1461823  c2_q_1.job  tseguin  wheeler  16:16:32  R  8
1461833  c2_r_q_4.job  tseguin  wheeler  16:16:42  R  8
1461832  c2_r_q_3.job  tseguin  wheeler  16:16:25  R  8
1461792  21_c2_r_1.job  tseguin  wheeler  17:21:42  R  8
1461830  c2_r_q_1.job  tseguin  wheeler  16:16:29  R  8
1461662  c1.job  tseguin  wheeler  21:05:05  R  8
1461826  c2_q_4.job  tseguin  wheeler  16:17:00  R  8
1461933  2_ts1_3_ts2_for_min.job  tseguin  wheeler  10:18:15  R  8

Idle Jobs

Job ID   Job Name   Owner   Queue   Status   CPUs
1461371  Hexyne2HMM1CPPhige  rrl581a  long  Q  32
1461370  Hexyne2HMM1CPPhige  rrl581a  long  Q  32
1461369  Hexyne2HMM1CPPhige  rrl581a  long  Q  32
1461368  Hexyne2HOMOCPPhige  rrl581a  long  Q  32
1461367  Hexyne2HOMOCPPhige  rrl581a  long  Q  32
1461366  Hexyne2HOMOCPPhige  rrl581a  long  Q  32
1461365  Pentyne1HMM1CPPhige  rrl581a  long  Q  32
1456842  mat_job  jaison  medium  Q  1

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

If no walltime is specified in the job script, all public queues apply a default walltime limit of 1 hour.
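
For example, the following job script requests 8 cores on one node for 24 hours and lets the batch system pick the execution queue. This is a minimal sketch assuming PBS-style directives; the job name and executable are placeholders, not taken from this page.

#!/bin/bash
#PBS -N example_job            # job name (placeholder)
#PBS -l nodes=1:ppn=8          # one node, 8 cores per node
#PBS -l walltime=24:00:00      # omit this and the 1-hour default applies
#PBS -j oe                     # merge stdout and stderr into one file
# Note: no "#PBS -q" line. The job enters the regular queue and is
# routed to short, medium, or long based on the requests above.

cd $PBS_O_WORKDIR              # run from the submission directory
./example_program              # placeholder executable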

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
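
As an illustration of how these limits steer routing (a sketch assuming standard PBS qsub syntax; the 8-cores-per-node figure and myjob.sh are assumptions, not taken from this page):

# 4 nodes x 8 cores (32 CPUs) for 72 hours: the walltime exceeds
# medium's 24:00:00 cap but fits long's limits (<= 8 nodes,
# <= 64 CPUs, <= 96:00:00), so the job would be routed to long.
qsub -l nodes=4:ppn=8,walltime=72:00:00 myjob.sh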

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    64   250     1   500 |   832   832  1600/2048 |  8/50   100
long              Yes      Yes |    69   200     7   250 |  1016  1016   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     8     0     0     0 |    64    64        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |    26     0     0     0 |   208   208        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory).
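
As a worked illustration of the PE accounting (the per-node figures here are assumptions for the sake of the example, not taken from this page): on a node with 8 cores and 24 GB of memory (3 GB per core), a job requesting 1 CPU and 12 GB of memory counts as max(1, 12/3) = 4 PEs, so memory-heavy jobs consume the PE limits above faster than their CPU counts alone would suggest.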

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
