Texas A&M Supercomputing Facility
Eos System Status
1531 of 2564 CPUs Active (59.71%)
192 of 297 Nodes Active (64.65%)
133 running jobs, 3 queued jobs
Last updated: 9:10 PM, Mar 2, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Owner     Queue        Walltime   Status  CPUs  Job Name
1458598  adi       long          10:01:52  R          8  TCPD_NBexo_exoDCPD_2.job
1458597  adi       long          10:58:17  R          8  TCPD_NBexo_exoDCPD_1.job
1458203  dougher   long          20:12:24  R          1  LANL_4e
1458202  dougher   long          20:12:24  R          1  LANL_4e
1458200  dougher   long          20:11:39  R          1  LANL_4e
1458212  dougher   long          17:57:23  R          1  LANL_4e
1458199  dougher   long          20:11:39  R          1  LANL_4e
1458201  dougher   long          20:12:24  R          1  LANL_4e
1458198  dougher   long          20:11:39  R          1  LANL_4e
1458197  dougher   long          20:11:39  R          1  LANL_4e
1458204  dougher   long          20:12:12  R          1  LANL_4e
1458205  dougher   long          17:57:41  R          1  LANL_4e
1458206  dougher   long          17:57:41  R          1  LANL_4e
1458208  dougher   long          17:57:41  R          1  LANL_4e
1458209  dougher   long          17:57:23  R          1  LANL_4e
1458210  dougher   long          17:57:23  R          1  LANL_4e
1458211  dougher   long          17:57:23  R          1  LANL_4e
1458355  dougher   long          16:33:09  R          1  LANL_4e
1458354  dougher   long          16:33:10  R          1  LANL_4e
1458185  dougher   long          31:52:57  R          1  LANL_4e
1458194  dougher   long          26:41:46  R          1  LANL_4e
1458193  dougher   long          26:43:06  R          1  LANL_4e
1458207  dougher   long          17:57:41  R          1  LANL_4e
1458187  dougher   long          27:03:10  R          1  LANL_4e
1458195  dougher   long          26:41:45  R          1  LANL_4e
1458186  dougher   long          31:53:04  R          1  LANL_4e
1458188  dougher   long          27:02:50  R          1  LANL_4e
1458189  dougher   long          26:51:22  R          1  LANL_4e
1458190  dougher   long          26:42:21  R          1  LANL_4e
1458192  dougher   long          26:43:06  R          1  LANL_4e
1458191  dougher   long          26:43:06  R          1  LANL_4e
1458196  dougher   long          26:41:45  R          1  LANL_4e
1457915  jfrank    long          46:33:57  R          8  Full1T_H2O_phenyl_4.job
1456810  ljszatko  long          76:36:14  R          8  C_CptBu_per_TS2_B3-6-SDD_27q.job
1456808  ljszatko  long          76:46:39  R          8  C_CptBu_per_TS2_B3-6-SDD_25.job
1456809  ljszatko  long          76:45:26  R          8  C_CptBu_per_TS2_B3-6-SDD_26.job
1457025  ljszatko  long          75:09:04  R          8  Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF_26.job
1456799  ljszatko  long          80:54:16  R          8  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_87.job
1458398  navdeep   long          15:31:19  R         16  d0o0
1458599  nmatula   long          10:02:45  R          1  order/roundvisnoz/23
1458602  nmatula   long          10:03:31  R          1  flat/grid3
1456147  rrl581a   long          26:10:28  R         32  Hexyne3HOMOCPPhig
1456002  rrl581a   long          79:48:22  R         32  Hexyne2HMM1CPPhig
1456003  rrl581a   long          59:40:55  R         32  Hexyne2HMM1CPPhig
1456099  rrl581a   long          27:17:35  R         32  Pentyne2HOMOCPPhig
1456150  rrl581a   long          25:12:44  R         32  Hexyne3HMM1CPPhig
1456100  rrl581a   long          27:07:28  R         32  Pentyne2HOMOCPPhig
1456102  rrl581a   long          26:11:58  R         32  Pentyne2HMM1CPPhig
1456101  rrl581a   long          26:14:04  R         32  Pentyne2HMM1CPPhig
1456149  rrl581a   long          25:42:42  R         32  Hexyne3HMM1CPPhig
1456148  rrl581a   long          25:53:57  R         32  Hexyne3HOMOCPPhig
1460397  srirvs    long           0:29:33  R         24  Job1
1460025  srirvs    long           4:42:13  R         16  Job1
1456772  stewart   long          76:22:32  R          8  ts-migration-13-k2-bmk-singlet+0.20-redo-arenz.job
1456761  stewart   long          81:53:25  R          8  ts-migration-13-bmk-singlet-redo-arenz.job
1456461  stewart   long          95:36:32  R          8  ts-activation-c1-k2-bmk-singlet-1-1-redo-arenz-cont-v+.job
1456758  stewart   long          81:35:26  R          8  ts-migration-12-ee-k2-bmk-singlet-redo-arenz.job
1456757  stewart   long          81:38:34  R          8  ts-migration-12-ee-bmk-singlet-redo-arenz.job
1456466  stewart   long          92:01:19  R          8  ts-activation-c2h4-k2-bmk-singlet-new-2-redo-arenz-v+.job
1456756  stewart   long          81:40:28  R          8  ts-migration-12-ae-k2-bmk-singlet-redo-arenz.job
1456754  stewart   long          81:41:13  R          8  ts-activation-c4h8-k2-bmk-singlet-new-2-redo-arenz-redo-cont-co
1456724  stewart   long          82:24:58  R          8  ts-migration-11-bmk-singlet-redo-arenz.job
1456773  stewart   long          75:59:36  R          8  ts-activation-c2h3-k2-bmk-singlet-2-cont-redo-arenz-redo-cont-c
1456811  stewart   long          64:38:06  R          8  ts-activation-k2-bmk-singlet-1-1-wb97xd-cont.job
1457138  stewart   long          53:19:10  R          8  ts-migration-14-k2-bmk-singlet-init-redo-arenz.job
1457130  stewart   long          54:19:28  R          8  ts-migration-11-c2-k2-bmk-singlet-redo-arenz.job
1457666  stewart   long          18:04:24  R          8  ts-migration-scan-14-bmk-singlet-redo-arenz.job
1457137  stewart   long          53:46:09  R          8  ts-migration-14-bmk-singlet-cont-redo-arenz.job
1457027  stewart   long          59:36:43  R          8  ts-activation-c2h4-k2-bmk-singlet-redo-arenz-2-redo-v--cont.job
1456884  stewart   long          60:39:34  R          8  ts-activation-c-a-mn12sx-singlet-bmk-redo+tight+int.job
1456503  stewart   long          89:18:49  R          8  ts-activation-c1h1-bmk-singlet-redo-arenz-v+.job
1457571  stewart   long          21:48:35  R          8  ts-migration-scan-12-ii-k2-bmk-c4c5-singlet-redo-arenz.job
1457245  tseguin   long          67:10:09  R          8  3_ts1_2.job
1460405  vahidt    long           0:24:26  R          8  main_2longPatches_200elem260
1458689  vahidt    long           5:54:45  R          8  main_2longPatches_free246
1458688  vahidt    long           6:15:42  R          8  main_2longPatches_free45
1455864  y0m4156   long          25:21:23  R         64  Mario-MOF
1456927  y0m4156   long           7:51:21  R         64  Mario-MOF
1457897  zhaoyucc  long          47:46:06  R          1  vofls
1457871  zhaoyucc  long          48:48:05  R          1  vofls
1460062  zhuyj     long           3:29:00  R          8  cluster_10.job
1460063  zhuyj     long           3:28:49  R          8  cluster_11.job
1460064  zhuyj     long           3:27:20  R          8  cluster_12.job
1460065  zhuyj     long           3:27:03  R          8  cluster_16.job
1460066  zhuyj     long           3:27:05  R          8  cluster_20.job
1460067  zhuyj     long           3:27:19  R          8  cluster_24.job
1460068  zhuyj     long           3:26:58  R          8  cluster_25.job
1460069  zhuyj     long           1:05:58  R          8  cluster_27.job
1460070  zhuyj     long           1:06:16  R          8  cluster_28.job
1460073  zhuyj     long           1:01:49  R          8  cluster_37.job
1460075  zhuyj     long           1:01:18  R          8  cluster_38.job
1460076  zhuyj     long           1:01:12  R          8  cluster_39.job
1460072  zhuyj     long           1:02:53  R          8  cluster_36.job
1460077  zhuyj     long           0:50:26  R          8  cluster_3.job
1458682  zhuyj     long           6:35:04  R          8  cluster_32.job
1460071  zhuyj     long           1:04:30  R          8  cluster_2.job
1457928  zhuyj     long          42:33:41  R          8  cluster_31.job
1457925  zhuyj     long          44:31:54  R          8  cluster_24.job
1457924  zhuyj     long          44:31:22  R          8  cluster_23.job
1457923  zhuyj     long          44:31:26  R          8  cluster_1.job
1457919  zhuyj     long          44:31:13  R          8  cluster_17.job
1457929  zhuyj     long          42:18:20  R          8  cluster_33.job
1457930  zhuyj     long          42:05:33  R          8  cluster_35.job
1457926  zhuyj     long          44:31:39  R          8  cluster_25.job
1457931  zhuyj     long          41:52:19  R          8  cluster_37.job
1457932  zhuyj     long          40:25:43  R          8  cluster_7.job
1460357  chen05    medium         2:32:38  R         32  wrf3.sub
1460362  dunyuliu  medium         2:03:03  R         32  LVFZtest
1458604  flc2625   medium        10:30:55  R         32  R67-L5F01
1458601  harikap   medium        10:43:53  R         16  neon
1460406  mtufts    medium         0:25:10  R         32  3DnsSlot5
1460443  owen8608  medium         0:06:27  R         32  seagate
1460036  qianxf    medium         4:25:17  R         40  job
1458606  son536    medium         9:53:13  R         32  Ti3Si0.5Al0.5C2_qhp
1458611  son536    medium         9:44:39  R         32  Ti3Si0.5Al0.5C2_qhp
1458609  son536    medium         9:47:10  R         32  Ti3Si0.5Al0.5C2_qhp
1458608  son536    medium         9:48:45  R         32  Ti3Si0.5Al0.5C2_qhp
1458607  son536    medium         9:50:48  R         32  Ti3Si0.5Al0.5C2_qhp
1458612  son536    medium         9:42:50  R         32  Ti3Si0.5Al0.5C2_qhp
1460444  tcd8922   medium         0:05:51  R          8  multi_phase
1460417  tcd8922   medium         0:20:56  R          8  multi_phase
1460415  tcd8922   medium         0:21:30  R          8  multi_phase
1460414  tcd8922   medium         0:21:55  R          8  multi_phase
1460412  tcd8922   medium         0:21:43  R          8  multi_phase
1460060  tcsdcn    medium         3:38:17  R         16  cmaq_12km_jly
1458588  tcsdcn    medium        11:00:49  R         16  cmaq_may
1458586  tcsdcn    medium        11:08:41  R         16  cmaq_36km_apr_wsm
1453574  ljszatko  science_lms  177:32:06  R          8  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-17.job
1454107  ljszatko  science_lms  152:54:49  R          8  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-17c.job
1452088  stewart   science_lms  249:33:40  R          8  Tp-RhCNCH3-BMK-singlet-correct-triplet-freq-redo-arenz+calcall.
1454203  stewart   science_lms  151:18:14  R          8  ts-activation-c-a-bmk-singlet-redo-arenz-modred-rcfc.job
1451868  stewart   science_lms  268:09:37  R          8  tp-rhcnneo-bmk-singlet-triplet-correct-redo-arenz.job
1460445  cmy2014   short          0:04:41  R          1  PrefixSum_mpi_1

Idle Jobs

Job ID   Owner     Queue        Status  CPUs  Job Name
1458419  y0m4156   long         Q         64  Mario-MOF
1456842  jaison    medium       Q          1  mat_job
1460446  jaycaz    short        Q         64  psum_mpi.job

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
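
For reference, here is a minimal sketch of such a job script, assuming the PBS-style directives used on Eos; the job name, resource requests, and executable are illustrative placeholders, not recommended values:

    #!/bin/bash
    #PBS -N example_job         # job name (placeholder)
    #PBS -l nodes=1:ppn=8       # one node, eight cores per node
    #PBS -l walltime=12:00:00   # explicit walltime; omit it and the 1-hour default applies
    # No "#PBS -q" directive: the job enters the regular queue and the
    # batch system routes it to a public execution queue based on the
    # resources requested above.

    cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
    ./my_program                # placeholder executable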

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
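
As an illustration of how these limits determine placement (assuming routing simply matches the requested resources against this table): a job requesting 4 nodes, 32 CPUs, and 48:00:00 of walltime exceeds the 24-hour limit of medium but fits within long (at most 8 nodes, 64 CPUs, 96 hours), so it would be routed to long; the same request with a 12:00:00 walltime would fit medium.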

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     1     1     1     4 |     1     1       1024 |     0    10
medium            Yes      Yes |    21   250     1   500 |   496   526  1600/2048 |  8/50   100
long              Yes      Yes |   106   200     1   250 |   994   994   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     5     0     0     0 |    40    40        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = running jobs
QJob  = queued jobs
UserR = running jobs per user
UserQ = queued jobs per user
PE    = processor equivalent, based on requested resources (i.e., memory)
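
As a worked example of the PE calculation (the 2 GB-per-core figure is an assumption for illustration, not a documented Eos value): on a node with 2 GB of memory per core, a job requesting 1 CPU and 8 GB of memory ties up the memory of four cores, so it counts as max(1, 8/2) = 4 PEs rather than 1.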

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is ineligible for scheduling consideration.
