Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
755 of 2484 CPUs Active (30.39%)
98 of 287 Nodes Active (34.15%)
88 running jobs, 18 queued jobs
Last updated: 12:40PM Mar 28, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID  Job Name  Owner  Queue  Walltime  Status  CPUs
1486202  tmatrix  bilei  atmo  22:32:20  R  8
1486201  tmatrix  bilei  atmo  22:33:09  R  8
1485404  pam  chgk  helmut  4:25:32  R  8
1485391  pam  chgk  helmut  6:48:45  R  8
1485392  pam  chgk  helmut  6:45:26  R  8
1485393  pam  chgk  helmut  6:44:55  R  8
1485394  pam  chgk  helmut  6:44:27  R  8
1485395  pam  chgk  helmut  6:43:47  R  8
1485397  pam  chgk  helmut  6:41:07  R  8
1485399  pam  chgk  helmut  6:37:47  R  8
1485396  pam  chgk  helmut  6:42:35  R  8
1485398  pam  chgk  helmut  6:39:57  R  8
1485400  pam  chgk  helmut  6:37:08  R  8
1485401  pam  chgk  helmut  6:29:47  R  8
1485403  pam  chgk  helmut  4:31:38  R  8
1485405  pam  chgk  helmut  4:21:33  R  8
1485390  pam  chgk  helmut  6:50:51  R  8
1485389  pam  chgk  helmut  6:51:36  R  8
1485388  pam  chgk  helmut  6:54:17  R  8
1485402  pam  chgk  helmut  6:29:31  R  8
1485382  pam  chgk  helmut  7:05:59  R  8
1485383  pam  chgk  helmut  7:04:02  R  8
1485384  pam  chgk  helmut  7:03:58  R  8
1485385  pam  chgk  helmut  6:58:26  R  8
1485387  pam  chgk  helmut  6:54:50  R  8
1485386  pam  chgk  helmut  6:57:31  R  8
1486377  tmatrix  bilei  long  20:10:07  R  8
1486409  tmatrix  bilei  long  8:50:23  R  8
1486367  tmatrix  bilei  long  20:21:16  R  8
1486378  tmatrix  bilei  long  20:08:01  R  8
1486364  tmatrix  bilei  long  22:04:54  R  8
1486365  tmatrix  bilei  long  22:04:47  R  8
1486366  tmatrix  bilei  long  20:23:39  R  8
1486658  R67-L6B_F002  flc2625  long  17:44:40  R  32
1485525  C_CptBuH_MeCNass_TS_B3-6-SDD_GAS_2.job  ljszatko  long  44:38:44  R  8
1481737  C_CpiPrH_MeCNass_S_B3-6-SDD_GAS-f.job  ljszatko  long  92:07:52  R  8
1481750  C_CptBuH_MeCNass_S_B3-6-SDD_GAS.job  ljszatko  long  92:02:19  R  8
1481751  C_CptBuH_MeCNass_S_B3-6-SDD_GAS-f.job  ljszatko  long  92:02:17  R  8
1482880  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_3.job  ljszatko  long  72:48:39  R  8
1486117  C_CpiPr_parH3_TS_B3-6-SDD_THF_3.job  ljszatko  long  24:43:06  R  8
1482933  C_CptBu_THFass_TS_B3-6-SDD_THF_4.job  ljszatko  long  71:51:53  R  8
1482934  C_CpiPrH_THFass_TS_B3-6-SDD_THF-3.job  ljszatko  long  71:48:23  R  8
1482936  C_CpiPrH_THFass_TS_B3-6-SDD_THF_3.job  ljszatko  long  71:48:05  R  8
1482932  C_CptBu_THFass_TS_B3-6-SDD_THF_3.job  ljszatko  long  71:52:02  R  8
1485537  C_CptBu_THFass_TS_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:40  R  8
1485536  C_CptBu_THFass_S_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:12  R  8
1486119  C_CpiPr_parH3_TS_wB-6-SDD_THF_3.job  ljszatko  long  24:42:28  R  8
1486118  C_CpiPr_parH3_TS_wB-6-SDD_THF-3.job  ljszatko  long  24:43:06  R  8
1486116  C_CpiPr_parH3_TS_B3-6-SDD_THF-3.job  ljszatko  long  24:42:22  R  8
1485541  C_CpiPrH_THFass_P_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:32  R  8
1486104  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_88q-r3.job  ljszatko  long  24:56:01  R  8
1486105  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_89-r3.job  ljszatko  long  24:56:14  R  8
1485540  C_CpiPrH_THFass_TS_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:27  R  8
1485539  C_CpiPrH_THFass_S_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:31  R  8
1485538  C_CptBu_THFass_P_B3-6-SDD-GD3BJ.job  ljszatko  long  44:29:39  R  8
1481736  C_CpiPrH_MeCNass_S_B3-6-SDD_GAS.job  ljszatko  long  92:07:58  R  8
1486404  flat2/grid2  nmatula  long  21:59:03  R  1
1486411  flat2/grid3  nmatula  long  21:55:03  R  1
1486632  cap/15f  nmatula  long  18:40:32  R  8
1486376  order/roundinvnoz2/23  nmatula  long  22:01:44  R  1
1486806  Job1  srirvs  long  3:19:58  R  24
1485257  ts-migration-12-ii-k2-bmk-c2c3-singlet-redo-arenz-cont-cont-cons  stewart  long  46:38:27  R  8
1483457  ts-migration-13-bmk-c4c4-singlet-redo-arenz-redo-cont.job  stewart  long  70:25:43  R  8
1482995  ts-migration-13-bmk-c3c5-singlet-redo-arenz-redo-cont-cont.job  stewart  long  71:06:15  R  8
1483459  ts-migration-11-bmk-singlet-redo-arenz-cont.job  stewart  long  70:20:04  R  8
1485421  ts-migration-11-bmk-singlet-redo-arenz-cont-cont-cont-cont.job  stewart  long  45:35:17  R  8
1486234  ts-activation-c-e-k2-bmk-singlet-1-1-v--redo-arenz.job  stewart  long  22:20:55  R  8
1486031  ts-migration-13-bmk-c4c4-redo-1+0.15-redo-arenz.job  stewart  long  26:05:18  R  8
1483600  ts-activation-c1h2-k2-bmk-singlet-new-2-redo-arenz-redo-cont-v+  tanghao  long  65:55:55  R  8
1486759  P6-03nCri1term  panp  long  13:37:14  R  1
1485557  2.job  zhuyj  long  44:17:25  R  8
1485529  1.job  zhuyj  long  44:32:05  R  8
1486503  H2OCO  rivera  lucchese  21:04:43  R  8
1486499  H2OCO  rivera  lucchese  21:05:03  R  8
1486501  H2OCO  rivera  lucchese  21:04:49  R  8
1486500  H2OCO  rivera  lucchese  21:04:53  R  8
1486498  H2OCO  rivera  lucchese  21:05:30  R  8
1486865  real_NOAH_ORI.sub  chen05  medium  0:01:30  R  4
1486866  wrf_NOAH_URB.sub  chen05  medium  0:00:00  R  32
1486813  case.out  mei209  medium  1:49:28  R  3
1486635  3DnsSlot7  mtufts  medium  18:12:38  R  32
1486495  H2OCO  rivera  science_lms  21:06:20  R  8
1486497  H2OCO  rivera  science_lms  21:05:24  R  8
1486493  H2OCO  rivera  science_lms  21:06:32  R  8
1486496  H2OCO  rivera  science_lms  21:05:23  R  8
1486494  H2OCO  rivera  science_lms  21:06:30  R  8
1485219  shift_ts_4_s2_14_opt_freq.job  tseguin  wheeler  47:41:26  R  8
1485420  1st_min_ts_2.job  tseguin  wheeler  45:37:48  R  8

Idle Jobs

Job ID  Job Name  Owner  Queue  Status  CPUs
1481990  Lower  asana  medium  Q  8
1481989  Upper  asana  medium  Q  8
1466030  Fans_3D_del  zhsk4n  medium  Q  1
1456842  mat_job  jaison  medium  Q  1
1480244  psum_mpi_8cpu  jyyz  medium  Q  8
1480245  psum_mpi_16cpu  jyyz  medium  Q  16
1480246  psum_mpi_32cpu  jyyz  medium  Q  32
1480247  psum_mpi_64cpu  jyyz  medium  Q  64
1480274  psum_openmp_32cpu  jyyz  medium  Q  32
1480401  psum_openmp_32cpu  jyyz  medium  Q  32
1480416  psum_openmp_32cpu  jyyz  medium  Q  32
1480562  psum_mpi_8cpu  jyyz  medium  Q  8
1480564  psum_mpi_32cpu  jyyz  medium  Q  32
1480565  psum_mpi_64cpu  jyyz  medium  Q  64
1480604  psum_openmp_32cpu  jyyz  medium  Q  32
1486768  omp3  zjh08177  medium  Q  12
1480587  psum_mpi_8cpu  jyyz  short  Q  8
1479635  ts3.job  tseguin  wheeler  E  8

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is placed initially in the regular (routing) queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
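
For illustration, the following is a minimal sketch of such a job script, assuming a PBS/Torque-style batch system; the job name, resource requests, and program name are hypothetical and not taken from the listings above:

    #!/bin/bash
    #PBS -N example_job              # hypothetical job name
    #PBS -l nodes=1:ppn=8            # request 1 node with 8 cores
    #PBS -l walltime=02:00:00        # omit this line and the 1-hour default applies
    #PBS -j oe                       # merge stdout and stderr into one file
    # No "-q" option is given, so the job enters the regular queue and is then
    # routed to short, medium, or long based on the resources requested above.
    cd $PBS_O_WORKDIR
    ./my_program                     # hypothetical executable

With these hypothetical requests (1 node, 8 CPUs, 2 hours), the job exceeds the short queue's 1-hour walltime limit and would presumably be routed to the medium queue (see the per-queue limits below).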

Per-job limits by queue:
-------------------------

Queue          Min   Max   Max          Max
             Nodes Nodes  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     1     4 |     0     0       1024 |     0    10
medium            Yes      Yes |     4   250    16   500 |    71   111  1600/2048 |  8/50   100
long              Yes      Yes |    46   200     0   250 |   380   388   800/1600 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     2     0     0     0 |    16    16        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |    24     0     0     0 |   192   192     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     5     0     0     0 |    40    40        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     2     0     0     0 |    24    24        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory).
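
As a worked illustration of PE accounting (node size assumed for the example, not an Eos hardware specification): on a node with 8 cores and 24 GB of memory, a job that requests 1 core but 12 GB of memory ties up half of the node's memory, so it would typically be charged max(1, 8 * 12/24) = 4 processor equivalents rather than 1.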

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.

    89 Active Jobs     763 of 2484 Processors Active (30.72%)
                        98 of  287 Nodes Active      (34.15%)