Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
567 of 2484 CPUs Active (22.83%)
74 of 287 Nodes Active (25.78%)
67 running jobs, 128 queued jobs
Last updated: 9:20 PM, Mar 29, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                        Owner     Queue        Walltime  Status  CPUs
1487421  pamchgk                                                   helmut       12:23:29  R          8
1487435  pamchgk                                                   helmut       12:15:12  R          8
1487436  pamchgk                                                   helmut       12:14:01  R          8
1487437  pamchgk                                                   helmut       12:14:07  R          8
1487430  pamchgk                                                   helmut       12:18:11  R          8
1487431  pamchgk                                                   helmut       12:17:11  R          8
1487432  pamchgk                                                   helmut       12:16:34  R          8
1487433  pamchgk                                                   helmut       12:16:03  R          8
1487427  pamchgk                                                   helmut       12:19:40  R          8
1487428  pamchgk                                                   helmut       12:18:51  R          8
1487429  pamchgk                                                   helmut       12:18:29  R          8
1487422  pamchgk                                                   helmut       12:23:00  R          8
1487423  pamchgk                                                   helmut       12:22:11  R          8
1487424  pamchgk                                                   helmut       12:20:47  R          8
1487418  pamchgk                                                   helmut       12:26:14  R          8
1487419  pamchgk                                                   helmut       12:25:36  R          8
1487434  pamchgk                                                   helmut       12:16:06  R          8
1487441  pamchgk                                                   helmut       10:46:02  R          8
1487426  pamchgk                                                   helmut       12:20:29  R          8
1487440  pamchgk                                                   helmut       10:47:37  R          8
1487420  pamchgk                                                   helmut       12:25:34  R          8
1487425  pamchgk                                                   helmut       12:20:09  R          8
1487439  pamchgk                                                   helmut       10:47:30  R          8
1487438  pamchgk                                                   helmut       12:12:31  R          8
1488668  exoDCPD_T1.job                                  adi       long         2:04:06   R          8
1488473  vivtest1                                        chetna    long         4:43:28   R          1
1486105  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_89-r3.job         ljszatko  long         57:35:14  R          8
1486117  C_CpiPr_parH3_TS_B3-6-SDD_THF_3.job             ljszatko  long         57:22:51  R          8
1485525  C_CptBuH_MeCNass_TS_B3-6-SDD_GAS_2.job          ljszatko  long         77:19:14  R          8
1486119  C_CpiPr_parH3_TS_wB-6-SDD_THF_3.job             ljszatko  long         57:22:58  R          8
1486104  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_88q-r3.job        ljszatko  long         57:35:01  R          8
1486118  C_CpiPr_parH3_TS_wB-6-SDD_THF-3.job             ljszatko  long         57:22:51  R          8
1486116  C_CpiPr_parH3_TS_B3-6-SDD_THF-3.job             ljszatko  long         57:22:52  R          8
1488297  Forsupercomp_Interp6                            nandita   long         10:30:39  R          8
1486411  flat2/grid3                                     nmatula   long         54:35:34  R          1
1488628  cap/15m                                         nmatula   long         2:14:56   R          8
1486376  order/roundinvnoz2/23                           nmatula   long         54:42:14  R          1
1488626  order/roundinvnoz2/22                           nmatula   long         2:17:25   R          1
1486404  flat2/grid2                                     nmatula   long         54:39:33  R          1
1486632  cap/15f                                         nmatula   long         51:20:17  R          8
1486806  Job1                                            srirvs    long         35:59:43  R         24
1486888  tp-rhcnneo-bmk-singlet-triplet-correct-redo-arenz-cont-cont-con   stewart   long         31:33:03  R          8
1485421  ts-migration-11-bmk-singlet-redo-arenz-cont-cont-cont-cont.job    stewart   long         78:14:18  R          8
1486900  ts-migration-13-bmk-c4c4-singlet-redo-arenz-cont-cont-cont.job    stewart   long         31:29:23  R          8
1486906  ts-activation-c2h4-k2-bmk-singlet-new-2-redo-arenz-redo-cont-co   tanghao   long         31:18:41  R          8
1486907  ts-activation-c4h8-k2-bmk-singlet-new-2-redo-arenz-vtight-cont.   tanghao   long         31:16:50  R          8
1485557  2.job                                           zhuyj     long         76:57:56  R          8
1485529  1.job                                           zhuyj     long         77:12:35  R          8
1486498  H2OCO                                           rivera    lucchese     53:45:15  R          8
1486501  H2OCO                                           rivera    lucchese     53:45:19  R          8
1486503  H2OCO                                           rivera    lucchese     53:45:13  R          8
1486499  H2OCO                                           rivera    lucchese     53:45:33  R          8
1486500  H2OCO                                           rivera    lucchese     53:45:23  R          8
1488572  wrf_NOAH.sub                                    chen05    medium       3:43:14   R         32
1488790  wrf_NOAH_URB.sub                                chen05    medium       0:18:10   R         32
1488539  mru_4h_hco2_ts_h2_hco2h_m06_631gp_sdd_3th.job   haixia    medium       3:49:21   R          8
1488798  ps_mp1                                          jlgeer    medium       0:00:00   R          8
1488795  prefix_seq_1                                    nkj255    medium       0:07:38   R          8
1488397  multi_phase                                     tcd8922   medium       6:34:21   R         24
1488383  smk_rdr_4                                       tcsdcn    medium       7:01:25   R          1
1488368  smk_rdn1_4                                      tcsdcn    medium       7:15:51   R          1
1488630  main_2longPatches_5nodes2150                    vahidt    medium       2:13:00   R          8
1486494  H2OCO                                           rivera    science_lms  53:45:30  R          8
1486493  H2OCO                                           rivera    science_lms  53:45:32  R          8
1486497  H2OCO                                           rivera    science_lms  53:45:54  R          8
1486495  H2OCO                                           rivera    science_lms  53:46:05  R          8
1486496  H2OCO                                           rivera    science_lms  53:45:53  R          8

Idle Jobs

Job ID   Job Name                            Owner     Queue     Status  CPUs
1487478  pamchgk                                       helmut    Q          8
1487466  pamchgk                                       helmut    Q          8
1487467  pamchgk                                       helmut    Q          8
1487468  pamchgk                                       helmut    Q          8
1487469  pamchgk                                       helmut    Q          8
1487470  pamchgk                                       helmut    Q          8
1487471  pamchgk                                       helmut    Q          8
1487472  pamchgk                                       helmut    Q          8
1487473  pamchgk                                       helmut    Q          8
1487474  pamchgk                                       helmut    Q          8
1487475  pamchgk                                       helmut    Q          8
1487476  pamchgk                                       helmut    Q          8
1487465  pamchgk                                       helmut    Q          8
1487487  pamchgk                                       helmut    Q          8
1487479  pamchgk                                       helmut    Q          8
1487480  pamchgk                                       helmut    Q          8
1487463  pamchgk                                       helmut    Q          8
1487481  pamchgk                                       helmut    Q          8
1487489  pamchgk                                       helmut    Q          8
1487488  pamchgk                                       helmut    Q          8
1487482  pamchgk                                       helmut    Q          8
1487483  pamchgk                                       helmut    Q          8
1487484  pamchgk                                       helmut    Q          8
1487485  pamchgk                                       helmut    Q          8
1487477  pamchgk                                       helmut    Q          8
1487461  pamchgk                                       helmut    Q          8
1487462  pamchgk                                       helmut    Q          8
1487455  pamchgk                                       helmut    Q          8
1487456  pamchgk                                       helmut    Q          8
1487457  pamchgk                                       helmut    Q          8
1487458  pamchgk                                       helmut    Q          8
1487459  pamchgk                                       helmut    Q          8
1487460  pamchgk                                       helmut    Q          8
1487442  pamchgk                                       helmut    Q          8
1487443  pamchgk                                       helmut    Q          8
1487444  pamchgk                                       helmut    Q          8
1487454  pamchgk                                       helmut    Q          8
1487453  pamchgk                                       helmut    Q          8
1487464  pamchgk                                       helmut    Q          8
1487446  pamchgk                                       helmut    Q          8
1487447  pamchgk                                       helmut    Q          8
1487445  pamchgk                                       helmut    Q          8
1487448  pamchgk                                       helmut    Q          8
1487449  pamchgk                                       helmut    Q          8
1487450  pamchgk                                       helmut    Q          8
1487451  pamchgk                                       helmut    Q          8
1487452  pamchgk                                       helmut    Q          8
1487486  pamchgk                                       helmut    Q          8
1488441  vivtest1                            chetna    long      Q          1
1488413  vivtest1                            chetna    long      Q          1
1488396  vivtest1                            chetna    long      Q          1
1481989  Upper                               asana     medium    Q          8
1481990  Lower                               asana     medium    Q          8
1488797  PrefixSum_openMP                    cmy2014   medium    C          8
1488014  psum_openmp_4cpu                    danielzh  medium    Q          4
1488022  psum_openmp_1cpu                    danielzh  medium    Q          1
1488004  psum_openmp_4cpu                    danielzh  medium    Q          4
1488013  psum_openmp_2cpu                    danielzh  medium    Q          2
1488002  psum_openmp_1cpu                    danielzh  medium    Q          1
1488003  psum_openmp_2cpu                    danielzh  medium    Q          2
1487987  psum_openmp_1cpu                    danielzh  medium    Q          1
1487988  psum_openmp_2cpu                    danielzh  medium    Q          2
1487989  psum_openmp_4cpu                    danielzh  medium    Q          4
1487978  psum_openmp_2cpu                    danielzh  medium    Q          2
1487979  psum_openmp_4cpu                    danielzh  medium    Q          4
1488061  psum_mpi_4cpu                       danielzh  medium    Q          4
1488060  psum_openmp_4cpu                    danielzh  medium    Q          4
1488012  psum_openmp_1cpu                    danielzh  medium    Q          1
1487052  psum_mpi_32cpu                      danielzh  medium    Q          8
1487055  psum_mpi_32cpu                      danielzh  medium    Q          8
1487054  psum_mpi_32cpu                      danielzh  medium    Q          8
1487053  psum_mpi_32cpu                      danielzh  medium    Q          8
1487083  psum_mpi_32cpu                      danielzh  medium    Q          8
1487057  psum_mpi_32cpu                      danielzh  medium    Q          8
1487056  psum_mpi_32cpu                      danielzh  medium    Q          8
1488062  psum_mpi_8cpu                       danielzh  medium    Q          8
1488023  psum_openmp_2cpu                    danielzh  medium    Q          2
1488024  psum_openmp_4cpu                    danielzh  medium    Q          4
1487885  psum_mpi_1cpu                       danielzh  medium    Q          1
1487960  psum_openmp_2cpu                    danielzh  medium    Q          2
1487961  psum_openmp_4cpu                    danielzh  medium    Q          4
1487910  psum_openmp_1cpu                    danielzh  medium    Q          1
1487911  psum_openmp_2cpu                    danielzh  medium    Q          2
1487912  psum_openmp_4cpu                    danielzh  medium    Q          4
1487898  psum_mpi_1cpu                       danielzh  medium    Q          1
1487886  psum_mpi_1cpu                       danielzh  medium    Q          1
1487900  psum_mpi_4cpu                       danielzh  medium    Q          4
1487901  psum_mpi_8cpu                       danielzh  medium    Q          8
1487888  psum_mpi_1cpu                       danielzh  medium    Q          1
1487889  psum_mpi_1cpu                       danielzh  medium    Q          1
1487890  psum_mpi_2cpu                       danielzh  medium    Q          2
1487899  psum_mpi_2cpu                       danielzh  medium    Q          2
1487926  psum_mpi_8cpu                       danielzh  medium    Q          8
1487920  psum_mpi_8cpu                       danielzh  medium    Q          8
1487959  psum_openmp_1cpu                    danielzh  medium    Q          1
1487941  psum_openmp_4cpu                    danielzh  medium    Q          4
1487977  psum_openmp_1cpu                    danielzh  medium    Q          1
1487949  psum_openmp_1cpu                    danielzh  medium    Q          1
1487950  psum_openmp_2cpu                    danielzh  medium    Q          2
1487951  psum_openmp_4cpu                    danielzh  medium    Q          4
1487939  psum_openmp_1cpu                    danielzh  medium    Q          1
1487940  psum_openmp_2cpu                    danielzh  medium    Q          2
1487928  psum_openmp_1cpu                    danielzh  medium    Q          1
1487929  psum_openmp_2cpu                    danielzh  medium    Q          2
1487930  psum_openmp_4cpu                    danielzh  medium    Q          4
1466030  Fans_3D_del                         zhsk4n    medium    Q          1
1456842  mat_job                             jaison    medium    Q          1
1480274  psum_openmp_32cpu                   jyyz      medium    Q         32
1480247  psum_mpi_64cpu                      jyyz      medium    Q         64
1480401  psum_openmp_32cpu                   jyyz      medium    Q         32
1480416  psum_openmp_32cpu                   jyyz      medium    Q         32
1480564  psum_mpi_32cpu                      jyyz      medium    Q         32
1480565  psum_mpi_64cpu                      jyyz      medium    Q         64
1480604  psum_openmp_32cpu                   jyyz      medium    Q         32
1480246  psum_mpi_32cpu                      jyyz      medium    Q         32
1480245  psum_mpi_16cpu                      jyyz      medium    Q         16
1480244  psum_mpi_8cpu                       jyyz      medium    Q          8
1480562  psum_mpi_8cpu                       jyyz      medium    Q          8
1488377  R_32_omp_12_3bil                    paulod    medium    Q         12
1488780  R_32_mpi_128_32bil                  paulod    medium    Q        128
1488778  psum_openmp_4cpu                    yunhong   medium    C          4
1488691  job_psum_openmp_16_1000000000.job   dhoxha    regular   Q         16
1487859  openmpjob.job                       sujoys7   regular   Q         12
1487051  psum_mpi_32cpu                      danielzh  short     Q          8
1487039  psum_mpi_32cpu                      danielzh  short     Q          8
1480587  psum_mpi_8cpu                       jyyz      short     Q          8
1486875  final_mpi                           sujoys7   short     Q          1
1479635  ts3.job                             tseguin   wheeler   E          8

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
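For reference, a minimal job script for the public queues might look like the sketch below. It is illustrative only: the job name, node/core request, and executable (my_program) are placeholders, not site-specific documentation. The walltime line is what overrides the 1-hour default described above.

    #!/bin/bash
    #PBS -N example_job              # job name as it appears in the listings above
    #PBS -l nodes=1:ppn=8            # example request: 1 node, 8 cores
    #PBS -l walltime=02:00:00        # omit this line and the 1-hour default applies
    #PBS -j oe                       # merge stdout and stderr into one output file
    # No queue is specified: the job enters the regular queue and is routed to
    # short, medium, or long according to the resources requested above.

    cd $PBS_O_WORKDIR
    ./my_program                     # placeholder executable

A job requesting more than 24 hours of walltime would, under the per-job limits listed below, be routed to the long queue rather than medium.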

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  Cpus     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     2     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     4     4 |     0     0       1024 |     0    10
medium            Yes      Yes |     9   250    68   500 |   115   110  1600/2048 |  8/50   100
long              Yes      Yes |    24   200     3   250 |   173   181   800/1600 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |    24     0    48     0 |   192   192     96/192 |     0   100
lucchese          Yes      Yes |     5     0     0     0 |    40    40         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     5     0     0     0 |    40    40        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     8     8        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent, based on requested resources (i.e., memory); for example, a job
        that requests only one CPU but the memory of several may be charged correspondingly more PEs.

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.

    68 Active Jobs     575 of 2484 Processors Active (23.15%)
                        74 of  287 Nodes Active      (25.78%)