Eos System Status
652 of 2500 CPUs Active (26.08%)
85 of 289 Nodes Active (29.41%)
55 running jobs, 51 queued jobs
Last updated: 11:00 PM Apr 26, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                    Owner     Queue        Walltime  Status  CPUs
1507321  pamc                                        hgk       helmut       41:59:55  R          8
1507334  pamc                                        hgk       helmut       41:26:06  R          8
1507335  pamc                                        hgk       helmut       41:23:40  R          8
1507336  pamc                                        hgk       helmut       41:22:56  R          8
1507337  pamc                                        hgk       helmut       41:19:38  R          8
1507338  pamc                                        hgk       helmut       41:13:56  R          8
1507339  pamc                                        hgk       helmut       41:08:18  R          8
1507340  pamc                                        hgk       helmut       41:05:35  R          8
1507341  pamc                                        hgk       helmut       41:00:07  R          8
1507342  pamc                                        hgk       helmut       40:57:57  R          8
1507343  pamc                                        hgk       helmut       40:47:38  R          8
1508457  pamc                                        hgk       helmut       42:02:00  R          8
1507333  pamc                                        hgk       helmut       41:27:06  R          8
1507332  pamc                                        hgk       helmut       41:32:16  R          8
1507322  pamc                                        hgk       helmut       41:58:19  R          8
1507323  pamc                                        hgk       helmut       41:56:52  R          8
1507324  pamc                                        hgk       helmut       41:55:50  R          8
1507325  pamc                                        hgk       helmut       41:53:54  R          8
1507327  pamc                                        hgk       helmut       41:51:30  R          8
1507326  pamc                                        hgk       helmut       41:52:30  R          8
1507328  pamc                                        hgk       helmut       41:46:39  R          8
1507329  pamc                                        hgk       helmut       41:44:59  R          8
1507330  pamc                                        hgk       helmut       41:39:06  R          8
1507331  pamc                                        hgk       helmut       41:34:16  R          8
1509694  2DConFace102                                bodetoks  long          0:15:08  R          8
1509196  C_CpiPr_parH3_TS_wB-6-SDD_THF_10-1.job      ljszatko  long         59:26:43  R          8
1509198  C_CpiPr_parH3_TS_wB-6-SDD_THF_11-r.job      ljszatko  long         59:26:39  R          8
1509199  C_CpiPr_parH3_TS_wB-6-SDD_THF_11-1.job      ljszatko  long         59:26:40  R          8
1509206  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_12.job       ljszatko  long         59:16:01  R          8
1509207  C_CpiPr_parH3_TS_wB-6-SDD_MeCN_13.job       ljszatko  long         59:16:04  R          8
1509585  mar_friction_0_Xmax4                        marouen   long         17:50:51  R          1
1509584  mar_friction_0_9_Xmax4                      marouen   long         17:51:02  R          1
1509583  mar_sigma_50_Xmax4                          marouen   long         18:16:17  R          1
1509597  ru1-B97D-C2H4-ts-opt-freq-1.job             tanghao   long         10:10:48  R          8
1509635  ru1-PBE0-C2H4-ts-sp.job                     tanghao   long          5:57:04  R          8
1509460  M16-05n400K                                 termpanp  long         33:06:01  R          8
1509140  R3-10nRes                                   termpanp  long         60:00:43  R          5
1509403  M16-05n400K                                 termpanp  long         46:58:54  R          8
1509459  M16-05n400K                                 termpanp  long         33:05:28  R          8
1509516  M16-05n400K                                 termpanp  long         30:31:33  R          8
1509518  M16-05n400K                                 termpanp  long         30:30:40  R          8
1509517  M16-05n400K                                 termpanp  long         30:31:24  R          8
1509656  Co3_cis_c6h5_B_ts_elimination_wb97xd_ccpvdz_lanl2dz_3th.job  haixia    medium        5:22:10  R          8
1509592  daily_pi                                    jh11ae    medium       13:48:09  R        256
1509611  avgsands                                    henry92   medium        6:44:46  R          8
1509681  case.out                                    vinayaks  medium        4:06:45  R          3
1509682  case.out                                    vinayaks  medium        4:06:02  R          3
1509685  case.out                                    vinayaks  medium        3:58:54  R          3
1509683  case.out                                    vinayaks  medium        4:05:59  R          3
1509697  mpi_run_1cpu_64                             yayzman   medium        0:03:27  R          8
1509696  serial_run_64                               yayzman   medium        0:03:37  R          8
1509670  ru_nhch2_hco2_ts_h_m06_631gp_sdd.job        haixia    science_lms   4:08:28  R          8
1509665  mru_4h_cis_ohh2o_ts_nh_m06_631gp_sdd.job    haixia    science_lms   4:33:21  R          8
1509662  mru_4h_ohh2o_ts_nh_m06_631gp_sdd.job        haixia    science_lms   4:41:20  R          8
1509699  sssp_mpi_16                                 nkj255    short         0:02:47  R         16

Idle Jobs

Job ID   Job Name                                    Owner     Queue        Status  CPUs
1507344  pamc                                        hgk       helmut       Q          8
1508467  pamc                                        hgk       helmut       Q          8
1508469  pamc                                        hgk       helmut       Q          8
1508470  pamc                                        hgk       helmut       Q          8
1508471  pamc                                        hgk       helmut       Q          8
1508472  pamc                                        hgk       helmut       Q          8
1508473  pamc                                        hgk       helmut       Q          8
1508474  pamc                                        hgk       helmut       Q          8
1508475  pamc                                        hgk       helmut       Q          8
1508468  pamc                                        hgk       helmut       Q          8
1508466  pamc                                        hgk       helmut       Q          8
1508458  pamc                                        hgk       helmut       Q          8
1508459  pamc                                        hgk       helmut       Q          8
1508460  pamc                                        hgk       helmut       Q          8
1508461  pamc                                        hgk       helmut       Q          8
1508462  pamc                                        hgk       helmut       Q          8
1508463  pamc                                        hgk       helmut       Q          8
1508464  pamc                                        hgk       helmut       Q          8
1508465  pamc                                        hgk       helmut       Q          8
1508476  pamc                                        hgk       helmut       Q          8
1508477  pamc                                        hgk       helmut       Q          8
1508478  pamc                                        hgk       helmut       Q          8
1508490  pamc                                        hgk       helmut       Q          8
1508491  pamc                                        hgk       helmut       Q          8
1508492  pamc                                        hgk       helmut       Q          8
1508493  pamc                                        hgk       helmut       Q          8
1508494  pamc                                        hgk       helmut       Q          8
1508495  pamc                                        hgk       helmut       Q          8
1508496  pamc                                        hgk       helmut       Q          8
1508485  pamc                                        hgk       helmut       Q          8
1508489  pamc                                        hgk       helmut       Q          8
1508488  pamc                                        hgk       helmut       Q          8
1508487  pamc                                        hgk       helmut       Q          8
1508480  pamc                                        hgk       helmut       Q          8
1508481  pamc                                        hgk       helmut       Q          8
1508482  pamc                                        hgk       helmut       Q          8
1508484  pamc                                        hgk       helmut       Q          8
1508483  pamc                                        hgk       helmut       Q          8
1508486  pamc                                        hgk       helmut       Q          8
1508479  pamc                                        hgk       helmut       Q          8
1488413  vivtest1                                    chetna    long         Q          1
1488441  vivtest1                                    chetna    long         Q          1
1488396  vivtest1                                    chetna    long         Q          1
1508931  TL4_NCHRP_22_20_2                           lsmaddah  long         Q         16
1507218  fluentjob                                   li3902    medium       Q          4
1507217  fluentjob                                   li3902    medium       Q          4
1507571  starccm+                                    nnied     medium       Q          8
1507536  starccm+                                    nnied     medium       Q          8
1507624  yohascript.job                              yohannes  medium       Q          1
1507626  yohascript.job                              yohannes  short        Q         12
1491284  seq1                                        zjh08177  short        Q          1

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is first directed to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

If the job script does not specify a walltime limit, a default limit of 1 hour applies in all public queues.
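
For illustration, here is a minimal job script sketch, assuming Eos's PBS-style batch directives; the job name, the executable, and the 8-cores-per-node request are placeholder assumptions. No queue is named, so the job is routed through regular, and the walltime is stated explicitly to avoid the 1-hour default:

    #!/bin/bash
    #PBS -N example_job            # job name as it would appear in the listings above (placeholder)
    #PBS -l nodes=1:ppn=8          # one node, 8 cores per node (assumed layout)
    #PBS -l walltime=12:00:00      # omit this line and the 1-hour default applies
    # No "#PBS -q ..." line: the batch system picks the execution queue itself.

    cd $PBS_O_WORKDIR
    ./my_program > run.log         # placeholder executable

With these requests the job should land in the medium queue, since 12 hours exceeds the short queue's limit but fits within medium's 24-hour cap (see the per-job limits below).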

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
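
Read against the table above, a few example resource requests and where they fit (a sketch, assuming 8 cores per node, as most of the running jobs above request; the actual routing decision belongs to the batch system):

    # Fits medium: 4 nodes x 8 cores = 32 CPUs, 12:00:00 <= 24:00:00
    #PBS -l nodes=4:ppn=8,walltime=12:00:00

    # Fits long: 1 node, 60:00:00 <= 96:00:00 (long allows at most 8 nodes / 64 CPUs)
    #PBS -l nodes=1:ppn=8,walltime=60:00:00

    # Fits no public queue: 72:00:00 exceeds medium's 24-hour limit and 16 nodes exceed long's 8-node cap
    #PBS -l nodes=16:ppn=8,walltime=72:00:00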

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     1     1     2     4 |    16    16       1024 |     0    10
medium            Yes      Yes |     9   250     5   500 |   300   306  1600/2048 |  8/50   100
long              Yes      Yes |    18   200     4   250 |   120   122   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |    24     0    40     0 |   192   192     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     3     0     0     0 |    24    24        160 |     0   100
tamug             Yes      Yes |     0     0     0     0 |     0     0        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs per user.
UserQ = Queued jobs per user.
PE    = Processor equivalent, based on requested resources (e.g., memory).

Queued jobs that exceed a queued-job limit (queue-wide or per-user) are
ineligible for scheduling consideration.
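
Before submitting a large batch of jobs, it can therefore help to compare your current job counts in a queue against the Max UserR and Max UserQ columns above. A quick check along these lines works (a sketch, assuming the Torque-style qselect utility is available on the login node):

    # Jobs owned by the current user in the medium queue, by state
    qselect -u $USER -q medium -s Q | wc -l    # queued jobs; compare against Max UserQ (100 for medium)
    qselect -u $USER -q medium -s R | wc -l    # running jobs; compare against Max UserR (8/50 for medium)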

    55 Active Jobs     652 of 2500 Processors Active (26.08%)
                        85 of  289 Nodes Active      (29.41%)