Eos System Status
2242 of 2584 CPUs Active (86.76%)
281 of 300 Nodes Active (93.67%)
75 running jobs, 7 queued jobs
Last updated: 1:30AM Aug 3, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Job Name                                                         Owner     Queue         Walltime  Status  CPUs
1563189  60Na-40K_SiO2_Script.job                                         binjian   long           2:07:09  R         48
1562929  Copy_SI_2F_2.job                                                 jfrank    long          36:52:19  R          8
1563147  China_edgar_4                                                    liang     long          10:28:38  R         40
1563149  China_reas_t4                                                    liang     long          10:20:14  R         40
1563151  China_meic_3                                                     liang     long           9:40:06  R         40
1563143  China_meic_1                                                     liang     long          10:32:02  R         40
1563133  China_meic_4                                                     liang     long          12:03:30  R         40
1563134  China_reas_t2                                                    liang     long          12:02:48  R         40
1563142  China_reas_t3                                                    liang     long          10:33:20  R         40
1562944  China_edgar_1                                                    liang     long          29:24:08  R         40
1562945  China_reas_t1                                                    liang     long          29:21:38  R         40
1562946  China_soe_s1                                                     liang     long          29:05:23  R         40
1563146  China_edgar_3                                                    liang     long          10:29:13  R         40
1563145  China_edgar_2                                                    liang     long          10:30:24  R         40
1563144  China_meic_2                                                     liang     long          10:31:45  R         40
1563180  China_soe_s3                                                     liang     long           4:11:18  R         40
1563181  China_soe_s4                                                     liang     long           4:11:01  R         40
1563154  China_soe_s2                                                     liang     long           8:58:17  R         40
1562718  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30-4.job                          ljszatko  long          63:58:37  R          8
1562717  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30-3.job                          ljszatko  long          63:58:36  R          8
1562716  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30-2.job                          ljszatko  long          63:58:49  R          8
1562719  Ni-Pi-HS_AN-MeCl_P_B3-6311_DMF_30-5.job                          ljszatko  long          63:58:27  R          8
1563131  GoldBartlett                                                     qzheng    long          12:23:47  R          1
1563170  Gold_R_Bartlett                                                  qzheng    long           5:47:04  R          1
1563041  TS2yB-280.job                                                    rajesh    long          21:05:46  R          8
1563035  TS2yB-220.job                                                    rajesh    long          21:06:06  R          8
1563037  TS2yB-240.job                                                    rajesh    long          21:05:51  R          8
1563042  TS2yB-290.job                                                    rajesh    long          21:06:19  R          8
1563038  TS2yB-250.job                                                    rajesh    long          21:05:51  R          8
1563039  TS2yB-260.job                                                    rajesh    long          21:06:03  R          8
1563040  TS2yB-270.job                                                    rajesh    long          21:05:57  R          8
1563043  TS2yB-300.job                                                    rajesh    long          21:06:24  R          8
1563036  TS2yB-230.job                                                    rajesh    long          21:05:50  R          8
1562793  additive                                                         sberto    long          56:55:26  R         64
1563184  ADD2                                                             sberto    long           4:05:34  R         64
1563064  ts-migration-23-bmk-singlet-cont-cont-redo-arenz-cont.job        stewart   long          14:28:42  R          8
1560441  ts-migration-k2-23-bmk-singlet.job                               stewart   long          87:24:46  R          8
1562713  ts-migration-k2-12-bmk-singlet-redo-arenz.job                    stewart   long          64:08:06  R          8
1560489  ts-migration-k2-13-bmk-singlet-redo-arenz.job                    stewart   long          85:19:57  R          8
1560481  ts-migration-k2-24-bmk-singlet-redo-arenz.job                    stewart   long          85:27:37  R          8
1563128  ts-migration-24-bmk-singlet-cont-redo-arenz-redo-cont.job        stewart   long          12:28:12  R          8
1560473  ts-activation-c4-k2-bmk-singlet-3-1-redo-redo-cont-redo-arenz-r  tanghao   long          86:15:07  R          8
1562925  ts-activation-c3-bmk-singlet-redo-redo-cont-redo-arenz-redo-con  tanghao   long          36:52:22  R          8
1563060  ts-activation-c3-bmk-singlet-redo-redo-cont-redo-arenz-redo-con  tanghao   long          15:36:15  R          8
1563062  ts-activation-c3-bmk-singlet-redo-redo-cont-redo-arenz-redo-con  tanghao   long          15:29:50  R          8
1562765  mbtca1w3b.job                                                    zhuyj     long          61:27:35  R          8
1563138  lammps-cdna14                                                    alfdoug   medium         1:40:16  R        128
1563058  lammps-cdna5                                                     alfdoug   medium         2:37:11  R        128
1563056  lammps-cdna13                                                    alfdoug   medium        14:07:08  R        128
1563054  lammps-cdna14_v2                                                 alfdoug   medium        16:56:28  R        128
1563057  lammps-cdna7                                                     alfdoug   medium         3:02:35  R        128
1563055  lammps-cdna11                                                    alfdoug   medium        14:34:07  R        128
1563053  lammps-cdna8                                                     alfdoug   medium        17:16:45  R        128
1563011  lammps-cdna13_v2                                                 alfdoug   medium        18:50:39  R        128
1563192  lammps                                                           chendi    medium         0:56:30  R         32
1563175  Co3_TS14_T_qst3_wb97xd_ccpvdz_lanl2dz.job                        haixia    medium         4:55:08  R          8
1563153  Co3_18_T_bh_wb97xd_ccpvdz_lanl2dz.job                            haixia    medium         9:33:54  R          8
1563049  TS2y-6.job                                                       rajesh    medium        20:41:27  R          8
1563047  TS2y-4.job                                                       rajesh    medium        21:06:20  R          8
1563044  TS2y-1.job                                                       rajesh    medium        21:05:45  R          8
1563045  TS2y-2.job                                                       rajesh    medium        21:06:12  R          8
1563048  TS2y-5.job                                                       rajesh    medium        20:41:16  R          8
1562813  Co3_H_dimer_openS_wb97xd_ccpvdz_lanl2dz.job                      haixia    science_lms   37:39:06  R          8
1563174  Co3_TS13_T_qst3_wb97xd_ccpvdz_lanl2dz.job                        haixia    science_lms    4:58:28  R          8
1563167  MECP3.job                                                        haixia    science_lms    7:00:16  R          8
1560083  C_CpiPr_H2dp_TS2_wB-6-SDD_95-2.job                               ljszatko  science_lms  111:13:51  R          8
1555496  ts-activation-c-a-bmk-singlet-cont-cont-cont.job                 stewart   science_lms  351:08:24  R          8
1555601  ts-migration-12-ae-k2-bmk-singlet-cont-cont-cont-cont.job        stewart   science_lms  344:29:57  R          8
1560476  ts-activation-c-e-k2-bmk-singlet-cont-cont-cont-v-.job           stewart   science_lms   86:06:07  R          8
1560478  ts-activation-c-e-k2-bmk-singlet-cont-cont-cont-v+.job           stewart   science_lms   86:04:05  R          8
1555603  ts-migration-12-ae-k2-bmk-singlet-cont-cont-cont-cont-redo.job   tanghao   science_lms  344:25:17  R          8
1557721  ts-activation-c-a-k2-bmk-singlet-cont-cont.job                   tanghao   science_lms  273:17:47  R          8
1560142  ts-activation-c-a-bmk-singlet-cont-redo-v--cont-cont.job         tanghao   science_lms  109:27:43  R          8
1563059  ts-activation-c-a-bmk-singlet-cont-redo-v+-cont-cont.job         tanghao   science_lms   15:40:17  R          8
1560468  protoMb_S2.job                                                   jhenny    tamug         86:32:26  R         16

Idle Jobs

Job ID   Job Name                  Owner    Queue   Status  CPUs
1563191  40Na-60K_SiO2_Script.job  binjian  long    Q         48
1563139  lammps-cdna12_v2          alfdoug  medium  Q        128
1563186  wrf_mpi_run               cjnowo   medium  Q        112
1562890  lammps/Test               hhk2006  medium  Q          8
1560178  Input1                    hodad    medium  Q          1
1560190  Input1                    hodad    medium  Q          1
1560193  Input1                    hodad    medium  Q          1

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job first enters the regular routing queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
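As a concrete illustration of the two paragraphs above, a typical PBS job script omits the -q directive entirely and lets the batch system route the job, while requesting a walltime explicitly to avoid the 1-hour default. This is a minimal sketch; the job name, resource request, and payload command are illustrative, not site policy:

```shell
#!/bin/bash
# Hypothetical PBS job script: no -q flag, so the batch system routes the
# job to a public queue (short/medium/long) based on the resources below.
#PBS -N example_job            # illustrative job name
#PBS -l nodes=1:ppn=8          # request 1 node, 8 cores
#PBS -l walltime=02:00:00      # request 2 hours; omit this line and the
                               #   1-hour default walltime applies
#PBS -j oe                     # merge stdout and stderr into one file

# PBS sets PBS_O_WORKDIR to the submission directory; fall back to "."
# so the sketch also runs outside the batch environment.
cd "${PBS_O_WORKDIR:-.}"
msg="Job started on $(hostname)"
echo "$msg"
```

The script would be submitted with qsub; routing to short, medium, or long then depends only on the -l resource request.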

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
             Nodes Nodes  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    16   250     6   500 |  1112  1112  1600/2048 |  8/50   100
long              Yes      Yes |    46   200     1   250 |  1018  1020   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    12     0     0     0 |    96    96        160 |     0   100
tamug             Yes      Yes |     1     0     0     0 |    16    16        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = running jobs in the queue
QJob  = queued jobs in the queue
UserR = running jobs per user
UserQ = queued jobs per user
PE    = processor equivalent, based on requested resources (e.g., memory)

Any queued job that exceeds a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
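The output format above suggests a PBS/Torque batch system with the Moab scheduler; assuming that, the standard client commands can be used to check these limits and job states yourself (command names are from those tools, not from this page):

```shell
qstat -q            # per-queue limits and current run/queue counts
qstat -u $USER      # only your jobs, with their state (R or Q)
checkjob <jobid>    # Moab: explains why a specific job is still idle
```

For example, a job held back by a UserQ limit would show as eligible-but-idle in checkjob output once the limit clears.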
