Eos System Status
2182 of 2520 CPUs Active (86.59%)
266 of 292 Nodes Active (91.10%)
68 running jobs, 7 queued jobs
Last updated: 4:20PM Aug 27, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID   Owner     Queue        Walltime  Status  CPUs  Job Name
1569564  chendi    long         16:39:45  R         32  lammps
1569624  haixia    long          5:29:30  R          8  Co4_cl_planar_T_TPSS_6311gdp_631g.job
1569627  haixia    long          5:24:19  R          8  Co4_cl_planar_T_wb97xd_6311gdp_631g.job
1569626  haixia    long          5:24:30  R          8  Co4_cl_tetrahedral_T_wb97xd_6311gdp_631g_freq.job
1569625  haixia    long          5:24:21  R          8  Co4_cl_planar_wb97xd_6311gdp_631g_freq.job
1569619  haixia    long          5:36:48  R          8  Co4_cl_tetrahedral_T_TPSS_6311gdp_631g_freq.job
1569622  haixia    long          5:33:20  R          8  Co4_cl_planar_TPSS_6311gdp_631g_freq.job
1569086  jfrank    long         53:03:03  R          8  AgStudy_Dry_bptz_PF6_3.job
1569084  jfrank    long         53:03:19  R          8  AgStudy_Dry_bppn_PF6_3.job
1569085  jfrank    long         53:03:06  R          8  AgStudy_Dry_bppn_SbF6_3.job
1569662  jfrank    long          2:36:51  R          8  AgStudy_Dry_bppn_BF4_3.job
1569668  jfrank    long          2:28:22  R          8  AgStudy_Dry_bptz_SbF6_3.job
1569082  jfrank    long         53:03:17  R          8  AgStudy_Dry_bmtz_BF4_3.job
1569603  moga      long          6:22:36  R         12  model.job
1569613  moga      long          5:50:54  R         12  model.job
1569612  moga      long          5:52:12  R         12  model.job
1569605  moga      long          6:19:30  R         12  model.job
1569068  moga      long         54:16:08  R         12  model.job
1569611  moga      long          5:52:54  R         12  model.job
1569609  moga      long          5:52:45  R         12  model.job
1569600  moga      long          6:23:33  R         12  model.job
1569704  mohsen    long          0:07:31  R          8  Input1
1569266  naren     long         30:11:15  R         12  Ni1Mo15S33_DBT_H2.job
1569701  posenato  long          0:46:12  R          1  job1.job
1569700  posenato  long          0:51:29  R          1  job1.job
1569282  stewart   long         29:04:26  R          8  ts-migration-scan-12-ae-bmk-singlet-1-cont-cont.job
1568836  stewart   long         77:41:44  R          8  ts-activation-meta-bmk-singlet-4-2-redo-cont-redo-tpss-cont-con
1569618  stewart   long          5:36:47  R          8  tfe-bonding-pme3-singlet-1.job
1569255  tanghao   long         30:40:34  R          8  ts-activation-c-a-bmk-singlet-cont-redo-v+-redo-original-cont-c
1568945  zhuyj     long         70:20:51  R          8  145.job
1569133  rivera    lucchese     50:26:56  R          8  H2OCO
1569131  rivera    lucchese     50:27:08  R          8  H2OCO
1569132  rivera    lucchese     50:27:06  R          8  H2OCO
1569128  rivera    lucchese     50:27:37  R          8  H2OCO
1569129  rivera    lucchese     50:27:35  R          8  H2OCO
1569130  rivera    lucchese     50:27:14  R          8  H2OCO
1569478  alfdoug   medium        3:34:27  R        128  lammps-cdna13_v2
1569316  alfdoug   medium        7:56:58  R        128  lammps-cdna9_v2
1569640  alfdoug   medium        1:15:54  R        128  lammps-cdna14_v2
1569319  alfdoug   medium        6:20:10  R        128  lammps-cdna10_v2
1569315  alfdoug   medium       13:39:46  R        128  lammps-cdna14_v3
1569320  alfdoug   medium        6:17:57  R        128  lammps-cdna11_v2
1569591  dahiyana  medium        6:35:28  R          8  gM2NCH4-2.job
1569589  dahiyana  medium        6:37:12  R          8  gM2NCH4-1.job
1569687  haixia    medium        1:01:52  R          8  Co4_ch2Sime3_TPSS_631dgp_631g_freq.job
1569678  haixia    medium        1:58:14  R          8  Co4_ch2Sime3_T_tetrahedron_TPSS_lanl2dz_631gp_631g_freq.job
1569675  haixia    medium        2:11:48  R          8  Co4_ch2Sime3_TPSS_631gdp_631g.job
1569652  junjiez   medium        0:06:05  R        256  3d-class-4K
1569623  junjiez   medium        5:31:40  R        256  3d-class-4K
1569616  liang     medium        5:40:50  R         40  China_psna_1
1569621  liang     medium        5:34:15  R         40  China_psna_2
1569660  benoit    medium        1:11:20  R         96  wrf_mpi_run_bem
1569602  moga      medium        6:22:52  R         12  model.job
1569631  naren     medium        5:01:49  R         12  Mo16S32_thiophene_A_H2.job
1569632  naren     medium        5:01:53  R         12  Mo16S32_thiophene_3_H2.job
1569464  naren     medium       22:25:10  R         12  gnr4_eu_q1_Au_opt.job
1569680  nitesh    medium        1:08:12  R          8  4f1p_10
1569638  robbin89  medium        0:20:57  R         80  LVFZtest
1569702  coylin    medium        0:17:14  R         80  cplglm
1569648  zj821     medium        3:05:15  R         40  cmaq_s99pah
1569666  haixia    science_lms   2:34:24  R          8  Co4_11_TPSS_6311gdp_631g_2th.job
1569579  haixia    science_lms   7:04:25  R          8  Co4_11_TPSS_6311gdp_631g_freq.job
1569580  haixia    science_lms   7:03:24  R          8  Co4_11_T_TPSS_6311gdp_631g_freq.job
1569688  haixia    science_lms   1:49:15  R          8  Co4_ch2Sime3_T_tetrahedron_TPSS_631dgp_631g_freq.job
1569604  jhenny    tamug         6:19:40  R         16  protoMb.job
1569596  jhenny    tamug         6:26:35  R         16  deuteroMb4.job
1569595  jhenny    tamug         6:30:13  R         16  submit.job
1569587  jhenny    tamug         6:41:10  R         16  deuteroMb2.job

Idle Jobs

Job ID   Owner     Queue        Status  CPUs  Job Name
1569694  alfdoug   medium       Q        128  lammps-cdna11_v3
1569645  alfdoug   medium       Q        128  lammps-cdna12_v3
1569643  alfdoug   medium       Q        128  lammps-cdna9_v3
1569644  alfdoug   medium       Q        128  lammps-cdna10_v3
1569641  alfdoug   medium       Q        128  lammps-cdna12_v2
1569654  andeste   medium       Q        512  DanielJetGaussianDiv-NullStart
1569653  andeste   medium       Q        512  DanielJetGaussianUni-NullStart

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues; a sketch of this routing rule follows the per-job limits table below.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  Cpus     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
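
The routing rule described above can be pictured with a short sketch. This is an illustration only, written in Python and assuming the scheduler simply places a request in the first public queue whose per-job limits accommodate it; the actual routing policy on Eos may consider additional factors. The queue names and limits are taken from the table above; the route() function itself is hypothetical.

    # Illustrative sketch: route a request to the first public queue whose
    # per-job limits (from the table above) can accommodate it.  The real
    # scheduler may apply additional policy; this function is hypothetical.

    PUBLIC_QUEUES = [
        # (name, max nodes, max CPUs, max walltime in hours)
        ("short",  128, 1024,   1),
        ("medium", 128, 1024,  24),
        ("long",     8,   64,  96),
    ]

    def route(nodes: int, cpus: int, walltime_hours: float) -> str:
        """Return the public queue a request would land in, or raise if none fits."""
        for name, max_nodes, max_cpus, max_wall in PUBLIC_QUEUES:
            if nodes <= max_nodes and cpus <= max_cpus and walltime_hours <= max_wall:
                return name
        raise ValueError("request exceeds every public queue's per-job limits")

    if __name__ == "__main__":
        # A 1-node, 8-CPU, 50-hour request only fits the long queue.
        print(route(nodes=1, cpus=8, walltime_hours=50))   # -> long
        # With no walltime specified, the 1-hour default lands the same request in short.
        print(route(nodes=1, cpus=8, walltime_hours=1))    # -> short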

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    24   250     7   500 |  1752  1986  1600/2048 |  8/50   100
long              Yes      Yes |    30   200     0   250 |   286   291   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     6     0     0     0 |    48    54         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     4     0     0     0 |    32    32        160 |     0   100
tamug             Yes      Yes |     4     0     0     0 |    64    64        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory).
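
The PE figure can be read as "how many processors a job effectively ties up." As a rough illustration, assuming the common Moab-style definition in which a job's PE is its CPU count unless its memory request claims a larger share of a node (the per-node core and memory figures below are assumptions for illustration, not values taken from this page):

    # Hedged sketch of a Moab-style "processor equivalent" (PE) calculation:
    # a job's PE is its CPU count, unless its memory request would tie up a
    # larger fraction of a node.  Per-node figures here are assumed, not
    # taken from this status page.

    NODE_CORES = 8        # assumed cores per node
    NODE_MEM_GB = 24.0    # assumed memory per node (GB)

    def processor_equivalent(cpus: int, mem_gb: float) -> float:
        """PE = max(CPUs requested, memory requested expressed as core shares)."""
        mem_per_core = NODE_MEM_GB / NODE_CORES   # memory "behind" one core
        return max(cpus, mem_gb / mem_per_core)

    if __name__ == "__main__":
        print(processor_equivalent(cpus=8, mem_gb=12.0))   # 8: the CPU request dominates
        print(processor_equivalent(cpus=8, mem_gb=48.0))   # 16.0: the memory request dominates

This is presumably why the lucchese queue in the status table above shows 54 current PEs against 48 current CPUs: some of those jobs appear to request more memory than their per-core share.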

Any queued jobs exceeding a queued job limit (queue-wide or user) will
be ineligible for scheduling consideration.
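
The eligibility rule can likewise be expressed as a simple per-queue and per-user check. This is a hedged sketch of the stated rule, not the scheduler's actual implementation; the limit values are the medium-queue figures (Max QJob and Max UserQ) from the table above.

    # Hedged sketch of the rule above: a queued job is not considered for
    # scheduling if the queue-wide queued-job limit (Max QJob) or the
    # per-user queued-job limit (Max UserQ) is exceeded.
    # Limits below are the medium-queue figures from the status table.

    MAX_QJOB = 500    # queue-wide limit on queued jobs (medium)
    MAX_USERQ = 100   # per-user limit on queued jobs (medium)

    def eligible_for_scheduling(queue_queued: int, user_queued: int) -> bool:
        """True while both queued-job counts stay within their limits."""
        return queue_queued <= MAX_QJOB and user_queued <= MAX_USERQ

    if __name__ == "__main__":
        print(eligible_for_scheduling(queue_queued=7, user_queued=5))     # True
        print(eligible_for_scheduling(queue_queued=7, user_queued=150))   # False: over Max UserQ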

    68 Active Jobs    2182 of 2520 Processors Active (86.59%)
                       266 of  292 Nodes Active      (91.10%)