Eos System Status
2004 of 2532 CPUs Active (79.15%)
241 of 293 Nodes Active (82.25%)
88 running jobs, 0 queued jobs
Last updated: 9:10PM Sep 1, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID  | Job Name                                                         | Owner    | Queue       | Walltime | Status | CPUs
1570661 | job1.job                                                         | abdokotb | long        | 28:58:39 | R      | 1
1570793 | TCPD_CPexo_exoDCPD.job                                           | adi      | long        | 20:19:14 | R      | 8
1570740 | TCPD_CPendo_endoDCPD.job                                         | adi      | long        | 24:25:43 | R      | 8
1570914 | Co4_TS1_TPSS_631gdp_631g_opt.job                                 | haixia   | long        | 5:29:05  | R      | 8
1570983 | Co4_9_TPSS_631gdp_631g_opt_2th.job                               | haixia   | long        | 1:24:12  | R      | 8
1570964 | Co4_10_TPSS_631gdp_631g_opt_2th.job                              | haixia   | long        | 3:31:59  | R      | 8
1570965 | Co4_23_TPSS_631gdp_631g_opt_freq.job                             | haixia   | long        | 3:29:53  | R      | 8
1570962 | Co4_TS6_TPSS_631gdp_631g_opt.job                                 | haixia   | long        | 3:38:43  | R      | 8
1570785 | Co4_TS11_TPSS_631gdp_631g_opt.job                                | haixia   | long        | 22:28:47 | R      | 8
1570767 | Co4_20_TPSS_631gdp_631g_opt_freq.job                             | haixia   | long        | 23:10:00 | R      | 8
1570915 | Co4_TS2_TPSS_631gdp_631g_opt.job                                 | haixia   | long        | 5:29:05  | R      | 8
1570642 | AgStudy_Dry_bptz_PF6_4.job                                       | jfrank   | long        | 31:51:46 | R      | 8
1570637 | AgStudy_Dry_bppn_PF6_4.job                                       | jfrank   | long        | 32:12:43 | R      | 8
1570622 | AgStudy_Solv_Ag_bmtz_BF4_POptBenzene.job                         | jfrank   | long        | 33:48:10 | R      | 8
1570620 | AgStudy_Solv_Ag_bmtz_BF4_POptAll.job                             | jfrank   | long        | 33:48:22 | R      | 8
1570820 | AgStudy_Dry_bptz_SbF6_4.job                                      | jfrank   | long        | 8:51:15  | R      | 8
1570632 | AgStudy_Dry_bppn_SbF6_2.job                                      | jfrank   | long        | 32:38:21 | R      | 8
1570848 | AgStudy_Dry_bppn_BF4_4.job                                       | jfrank   | long        | 8:18:26  | R      | 8
1571054 | deuteroMb5.job                                                   | jhenny   | long        | 0:35:12  | R      | 16
1570753 | China_psna_2                                                     | liang    | long        | 23:33:02 | R      | 40
1570787 | China_psna_5                                                     | liang    | long        | 21:19:28 | R      | 40
1570748 | China_psna_1                                                     | liang    | long        | 23:47:23 | R      | 40
1570789 | China_psna_7                                                     | liang    | long        | 18:58:25 | R      | 40
1570784 | China_psna_4                                                     | liang    | long        | 21:19:26 | R      | 40
1570788 | China_psna_6                                                     | liang    | long        | 19:09:08 | R      | 40
1570783 | China_psna_3                                                     | liang    | long        | 21:19:28 | R      | 40
1570816 | model.job                                                        | moga     | long        | 10:04:13 | R      | 12
1570896 | model.job                                                        | moga     | long        | 5:56:21  | R      | 12
1570894 | model.job                                                        | moga     | long        | 6:05:30  | R      | 12
1570817 | model.job                                                        | moga     | long        | 10:01:23 | R      | 12
1570891 | model.job                                                        | moga     | long        | 6:07:56  | R      | 8
1570893 | model.job                                                        | moga     | long        | 6:05:22  | R      | 12
1570840 | model.job                                                        | moga     | long        | 7:06:46  | R      | 12
1570942 | model.job                                                        | moga     | long        | 4:28:39  | R      | 12
1570948 | model.job                                                        | moga     | long        | 4:13:32  | R      | 12
1570947 | model.job                                                        | moga     | long        | 4:15:23  | R      | 12
1570827 | model.job                                                        | moga     | long        | 7:12:45  | R      | 12
1570628 | rar_anneal_long                                                  | rarensu  | long        | 33:38:57 | R      | 64
1570629 | rar_anneal_long                                                  | rarensu  | long        | 33:36:48 | R      | 64
1570631 | rar_anneal_long                                                  | rarensu  | long        | 33:22:47 | R      | 84
1570630 | rar_anneal_long                                                  | rarensu  | long        | 33:27:49 | R      | 64
1570218 | ts-activation-meta-bmk-singlet-1-2-redo-redo-redo-cont-redo-tpss | stewart  | long        | 58:22:20 | R      | 8
1570036 | T_CASSCF_14e_13orb_Re0_sdd_f_from_dft_sp.job                     | tanghao  | long        | 77:38:11 | R      | 8
1570819 | 145-3.job                                                        | zhuyj    | long        | 8:51:17  | R      | 8
1570754 | H2OCO                                                            | rivera   | lucchese    | 23:18:55 | R      | 8
1570755 | H2OCO                                                            | rivera   | lucchese    | 23:18:45 | R      | 8
1570756 | H2OCO                                                            | rivera   | lucchese    | 23:19:17 | R      | 8
1570757 | H2OCO                                                            | rivera   | lucchese    | 23:19:04 | R      | 8
1570758 | H2OCO                                                            | rivera   | lucchese    | 23:19:06 | R      | 8
1570760 | H2OCO                                                            | rivera   | lucchese    | 23:18:56 | R      | 8
1570780 | lammps-cdna12_v3                                                 | alfdoug  | medium      | 12:34:28 | R      | 128
1570781 | lammps-cdna13_v3                                                 | alfdoug  | medium      | 8:56:00  | R      | 128
1571058 | Al_9_12                                                          | bettoeos | medium      | 0:25:17  | R      | 32
1571059 | Al_9_10_rep                                                      | bettoeos | medium      | 0:23:48  | R      | 32
1571092 | Al_ref_1_1                                                       | bettoeos | medium      | 0:19:44  | R      | 32
1571057 | Al_9_10                                                          | bettoeos | medium      | 0:29:26  | R      | 32
1571093 | Al_ref_1_2                                                       | bettoeos | medium      | 0:18:01  | R      | 32
1571094 | Al_ref_1_3                                                       | bettoeos | medium      | 0:16:40  | R      | 32
1571115 | Al_ref_2_3                                                       | bettoeos | medium      | 0:16:36  | R      | 32
1570879 | Co4_24_TPSS_631gdp_631g_opt_fix.job                              | haixia   | medium      | 6:42:31  | R      | 8
1570959 | Co4_TS4_TPSS_631gdp_631g_opt_fix.job                             | haixia   | medium      | 3:42:04  | R      | 8
1570957 | Co4_15_TPSS_631gdp_631g_opt_2th_fix.job                          | haixia   | medium      | 3:48:25  | R      | 8
1570982 | Co4_TS3_TPSS_631gdp_631g_freqsp.job                              | haixia   | medium      | 1:26:41  | R      | 8
1570979 | Co4_TS7_TPSS_631gdp_631g_opt_freq.job                            | haixia   | medium      | 1:31:18  | R      | 8
1570984 | Co4_4_TPSS_631gdp_631g_opt_freq.job                              | haixia   | medium      | 1:20:41  | R      | 8
1570832 | Co4_TS5_TPSS_631gdp_631g_opt_fix.job                             | haixia   | medium      | 8:55:59  | R      | 8
1570807 | Co4_22_TPSS_631gdp_631g_opt_fix.job                              | haixia   | medium      | 10:37:25 | R      | 8
1570919 | Co4_25_TPSS_631gdp_631g_opt_2th_fix.job                          | haixia   | medium      | 5:30:43  | R      | 8
1571229 | Input1                                                           | hodad    | medium      | 0:00:00  | R      | 8
1570813 | 3d-class-4K                                                      | junjiez  | medium      | 9:32:19  | R      | 256
1571056 | wrf_mpi_run_bem                                                  | benoit   | medium      | 0:32:39  | R      | 104
1570966 | Mo16S32_thiophene_3_H2.job                                       | naren    | medium      | 2:35:10  | R      | 12
1570967 | Mo16S32_thiophene_A_2H2.job                                      | naren    | medium      | 2:33:17  | R      | 12
1570802 | Mo16S32_thiophene_A_H.job                                        | naren    | medium      | 11:17:55 | R      | 12
1570952 | Q_E750_Y1_H4_Er1_F3_256                                          | sxiao02  | medium      | 4:03:46  | R      | 1
1570950 | Q_E750_Y1_H2_Er1_F3_256                                          | sxiao02  | medium      | 4:04:43  | R      | 1
1570951 | Q_E750_Y1_H3_Er1_F3_256                                          | sxiao02  | medium      | 4:04:08  | R      | 1
1570799 | BTTT_ct.job                                                      | wheeler  | medium      | 11:24:11 | R      | 8
1570859 | Co4_ch2Sime3_T_ts_tetrahedron_TPSS_631gdp_631g.job               | haixia   | science_lms | 6:00:41  | R      | 8
1570208 | Co4_ch2Sime3_T_tetrahedron_TPSS_SDD_6311gp_631g_freq_3th.job     | haixia   | science_lms | 60:18:27 | R      | 8
1570786 | Co4_TS15_TPSS_631gdp_631g_opt.job                                | haixia   | science_lms | 8:57:40  | R      | 8
1570795 | ts-migration-12-ae-bmk-singlet-1-cont-cont-cont.job              | stewart  | science_lms | 12:20:24 | R      | 8
1571226 | deuteroMb5.job                                                   | jhenny   | tamug       | 0:07:30  | R      | 16
1571228 | deuteroMb4.job                                                   | jhenny   | tamug       | 0:06:36  | R      | 16
1571055 | deuteroMb3.job                                                   | jhenny   | tamug       | 0:34:13  | R      | 16
1570860 | mesoMb_job1                                                      | jhenny   | tamug       | 7:42:07  | R      | 16
1570655 | protoMb.job                                                      | jhenny   | tamug       | 29:14:15 | R      | 16
1570961 | deuteroMb2.job                                                   | jhenny   | tamug       | 3:39:54  | R      | 16

Idle Jobs

Job ID | Job Name | Owner | Queue | Status | CPUs
No matching jobs.

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue; the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
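The routing described above can be sketched in a job script. The following is a minimal PBS-style example; the job name, resource values, and commands are illustrative assumptions, not Eos-specific recommendations:

```shell
#!/bin/sh
# Resource request: the batch system uses these values to route the job
# from the regular queue to an execution queue (short/medium/long).
#PBS -N example_job
#PBS -l nodes=1:ppn=8
# Omit the walltime line and the 1-hour public-queue default applies.
#PBS -l walltime=12:00:00
#PBS -j oe

# PBS sets PBS_O_WORKDIR to the submission directory; fall back to $PWD
# so this sketch also runs outside the batch system.
cd "${PBS_O_WORKDIR:-$PWD}" || exit 1
msg="running on $(hostname)"
echo "$msg"
```

Submitted with `qsub`, such a job would then appear under its assigned queue in `qstat -u $USER` output, with status R once it starts running (as in the table above).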

Per job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
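A request can be sanity-checked against these per-job limits before submission. The sketch below hard-codes the long queue's row from the table above (1-8 nodes, 64 CPUs, 96-hour walltime); the request values themselves are hypothetical:

```shell
#!/bin/sh
# Check a hypothetical request against the long queue's per-job limits
# (min 1 node, max 8 nodes, max 64 CPUs, max walltime 96:00:00).
nodes=4
ppn=8
hours=72
cpus=$((nodes * ppn))

ok=yes
[ "$nodes" -ge 1 ] && [ "$nodes" -le 8 ] || ok=no
[ "$cpus" -le 64 ] || ok=no
[ "$hours" -le 96 ] || ok=no
echo "request fits long queue: $ok"
```

The same limits are what the batch system consults when routing a job among the public queues, so a request exceeding every queue's row would simply never start.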

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    28   250     0   500 |   960  1057  1600/2048 |  8/50   100
long              Yes      Yes |    44   200     0   250 |   861   841   800/1024 |  8/40   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     6     0     0     0 |    48    54         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |     4     0     0     0 |    32    32        160 |     0   100
tamug             Yes      Yes |     6     0     0     0 |    96    96        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     0     0     0     0 |     0     0         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory).

Any queued job exceeding a queued-job limit (queue-wide or per-user) is
ineligible for scheduling consideration.
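The PE column can exceed the CPU column when jobs request extra memory. One common scheduler definition, assumed here for illustration (the exact Eos formula and per-core memory are not given in this report), is PE = max(requested CPUs, requested memory / memory per core):

```shell
#!/bin/sh
# Hedged sketch of a processor-equivalent calculation. The 4 GB-per-core
# figure is a hypothetical node configuration, not an Eos value.
cpus=8
mem_gb=64
mem_per_core_gb=4

# Ceiling division: cores' worth of memory the request occupies.
pe=$(( (mem_gb + mem_per_core_gb - 1) / mem_per_core_gb ))
[ "$pe" -lt "$cpus" ] && pe=$cpus
echo "PE = $pe"
```

Under this assumption, an 8-CPU job asking for 64 GB counts as 16 PEs against the queue's PE limits, which is why the Curr PEs column above can differ from Curr CPUs.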
