Texas A&M Supercomputing Facility, Texas A&M University
Eos System Status
1666 of 2612 CPUs Active (63.78%)
212 of 303 Nodes Active (69.97%)
155 running jobs, 1 queued job
Last updated: 1:00 AM, Jan 31, 2015
Ganglia: Detailed TAMU SC cluster and workstation status

Running Jobs

Job ID  Job Name  Owner  Queue  Walltime  Status  CPUs

1437520  job1.job  abdokot  long  58:34:07  R  4
1437523  job4.job  abdokot  long  56:47:03  R  4
1437522  job3.job  abdokot  long  58:33:11  R  4
1438903  C2_frac_x_perm  chi  long  5:59:18  R  32
1438139  R67-L4S00_SST  flc2625  long  14:27:06  R  64
1438778  R67-L4F01_SST  flc2625  long  8:51:15  R  64
1438875  elect_NO_Set3B_naphth.job  jfrank  long  7:07:12  R  8
1438862  elect_NO_Set1B_ClMeF.job  jfrank  long  7:07:55  R  8
1438871  elect_NO_Set2B_phenyl.job  jfrank  long  7:07:45  R  8
1438864  elect_NO_Set1T_2Cl4OH.job  jfrank  long  7:07:43  R  8
1438865  elect_NO_Set1T_2F.job  jfrank  long  7:07:39  R  8
1438868  elect_NO_Set1T_NOOH.job  jfrank  long  7:07:52  R  8
1438867  elect_NO_Set1T_ClMeF.job  jfrank  long  7:07:44  R  8
1438866  elect_NO_Set1T_Cl2.job  jfrank  long  7:07:49  R  8
1438869  elect_NO_Set2B_2Me.job  jfrank  long  7:07:36  R  8
1438876  elect_NO_Set3B_naphthOMe.job  jfrank  long  7:07:04  R  8
1438878  elect_NO_Set3T_naphth.job  jfrank  long  7:06:46  R  8
1438879  elect_NO_Set3T_naphthOMe.job  jfrank  long  7:07:25  R  8
1438880  elect_NO_Set3T_obridge.job  jfrank  long  7:07:00  R  8
1438870  elect_NO_Set2B_5OH.job  jfrank  long  7:07:50  R  8
1438873  elect_NO_Set2T_5OH.job  jfrank  long  7:07:21  R  8
1438874  elect_NO_Set2T_phenyl.job  jfrank  long  7:07:01  R  8
1438861  elect_NO_Set1B_Cl2.job  jfrank  long  7:07:42  R  8
1438860  elect_NO_Set1B_2F.job  jfrank  long  7:07:39  R  8
1438877  elect_NO_Set3B_obridge.job  jfrank  long  7:07:04  R  8
1438872  elect_NO_Set2T_2Me.job  jfrank  long  7:06:51  R  8
1438409  elect_NO_Max_4Me_2.job  jfrank  long  9:49:10  R  8
1438411  elect_NO_Max_ClMeF_2.job  jfrank  long  9:46:14  R  8
1438422  elect_NO_Max_naphthOMe_2.job  jfrank  long  9:33:51  R  8
1438863  elect_NO_Set1B_NOOH.job  jfrank  long  7:07:43  R  8
1438859  elect_NO_Set1B_2Cl4OH.job  jfrank  long  7:07:25  R  8
1437915  elect_NO_Max_Cl2.job  jfrank  long  32:15:49  R  8
1437945  glyc-to-1-cu  jomber23  long  30:39:50  R  48
1437940  2-to-4-pt  jomber23  long  30:40:52  R  48
1437938  glyc-to-1-pt  jomber23  long  32:06:25  R  48
1437939  1-to-2-pt  jomber23  long  30:56:56  R  48
1438179  C_CptBu_per_TS2_MeCN-B3-6-SDD_3.job  ljszatko  long  12:57:46  R  8
1438176  C_CptBu_per_TS2_wB-6-SDD_4.job  ljszatko  long  12:57:51  R  8
1438177  C_CptBu_per_TS2_THF-B3-6-SDD_3.job  ljszatko  long  12:57:27  R  8
1438180  C_CptBu_per_TS2_MeCN-wB-6-SDD_3.job  ljszatko  long  12:57:32  R  8
1438186  Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF_15.job  ljszatko  long  12:45:24  R  8
1438185  Ni-Chl-HS_AN-MeCl_TS_wB-6-311_DMF_14.job  ljszatko  long  12:45:38  R  8
1438178  C_CptBu_per_TS2_THF-wB-6-SDD_3.job  ljszatko  long  12:57:37  R  8
1437785  CpiPr_parH3_TS_MeCN-wB-6-SDD_2.job  ljszatko  long  38:05:28  R  8
1438153  C_CptBu_m1_B3-6-SDD_MeCN_3.job  ljszatko  long  13:42:10  R  8
1437278  C_CptBu_per_TS2_MeCN-wB-6-SDD_2.job  ljszatko  long  70:21:56  R  8
1437125  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-20q_r2.job  ljszatko  long  87:16:24  R  8
1437450  CpiPr_parH3_TS_THF-wB-6-SDD_1.job  ljszatko  long  47:57:26  R  8
1437432  C_CpiPr_H2dp_P_wB-6-SDD_MeCN_12.job  ljszatko  long  61:37:28  R  8
1437167  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-21.job  ljszatko  long  82:59:15  R  8
1437277  C_CptBu_m1_B3-6-SDD_MeCN_2.job  ljszatko  long  70:57:53  R  8
1437275  C_CptBu_per_TS2_MeCN-B3-6-SDD_2.job  ljszatko  long  72:11:19  R  8
1437274  C_CptBu_per_TS2_THF-wB-6-SDD_2.job  ljszatko  long  77:33:00  R  8
1437272  C_CptBu_per_TS2_THF-B3-6-SDD_2.job  ljszatko  long  77:39:44  R  8
1437271  C_CptBu_per_TS2_wB-6-SDD_3.job  ljszatko  long  77:40:10  R  8
1437169  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-22.job  ljszatko  long  82:59:22  R  8
1437168  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-21q.job  ljszatko  long  82:58:56  R  8
1437452  CpiPr_parH3_TS_MeCN-wB-6-SDD_1.job  ljszatko  long  47:30:18  R  8
1437783  CpiPr_parH3_TS_MeCN-wB-6-SDD_3.job  ljszatko  long  38:08:33  R  8
1437124  C_CpiPr_H2dp_TS2_wB-6-SDD_THF_80-20q_r.job  ljszatko  long  87:29:43  R  8
1437778  CpiPr_parH3_TS_THF-wB-6-SDD_2.job  ljszatko  long  38:08:28  R  8
1437779  CpiPr_parH3_TS_THF-wB-6-SDD_3.job  ljszatko  long  38:08:30  R  8
1437464  C_CptBu_per_TS2_wB-6-SDD_2.job  ljszatko  long  47:18:40  R  8
1438406  GC_adjoint  nany12  long  9:58:47  R  8
1437380  d0o4  navdeep  long  72:01:42  R  16
1437377  Jd0o2  navdeep  long  75:19:50  R  1
1437382  d0o8  navdeep  long  70:24:29  R  16
1438090  Jd0o6  navdeep  long  19:31:46  R  1
1437387  d1o4  navdeep  long  66:05:46  R  16
1437717  Jd1o0  navdeep  long  41:59:01  R  1
1437386  d1o2  navdeep  long  66:24:15  R  16
1438916  Jd1o0  navdeep  long  5:28:31  R  1
1438136  lsgg_2_mesh3_2  qlf1582  long  14:36:01  R  1
1438107  lsgg_2_mesh3t  qlf1582  long  15:47:14  R  1
1437766  lsgg_3_mesh4  qlf1582  long  38:45:21  R  1
1438137  lsgg_3_mesh3  qlf1582  long  14:34:14  R  1
1438106  lsgg_1_mesh3t  qlf1582  long  15:50:18  R  1
1438102  lsgg_0_mesh3  qlf1582  long  15:55:59  R  1
1437760  lsgg_1_mesh4  qlf1582  long  38:51:23  R  1
1437824  lsgg_2_mesh4  qlf1582  long  36:15:22  R  1
1437754  lsgg_0_mesh4  qlf1582  long  38:59:52  R  1
1437160  ts-migration-12-iii-k2-bmk-c4c5-singlet-new.job  stewart  long  82:58:53  R  8
1437815  ts-activation-c-a-bmk-singlet-redo-redo+gd3bj-v-.job  stewart  long  36:29:30  R  8
1437816  ts-activation-c-a-bmk-singlet-redo-redo+gd3bj-v+.job  stewart  long  36:28:32  R  8
1437419  ts-activation-c1h1-k2-bmk-singlet-2-v+.job  stewart  long  61:18:40  R  8
1437529  ts-activation-c2h4-k2-bmk-singlet-2-v+.job  stewart  long  46:34:57  R  8
1437721  ts-activation-c3h6-k2-bmk-singlet-correct-2-v+.job  stewart  long  40:10:48  R  8
1438105  ts-activation-c2h4-k2-bmk-singlet-2-v--redo.job  stewart  long  15:51:28  R  8
1438094  ts-activation-c2h3-k2-bmk-singlet-2-v-.job  stewart  long  16:09:34  R  8
1438095  ts-activation-c2h3-k2-bmk-singlet-2-v+.job  stewart  long  16:08:48  R  8
1438195  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:35:57  R  8
1438379  ts-migration-12-BMK-freq-redo+gd3bj-redo.job  stewart  long  10:30:10  R  8
1438205  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:27:20  R  8
1438200  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:32:31  R  8
1438203  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:29:09  R  8
1438202  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:30:59  R  8
1438198  ts-activation-c5h9-k2-bmk-singlet-2-redo-cont-cont-cont-redo+0.  stewart  long  11:33:58  R  8
1438920  ts-activation-c-e-k2-bmk-singlet-1-1-redo+gd3bj-v+.job  stewart  long  5:23:48  R  8
1438923  ts-activation-c-e-k2-bmk-singlet-1-1-redo+gd3bj-v-.job  stewart  long  5:23:19  R  8
1437443  re1-wb97xd-styrene-S3-down-isomer-ts-aa.job  tanghao  long  62:54:37  R  8
1437441  re1-wb97xd-styrene-S2-a-up-isomer-ts-aa.job  tanghao  long  63:01:24  R  8
1438960  wrf.job.log  chen05  medium  0:06:09  R  32
1438907  C2_frac_z_perm  chi  medium  5:55:57  R  32
1438904  C2_frac_y_perm  chi  medium  5:59:03  R  32
1438883  R67-8I-LSQR-O1  flc2625  medium  6:46:50  R  4
1438833  H_Li4  fso002  medium  7:39:02  R  64
1438958  cam5_20yr_pd  jh11ae  medium  2:32:53  R  256
1438204  3DnsSlot4  mtufts  medium  11:28:48  R  32
1438929  order13  nmatula  medium  5:18:31  R  1
1438928  order642  nmatula  medium  5:18:54  R  1
1438925  order442  nmatula  medium  5:21:18  R  1
1438930  order23  nmatula  medium  5:18:10  R  1
1438931  order33  nmatula  medium  5:17:40  R  1
1438945  test8  nmatula  medium  5:05:26  R  1
1438944  test7  nmatula  medium  5:06:44  R  1
1438942  test6  nmatula  medium  5:07:27  R  1
1438940  test5  nmatula  medium  5:08:09  R  1
1438938  test3  nmatula  medium  5:09:33  R  1
1438936  test2  nmatula  medium  5:11:10  R  1
1438934  test1  nmatula  medium  5:11:53  R  1
1438933  order63  nmatula  medium  5:13:57  R  1
1438932  order43  nmatula  medium  5:14:33  R  1
1438921  order34  nmatula  medium  5:24:07  R  1
1438922  order342  nmatula  medium  5:23:25  R  1
1438825  order23remix  nmatula  medium  7:50:37  R  1
1438924  order44  nmatula  medium  5:22:10  R  1
1438913  order04  nmatula  medium  5:32:57  R  1
1438914  order042  nmatula  medium  5:32:06  R  1
1438915  order14  nmatula  medium  5:29:08  R  1
1438917  order142  nmatula  medium  5:27:38  R  1
1438918  order24  nmatula  medium  5:26:10  R  1
1438919  order242  nmatula  medium  5:25:13  R  1
1438826  order23remix2  nmatula  medium  7:50:21  R  1
1438856  smk_rdr  tcsdcn  medium  7:09:38  R  1
1438849  smk_rdn1  tcsdcn  medium  7:14:42  R  1
1438853  smk_rdn_2  tcsdcn  medium  7:13:35  R  1
1438855  smk_rdn3  tcsdcn  medium  7:11:20  R  1
1436804  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-12.job  ljszatko  science_lms  109:23:48  R  8
1436803  Fe-iBC_AN-MeCl_TS_wB-6311_DMF_51-11.job  ljszatko  science_lms  109:23:56  R  8
1437436  CpiPr_parH3_TS_wB-6-SDD.job  ljszatko  science_lms  63:22:27  R  8
1438158  H2OCO  rivera  science_lms  13:40:30  R  8
1438156  H2OCO  rivera  science_lms  13:40:41  R  8
1438157  H2OCO  rivera  science_lms  13:40:48  R  8
1438155  H2OCO  rivera  science_lms  13:40:38  R  8
1438154  H2OCO  rivera  science_lms  13:41:10  R  8
1434657  ts-activation-c5h9-k2-bmk-singlet-2-redo-2.job  stewart  science_lms  296:11:39  R  8
1436801  ts-activation-c5h9-k2-bmk-singlet-2-redo-3.job  stewart  science_lms  109:27:33  R  8
1436417  ts-activation-c3h6-bmk-singlet-1-1-redo-redo-cont-cont.job  stewart  science_lms  152:55:23  R  8
1438092  ts-migration-13-k2-bmk-c4c4-singlet-redo-redo-cont-redo.job  stewart  science_lms  16:22:50  R  8
1435268  ts-migration-12-iii-bmk-c4c5-singlet-redo-cont-cont.job  stewart  science_lms  231:08:32  R  8
1438174  deuteroMb2.job  jhenny  tamug  13:02:07  R  16
1438955  igombi  lei  yang  3:28:07  R  8
1438952  igombi  lei  yang  3:31:12  R  8
1438951  igombi  lei  yang  3:32:17  R  8
1438956  igombi  lei  yang  3:28:14  R  8

Idle Jobs

Job ID  Job Name  Owner  Queue  Status  CPUs

1438057  3D_job  colinhjx  medium  Q  1

Batch Queues

Normally, a job script does not need to specify a particular queue. A submitted job is directed initially to the regular queue, and the batch system then assigns it to one of the execution queues (short, medium, or long) based on the job's requested resources. These execution queues are referred to as the public queues.

The default walltime limit for all public queues is 1 hour if no walltime limit is specified in the job script.
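
As an illustration of the two points above, here is a minimal job script sketch for the public queues. It assumes a PBS-style batch system with standard PBS directives; the job name, program name, and resource values are placeholders, not taken from the listings on this page. Because no queue is named, the job would start in the regular queue and be routed to an execution queue; because a walltime is requested, the 1-hour default does not apply.

    #!/bin/bash
    ## Minimal PBS-style job script (a sketch; all names and values are placeholders).
    ## No "#PBS -q" line is given, so the job starts in the regular queue and the
    ## batch system routes it to short, medium, or long based on what is requested.
    #PBS -N example_job
    ## One node, eight cores.
    #PBS -l nodes=1:ppn=8
    ## Two hours of walltime; without this line the 1-hour default would apply.
    #PBS -l walltime=02:00:00
    ## Merge stdout and stderr into a single output file (optional).
    #PBS -j oe

    # Run from the directory the job was submitted from, then start the
    # (placeholder) program.
    cd $PBS_O_WORKDIR
    ./my_program

Submission would be with qsub (for example, qsub example_job.sh), again assuming a standard PBS front end.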

Per-job limits per queue:
-------------------------

Queue          Min   Max   Max          Max
              Node  Node  CPUs     Walltime
short            1   128  1024     01:00:00
medium           1   128  1024     24:00:00
long             1     8    64     96:00:00
xlong            1     1     8    500:00:00
low              1     1     8     96:00:00
special          1   362  3088     48:00:00
abaqus           1     1     4     96:00:00
vnc              1     1     8     06:00:00
gpu              1     4    48     24:00:00
staff            1     0     0         None
atmo             1    16   192         None
chang            1     8    64         None
helmut           1    12    96         None
lucchese         1     8    64         None
quiring          1     4    32         None
science_lms      1    16   160         None
tamug            1    16   128         None
wheeler          1    24   256         None
yang             1     8    80         None
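
The public queues are assigned automatically, but the other queues listed above would normally be requested by name (an assumption based on the note above that only the public queues are selected for you). As a sketch, again assuming standard PBS directives and using placeholder values chosen to fall within the xlong row of the table (1 node, up to 8 CPUs, up to 500 hours):

    ## Request the xlong queue explicitly, staying within its per-job limits.
    #PBS -q xlong
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=400:00:00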

Queue limits and status:
------------------------

Queue          Accept     Run  |  Curr   Max  Curr   Max |  Curr  Curr    Max PEs |   Max   Max
                Jobs?    Jobs? |  RJob  RJob  QJob  QJob |  CPUs   PEs  Soft/Hard | UserR UserQ
regular           Yes      Yes |     0     0     0     0 |     0     0          0 |     0   100
short             Yes      Yes |     0     1     0     4 |     0     0       1024 |     0    10
medium            Yes      Yes |    36   250     1   500 |   481   511  1600/2048 | 8/120   100
long              Yes      Yes |   101   200     0   250 |  1033  1051  1024/2048 | 8/120   100
xlong             Yes      Yes |     0     2     0    10 |     0     0         24 |     1    10
low               Yes      Yes |     0    96     0   400 |     0     0        944 |    50   100
special           Yes      Yes |     0     2     0    10 |     0     0       3088 |   1/2     4
abaqus            Yes      Yes |     0     0     0     0 |     0     0         24 |     0   100
vnc               Yes      Yes |     0     0     0    16 |     0     0         24 |     5     8
gpu               Yes      Yes |     0     0     0    48 |     0     0         48 |     4     8
staff             Yes      Yes |     0     0     0    32 |     0     0          0 |    32    32
atmo              Yes      Yes |     0     0     0     0 |     0     0        192 |     0   100
chang             Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
helmut            Yes      Yes |     0     0     0     0 |     0     0     96/192 |     0   100
lucchese          Yes      Yes |     0     0     0     0 |     0     0         64 |     0   100
quiring           Yes      Yes |     0     0     0     0 |     0     0         32 |     0   100
science_lms       Yes      Yes |    13     0     0     0 |   104   104        160 |     0   100
tamug             Yes      Yes |     1     0     0     0 |    16    16        128 |     0   100
wheeler           Yes      Yes |     0     0     0     0 |     0     0        256 |     0   100
yang              Yes      Yes |     4     0     0     0 |    32    32         80 |    16   100

RJob  = Running jobs.
QJob  = Queued jobs.
UserR = Running jobs for a user.
UserQ = Queued jobs for a user.
PE    = Processor equivalent based on requested resources (i.e., memory); see the worked example below.
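
As a simplified illustration of the PE figure (this is not the scheduler's exact formula, and the 3 GB-per-core figure is hypothetical, not taken from this page): a job that requests 1 CPU but 24 GB of memory on nodes with 3 GB of memory per core ties up as much memory as eight cores' worth, so it would be counted roughly as

    PEs = max(requested CPUs, requested memory / memory per core)
        = max(1, 24 GB / 3 GB per core) = 8

This is why Curr PEs can exceed Curr CPUs in the table above, as in the medium and long rows.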

Any queued job that exceeds a queued-job limit (queue-wide or per-user)
is ineligible for scheduling consideration.
