My customer is running a 10.2.0.2 database on AIX 5.3, on a p590 with 14
POWER5 CPUs and SMT enabled.
The daytime workload is 8,000 dedicated user connections plus 5 or so
concurrent long-running reports.
They have an overnight batch job that is CPU bound and takes 8+ hours to
complete. On an unladen, identical test system the same batch job completes
in 2.5 hours.
Let's put aside wait events and tracing for now, as I know how to measure
and interpret those. For background on the AIX scheduler:
"On the AIX 5L operating system, the default scheduling policy is fair round
robin, also referenced as SCHED_OTHER. This policy implements a classic
priority queue round-robin algorithm with one major difference: the priority
is no longer fixed. This means, for example, if a task is using a large
amount of processor time, its priority level is slowly downgraded to give
other jobs an opportunity to gain access to the processor. The drawback of
this approach is that sometimes a process can end up with a priority level
so low that it does not have an opportunity to run and finish. By default, a
process receives a base priority of 40 and a default nice value of 20.
Combined, these two values define the default priority of a process (for
example, 60). However, a process can carry a priority ranging between 0 to
255, where priority 0 is the highest and 255 is the lowest (least favorable
for the purposes of gaining access to a processor)"
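
To make that arithmetic concrete, here is a rough simulation of the
recalculation for a fully CPU-bound thread at the default nice value. It
is a sketch of the documented behaviour, not IBM's code; I'm assuming the
schedo penalty and decay tunables (sched_R, sched_D) are at their default
of 16 and that the per-thread recent-CPU counter is capped at 120:

# Sketch of AIX 5L SCHED_OTHER priority recalculation for a CPU-bound
# thread at default nice. Assumed: base=40, nice=20, penalty factor
# sched_R=16, decay factor sched_D=16, recent-CPU counter capped at 120.
BASE, NICE = 40, 20
SCHED_R, SCHED_D = 16, 16          # schedo tunables, default 16 each

cpu_usage = 0                      # per-thread recent-CPU counter
for second in range(1, 11):
    # a CPU-bound thread earns ~100 clock ticks per second
    cpu_usage = min(cpu_usage + 100, 120)
    priority = BASE + NICE + (cpu_usage * SCHED_R) // 32
    print(f"t={second:2d}s  cpu_usage={cpu_usage:3d}  priority={priority}")
    # once per second the swapper ages every counter: C = C * D/32
    cpu_usage = cpu_usage * SCHED_D // 32

With those defaults, a thread that never voluntarily sleeps pins its
counter at 120 and settles at priority 120, which is consistent with what
I'm seeing below.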
I've noticed that the CPU priority for the batch job is a lowly 90-120 for
most of the run.
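
For reference, I'm watching the priority with a crude sampler along these
lines; the PID is illustrative (in practice it comes from v$process.spid
for the batch session):

# Illustrative sampler: log the batch shadow process's priority and
# accumulated CPU time once a minute. The PID here is hypothetical.
import subprocess
import time

pid = 123456                       # hypothetical shadow-process PID

while True:
    # AIX ps accepts -o with the XPG4 "pri" and "time" specifiers;
    # the trailing "=" suppresses the column headers
    out = subprocess.run(
        ["ps", "-o", "pri=", "-o", "time=", "-p", str(pid)],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(f"{time.strftime('%H:%M:%S')}  pri={out[0]}  cpu={out[1]}")
    time.sleep(60)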
Q1. Is there any way of measuring the impact of this on the job?
Q2. I am after experiences of changing the default priority for an AIX 5L
database with (or without) this kind of mixed workload. Good or bad, all
experiences are welcome.
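
For what it's worth, the sort of change I have in mind is along these
lines, run at the start of the batch window (a negative adjustment needs
root; the PID is again illustrative):

# Hedged sketch for Q2: make the batch shadow process more favourable
# for the overnight window by lowering its nice value via the
# setpriority(2) wrapper. The PID is hypothetical.
import os

batch_pid = 123456                 # hypothetical shadow-process PID

current = os.getpriority(os.PRIO_PROCESS, batch_pid)
os.setpriority(os.PRIO_PROCESS, batch_pid, current - 10)
print(f"nice adjusted from {current} to {current - 10}")

The shell equivalent would be renice -n -10 -p <pid>.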
Thanks in advance,