Alright, I'll make this scenario more specific. I'm an RMAN user. My
large_pool_size has been 16M "forever". The Backup and Recovery
Advanced User's Guide says that:
"If LARGE_POOL_SIZE is set, then the database attempts to get memory
from the large pool. If this value is not large enough, then an error
is recorded in the alert log, the database does not try to get buffers
from the shared pool, and asynchronous I/O is not used."
and that the formula for setting LARGE_POOL_SIZE is as follows:
LARGE_POOL_SIZE = number_of_allocated_channels *
(16 MB + ( 4 * size_of_tape_buffer ) )
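Plugging illustrative numbers into that formula shows how far 16M falls
short. A sketch, assuming 2 allocated channels and the common 256 KB
tape buffer (both assumptions for the arithmetic, not values from this
system):

```sql
-- Sketch: evaluate the guide's formula with assumed inputs.
-- 2 channels, 256 KB tape buffer => 2 * (16 MB + 4 * 256 KB) = 34 MB
SELECT 2 * (16 * 1024 * 1024 + 4 * 256 * 1024) / 1024 / 1024
       AS large_pool_mb
FROM   dual;
-- 34 MB, i.e. more than double a 16M large pool, for just two channels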
Of course I'm backing up to disk, not tape, but it would seem I should
be using a lot more than 16M. However, I don't see any errors in the
alert.log with "async" or "sync" in the text, so perhaps the large
pool is still just fine?
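Rather than relying on the alert log alone, one way to sanity-check
this is to watch actual large pool allocations while a backup is
running. A generic query against V$SGASTAT (not something the guide
prescribes):

```sql
-- Sketch: how much of the large pool is free vs. allocated right now.
-- Run while RMAN channels are active to see the backup's footprint.
SELECT pool, name, ROUND(bytes / 1024 / 1024, 1) AS mb
FROM   v$sgastat
WHERE  pool = 'large pool'
ORDER  BY bytes DESC;
```

If "free memory" stays near zero here during a backup, the pool is
likely undersized even without an explicit error in the alert log.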
On 5/7/07, Don Seiler wrote:
> I'm wondering if any of you have general "rules of thumb" when it
> comes to sizing the various pools and db buffer cache within the SGA.
> I'm going to go back to static SGA rather than risk ASMM thrashing
> about and causing another ORA-00600 at 2:30 in the morning. I can see
> where ASMM left the sizes at last, but I'm just wondering what a
> human thinks of things.
> This is Oracle 10.2.0.2 on RHEL3. sga_max_size is 1456M on 32-bit,
> going to be (at least) 8192M on 64-bit. The database is a hybrid of
> OLTP and warehouse. When I say "warehouse", I mean that large
> partitioned tables holding millions of records exist, and are bulk
> loaded via external tables and data pump throughout the day. Other
> than the bulk loading, those tables are read-only.
> Any advice would be appreciated (yes I've checked the V$*_ADVICE views as well).
oracle blog: http://ora.seiler.us