Originally posted on The Pythian Group blog.

A couple of weeks ago, we noticed timeouts in some of our standard Oracle RDBMS health check scripts on a new instance. I had just migrated this instance to bigger, better, badder hardware, and so it had been given more SGA to use, namely a bigger buffer cache. The Oracle software version was unchanged, as we wanted to introduce as few variables as possible (we were already moving to a new platform with an endian change).

At first the timeouts were infrequent, but over the course of a week they grew in frequency to the point where none of the checks were finishing in the allowed timeframe. We ran an AWR report, and tucked far down in the “Latch Activity” section, a colleague noticed this:
Latch Activity                          DB/Inst: FOO/foo  Snaps: 156-157
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                              Pct    Avg   Wait                 Pct
                                       Get    Get   Slps   Time       NoWait NoWait
Latch Name                        Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ ----------------- ------ ------ ------ ------------ ------
simulator lru latch             10,032,617    3.3    0.7  44950      336,837    0.3
transaction branch alloc           112,412    0.0    0.0      0            0    N/A
undo global data                   466,321    0.0    0.0      0            0    N/A
user lock                            7,440    0.8    0.4      1            0    N/A
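If you want to check for the same symptom on a live instance rather than waiting for an AWR report, the cumulative latch statistics are exposed in v$latch. A quick sketch (the miss-percentage expression is mine, not from the AWR report):

```sql
-- Cumulative statistics for the simulator lru latch since instance startup
SELECT name,
       gets,     -- willing-to-wait get requests
       misses,   -- gets that did not succeed on the first attempt
       sleeps,   -- misses that had to sleep
       ROUND(100 * misses / NULLIF(gets, 0), 2) AS pct_miss
FROM   v$latch
WHERE  name = 'simulator lru latch';
```

A miss percentage well above zero on a busy system, like the 3.3% in the report above, is the red flag.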
The “simulator lru latch” event brought us to MetaLink note 5918642.8 and bug 5918642, which affects 10g and 11g releases prior to the fixed versions listed in the note. The bug is in the database buffer cache advisor, controlled by the parameter db_cache_advice, which defaults to ON (depending on statistics_level). The note simply states:
High simulator lru latch contention can occur when db_cache_advice is set to ON if there is a large buffer cache.
We simply set db_cache_advice to OFF (thankfully it is a dynamic parameter), and pretty quickly our checks were running just fine.
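For reference, the change itself is a one-liner; since db_cache_advice is dynamic, no restart is needed (a sketch of what we ran):

```sql
-- Verify the current setting (defaults to ON when statistics_level
-- is TYPICAL or ALL)
SHOW PARAMETER db_cache_advice

-- Dynamic parameter: takes effect immediately, no bounce required
ALTER SYSTEM SET db_cache_advice = OFF;
```

The parameter also accepts READY, which keeps the advisory memory allocated so it can later be switched back ON without grabbing shared pool memory at that point.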

My suggestion is to turn this off unless you are actively using the cache advisor to tune an instance, and to turn it back off once you are done tuning.

NOTE: As Mladen Gogola pointed out in the comments, turning this off will cause problems if you are using automatic memory management (i.e. sga_target > 0). Re-pasting his post here:
The problem with that advice is that it will prevent automatic memory management from resizing the buffer cache, and the instance will end up with a huge, mostly empty shared pool and a default buffer cache. Automatic memory management is biased toward the shared pool even with the cache advice turned on; without it, the buffer cache will be reduced to the minimum size, usually only 64MB. If you disable cache advice, I would also recommend disabling automatic memory management and configuring the SGA manually.
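Following that advice, disabling automatic SGA management and sizing the components by hand might look like this (a sketch only; the sizes are illustrative, not recommendations for any particular system):

```sql
-- Turn off automatic SGA management
ALTER SYSTEM SET sga_target = 0 SCOPE = BOTH;

-- Size the major SGA components explicitly (illustrative values)
ALTER SYSTEM SET db_cache_size    = 8G SCOPE = BOTH;
ALTER SYSTEM SET shared_pool_size = 2G SCOPE = BOTH;
```

With sga_target at 0, the instance will no longer shuffle memory between the buffer cache and the shared pool, so each component needs to be sized for its own peak demand.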