The Top 5 foreground events are important for analyzing performance issues in the database.
In Oracle 11g we can see these events under the following header
Top 5 Timed Foreground Events
In Oracle 19c, with the container database (CDB-PDB) architecture, this header is no longer present. Instead, there is a header for the following
Top 10 Foreground Events by Total Wait Time
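These sections are part of the AWR report. If you also need to generate the report itself, the standard awrrpt.sql script shipped with the database can be run from SQL*Plus (a minimal sketch; the script prompts for the report format, the snapshot range and the report name):

-- run as a user with access to the AWR views (for example a DBA account)
@?/rdbms/admin/awrrpt.sql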
Some of the main wait events to look for in this section are
db file sequential read
db file parallel read
db file scattered read
log file sync
What is the DB File sequential read wait event?
The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk. This wait event comes under the User I/O wait class.
It occurs when a process has issued an I/O request to read one block from a datafile into the buffer cache and is waiting for the operation to complete. The Oracle server process waits for the OS to provide the block from the data file, and the wait event on which the server process waits until the block is made available is termed db file sequential read.
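As a quick illustration (a sketch, not from the AWR report itself), you can see which sessions are waiting on this event right now, together with the file and block being read, from V$SESSION_WAIT. For db file sequential read, P1 is the file number, P2 the block number and P3 the number of blocks:

select sid, event, p1 file#, p2 block#, p3 blocks, seconds_in_wait
from v$session_wait
where event = 'db file sequential read';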
It indicates that single block reads (index reads) are causing issues. At the DBA level the following actions can be taken.
This can be addressed by increasing the DB_CACHE_SIZE memory parameter in the SGA. However, if the system is memory constrained and we cannot increase this SGA parameter, we can look to reduce the wait time per incident. This can be done by increasing the number of disks in the array and spreading the files across more disks, thus improving I/O efficiency. This is an option for disk based systems. The only other alternative is to increase the server memory through a server upgrade and then increase the DB_CACHE_SIZE memory parameter. Other read based waits would also benefit from these two changes.
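A minimal sketch of the buffer cache change (this assumes an spfile is in use and that the new value fits within SGA_TARGET or MEMORY_TARGET; the 4G figure is only an illustration, size it for your own server):

-- check the current value
show parameter db_cache_size
-- raise the minimum size of the buffer cache
alter system set db_cache_size = 4G scope=both;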
At an application level, the process causing an increase in this wait event should be checked, analyzing all queries in the process for performance and in particular the queries causing high reads. The following steps can be followed.
In your explain plan, if index range scans are being used and a large number of blocks are being visited, we can try using a more selective index which will access fewer index blocks.
If your indexes are fragmented, more index blocks will be visited for the same data. In this case an index rebuild will compact the contents into fewer blocks (see the sketch after this list).
If the rows fetched through an index are scattered across many data blocks (a poor clustering factor), you can rebuild the table with its rows sorted by that index column, thus reducing the number of data block visits for each index block.
Partitioning and partition pruning will also help reduce the number of index and data blocks being visited.
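A hedged illustration of the fragmentation check and rebuild mentioned above (the index name scott.emp_name_idx is hypothetical):

-- populate INDEX_STATS and look at the proportion of deleted leaf rows
-- note: validate structure takes a short lock on the underlying table
analyze index scott.emp_name_idx validate structure;
select lf_rows, del_lf_rows,
round(del_lf_rows / nullif(lf_rows, 0) * 100, 2) pct_deleted
from index_stats;
-- if the index is badly fragmented, rebuild it (online avoids blocking DML)
alter index scott.emp_name_idx rebuild online;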
To get the top wait events between 2 snapshots you can run the following query.
SELECT wait_name, before_snap_id, after_snap_id, BEGIN_INTERVAL_TIME, END_INTERVAL_TIME,
number_of_waits, ave_microseconds from
(
select before.snap_id before_snap_id, after.snap_id after_snap_id,
sn.BEGIN_INTERVAL_TIME, sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) number_of_waits,
round((after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits),2) ave_microseconds,
before.event_name wait_name
from DBA_HIST_SYSTEM_EVENT before,
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where 1=1 and
--before.event_name='db file parallel read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > 1000
)
where after_snap_id = 347
order by after_snap_id desc, number_of_waits DESC;
The db file sequential read wait event by itself may not indicate any performance issue in the database. But if some other events are also showing high values, it becomes easier to conclude that a problem may exist and to find a possible solution. For instance, if enqueue and latch events are also high we can further check the size of the buffer cache.
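A quick way to see the current size of the default buffer cache (a sketch using the standard V$SGA_DYNAMIC_COMPONENTS view):

select component, current_size/1024/1024 size_mb
from v$sga_dynamic_components
where component = 'DEFAULT buffer cache';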
How can we determine if db file sequential read is causing a performance issue? There are no fixed industry standards or guidelines for this. You can draw benchmarks for your own environment and use them as a reference whenever performance issues are noticed; the thresholds can therefore differ from database to database.
At session level, we can discover which sessions have a high TIME_WAITED on this event from the V$SESSION_EVENT view. The TIME_WAITED must be evaluated against the LOGON_TIME for non-idle events for a more accurate analysis: sessions that have been logged on for many hours or days may legitimately have accumulated a high value for the db file sequential read event, in which case a high TIME_WAITED may not be an issue. Below is a query with which you can get a fairly good idea of the TIME_WAITED for the db file sequential read event for each session.
SELECT a.SID, a.event, a.time_waited, a.time_waited/c.sum_time_waited * 100 pct_wait_time,
round((sysdate - b.logon_time) * 24) hours_connected
from v$session_event a, v$session b,
(select sid, sum(time_waited) sum_time_waited
from v$session_event
group by sid
having sum(time_waited) > 0) c
where a.sid = b.sid
and a.sid = c.sid
and a.time_waited > 0
and a.event = 'db file sequential read'
order by hours_connected desc, pct_wait_time;
Thought for the day
Each day give thanks to the Lord that he awakes you and praise him daily, and no matter what life may bring, look forward to better days, and you will have Joy and Peace in your life.
--Beatrice Ivoruy