
Saturday 30 November 2013

Archive Log Report on Hourly Basis

col day for a10
col thread# format 9999 heading "Thread"
break on thread# skip 2;
set lines 500
set pages 300
set trimspool on

select thread#, to_char(first_time,'YYYY-MM-DD') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
count(*) Total
from v$log_history
where first_time > sysdate - &DaysMinus
group by thread#, to_char(first_time,'YYYY-MM-DD') order by 2 ;
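
For example, to report on the last seven days, you could save the script under a name of your choosing (arch_hourly.sql here is just an illustrative name) and define the substitution variable before running it:

SQL> define DaysMinus = 7
SQL> @arch_hourly.sql

Each row then shows, per thread and day, how many log switches occurred in each hour, with a total in the last column.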

Wednesday 18 September 2013

Moving data (table and index) across tablespaces

There are two methods we can use to do this.  One is to use a combination of "alter table X move tablespace Y" and "alter index X rebuild tablespace Y" -- this works in Oracle8i release 8.1 and up ONLY.  Below is a script called moveall.sql.

Moveall.sql
set echo off

column order_col1 noprint
column order_col2 noprint

set heading off
set verify off
set feedback off
set echo off

spool tmp.sql

select decode( segment_type, 'TABLE', segment_name, table_name ) order_col1,
       decode( segment_type, 'TABLE', 1, 2 ) order_col2,
       'alter ' || segment_type || ' ' || segment_name ||
       decode( segment_type, 'TABLE', ' move ', ' rebuild ' ) || chr(10) ||
       ' tablespace &1 ' || chr(10) ||
       ' storage ( initial ' || initial_extent || ' next ' || next_extent || chr(10) ||
       ' minextents ' || min_extents || ' maxextents ' || max_extents || chr(10) ||
       ' pctincrease ' || pct_increase || ' freelists ' || freelists || ');'
from user_segments,
     ( select table_name, index_name from user_indexes )
where segment_type in ( 'TABLE', 'INDEX' )
  and segment_name = index_name (+)
order by 1, 2
/

spool off

set heading on
set verify on
set feedback on
set echo on

REM UNCOMMENT TO AUTO RUN the generated commands
REM ELSE edit tmp.sql, modify as needed and run it
REM @tmp

--- eof ----


It uses the USER_SEGMENTS view to generate all of the needed "alter table move" and "alter index rebuild" statements to move a table/index into another tablespace, preserving the storage characteristics currently assigned to the object.  For example, when we run moveall.sql in the SCOTT schema, we might see:

scott@ORACLE> @moveall
scott@ORACLE> set echo off

alter INDEX PK_DEPT rebuild
tablespace users
storage ( initial 10240 next 10240
minextents 1 maxextents 121
pctincrease 50 freelists 1);

....

It begins by moving a table and then rebuilding each of the indexes on that table.  Since the indexes on a table become unusable once the table is moved, this script rebuilds them right after moving that table -- before moving the next table -- to reduce downtime.

Running the moveall.sql script is harmless as it is written.  It generates the SQL you need to run and saves it into another script file, "tmp.sql".  You should edit tmp.sql, review it, modify it if you want (e.g., if you have a multi-CPU system, you could modify the index rebuilds to be "parallel N", "unrecoverable" and add other options to make them go faster on your system), and then run it.
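
For instance, on a multi-CPU system, one of the generated index rebuilds in tmp.sql could be edited along these lines (the parallel degree and NOLOGGING are illustrative additions, not something the script generates):

alter INDEX PK_DEPT rebuild
tablespace users
parallel 4
nologging;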

Another method would be to use EXPort and IMPort.  You would

o EXP the user account.
o drop all of the objects this user owns.  You can 'select' the drop statements you need
(script below) in much the same way we 'select' the alter table/index statements
o revoke UNLIMITED TABLESPACE from the user
o alter the users default tablespace to the target tablespace
o give the user an unlimited quota on this new tablespace and their temporary tablespace
o IMP this users data.

So, the process to move SCOTT's objects from their current tablespace to a NEW_TABLESPACE
would be:

1) do an export of all of scott's objects.  Make sure no one modifies them after you begin this process.  You will lose these changes if they do.

   $ exp userid=scott/tiger owner=scott

2) you would drop all of SCOTT's tables.  This will get the indexes as well.  I don't suggest dropping the user SCOTT but rather dropping SCOTT's objects.  Dropping SCOTT would cause any system privileges SCOTT has to disappear and the import would not restore them.  This script can be used to drop someone's tables:

--------------------------------------
set heading off
set feedback off
set verify off
set echo off

spool tmp.sql
select 'drop table &1..' || table_name || ' cascade constraints;'
  from dba_tables
where owner = upper('&1')
/
spool off
@tmp.sql
--------------------------------------



3) You would modify the user to *not* have unlimited tablespace (else the IMP will just put the objects right back into the tablespace they came from) and then give them unlimited quotas on the new tablespace you want the objects to go into and on their temporary tablespace (for the sorts that the index creations will do).

alter user SCOTT default tablespace NEW_TABLESPACE
/
revoke unlimited tablespace from SCOTT
/
alter user SCOTT quota unlimited on NEW_TABLESPACE
/
alter user SCOTT quota unlimited on SCOTTS_TEMPORARY_TABLESPACE
/

4) you will IMP the data back in for that user.  IMP will rewrite the create statements to use the user's default tablespace when it discovers that it cannot create the objects in their original tablespace.  Make sure to review the file imp.log for any errors after you import.

imp userid=scott/tiger full=y ignore=y log=imp.log


5) you can optionally restore 'unlimited tablespace' to this user (or not).  If you do not, this user can only create objects in this new tablespace and temp (which in itself is not a bad thing)...
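
If you do decide to give the privilege back, a single grant (using SCOTT as in the example) is all that is needed:

grant unlimited tablespace to SCOTT;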



As with any operation of this magnitude -- please test these procedures on a small test account (such as SCOTT) to become familiar with them.


A couple of side notes:

o the alter table move/alter index rebuild approach is more flexible and faster than exp/imp (and less error prone -- you never actually drop the objects).  Additionally, it would be easy to modify the script to move TABLES to one tablespace and INDEXES to a different tablespace.  The drawback to this method is that you cannot move a table with a LONG or LONG RAW column.  You must exp such a table and imp it back: export the table with the LONG/LONG RAW, drop it, create an empty version of the table in the new tablespace, and import just that table (a sketch of this follows these notes).

o if you use exp/imp, it is up to you to ensure that no modifications happen to the tables after you begin the export.  There are no mechanisms in place to ensure this -- you must do it yourself (else you will lose changes).
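
A rough sketch of the LONG/LONG RAW workaround mentioned above, assuming a table called BIG_LONG_TAB owned by SCOTT (the table name and column list are purely illustrative):

$ exp userid=scott/tiger tables=BIG_LONG_TAB

SQL> drop table big_long_tab;
SQL> create table big_long_tab ( id number, txt long ) tablespace new_tablespace;

$ imp userid=scott/tiger tables=BIG_LONG_TAB ignore=y log=imp_long.log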



How the above code works:

select decode( segment_type, 'TABLE',
                       segment_name, table_name ) order_col1,
       decode( segment_type, 'TABLE', 1, 2 ) order_col2,
     
If the segment_type is a table, return the SEGMENT_NAME (which is actually the table
name).  Else, if the segment type is not a table (it is an index) return the TABLE_NAME
that the index is on.  This'll group a table and all of its indexes together. 

The second column just makes sure that tables are done PRIOR to their indexes.
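
As an illustration (using hypothetical names), for a table EMP with a single index EMP_PK, the query returns rows ordered like this, so the table move is always generated before the rebuilds of its indexes:

ORDER_COL1   ORDER_COL2   Generated statement
EMP          1            alter TABLE EMP move ...
EMP          2            alter INDEX EMP_PK rebuild ...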


Tuesday 9 July 2013

SQL Tracing and TKPROF


If the value of the timed_statistics parameter is false, you can set it to true with the following statement:
SQL> alter system set timed_statistics=true scope=both;

You can also set this parameter at the session level with the following statement:
SQL> alter session set timed_statistics=true;
To find all trace files for the current instance, issue the following query:

SQL> select value from v$diag_info where name = 'Diag Trace';
VALUE
--------------------------------------------------------------------------------
e:\app\sumit.tyagi\diag\rdbms\sumit\sumit\trace
In Oracle Database 11g, the default value of the max_dump_file_size parameter is unlimited, as you
can verify by issuing the following command:

SQL> show parameter max_dump_file_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
max_dump_file_size                   string      unlimited

An unlimited dump file size means that the file can grow as large as the operating system permits.

Tracing a Specific SQL Statement:
Switch on SQL tracing:
SQL> alter session set events 'sql_trace level 12';
Execute your query:
SQL> select count(*) from sales;
Switch off the tracing:
SQL> alter session set events 'sql_trace off';
You can trace a specific SQL statement running in a different session by issuing an alter system set events statement:

If you have the SQL_ID of the statement to trace:

SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] level 12';
SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] off';
OR
If you have the SID and SERIAL#, then use the following:
execute dbms_monitor.session_trace_enable(session_id=>138,serial_num=>242,
waits=>true,binds=>false);

Enabling Tracing in Your Own Session
SQL>execute dbms_session.session_trace_enable(waits=>true, binds=> false);

To disable tracing, the user must execute the session_trace_disable procedure, as shown here:

SQL> execute dbms_session.session_trace_disable();
Issue the following statement to set an identifier for your trace files, before you start generating the trace:
SQL> Alter session set tracefile_identifier='MyTraceFile';    
     
Queries that can help find the correct session for tracing:

select a.sid, a.serial#, b.spid, b.pid, a.username, a.osuser, a.machine
from v$session a, v$process b where a.username IS NOT NULL
and a.paddr=b.addr;
select sid, serial#, username from v$session where status='ACTIVE';

Once you get the SID and SERIAL# from the previous query, invoke the session_trace_enable
procedure of the DBMS_MONITOR package, as shown here:

SQL> execute dbms_monitor.session_trace_enable(session_id=>138,serial_num=>242,
waits=>true,binds=>false);
PL/SQL procedure successfully completed.
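
Once the trace file has been generated, TKPROF turns it into a readable report. A typical invocation might look like the following (the file names are only examples; pick the trace file identified by your tracefile_identifier in the 'Diag Trace' directory):

$ tkprof orcl_ora_12345_MyTraceFile.trc mytrace_report.txt sys=no sort=exeela,fchela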




Wednesday 3 July 2013

Displaying an Execution Plan for a Query


SQL> set autotrace on;
SQL> select * from dual;
D
-
X
Execution Plan
----------------------------------------------------------
Plan hash value: 272002086
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
          6  consistent gets
          3  physical reads
          0  redo size
        418  bytes sent via SQL*Net to client
        419  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

How It Works
There are several options to choose from when using AUTOTRACE, and the basic factors are as follows:

1. Do you want to execute the query?
2. Do you want to see the execution plan for the query?
3. Do you want to see the execution statistics for the query?

The most common use for AUTOTRACE is to get the execution plan for the query, without running the query. By doing this, you can quickly see whether you have a reasonable execution plan, and can do this without having to execute the query.


AUTOTRACE Option                   Execution Plan  Statistics  Query Executed
---------------------------------  --------------  ----------  ------------------------------
AUTOT[RACE] OFF                    No              No          Yes
AUTOT[RACE] ON                     Yes             Yes         Yes
AUTOT[RACE] ON EXP[LAIN]           Yes             No          Yes
AUTOT[RACE] ON STAT[ISTICS]        No              Yes         Yes
AUTOT[RACE] TRACE[ONLY]            Yes             Yes         Yes (but output is suppressed)
AUTOT[RACE] TRACE[ONLY] EXP[LAIN]  Yes             No          No
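
For example, to look at the execution plan for a statement without actually running it (using the sales table from the earlier tracing example), you could do the following; because TRACEONLY EXPLAIN does not execute the query, this is safe even for long-running statements:

SQL> set autotrace traceonly explain
SQL> select count(*) from sales;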


Once you are done using AUTOTRACE for a given session and want to turn it off, run the following command from within your SQL*Plus session:
SQL> set autot off

The default for each SQL*Plus session is AUTOTRACE OFF; to check the current AUTOTRACE setting for a given session, execute the following command:
SQL> show autot
autotrace OFF

DBMS_XPLAN.DISPLAY


SQL> explain plan for select * from dual;

Explained.        

SQL> SELECT * FROM table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.




Tuesday 14 May 2013

OC4J Configuration issue - EM not working


[oracle@testmachine ~]$ emctl status dbconsole
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
[oracle@testmachine ~]$ export ORACLE_UNQNAME=orcldb
[oracle@testmachine ~]$ emctl status dbconsole
OC4J Configuration issue. /home/oracle/11.2.0/product/db_1/oc4j/j2ee/OC4J_DBConsole_testmachine_orcldb not found.

To check if EM is installed/configured properly and running:

cd /home/oracle/11.2.0/product/db_1/oc4j/j2ee/
ls -l 

" In our case the folder shown in Red was missing" .If it is present in ur case check the permission 

-rw-r--r--  1 oracle oinstall  985 Mar 22 10:50 deploy_db_wf.ini
drwxr-xr-x 14 oracle oinstall 4096 Mar 22 10:49 home
drwxr-xr-x  3 oracle oinstall 4096 Mar 22 10:48 oc4j_applications
drwxr-x---  6 oracle oinstall 4096 Mar 22 10:48 OC4J_DBConsole
drwxr-xr-x 10 oracle oinstall 4096 May 14 15:49 OC4J_DBConsole_gnoctest_orcldb
drwxr-xr-x  5 oracle oinstall 4096 Mar 22 10:48 OC4J_Workflow_Component_Container
drwxr-xr-x  5 oracle oinstall 4096 Mar 22 10:48 OC4J_Workflow_Management_Container
drwxr-xr-x  2 oracle oinstall 4096 Mar 22 10:49 utilities

To check port 1158, on which EM runs:


 > netstat -an| grep 1158

If your configuration is OK, it will show LISTEN on port 1158; otherwise the output will be blank.

Solution:

Warning: The command below puts the database in quiesce mode for DB Control releases 10.x to 11.1.x. Starting with DB Control release 11.2.x, the database is no longer put in quiesce mode. The following note can be consulted for further details:

Note 375946.1 Running EMCA Results in Database quiesce And No New Connections or Operations Can Be Performed During the DB Control Repository Creation

IMPORTANT: When dropping the repository with EMCA or RepManager on an 11.2 database, this warning will be displayed:
WARNING : While repository is dropped the database will be put in quiesce mode.

This warning can be safely ignored (only on 11.2 databases).


[oracle@testmachine ~]$ $ORACLE_HOME/bin/emca -config dbcontrol db -repos recreate

STARTED EMCA at May 14, 2013 3:37:52 PM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Enter the following information:
Database SID: orcldb
Listener port number: 1522
Listener ORACLE_HOME [ /home/oracle/11.2.0/product/db_1 ]: /home/oracle/11.2.0/product/db_1
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
-----------------------------------------------------------------

You have specified the following settings

Database ORACLE_HOME ................ /home/oracle/11.2.0/product/db_1

Local hostname ................ testmachine
Listener ORACLE_HOME ................ /home/oracle/11.2.0/product/db_1
Listener port number ................ 1522
Database SID ................ orcldb
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............

-----------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
May 14, 2013 3:38:18 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /home/oracle/11.2.0/product/cfgtoollogs/emca/orcldb/emca_2013_05_14_15_37_51.log.
May 14, 2013 3:38:20 PM oracle.sysman.emcp.EMReposConfig invoke

Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted.  The encryption key has been placed in the file: /home/oracle/11.2.0/product/db_1/testmachine_orcldb/sysman/config/emkey.ora.   Please ensure this file is backed up as the encrypted data will become unusable if this file is lost.

***********************************************************
Enterprise Manager configuration completed successfully
FINISHED EMCA at May 14, 2013 3:52:44 PM

------------------------------------------------------------------------------------------------------

[oracle@testmachine ~]$ netstat -an|grep 1158
tcp        0      0 0.0.0.0:1158                0.0.0.0:*                   LISTEN


[oracle@testmachine ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://testmachine:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /home/oracle/11.2.0/product/db_1/testmachine_orcldb/sysman/log


For more details, follow the Oracle documentation on My Oracle Support:

How To Drop, Create And Recreate the Database Control (DB Control) Release 10g and 11g [ID 278100.1]





Monday 1 April 2013

PGA usage for each session

SELECT se.sid,n.name , SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME, MAX(se.value) pga_memory, 
       DECODE( r.APPLY_NAME, NULL,  DECODE(coor.APPLY_NAME,NULL, ser.APPLY_NAME, coor.APPLY_NAME),  r.APPLY_NAME ) APPLY_NAM,
       cap.CAPTURE_NAME, q.QNAME, s.USERNAME
FROM v$sesstat se, v$statname n, V$SESSION s, 
 V$STREAMS_APPLY_READER r, V$STREAMS_APPLY_coordinator coor, V$STREAMS_APPLY_SERVER ser,
 GV$STREAMS_CAPTURE c, dba_capture cap ,
 dba_queue_schedules q
WHERE n.statistic# = se.statistic#
AND   s.sid= se.sid
AND n.name IN ('session pga memory')
AND s.SID = r.SID(+) 
AND s.SERIAL# = r.SERIAL#(+)
AND s.SID = coor.SID(+) 
AND s.SERIAL# = coor.SERIAL#(+)
AND s.SID = ser.SID(+) 
AND s.SERIAL# = ser.SERIAL#(+)
AND c.CAPTURE_NAME = cap.CAPTURE_NAME(+)  
AND s.SID            = c.SID(+)
AND s.SERIAL# = c.SERIAL# (+)
AND s.SID            = TO_NUMBER(SUBSTR (q.session_id(+), 1,  INSTR(q.session_id(+), ',')- 1 )) 
AND s.SERIAL# = TO_NUMBER(SUBSTR (q.session_id(+), INSTR(q.session_id(+), ',') +1 ,  LENGTH(q.session_id(+))  - INSTR(q.session_id(+), ',')))
GROUP BY n.name,se.sid, SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4),DECODE( r.APPLY_NAME, NULL,  DECODE(coor.APPLY_NAME,NULL, ser.APPLY_NAME, coor.APPLY_NAME),  r.APPLY_NAME ), cap.CAPTURE_NAME, q.qname, s.USERNAME
ORDER BY 4 DESC ;

DBA_SCHEDULER_RUNNING_JOBS


Using DBA_SCHEDULER_RUNNING_JOBS one can see the jobs currently running. The example below shows the Oracle session ID and the corresponding OS process ID.
22:20:51 sys@TESTDB> select job_name, session_id from dba_scheduler_running_jobs;

JOB_NAME           SESSION_ID
------------------ ----------
GATHER_STATS_JOB          364

1 row selected.

sys@TESTDB> select program from v$session where sid = 364;

PROGRAM
------------------------------------------------
oracle@hostname (J002)

1 row selected.

sys@TESTDB> select vs.program, spid from v$session vs, v$process vp where vs.sid = 364 and vs.paddr = vp.addr;

PROGRAM                                          SPID
------------------------------------------------ ------------
oracle@hostname (J002)                           1642698

1 row selected.

The ora_jNNN processes are DBMS_SCHEDULER job slave sessions:

sys@TESTDB> !ps -ef | grep 1642698
oracle  1642698         1 120 00:00:07      - 1071:14 ora_j002_TESTDB
oracle 13836458  11677872   2 22:23:06 pts/5   0:00 grep 1642698

Wednesday 20 March 2013

Do You Have a Wait Problem?



It’s easy to find out the percentage of time a database has spent waiting for resources instead of actually executing. Issue the following query to find out the relative percentages of wait times and actual CPU processing in the database:

select metric_name, value
from   v$sysmetric
where  metric_name in ('Database CPU Time Ratio', 'Database Wait Time Ratio')
and    intsize_csec = (select max(intsize_csec) from v$sysmetric);

METRIC_NAME                              VALUE
---------------------------------- -----------
Database Wait Time Ratio             11.371689
Database CPU Time Ratio              87.831890
SQL>

If the query shows a very high value for the Database Wait Time Ratio, or if the Database Wait Time Ratio is much greater than the Database CPU Time Ratio, the database is spending more time waiting than processing and you must dig deeper into the Oracle wait events to identify the specific wait events causing this.

Find Detailed Information:
You can use the following Oracle views to find out in detail what a wait event is actually waiting for and how long it has waited for each resource (an example query follows the list).
  • V$SESSION: This view shows the event currently being waited for as well as the event last waited for in each session.
  • V$SESSION_WAIT: This view lists either the event currently being waited for or the event last waited on for each session. It also shows the wait state and the wait time.
  • V$SESSION_WAIT_HISTORY: This view shows the last ten wait events for each current session.
  • V$SESSION_EVENT: This view shows the cumulative history of events waited on for each session. The data in this view is available only so long as a session is active.
  • V$SYSTEM_EVENT: This view shows each wait event and the time the entire instance waited for that event since you started the instance.
  • V$SYSTEM_WAIT_CLASS: This view shows wait event statistics by wait classes.
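
For example, a query along the following lines against V$SESSION_WAIT lists what each non-idle session is currently waiting on, including the P1/P2/P3 parameters discussed later in this section:

select sid, event, state, p1, p2, p3, seconds_in_wait
from v$session_wait
where wait_class <> 'Idle'
order by seconds_in_wait desc;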

How It Works
Your goal in tuning performance is to minimize the total response time. If the Database Wait Time Ratio  is high, your response time will also be high due to waits or bottlenecks in your system. On the other hand, high values for the Database CPU Time Ratio indicate a well-running database, with few waits or bottlenecks.
The Database CPU Time Ratio is calculated by dividing the total CPU used by the database by the Oracle time model statistic DB time.
Oracle uses time model statistics to measure the time spent in the database by the type of operation. Database time, or DB time, is the most important time model statistic—it represents the total time spent in database calls, and serves as a measure of total instance workload. DB time is computed by adding the CPU time and wait time of all sessions (excluding the waits for idle events).
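You can check the two underlying time model figures directly; for example (values are stored in microseconds in V$SYS_TIME_MODEL, so they are converted to seconds here):

select stat_name, round(value/1000000, 2) seconds
from v$sys_time_model
where stat_name in ('DB time', 'DB CPU');
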
An AWR report shows the total DB time for the instance (in the section titled "Time Model System Stats") during the period covered by the AWR snapshots. If the time model statistic DB CPU consumes most of the DB time for the instance, it shows the database was actively processing most of the time. DB time tuning, or understanding how the database is spending its time, is fundamental to understanding performance. The total time spent by foreground sessions making database calls consists of I/O time, CPU time, and time spent waiting for non-idle events. Your DB time will increase as the system load increases; that is, as more users log on and larger queries are executed. However, even in the absence of an increase in system load, DB time can increase due to deterioration in either I/O or application performance. As application performance degrades, wait time will increase and consequently DB time (that is, response time) will increase.
DB time is captured by internal instrumentation, ASH, AWR, and ADDM, and you can find detailed performance information by querying various views or through Enterprise Manager.

The V$SESSION_WAIT view shows more detailed information than the V$SESSION_EVENT and V$SYSTEM_EVENT views. While all of these views show that there are waits such as the db file scattered read event, only the V$SESSION_WAIT view shows the file number (P1), the block number read (P2), and the number of blocks read (P3). The columns P1 and P2 from this view help you identify the segments involved in the wait event that is currently occurring.


The Automatic Workload Repository (AWR) queries the V$SYSTEM_EVENT view for its wait event–related analysis.

You can first query the V$SYSTEM_EVENT view to rank the top wait events by total and average time
waited for that event.
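
A ranking query of that sort, excluding idle waits, might look like this (TIME_WAITED and AVERAGE_WAIT are reported in centiseconds):

select *
from ( select event, total_waits, time_waited, average_wait
       from v$system_event
       where wait_class <> 'Idle'
       order by time_waited desc )
where rownum <= 10;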

In addition to providing information about blocking and blocked users and the current wait events, the V$SESSION view also shows the objects that are causing the problem, by providing the file number and block number for the object.