
Wednesday 20 March 2013

Do You Have a Wait Problem?



It’s easy to find out the percentage of time a database has spent waiting for resources instead of actually executing. Issue the following query to see the relative percentages of wait time and actual CPU processing in the database:

select metric_name, value
  from v$sysmetric
 where metric_name in ('Database CPU Time Ratio', 'Database Wait Time Ratio')
   and intsize_csec = (select max(intsize_csec) from v$sysmetric);

METRIC_NAME                         VALUE
------------------------------ ----------
Database Wait Time Ratio        11.371689
Database CPU Time Ratio         87.831890
SQL>

If the query shows a very high value for the Database Wait Time Ratio, or if the Database Wait Time Ratio is much greater than the Database CPU Time Ratio, the database is spending more time waiting than processing, and you must dig deeper into the Oracle wait events to identify the specific events causing the waits.

Find Detailed Information:
You can use the following Oracle views to find detailed information about what each session is waiting for and how long it has waited for each resource.
  • V$SESSION: This view shows the event currently being waited for as well as the event last waited for in each session.
  • V$SESSION_WAIT: This view lists either the event currently being waited for or the event last waited on for each session. It also shows the wait state and the wait time.
  • V$SESSION_WAIT_HISTORY: This view shows the last ten wait events for each current session.
  • V$SESSION_EVENT: This view shows the cumulative history of events waited on for each session. The data in this view is available only so long as a session is active.
  • V$SYSTEM_EVENT: This view shows each wait event and the time the entire instance waited for that event since you started the instance.
  • V$SYSTEM_WAIT_CLASS: This view shows wait event statistics grouped by wait class (see the example query after this list).
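
For example, to get a quick instance-wide breakdown by wait class, you can query V$SYSTEM_WAIT_CLASS; a simple sketch (TIME_WAITED is in centiseconds):

select wait_class, total_waits, time_waited
  from v$system_wait_class
 where wait_class <> 'Idle'
 order by time_waited desc;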

How It Works
Your goal in tuning performance is to minimize the total response time. If the Database Wait Time Ratio is high, your response time will also be high due to waits or bottlenecks in your system. On the other hand, high values for the Database CPU Time Ratio indicate a well-running database, with few waits or bottlenecks.
The Database CPU Time Ratio is calculated by dividing the total CPU used by the database by the Oracle time model statistic DB time.
Oracle uses time model statistics to measure the time spent in the database by the type of operation. Database time, or DB time, is the most important time model statistic—it represents the total time spent in database calls, and serves as a measure of total instance workload. DB time is computed by adding the CPU time and wait time of all sessions (excluding the waits for idle events).
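
You can see the DB time and DB CPU figures directly by querying the V$SYS_TIME_MODEL view (values are in microseconds); for example:

select stat_name, value
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU');
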
An AWR report shows the total DB time for the instance (in the section titled “Time Model System Stats”) during the period covered by the AWR snapshots. If the time model statistic DB CPU consumes most of the DB time for the instance, it shows the database was actively processing most of the time. DB time tuning, or understanding how the database is spending its time, is fundamental to understanding performance. The total time spent by foreground sessions making database calls consists of I/O time, CPU time, and time spent waiting for non-idle events. Your DB time will increase as the system load increases; that is, as more users log on and larger queries are executed, DB time grows. However, even in the absence of an increase in system load, DB time can increase due to deterioration in either I/O or application performance. As application performance degrades, wait time will increase and consequently DB time (that is, response time) will increase.
DB time is captured by internal instrumentation, ASH, AWR, and ADDM, and you can find detailed performance information by querying various views or through Enterprise Manager.

The V$SESSION_WAIT view shows more detailed information than the V$SESSION_EVENT and V$SYSTEM_EVENT views. While the V$SESSION_EVENT and V$SYSTEM_EVENT views show that there are waits on an event such as db file scattered read, for example, only the V$SESSION_WAIT view shows the file number (P1), the block number read (P2), and the number of blocks read (P3). The columns P1 and P2 from this view help you identify the segments involved in the wait event that is currently occurring.
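
For example, the following query (a sketch) shows the wait parameters for sessions currently in non-idle waits; the P1TEXT, P2TEXT, and P3TEXT columns describe what P1, P2, and P3 mean for each event:

select sid, event, p1text, p1, p2text, p2, p3text, p3, state
  from v$session_wait
 where wait_class <> 'Idle';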


The Automatic Workload Repository (AWR) queries the V$SYSTEM_EVENT view for its wait event–related analysis.

You can first query the V$SYSTEM_EVENT view to rank the top wait events by total and average time waited for that event.
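
For example, the following query (a rough sketch) lists the top ten non-idle wait events by total time waited (TIME_WAITED and AVERAGE_WAIT are in centiseconds):

select *
  from (select event, total_waits, time_waited, average_wait
          from v$system_event
         where wait_class <> 'Idle'
         order by time_waited desc)
 where rownum <= 10;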

In addition to providing information about blocking and blocked users and the current wait events, the V$SESSION view also shows the objects that are causing the problem, by providing the file number and block number for the object.
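
Once you have a file number and block number (for example, from P1 and P2 of a db file wait, or from the ROW_WAIT_FILE# and ROW_WAIT_BLOCK# columns of V$SESSION), you can map them to a segment with a query along these lines (&file_no and &block_no are placeholder substitution variables):

select owner, segment_name, segment_type
  from dba_extents
 where file_id = &file_no
   and &block_no between block_id and block_id + blocks - 1;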

Thursday 14 March 2013

Maximize the Data Upload Performance in Oracle



Use a combination of the following two features to maximize the speed of the insert statements:

  • Set the table’s logging attribute to NOLOGGING.
  • Use a direct path loading feature, such as the following:
        i. INSERT /*+ APPEND */ on statements that use a subquery to determine which records are inserted.
        ii. INSERT /*+ APPEND_VALUES */ on statements that use a VALUES clause.
        iii. CREATE TABLE ... AS SELECT
Below is an example of inserting the data from table “test_backup” into the test table with minimal redo generation:
1. First, check the logging mode of the test table.

select table_name, logging from user_tables where table_name = 'TEST';

2. If the table is in LOGGING mode, alter it to NOLOGGING mode.

Alter table test nologging;

3. Now that NOLOGGING has been enabled, there should be a minimal amount of redo generated for direct path operations. The following example uses a direct path INSERT statement to load data into the table:

insert /*+APPEND */ into Test select * from test_backup;
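
The CREATE TABLE ... AS SELECT form from item iii above achieves the same effect in one step; a minimal sketch (the table name test_copy is just an example):

create table test_copy nologging as select * from test_backup;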

How It Works

Direct path inserts have two performance advantages over regular insert statements:
  • If NOLOGGING is specified, then a minimal amount of redo is generated.
  • The buffer cache is bypassed and data is loaded directly into the datafiles. This can significantly improve the loading performance.

The NOLOGGING feature minimizes the generation of redo for direct path operations only. For direct path inserts, the NOLOGGING option can significantly increase the loading speed. A common misconception is that NOLOGGING eliminates redo generation for all DML operations on the table. That isn’t correct. The NOLOGGING feature never affects redo generation for regular INSERT, UPDATE, MERGE, and DELETE statements.
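
If you want to verify how much redo a particular load generates, you can check your session’s redo size statistic before and after the insert and compare the two values; a simple sketch:

select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where sn.statistic# = ms.statistic#
   and sn.name = 'redo size';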

One downside to reducing redo generation is that you can’t recover the data created via NOLOGGING in the event a failure occurs after the data is loaded (and before you back up the table). If you can tolerate some risk of data loss, then use NOLOGGING but back up the table soon after the data is loaded. If your data is critical, then don’t use NOLOGGING. If your data can be easily re-created, then NOLOGGING is desirable when you’re trying to improve performance of large data loads.
What happens if you have a media failure after you’ve populated a table in NOLOGGING mode (and before you’ve made a backup of the table)? After a restore and recovery operation, it will appear that the table has been restored:

SQL> desc test
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TEST_ID                                             NUMBER
 NAME                                                VARCHAR2(2000)

However, when executing a query that scans every block in the table, an error is thrown.

SQL> select * from Test;

This indicates that there is logical corruption in the datafile:

ORA-01578: ORACLE data block corrupted (file # 10, block # 198)
ORA-01110: data file 10: '/ora01/dbfile/O11R2/users201.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

As the prior output indicates, the data in the table is unrecoverable. Use NOLOGGING only in situations where the data isn’t critical or in scenarios where you can back up the data soon after it was created.

There are some quirks of NOLOGGING that need some explanation. You can specify logging characteristics at the database, tablespace, and object levels. If your database has been enabled to force logging, then this overrides any NOLOGGING specified for a table. If you specify a logging clause at the tablespace level, it sets the default logging for any CREATE TABLE statements that don’t explicitly use a logging clause.

You can verify the logging mode of the database as follows:
SQL> select name, log_mode, force_logging from v$database;
The next statement verifies the logging mode of a tablespace:
SQL> select tablespace_name, logging from dba_tablespaces;
And this example verifies the logging mode of a table:
SQL> select owner, table_name, logging from dba_tables where logging = 'NO';
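
For example, if you need to override NOLOGGING everywhere (as is commonly done for Data Guard configurations), you can enable force logging at the database level and turn it off again later:

SQL> alter database force logging;
SQL> alter database no force logging;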

Tuesday 12 March 2013

Tablespace usage report (w/ autoextend)

If you use autoextending tablespaces in your Oracle environment and use Grid Control to monitor them, you will know that it does not take autoextension into account. The following script gives you a breakdown of your tablespace usage, including the autoextension.

============================================================
set pagesize 1000
set lines 200

col tbs_name         format a32     head 'Tablespace|Name'
col tablespace_name  format a32     head 'Tablespace|Name'
col total_space_gb   format 999.99  head 'Total(GB)'
col used_space_gb    format 999.99  head 'Used(GB)'
col free_space_gb    format 999.99  head 'Free(GB)'
col used_percent     format 999     head 'Pct|Used'
col file_name        format a45     head 'File Name'
col size_gb          format 999.99  head 'Size(GB)'
col increment_mb     format 999.99  head 'Increment(MB)'
col max_size_gb      format 999.99  head 'Max|Size(GB)'
col autoextensible   format a3      head 'Auto|Ext'

SELECT inn1.tablespace_name tbs_name,
       ROUND(inn1.total/1024, 2) total_space_gb,
       ROUND((inn1.total - inn1.allocated + NVL(inn2.free, 0))/1024, 2) free_space_gb,
       ROUND((inn1.allocated - NVL(inn2.free, 0))/1024, 2) used_space_gb,
       ROUND(((inn1.allocated - NVL(inn2.free, 0))/inn1.total)*100) used_percent
  FROM (SELECT tablespace_name,
               SUM(bytes)/1024/1024 allocated,
               SUM(DECODE(autoextensible, 'YES', maxbytes, bytes))/1024/1024 total
          FROM dba_data_files
         GROUP BY tablespace_name) inn1,
       (SELECT tablespace_name,
               NVL(SUM(bytes)/1024/1024, 0) free
          FROM dba_free_space
         GROUP BY tablespace_name) inn2
 WHERE inn1.tablespace_name = inn2.tablespace_name(+)
   AND inn1.tablespace_name = '&tbsname';


SELECT file_name,
       tablespace_name,
       ROUND(bytes/1073741824, 2) size_gb,
       autoextensible,
       (increment_by * 32768)/1048576 increment_mb,
       maxbytes/1073741824 max_size_gb
  FROM dba_data_files
 WHERE tablespace_name = '&tbsname';
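
Note that the increment calculation above assumes a 32 KB block size. A variant that picks up the actual block size from DBA_TABLESPACES is sketched below:

SELECT df.file_name,
       df.tablespace_name,
       ROUND(df.bytes/1073741824, 2) size_gb,
       df.autoextensible,
       ROUND(df.increment_by * ts.block_size/1048576, 2) increment_mb,
       ROUND(df.maxbytes/1073741824, 2) max_size_gb
  FROM dba_data_files df, dba_tablespaces ts
 WHERE ts.tablespace_name = df.tablespace_name
   AND df.tablespace_name = '&tbsname';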

Monday 11 March 2013

DBI connect ORA-28009:(DBD ERROR: OCISessionBegin)

When trying to connect to a remote Oracle database as SYSDBA using Perl, the following connect attempts failed with the error shown below:

my $dbh = DBI->connect( "dbi:Oracle:orcl", "SYS", "sys", {ora_session_mode =>ORA_SYSDBA,RaiseError => 1,AutoCommit => 0 })
 or die "Can't open orcl database: $DBI::errstr";


my $dbh = DBI->connect( "dbi:Oracle:orcl", "SYS", "sys", {RaiseError => 1,AutoCommit => 0 })
 or die "Can't open $db_name database: $DBI::errstr";
 


 Error Msg:

DBI connect('orcl','sys',...) failed: ORA-28009: connection as SYS should be as SYSDBA or SYSOPER (DBD ERROR: OCISessionBegin) 

Solution:

Use the following updated script to connect to the Oracle database from Perl with the SYSDBA role:
  
#!/usr/bin/perl

use strict ;

use DBI;
use Getopt::Long qw(:config no_ignore_case);

#use DBD::Oracle qw(ora_session_modes);

# ora_session_mode => 2 corresponds to ORA_SYSDBA
my $dbh = DBI->connect( "dbi:Oracle:orcl", "SYS", "sys", {ora_session_mode => 2, RaiseError => 1, AutoCommit => 0} )
 or die "Can't open orcl database: $DBI::errstr";

my $sth = $dbh->prepare( "SELECT * FROM dba_tables" )
      or die "Can't prepare SQL statement: $DBI::errstr\n";

  ### Execute the statement in the database
  $sth->execute
      or die "Can't execute SQL statement: $DBI::errstr\n";

  ### Retrieve the returned rows of data
  my @row;
  while ( @row = $sth->fetchrow_array(  ) ) {
      print "Row: @row\n";
  }
  warn "Data fetching terminated early by error: $DBI::errstr\n"
      if $DBI::err;

$dbh->disconnect
      or warn "Error disconnecting: $DBI::errstr\n";

  exit;

Thursday 7 March 2013

install_driver(oracle) failed: Can't locate DBD/oracle.pm in @INC

Error when trying to connect to an Oracle database using the Perl script test.pl:

[oracle@sumit-3 scripts]$ perl test.pl

install_driver(oracle) failed: Can't locate DBD/oracle.pm in @INC (@INC contains: /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3.
Perhaps the DBD::oracle perl module hasn't been fully installed,or perhaps the capitalisation of 'oracle' isn't right.


Solution:

Either the DBD::Oracle module is not installed (the links below walk through the installation), or the driver name is capitalised incorrectly in the connect string: the DSN must use 'dbi:Oracle:...', since DBI driver names are case-sensitive.

http://www.idevelopment.info/data/Oracle/DBA_tips/Programming/PROGRAMMING_2.shtml

http://www.pythian.com/blog/fixing-the-dreaded-unable-to-locate-an-oracle-mk-proc-mk-or-other-suitable-mk-error-in-dbdoracle-insalls/

Identify Trace File Related to an Oracle Process - The Easy Way

To find the trace file location and name associated with a particular PID, use the following query:

select PID,spid,username,program,background,tracefile from v$process;

The TRACEFILE column will give you the required location.
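
For example, to find the trace file for your own session, you can join V$SESSION to V$PROCESS (a quick sketch):

select p.tracefile
  from v$session s, v$process p
 where p.addr = s.paddr
   and s.sid = sys_context('USERENV', 'SID');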