Tuesday 31 July 2012

Backup Types in Oracle 11g

An Oracle database offers several types of backups. We summarize the main types of backups in the following sections.

Physical and Logical Backups
  • Physical backup: a copy of the physical database files, made using OS commands.
  • Logical backup: a backup made using the exp/imp or Data Pump utilities.
Whole and Partial Backups
  • A whole backup of a database is a backup of the entire database: it includes all the datafiles plus the control files.
  • A partial backup refers to a backup of a tablespace or datafile in a database. You can also back up the control file by itself by making either a text or a binary copy of it.
Online and Offline Backups
  • An offline backup, also called a cold backup, is one made after shutting down the database using the shutdown command or the shutdown command with the immediate or transactional clause.
  • An offline backup, provided you make one after the database is shut down gracefully, is always consistent, whether you’re operating in archivelog or noarchivelog mode.
  • When making an offline backup with RMAN, you must, however, start the database you want to back up in mount mode.
  • An online backup, also called a hot or warm backup, is one made while the database instance is still open. By definition, an online backup is always inconsistent.
  • During a recovery, the application of the necessary archived redo logs will make the backup consistent. Thus, you can make online backups of any database you’re operating, and the resulting inconsistent backups can be made consistent with the application of archived redo logs.
Full and Incremental Backups
  • A full backup of a database will contain complete backups of all the datafiles.
  • Incremental backups contain only the changed data blocks in the datafiles. You can make incremental backups only with the help of RMAN; you can't make incremental backups using user-managed backup techniques (see the RMAN sketch below).
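As a minimal RMAN sketch (a configured target database is assumed; a level 0 backup is the incremental baseline equivalent of a full backup):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;    # baseline: all used data blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;    # only blocks changed since level 0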
Concept of SCN and Consistent/Inconsistent Backups


To understand the crucial difference between consistent and inconsistent backups, you must first understand the concept of the system change number (SCN). 

System change number (SCN) 
  • The SCN is an Oracle server–assigned number that indicates a committed version of the database. It’s quite possible that different datafiles in the database might have a different SCN at any given point in time. 
  •  If  the SCNs across all the datafiles are synchronized, it means that the data across the datafiles comes from a single point of time and, thus, is consistent.
  • During each checkpoint, the server makes all database file SCNs consistent with respect to an identical SCN. In addition, it updates the control file with that SCN information.
  • This synchronization of the SCNs gives you a consistent backup of your database. Not only does each of the datafiles in the database have the same SCN, it must also not contain any database changes beyond that common SCN. (You can verify the checkpoint SCNs with the queries shown below.)
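A quick way to compare the checkpoint SCN recorded in each datafile header with the one in the control file (standard dynamic performance views; the database must be at least mounted):

SQL> SELECT file#, checkpoint_change# FROM v$datafile_header;
SQL> SELECT checkpoint_change# FROM v$database;

If all the datafile header SCNs match the control file SCN, the datafiles are consistent with respect to a single SCN.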
Consistent and Inconsistent Backups
  • If you back up your database while it’s running, you may end up with backups of the various datafiles at various time points and different SCNs. This means your backups are inconsistent, since the SCNs aren’t identical across all the datafiles.
  • If you’re operating the database in noarchivelog mode, you can use only consistent backups to restore your database.
  • If you’re operating in archivelog mode, however, you can use consistent or inconsistent backups to restore the database.
  • If you’re using a consistent backup, you can open a whole-database backup without recovery and without using the open resetlogs command. 
  • If you’re using inconsistent backups, however, you must use archived redo logs to make the data current and synchronize the SCNs across the datafiles.
  • The key fact here is that the recovery process will make your inconsistent backups consistent again by using the data from the archived redo logs and the online redo log files to apply all the necessary changes across the datafiles and make them all consistent with reference to a single SCN.
  • If you’re running the database in noarchivelog mode, the recommended approach to backing up the database is to shut down the database cleanly first and then to back up all the datafiles. 
  • If you’re using RMAN to perform an offline backup, the database must be mounted before you can actually perform the RMAN backup. This is because RMAN needs to update the target database control file.
If you’re running the database in archivelog mode, you can back up a whole database in any of the following ways:
  • Closed and consistent
  • Closed and inconsistent
  • Open and inconsistent
  • The ability to back up a database while it is open and in use is a key benefit of running a database in archivelog mode. 
  • The only part of the database that is not backed up using RMAN is the online redo log files: these are protected by multiplexing. (You can check the archiving mode with the commands shown below.)
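You can confirm the archiving mode of a database with either of these standard commands:

SQL> SELECT log_mode FROM v$database;
SQL> ARCHIVE LOG LIST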

----- Reference: RMAN Recipes for Oracle Database 11g by Kuhn, Alapati, and Nanda

Noarchivelog Mode of Database Operation

  • In noarchivelog mode, Oracle will overwrite the filled online redo logs instead of archiving (saving) them.
  • In this mode, you're protected only from instance failures, such as those caused by a power failure, but not from a media failure. Thus, if there is a media failure, such as a damaged disk drive, the changes that were overwritten are gone forever, and the database won't be able to access those data modifications to recover the database up to the current point in time.
  • The transactions made since the last backup are lost forever, and you can restore the database only to the point of the last backup you made.

If you are running your database in noarchivelog mode and you happen to lose a datafile, for example, you follow these steps (sketched below) to get back to work again:
  • If the instance isn’t already shut down, first shut it down.
  • Restore the entire database (datafiles and control files) from the backups.
  • Restart the database by using the startup (open mode) command.
Users lose any data that was changed or newly entered in the database since the backup you just restored was taken. You can re-enter the data if you have a source; otherwise, you have a data-loss situation.
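A minimal SQL*Plus sketch of these steps (the copy command and paths are illustrative, not your actual locations):

SQL> SHUTDOWN IMMEDIATE
-- restore all datafiles and control files from the cold backup at the OS level
-- (e.g., cp /backup/mydb/* /u01/oradata/mydb/ with hypothetical paths)
SQL> STARTUP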

If you are running a production database—or if you want to make sure that all the data changes made to any database, for that matter, are always protected—you must operate your database in archivelog mode. Only a database running in archivelog mode can recover from both instance and media failures. You can’t perform a media recovery on a database running in noarchivelog mode.

  • If you’re running the database in noarchivelog mode, remember that you can make a whole-database backup only after first shutting the database down. You can’t make any online tablespace backups in such a database.
  • A database in noarchivelog mode also can’t use the tablespace point-in-time recovery technique. Make sure you take frequent whole-database backups if an important database is running in noarchivelog mode for some reason.
Flashback Technology


Traditionally, restoring backed-up datafiles and recovering the database with the help of archived redo logs was the only way you could rewind the database to a previous point in time or view older data. Oracle's flashback technology offers new techniques that let you recover from several types of errors without ever having to restore backup files. The key idea behind the flashback technology is to improve database availability while you're fixing logical data errors. While you're correcting the logical data errors in one or more objects, all the other database objects continue to be available to the users unhindered. Flashback technology actually consists of a half dozen specific features, most but not all of which rely on the use of undo data to undo the effects of logical errors:

Oracle flashback query (uses undo data): This feature lets you view results from a past period in time. You can use this query to retrieve lost or wrongly deleted data.

Oracle flashback version query (uses undo data): This feature lets you view all versions of a table's rows during a specific interval. You can use this feature for retrieving old data as well as for auditing purposes.

Oracle flashback transaction query (uses undo data): This feature enables you to view all the changes made by one or more transactions during a specified period of time.

Oracle flashback transaction backout (uses undo data): This new Oracle Database 11g feature lets you back out unwanted transactions by using compensating transactions.

Oracle flashback table (uses undo data): This feature lets you recover a table (online) to a previous point in time. You can recover a table or a set of tables to a past point in time by using the contents of the undo tablespace. The database can remain online during this time, thus enhancing its availability.

Oracle flashback drop (uses the recycle bin): This feature relies on the concept of a recycle bin and lets you restore a dropped table. When you accidentally drop a table with the drop table statement, information about the purged table is saved in the recycle bin (which is actually a data dictionary table) under a system-assigned name. The table's contents remain intact and in place, but the data dictionary marks the table as having been dropped. You can then "undrop" the table at a later time by using the flashback table ... to before drop statement, which recovers the dropped object from the recycle bin. The flashback drop feature relies entirely on the recycle bin concept.
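Minimal SQL sketches of a few of these features (the EMP table and the 30-minute window are illustrative; undo retention must cover the window you flash back to, and flashback table requires row movement):

SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);   -- flashback query
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);    -- flashback table
SQL> FLASHBACK TABLE emp TO BEFORE DROP;                                        -- flashback drop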

----- Reference: RMAN Recipes for Oracle Database 11g by Kuhn, Alapati, and Nanda

Friday 27 July 2012

Export Data from a Table to CSV in Oracle 10g

Use the procedure below to export data from a table to a CSV file:

CREATE OR REPLACE PROCEDURE Dump_Table_To_CSV( p_tname    IN VARCHAR2,
                                               p_dir      IN VARCHAR2,
                                               p_filename IN VARCHAR2 )
IS
    l_output      UTL_FILE.FILE_TYPE;
    l_theCursor   INTEGER DEFAULT DBMS_SQL.OPEN_CURSOR;
    l_columnValue VARCHAR2(4000);
    l_status      INTEGER;
    l_query       VARCHAR2(1000) DEFAULT 'select * from ' || p_tname;
    l_colCnt      NUMBER := 0;
    l_separator   VARCHAR2(1);
    l_descTbl     DBMS_SQL.DESC_TAB;
BEGIN
    l_output := UTL_FILE.FOPEN( p_dir, p_filename, 'w' );
    EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss''';

    -- Parse the query and describe the columns of the table.
    DBMS_SQL.PARSE( l_theCursor, l_query, DBMS_SQL.NATIVE );
    DBMS_SQL.DESCRIBE_COLUMNS( l_theCursor, l_colCnt, l_descTbl );

    -- Write the header row with the quoted column names.
    FOR i IN 1 .. l_colCnt LOOP
        UTL_FILE.PUT( l_output, l_separator || '"' || l_descTbl(i).col_name || '"' );
        DBMS_SQL.DEFINE_COLUMN( l_theCursor, i, l_columnValue, 4000 );
        l_separator := ';';
    END LOOP;
    UTL_FILE.NEW_LINE( l_output );

    l_status := DBMS_SQL.EXECUTE( l_theCursor );

    -- Fetch each row and write it out as a semicolon-separated line.
    WHILE ( DBMS_SQL.FETCH_ROWS( l_theCursor ) > 0 ) LOOP
        l_separator := '';
        FOR i IN 1 .. l_colCnt LOOP
            DBMS_SQL.COLUMN_VALUE( l_theCursor, i, l_columnValue );
            UTL_FILE.PUT( l_output, l_separator || l_columnValue );
            l_separator := ';';
        END LOOP;
        UTL_FILE.NEW_LINE( l_output );
    END LOOP;

    DBMS_SQL.CLOSE_CURSOR( l_theCursor );
    UTL_FILE.FCLOSE( l_output );

    -- Restore the session's default date format.
    EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-MON-yy''';
EXCEPTION
    WHEN OTHERS THEN
        EXECUTE IMMEDIATE 'alter session set nls_date_format=''dd-MON-yy''';
        RAISE;
END;
/

To test the procedure:

exec Dump_Table_To_CSV('EMP_TABLE','DUMP_DIR','EMP_FILE.CSV');

Before executing the procedure, make sure the user has read/write permission on the directory object that maps to "/home/oracle/DUMP_DIR" (similar to a Data Pump directory). To check directory information, you can use the queries below:

SELECT grantee, table_name AS directory_name, privilege FROM user_tab_privs;
SELECT * FROM all_directories;
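If the directory object doesn't exist yet, a DBA can create it and grant access like this (a minimal sketch; the path and the grantee SCOTT are illustrative):

SQL> CREATE OR REPLACE DIRECTORY dump_dir AS '/home/oracle/DUMP_DIR';
SQL> GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;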




Monday 23 July 2012

Oracle DataPump

Oracle DataPump Features:
Data Pump is a server-side utility. You initiate Data Pump jobs from a user process, either SQL*Plus or Enterprise Manager, but all the work is done by server processes. This improves performance dramatically over the old export/import utilities, because the Data Pump processes running on the server have direct access to the datafiles and the SGA; they don't have to go via a session.
Data Pump takes the old export and import utilities one step further: you have total control over the running job (stop it, pause it, check it, restart it). Data Pump is a server-side technology, and it can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; jobs can be 15-45% faster than with the older export/import utilities. The advantages of using Data Pump are:
  • ability to estimate job times
  • ability to restart failed jobs
  • perform fine-grained object selection
  • monitor running jobs
  • directly load a database from a remote instance via the network
  • remapping capabilities
  • improved performance using parallel executions
A couple of notes: you cannot export to a tape device, only to disk, and Data Pump import works only with Oracle 10.1 or greater.

Data Pump Uses
You can use data pump for the following:
  • migrating databases
  • copying databases
  • transferring oracle databases between different operating systems
  • backing up important tables before you change them
  • moving database objects from one tablespace to another
  • transporting tablespaces between databases
  • reorganizing fragmented table data
  • extracting the DDL for tables and other objects such as stored procedures and packages
Data Access methods
Data Pump has two methods for loading data: the direct path and the external table path. As a DBA you have no control over which method Data Pump uses; normally, simple structures such as heap tables without triggers will use the direct path, while more complex tables will use the external table path. Oracle will always try to use the direct-path method.
· Direct path: Bypasses the database buffer cache and writes beyond the high-water mark; when finished, it adjusts the high-water mark. No undo is generated, and redo can be switched off as well, so there is minimal impact on users because the SGA is not used. Triggers on tables must be disabled before use.
· External table path: Uses the database buffer cache; the export acts as a SELECT statement into a dump file, and during import Data Pump reconstructs the rows into INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used, just as a DML statement would.
Oracle will use the external table path if any of the following are in use:
  • clustered tables
  • active triggers in the table
  • a single partition in a table with a global index
  • referential integrity constraints
  • domain indexes on LOB columns
  • tables with fine-grained access control enabled in the insert mode
Data Pump files

You will use the following types of files when using Data Pump; all files are created on the server.
  • dump files - hold the data and metadata
  • log files - the resulting output from the Data Pump command
  • SQL files - contain the DDL statements describing the objects included in the job, but not the table data
  • master table - when running a job, Data Pump creates a table within the schema of the job owner; this is used for controlling the Data Pump job, and the table is removed when the job finishes

Changing from Original Export/Import to Oracle Data Pump:

New concept of 'DIRECTORY' in expdp:

In order to use Data Pump, the database administrator must create a directory object and grant privileges to the user on that directory object. If a directory object is not specified, a default directory object called data_pump_dir is provided. The default data_pump_dir is available only to privileged users unless access is granted by the DBA.
SQL> CREATE DIRECTORY FullDump AS 'C:\FullDump';
Directory created.
SQL> CREATE DIRECTORY SchemaDump AS 'C:\SchemaDump';
Directory created.
SQL>
SELECT grantee, table_name AS directory_name, privilege FROM user_tab_privs;
SELECT * FROM all_directories;

After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write files on behalf of user scott in the directory named SchemaDump, you must execute the following command:
SQL> GRANT READ, WRITE ON DIRECTORY SchemaDump TO Scott;
Once the directory access is granted, the user scott can export his database objects to that directory using the command below:
expdp scott/tiger schemas=scott directory=SchemaDump dumpfile=scott.dmp logfile=scott.log
If the user does not have read/write permission on the directory, expdp will fail with errors like the following:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name "SCHEMADUMP" is invalid.

Table Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax.
  • expdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
  •  impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables (see the sketch below).
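A minimal sketch reusing the dump file from the export above (TABLE_EXISTS_ACTION also accepts SKIP, TRUNCATE, and REPLACE):

impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp table_exists_action=APPEND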

Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax.
  •  expdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
  • impdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.
  • expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
  •  impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
INCLUDE and EXCLUDE

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import. The two parameters are mutually exclusive, so use the parameter that requires the least entries to give you the result you require. The basic syntax for both parameters is the same.
Syntax:
  • INCLUDE=object_type[:name_clause] [, ...]
  • EXCLUDE=object_type[:name_clause] [, ...]
Example:
  • expdp scott/tiger schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
  •  expdp scott/tiger schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
  • expdp scott/tiger TABLES=SCOTT.EMP EXCLUDE=STATISTICS DIRECTORY=datapump DUMPFILE=EMP.dmp LOGFILE=EMP.log
We can EXCLUDE/INCLUDE indexes, constraints, and statistics, instead of using INDEXES=y/n, CONSTRAINTS=y/n, and STATISTICS=none as in the old exp/imp.

REMAP_SCHEMA: FROMUSER/TOUSER has been replaced by REMAP_SCHEMA in Oracle 10g

Syntax: REMAP_SCHEMA=source_schema:target_schema

If the schema you are remapping to does not already exist, the import operation creates it, provided the dump file set contains the necessary CREATE USER metadata and you are importing with enough privileges.
If the dump file set does not contain the metadata necessary to create the schema, or if you are not importing with enough privileges, then you must create the target schema before performing the import operation.
If the import operation does create the schema, then after the import is complete, you must assign it a valid password in order to connect to it. The SQL statement to do this, which requires privileges, is:
SQL> ALTER USER [schema_name] IDENTIFIED BY [new_pswd];
Suppose that you want to remap the HR schema to the SCOTT schema:
  • expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
  •  impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott
NETWORK_LINK

With network-mode imports, you don't need any intermediate dump files (no more FTPing of dump files). Data is exported across a database link and imported directly into the target database.
SQL> create user sigma_local identified by sigma_local;
User created.
SQL> grant connect, resource to sigma_local;
Grant succeeded.
SQL> grant read, write on directory SchemaDump to sigma_local;
Grant succeeded.
SQL> grant create database link to sigma_local;
Grant succeeded.
SQL> create database link old_sigma connect to sigma identified by sigma using 'orcl2';
### Now to import all objects of sigma schema (on remote db) to old_sigma (on local db) use the below command.
impdp sigma_local/sigma_local DIRECTORY=SchemaDump NETWORK_LINK=old_sigma remap_schema=sigma:sigma_local


Stop and restart an expdp/impdp in progress:

· expdp system/sys full=Y exclude=statistics directory=FullDump dumpfile=FullexportDump_stat.dmp logfile=expdpfull_stat.log job_name=SYS_EXPORT_FULL_01
· If you want to pause the job for some time, press Ctrl+C and type the command given below:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes
· Now, if you want to restart the job, you need to attach to the job:
1. expdp system/sys attach=SYS_EXPORT_FULL_01
2. Export> START_JOB
3. Export> STATUS=600
4. Export> CONTINUE_CLIENT
The following commands are valid while in interactive mode.
  • ADD_FILE: Add a dump file to the dump file set.
  • CONTINUE_CLIENT: Return to logging mode. The job will be restarted if idle.
  • EXIT_CLIENT: Quit the client session and leave the job running.
  • FILESIZE: Default file size (in bytes) for subsequent ADD_FILE commands.
  • HELP: Summarize interactive commands.
  • KILL_JOB: Detach from and delete the job.
  • PARALLEL: Change the number of active workers for the current job. PARALLEL=<number of workers>.
  • START_JOB: Start/resume the current job.
  • STATUS: Frequency (in seconds) at which job status is monitored; the default (0) shows new status when available. STATUS[=interval]
  • STOP_JOB: Orderly shutdown of job execution; exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Cause: The listener received a request to establish a connection to a database or other service. The connect descriptor received by the listener specified a service name for a database service that either has not yet dynamically registered with the listener or has not been statically configured for the listener.

Solution:

- Check if DB service is known using command: lsnrctl services <listener name>

- Check that the SERVICE_NAME parameter in the connect descriptor of the net service name used specifies a service known by the listener.

- If an easy connect naming connect identifier was used, check that the service name specified is a service known by the listener.

- Check for an event in the listener.log file.

Make the below changes in the listener.ora (for service MYDBNAME)
------------------------------------------

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle/11.2.0/product/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = MYDBNAME)
      (ORACLE_HOME = /home/oracle/11.2.0/product/db_1)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
  )

Monday 16 July 2012

How to Autostart oracle when Linux server boots up


1. Connect as the root user and create the script below at the location "/etc/init.d".
vi startup_reboot.sh and add the following lines to the script:


#!/bin/sh
export ORACLE_SID=mydb
export ORACLE_BASE=/home/oracle/10.2.0/product
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORA_OWNER=oracle
su - $ORA_OWNER -c "$ORACLE_HOME/bin/lsnrctl start"
su - $ORA_OWNER -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
touch /var/lock/subsys/startup_reboot.sh

Note: The Oracle environment variables may differ according to the entries in .bash_profile.

2. Make the script executable to avoid permission errors:
chmod +x startup_reboot.sh

3. Connect as the root user and create a link so the script runs at boot time:

ln -s /etc/init.d/startup_reboot.sh /etc/rc.d/rc5.d/S50startup_reboot.sh
The S50 prefix tells the system to start the script when it boots up (scripts in rc5.d run in order at runlevel 5).

4. Update "/etc/oratab"

Connect as the oracle user and update the <N|Y> flag to "Y" for the database instances that need to be started on reboot. The first and second fields are the system identifier and home directory of the database, respectively. The third field indicates to the dbstart utility whether the database should ("Y") or should not ("N") be brought up at system boot time.

  # Entries are of the form:
              $ORACLE_SID:$ORACLE_HOME:<N|Y>
  # An example entry:
              mydb:/home/oracle/10.2.0/product/db_1:Y

 


Friday 13 July 2012

Changing hostname on RHEL

1. Change the HOSTNAME= line in /etc/sysconfig/network

2. Change the hostname (FQDN and alias) in /etc/hosts

3. Reboot the server

Tuesday 10 July 2012

Alter/Add Redo Log Files

ALTER DATABASE ADD LOGFILE GROUP 6 ( '/home/oracle/10.2.0/product/oradata/db/redo06.log') SIZE 400M;
ALTER DATABASE ADD LOGFILE GROUP 7 ( '/home/oracle/10.2.0/product/oradata/db/redo07.log') SIZE 400M;
ALTER DATABASE ADD LOGFILE GROUP 8 ( '/home/oracle/10.2.0/product/oradata/db/redo08.log') SIZE 400M;
ALTER DATABASE ADD LOGFILE GROUP 9 ( '/home/oracle/10.2.0/product/oradata/db/redo09.log') SIZE 400M;
ALTER DATABASE ADD LOGFILE GROUP 10 ( '/home/oracle/10.2.0/product/oradata/db/redo10.log') SIZE 400M;

ALTER DATABASE DROP LOGFILE GROUP 2;

ALTER DATABASE DROP LOGFILE GROUP 4;

ALTER SYSTEM SWITCH LOGFILE;

ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;

ALTER DATABASE DROP LOGFILE GROUP 1;

ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE GROUP 5;
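Before dropping a log group, confirm it is neither CURRENT nor ACTIVE; that is why the switch logfile commands above are issued between the drops. A minimal check using the standard v$log view:

SQL> SELECT group#, status FROM v$log;
SQL> ALTER SYSTEM CHECKPOINT;   -- an ACTIVE group becomes INACTIVE once its checkpoint completes

Also note that dropping a log group removes it only from the control file; the OS files may need to be deleted manually.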

Alter SGA

SQL> Alter system set sga_max_size=3072M scope=spfile;

System altered.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1795162112 bytes
Fixed Size 2021504 bytes
Variable Size 1124075392 bytes
Database Buffers 654311424 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.
SQL>
SQL> show parameter sga_max_size;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 1712M
SQL>
SQL>
SQL>
SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1712M
sga_target big integer 880M
SQL>
SQL>

SQL> alter system set sga_target=3050M scope=both;

System altered.

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1795162112 bytes
Fixed Size 2021504 bytes
Variable Size 301991808 bytes
Database Buffers 1476395008 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.
SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1712M
sga_target big integer 1664M
SQL>

Wednesday 4 July 2012

Oracle Database Instant Client and PHP OCI-8 connectivity


Download the Basic and the SDK Instant Client packages from the OTN Instant Client page. I have used the RPMs.
Install the RPMs as the root user, for example:
rpm -Uvh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
rpm -Uvh oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm

The first RPM puts Oracle libraries in /usr/lib/oracle/11.2/client64/lib and the second creates headers in /usr/include/oracle/11.2/client64.

Below is the log of the steps that I followed:


[root@sumit-rhel-5]# rpm -Uvh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient11.########################################### [100%]
[root@sumit-rhel-5]# rpm -Uvh oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracle-instantclient11.########################################### [100%]
[root@sumit-rhel-5 oracle]# cd /usr/lib/oracle/11.2/client64/lib
[root@sumit-rhel-5 lib]# ls -l
total 182304
lrwxrwxrwx 1 root root        17 Jul  4 05:52 libclntsh.so -> libclntsh.so.11.1
-rw-r--r-- 1 root root  52761218 Sep 17  2011 libclntsh.so.11.1
-rw-r--r-- 1 root root   7955322 Sep 17  2011 libnnz11.so
lrwxrwxrwx 1 root root        15 Jul  4 05:52 libocci.so -> libocci.so.11.1
-rw-r--r-- 1 root root   1971762 Sep 17  2011 libocci.so.11.1
-rw-r--r-- 1 root root 118408281 Sep 17  2011 libociei.so
-rw-r--r-- 1 root root    164836 Sep 17  2011 libocijdbc11.so
-rw-r--r-- 1 root root   2095661 Sep 17  2011 ojdbc5.jar
-rw-r--r-- 1 root root   2714016 Sep 17  2011 ojdbc6.jar
-rw-r--r-- 1 root root    300666 Sep 17  2011 ottclasses.zip
-rw-r--r-- 1 root root     66779 Sep 17  2011 xstreams.jar
[root@sumit-rhel-5 lib]# cd /usr/include/oracle/11.2/client64
[root@sumit-rhel-5 client64]# ls -l
total 1672
-rw-r--r-- 1 root root  42534 Sep 17  2011 ldap.h
-rw-r--r-- 1 root root  34962 Sep 17  2011 nzerror.h
-rw-r--r-- 1 root root  74787 Sep 17  2011 nzt.h
-rw-r--r-- 1 root root  11586 Sep 17  2011 occiAQ.h
[root@sumit-rhel-5 oracle]# ls -l
total 58768
-rw-r--r-- 1 root   root   59492344 Jul  4 05:41 oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
-rw-rw-r-- 1 oracle oracle   609139 Jul  4 05:48 oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm
drwxr-xr-x 2 root   root       4096 Jul  4 05:49 rpmoci
[root@sumit-rhel-5 oracle]#
[root@sumit-rhel-5 oracle]#
[root@sumit-rhel-5 oracle]# pecl install oci8
downloading oci8-1.4.7.tgz ...
Starting to download oci8-1.4.7.tgz (168,584 bytes)
.......................done: 168,584 bytes
10 source files, building
running: phpize
Configuring for:
PHP Api Version:         20041225
Zend Module Api No:      20060613
Zend Extension Api No:   220060519
Please provide the path to the ORACLE_HOME directory. Use 'instantclient,/path/to/instant/client/lib' if you're compiling with Oracle Instant Client [autodetect] : (Press Enter)
building in /var/tmp/pear-build-root/oci8-1.4.7
running: /var/tmp/oci8/configure --with-oci8
checking for egrep... grep -E
checking for a sed that does not truncate output... /bin/sed
checking for cc... cc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ANSI C... none needed
checking how to run the C preprocessor... cc -E
checking for icc... no
checking for suncc... no
.....................
cc -shared  .libs/oci8.o .libs/oci8_lob.o .libs/oci8_statement.o .libs/oci8_collection.o .libs/oci8_interface.o  -L/usr/lib/oracle/11.2/client64/lib -lclntsh  -Wl,-rpath -Wl,/usr/lib/oracle/11.2/client64/lib -Wl,-soname -Wl,oci8.so -o .libs/oci8.so
creating oci8.la
(cd .libs && rm -f oci8.la && ln -s ../oci8.la oci8.la)
/bin/sh /var/tmp/pear-build-root/oci8-1.4.7/libtool --mode=install cp ./oci8.la /var/tmp/pear-build-root/oci8-1.4.7/modules
cp ./.libs/oci8.so /var/tmp/pear-build-root/oci8-1.4.7/modules/oci8.so
cp ./.libs/oci8.lai /var/tmp/pear-build-root/oci8-1.4.7/modules/oci8.la
PATH="$PATH:/sbin" ldconfig -n /var/tmp/pear-build-root/oci8-1.4.7/modules
----------------------------------------------------------------------
Libraries have been installed in:
   /var/tmp/pear-build-root/oci8-1.4.7/modules
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,--rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
Build complete.
Don't forget to run 'make test'.
running: make INSTALL_ROOT="/var/tmp/pear-build-root/install-oci8-1.4.7" install
Installing shared extensions:     /var/tmp/pear-build-root/install-oci8-1.4.7/usr/lib64/php/modules/
running: find "/var/tmp/pear-build-root/install-oci8-1.4.7" | xargs ls -dils
196806   4 drwxr-xr-x 3 root root   4096 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7
196810   4 drwxr-xr-x 3 root root   4096 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7/usr
196811   4 drwxr-xr-x 3 root root   4096 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7/usr/lib64
196812   4 drwxr-xr-x 3 root root   4096 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7/usr/lib64/php
196813   4 drwxr-xr-x 2 root root   4096 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7/usr/lib64/php/modules
720897 484 -rwxr-xr-x 1 root root 489488 Jul  4 05:55 /var/tmp/pear-build-root/install-oci8-1.4.7/usr/lib64/php/modules/oci8.so

Build process completed successfully
Installing '/usr/lib64/php/modules/oci8.so'
install ok: channel://pecl.php.net/oci8-1.4.7
configuration option "php_ini" is not set to php.ini location

You should add "extension=oci8.so" to php.ini

[root@sumit-rhel-5 modules]# vi /etc/sysconfig/httpd

# Configuration file for the httpd service.

#
# The default processing model (MPM) is the process-based
# 'prefork' model.  A thread-based model, 'worker', is also
# available, but does not work with some modules (such as PHP).
# The service must be stopped before changing this variable.
#
#HTTPD=/usr/sbin/httpd.worker

#
# To pass additional options (for instance, -D definitions) to the
# httpd binary at startup, set OPTIONS here.
#
#OPTIONS=

#
# By default, the httpd process is started in the C locale; to
# change the locale in which the server runs, the HTTPD_LANG
# variable can be set.
#
#HTTPD_LANG=C
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
~
"/etc/sysconfig/httpd" 24L, 674C written
[root@sumit-rhel-5 modules]#
[root@sumit-rhel-5 modules]#
[root@sumit-rhel-5 modules]#
[root@sumit-rhel-5 modules]#
[root@sumit-rhel-5 modules]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                               [  OK  ]
[root@sumit-rhel-5 modules]#


Monday 2 July 2012

free buffer waits


Possible Causes :
· This means we are waiting for a free buffer but there are none available in the cache because there are too many dirty buffers in the cache

· Either the buffer cache is too small or the DBWR is slow in writing modified buffers to disk

· DBWR is unable to keep up with the write requests

· Checkpoints happening too fast – maybe due to high database activity and under-sized online redo log files

· Large sorts and full table scans are filling the cache with modified blocks faster than the DBWR is able to write to disk
· If the number of dirty buffers that need to be written to disk is larger than the number that DBWR can write per batch, then these waits can be observed

Actions :
Reduce checkpoint frequency – increase the size of the online redo log files

Examine the size of the buffer cache – consider increasing the size of the buffer cache in the SGA

Set disk_asynch_io = true

If not using asynchronous I/O, increase the number of DB writer processes or DBWR slaves

Ensure hot spots do not exist by spreading datafiles over disks and disk controllers

Pre-sorting or reorganizing data can help (see the query below for measuring time spent on this wait event)
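To see how much time the instance has spent on this event (and the other events discussed below), you can query the standard v$system_event view; the event list here is illustrative:

SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event IN ('free buffer waits', 'buffer busy waits', 'log file sync',
                'log file parallel write', 'db file scattered read')
ORDER BY time_waited DESC;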

buffer busy waits


Possible Causes :
· Buffer busy waits are common in an I/O-bound Oracle system.
· The two main cases where this can occur are:
· Another session is reading the block into the buffer
· Another session holds the buffer in an incompatible mode to our request
· These waits indicate read/read, read/write, or write/write contention.
· The Oracle session is waiting to pin a buffer. A buffer must be pinned before it can be read or modified. Only one process can pin a buffer at any one time.

· This wait can be intensified by a large block size as more rows can be contained within the block

· This wait happens when a session wants to access a database block in the buffer cache but cannot, as the buffer is "busy"

· It is also often due to several processes repeatedly reading the same blocks (e.g., lots of sessions scanning the same index or data block)

Actions :
· The main way to reduce buffer busy waits is to reduce the total I/O on the system

· Depending on the block type, the actions will differ

Data Blocks

· Eliminate HOT blocks from the application. Check for repeatedly scanned / unselective indexes.

· Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.

· Check for 'right-hand indexes' (indexes that get inserted into at the same point by many processes).

· Increase INITRANS and MAXTRANS and reduce PCTUSED. This will make the table less dense.

· Reduce the number of rows per block

Segment Header

· Increase the number of FREELISTs and FREELIST GROUPs

Undo Header

· Increase the number of rollback segments (the query below shows which block class the waits fall on)
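To see which block class the buffer busy waits are concentrated on (data block, segment header, undo header, and so on), query the standard v$waitstat view:

SELECT class, count, time
FROM v$waitstat
ORDER BY time DESC;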

log file sync


Possible Causes :
· Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete
Actions :
· Tune LGWR to get good throughput to disk, e.g., do not put redo logs on RAID 5
· Reduce overall number of commits by batching transactions so that there are fewer distinct COMMIT operations


log file parallel write


Possible Causes :
· LGWR waits while writing contents of the redo log buffer cache to the online log files on disk
· I/O wait on sub system holding the online redo log files

Actions :
· Reduce the amount of redo being generated

· Do not leave tablespaces in hot backup mode for longer than necessary

· Do not use RAID 5 for redo log files

· Use faster disks for redo log files

· Ensure that the disks holding the archived redo log files and the online redo log files are separate so as to avoid contention

· Consider using NOLOGGING or UNRECOVERABLE options in SQL statements

db file scattered read


Possible Causes :
· The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be read into the SGA from disk.
· Full Table scans

· Fast Full Index Scans

Actions :
· Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT

· Partition pruning to reduce number of blocks visited

· Consider the usage of multiple buffer pools and cache frequently used indexes/tables in the KEEP pool
· Optimize the SQL statement that initiated most of the waits. The goal is to minimize the number of physical and logical reads.
· Should the statement access the data by a full table scan or index fast full scan? Would an index range or unique scan be more efficient? Does the query use the right driving table?
· Are the SQL predicates appropriate for a hash or merge join?
· If full scans are appropriate, can parallel query improve the response time?
· The objective is to reduce the demands for both logical and physical I/O, and this is best achieved through SQL and application tuning.
· Make sure all statistics are representative of the actual data. Check the LAST_ANALYZED date.

Remarks:
· If an application that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there hasn't been a code change, you might want to check whether one or more indexes have been dropped or become unusable.
· Or check whether the optimizer statistics have become stale (see the query below).
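A quick way to spot stale or missing optimizer statistics (the STALE_STATS column of user_tab_statistics; staleness tracking assumes the default statistics level, which enables table monitoring):

SELECT table_name, last_analyzed, stale_stats
FROM user_tab_statistics
WHERE stale_stats = 'YES' OR last_analyzed IS NULL;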