Friday 21 December 2012

How to change system date in Linux

To change the system date on Red Hat Linux, first log in as the root user and type:

$ date MMDDhhmmYYYY

For example, to change the system date to Dec 21 2012, 9:35 AM, type:

$ date 122109352012

Fri Dec 21 09:35:00 IST 2012

It's that simple.

Note:
MM – month
DD – day
hh – hour (24-hour format)
mm – minutes
YYYY – year

Wednesday 19 December 2012

SWITCH LOGFILE vs ARCHIVELOG CURRENT

Oracle has two ways to force a logfile switch: ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVELOG CURRENT.
What is the difference between ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVELOG CURRENT, and when should you use each?

Yes, both ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVELOG CURRENT will force a log switch, but they do it in different ways.
Both SWITCH LOGFILE and ARCHIVELOG CURRENT write a checkpoint, establishing a firm point at which the last redo log becomes part of the hot backup, but ARCHIVELOG CURRENT also waits for the archive write to complete. This can take several minutes for multi-gigabyte redo logs.
Conversely, the ALTER SYSTEM SWITCH LOGFILE command is very fast and returns control to the caller in less than a second, while ALTER SYSTEM ARCHIVELOG CURRENT pauses.
As we see below, ALTER SYSTEM SWITCH LOGFILE is fast because it does not wait for the archiver process (ARCH) to finish writing the online redo log to the archivelog filesystem:
  1. It issues a database checkpoint.
  2. It immediately starts writing to the next redo log.
  3. In the background, the "switch logfile" command tells the ARCH background process to copy the "old" redo log file to the archivelog filesystem.
Here are the important differences between ALTER SYSTEM SWITCH LOGFILE and ALTER SYSTEM ARCHIVELOG CURRENT:
  • RAC: If you are running RAC, ALTER SYSTEM ARCHIVELOG CURRENT will switch the logs on all RAC nodes (instances), whereas ALTER SYSTEM SWITCH LOGFILE will only switch the logfile on the instance where you issue the switch command. Hence, ALTER SYSTEM ARCHIVELOG CURRENT is a best practice for RAC systems.
  • ALTER SYSTEM SWITCH LOGFILE is asynchronous: This command returns quickly to the invoking program because the writing of the redo log to the OS filesystem is done in the background. There is a small risk that the ARCH process cannot complete writing the redo log, for example when the OS archivelog file directory is out of space. It is also risky because the calling script may move on to a subsequent step, assuming that the redo has been written. Some scripts place a SLEEP 60 command in the backup script to allow time for the redo to finish writing, but this is not a best practice.
  • ALTER SYSTEM ARCHIVELOG CURRENT is synchronous: This is slower to return because the command waits until the online redo log has been completely written to the filesystem. This command is safer because it waits for the OS to acknowledge (ACK) that the redo log has been successfully written. Hence, ALTER SYSTEM ARCHIVELOG CURRENT is the best practice for production backup scripts with RMAN.
  • Argument required: ALTER SYSTEM ARCHIVELOG CURRENT allows you to specify the thread to archive, while ALTER SYSTEM SWITCH LOGFILE archives only the current thread. If you do not pass the thread parameter, Oracle will archive all full online redo logs.
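
For reference, this is how the two commands look in SQL*Plus (note that the actual SQL syntax spells the second command as two words, ARCHIVE LOG):

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;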

Tuesday 18 December 2012

Convert MSSM to automatic segment space management (ASSM)?


The segment space management that you specify at tablespace creation time applies to all segments subsequently created in the tablespace.

You cannot change the segment space management mode of an existing tablespace.

You cannot alter a locally managed tablespace to a locally managed temporary tablespace, nor can you change its method of segment space management.

To convert the segment space management mode of a tablespace, you have to re-create the tablespace, as sketched below:
  1. Export the contents of the tablespace.
  2. Drop the tablespace that needs to be converted.
  3. Create the tablespace with the desired segment space management mode.
  4. Import the contents of the dump file into the newly created tablespace.
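
A minimal sketch of these steps with Data Pump, assuming a hypothetical tablespace USERS_MSSM and a directory object DP_DIR:

$ expdp system DIRECTORY=dp_dir DUMPFILE=users_mssm.dmp TABLESPACES=users_mssm

SQL> DROP TABLESPACE users_mssm INCLUDING CONTENTS AND DATAFILES;
SQL> CREATE TABLESPACE users_mssm
     DATAFILE '/u01/oradata/users_mssm01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL
     SEGMENT SPACE MANAGEMENT AUTO;

$ impdp system DIRECTORY=dp_dir DUMPFILE=users_mssm.dmp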

Friday 7 December 2012

ORA-01102: cannot mount database in EXCLUSIVE mode

SQL> Alter database mount;
Alter database mount
*
ERROR at line 1:
ORA-01102: cannot mount database in EXCLUSIVE mode
SQL>


Cause: An instance tried to mount the database in exclusive mode, but some other instance has already mounted the database in exclusive or parallel mode. 

Action: Either mount the database in parallel mode or shut down all other instances before mounting the database in exclusive mode. 
The database is started in EXCLUSIVE mode by default. Therefore, the ORA-01102 error is misleading and may have occurred due to one of the following reasons:
  • there is still an "sgadef<sid>.dbf" file in the "ORACLE_HOME/dbs" directory
  • the processes for Oracle (pmon, smon, lgwr and dbwr) still exist
  • shared memory segments and semaphores still exist even though the database has been shut down
  • there is an "ORACLE_HOME/dbs/lk<sid>" file
The "lk<sid>" and "sgadef<sid>.dbf" files are used for locking shared memory. It seems that even though no memory is allocated, Oracle thinks memory is still locked. By removing the "sgadef" and "lk" files you remove any knowledge Oracle has of shared memory that is in use. Then the database can start.

Solution:

SQL> shu immediate;
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> exit;
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

[oracle@DBA-RHEL4-10G ~]$ ps -ef | grep ora_ | grep $ORACLE_SID

oracle 13123 1 0 02:29 ? 00:00:00 ora_pmon_test85
oracle 13125 1 0 02:29 ? 00:00:01 ora_psp0_test85
oracle 13127 1 0 02:29 ? 00:00:00 ora_mman_test85
oracle 13129 1 0 02:29 ? 00:00:22 ora_dbw0_test85
oracle 13131 1 0 02:29 ? 00:00:19 ora_lgwr_test85
oracle 13133 1 0 02:29 ? 00:02:07 ora_ckpt_test85
oracle 13135 1 0 02:29 ? 00:00:02 ora_smon_test85
oracle 13137 1 0 02:29 ? 00:00:00 ora_reco_test85
oracle 13139 1 0 02:29 ? 00:00:00 ora_cjq0_test85
oracle 13141 1 0 02:29 ? 00:00:01 ora_mmon_test85
oracle 13143 1 0 02:29 ? 00:00:02 ora_mmnl_test85
oracle 13145 1 0 02:29 ? 00:00:00 ora_d000_test85
oracle 13147 1 0 02:29 ? 00:00:00 ora_s000_test85
oracle 13151 1 0 02:29 ? 00:00:00 ora_qmnc_test85
oracle 13218 1 0 02:29 ? 00:00:00 ora_q001_test85
oracle 13220 1 0 02:29 ? 00:00:00 ora_q002_test85
oracle 18163 1 0 21:29 ? 00:00:00 ora_j000_test85
[oracle@DBA-RHEL4-10G ~]$
[oracle@DBA-RHEL4-10G ~]$ kill -9 <process-list>
[oracle@DBA-RHEL4-10G ~]$
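If there are many background processes, the PID list can be built with a pipeline such as the following (a sketch, assuming the ps output format shown above, where the PID is the second column):

[oracle@DBA-RHEL4-10G ~]$ ps -ef | grep ora_ | grep $ORACLE_SID | awk '{print $2}' | xargs kill -9
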

Verify that the "$ORACLE_HOME/dbs/lk<sid>" file does not exist. If it is present, copy it to a temporary location for safety, then delete it from $ORACLE_HOME/dbs/.
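
For example (a sketch; <sid> stands for your instance name):

[oracle@DBA-RHEL4-10G ~]$ ls -l $ORACLE_HOME/dbs/lk*
[oracle@DBA-RHEL4-10G ~]$ cp $ORACLE_HOME/dbs/lk<sid> /tmp/
[oracle@DBA-RHEL4-10G ~]$ rm $ORACLE_HOME/dbs/lk<sid>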

Now start the instance.

[oracle@DBA-RHEL4-11G dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Dec 6 21:32:46 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 587202560 bytes
Fixed Size 2022504 bytes
Variable Size 163578776 bytes
Database Buffers 415236096 bytes
Redo Buffers 6365184 bytes
Database mounted.
SQL> Alter database open;

Database altered.

SQL>

Saturday 24 November 2012

Instance-Crash Recovery

Crash recovery or instance recovery is the automatic recovery of the database by the Oracle server, without any intervention by the DBA. For example, if a power outage brings down your database instance, then when the power supply resumes, you only need to restart the database instance.
The Oracle server will use the information saved in the online redo log files to synchronize the datafiles. Instance recovery involves the following two key operations:

  • Rolling forward: During this operation, the Oracle server updates all datafiles with the information from the redo log files. The online redo log files are always written to before the data is recorded in the datafiles, so at the time of an instance failure the online log files are usually "ahead" of the datafiles.
  • Rolling back: During this operation, uncommitted changes that were applied to the datafiles during the roll-forward operation are rolled back. Oracle does this by using the undo tablespace contents to return uncommitted changes to their original state. At the end of the rollback stage, only data committed as of the time of instance failure is retained in the datafiles.
During the roll-forward process, the database server must apply all transactions between the last checkpoint and the end of the redo log. You use the parameter FAST_START_MTTR_TARGET to specify the number of seconds you want crash recovery to take, and Oracle will try to recover the instance as close as possible to that time. The maximum value of this parameter is 3600 seconds (one hour).
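
A minimal sketch of setting and checking the target (FAST_START_MTTR_TARGET is dynamic, so no restart is needed; the current estimate is visible in V$INSTANCE_RECOVERY):

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;

SQL> SELECT estimated_mttr FROM v$instance_recovery;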



Wednesday 21 November 2012

Memory Notification: Library Cache Object Loaded Into SGA

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 and later
Symptoms:

 The following messages are reported in alert.log after 10g Release 2 is installed.
  • Memory Notification: Library Cache Object loaded into SGA
  •   Heap size 2294K exceeds notification threshold (2048K)
Cause

These are warning messages that should not cause the program responsible for them to fail. They appear as a result of a new event-messaging mechanism and memory manager in 10g Release 2.

They mean that the process is spending a lot of time finding free memory extents during an allocation because memory may be heavily fragmented. Fragmentation in memory is impossible to eliminate completely; however, continued messages about large allocations in memory indicate there are tuning opportunities in the application.

The messages do not imply that an ORA-4031 is about to happen.

Solution

In 10g there is a new undocumented parameter that sets the KGL heap size warning threshold. This parameter was not present in 10gR1. Warnings are written if a heap size exceeds this threshold.

Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value must be set in bytes.

For example, to set the threshold to 8 MB (8192 * 1024 = 8388608 bytes) while using an spfile:

(logged in as "/ as sysdba")

SQL> alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile ; 
SQL> shutdown immediate 
SQL> startup 

NOTE: The default threshold in 10.2.0.1 is 2M. So these messages could show up frequently in some application environments.

In 10.2.0.2, the threshold was increased to 50MB after regression tests, so this should be a reasonable and recommended value.

Tnsping.exe issue - Entry Point Not Found


Below is the error message the dev team received when trying to connect to the database server through SQL Developer/Toad:


The procedure entry point snlinAddrLocalhost could not be located in the dynamic link library oranl10.dll

Possible reasons: Environment variables are not set properly or are pointing to some other Oracle home. TNS_ADMIN must be set if tnsping.exe is not able to load.

Solution: Set the correct environment variables as shown below:

SET PATH=D:\oracle\product\10.2.0\db_1\bin
SET ORACLE_BASE=D:\oracle\product\10.2.0
SET ORACLE_HOME=D:\oracle\product\10.2.0\db_1
SET ORACLE_SID=orcl
SET TNS_ADMIN=D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN

Restart the machine.
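
After the restart, you can verify name resolution from a command prompt (assuming a TNS alias named orcl exists in tnsnames.ora):

C:\> tnsping orcl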




THE SGA in ORACLE


A system global area (SGA) is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, then the data in the instance's SGA is shared among the users. Consequently, the SGA is sometimes called the shared global area.

An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.
The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance's SGA, and several processes write to the SGA during execution of Oracle.
The SGA contains the following data structures:
  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Data dictionary cache
  • Other miscellaneous information
Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access; this is called the fixed SGA. No user data is stored here. The SGA also includes information communicated between processes, such as locking information.
If the system uses shared server architecture, then the request and response queues and some contents of the PGA are in the SGA.

Dynamic SGA

With the dynamic SGA infrastructure, the size of the buffer cache, the shared pool, the large pool, and the process-private memory can be changed without shutting down the instance.
Dynamic SGA allows Oracle to set, at run time, limits on how much virtual memory Oracle uses for the SGA. Oracle can start instances underconfigured and allow the instance to use more memory by growing the SGA components, up to a maximum of SGA_MAX_SIZE. If SGA_MAX_SIZE specified in the initialization parameter file is less than the sum of all components specified or defaulted at initialization time, then the setting in the initialization parameter file is ignored.
For optimal performance in most systems, the entire SGA should fit in real memory. If it does not, and if virtual memory is used to store parts of it, then overall database system performance can decrease dramatically, because portions of the SGA are paged (written to and read from disk) by the operating system. The amount of memory dedicated to all shared areas in the SGA also has performance impact.

Parameter          Description
-----------------  ------------------------------------------------------------
DB_CACHE_SIZE      The size of the cache of standard blocks.
LOG_BUFFER         The number of bytes allocated for the redo log buffer.
SHARED_POOL_SIZE   The size in bytes of the area devoted to shared SQL and PL/SQL statements.
LARGE_POOL_SIZE    The size of the large pool; the default is 0.
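
These settings can be inspected with a quick query (a sketch):

     SELECT name, value
     FROM   v$parameter
     WHERE  name IN ('db_cache_size', 'log_buffer',
                     'shared_pool_size', 'large_pool_size');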

Database Buffer Cache

The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All user processes concurrently connected to the instance share access to the database buffer cache.
The database buffer cache and the shared SQL cache are logically segmented into multiple sets. This organization into multiple sets reduces contention on multiprocessor systems.

Organization of the Database Buffer Cache

The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed.
When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.
The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.


Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:
     SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
     FROM   v$sysstat P1, v$sysstat P2, v$sysstat P3
     WHERE  P1.name = 'db block gets'
     AND    P2.name = 'consistent gets'
     AND    P3.name = 'physical reads'

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers.
If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.
If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk.

The LRU Algorithm and Full Table Scans

When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.
You can control this default behavior of blocks involved in table scans on a table-by-table basis. To specify that blocks of the table are to be placed at the MRU end of the list during a full table scan, use the CACHE clause when creating or altering a table or cluster. You can specify this behavior for small lookup tables or large static historical tables to avoid I/O on subsequent accesses of the table.
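
For example, to place a small lookup table's blocks at the MRU end of the list during full table scans (the table name is hypothetical):

     ALTER TABLE lookup_codes CACHE;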

Size of the Database Buffer Cache

Oracle supports multiple block sizes in a database. The standard block size is the default block size, used for the SYSTEM tablespace. You specify the standard block size by setting the parameter DB_BLOCK_SIZE. Legitimate values are from 2K to 32K.
The sizes and numbers of non-standard block size buffers are specified by the following parameters:
  • DB_2K_CACHE_SIZE (used with tablespace block size of 2k)
  • DB_4K_CACHE_SIZE (used with tablespace block size of 4k)
  • DB_8K_CACHE_SIZE (used with tablespace block size of 8k)
  • DB_16K_CACHE_SIZE (used with tablespace block size of 16k)
  • DB_32K_CACHE_SIZE (used with tablespace block size of 32k)
Each parameter specifies the size of the cache for the corresponding block size.
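
A sketch of using a non-standard block size, assuming the standard block size is not already 16K (the cache for the non-standard size must exist before the tablespace is created; the file path is hypothetical):

     ALTER SYSTEM SET DB_16K_CACHE_SIZE = 32M;

     CREATE TABLESPACE ts_16k
     DATAFILE '/u01/oradata/ts_16k01.dbf' SIZE 100M
     BLOCKSIZE 16K;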

Multiple Buffer Pools

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks. Particular schema objects (tables, clusters, indexes, and partitions) can then be assigned to the appropriate buffer pool to control the way their data blocks age out of the cache.
  • The KEEP buffer pool retains the schema object's data blocks in memory.
  • The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed.
  • The DEFAULT buffer pool contains data blocks from schema objects that are not assigned to any buffer pool, as well as schema objects that are explicitly assigned to the DEFAULT pool.
The initialization parameters that configure the KEEP and RECYCLE buffer pools are DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE.
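
A sketch of sizing the KEEP pool and assigning a table to it (the table name is hypothetical):

     ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;

     ALTER TABLE hr.lookup_codes STORAGE (BUFFER_POOL KEEP);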

Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
Redo entries are copied by Oracle server processes from the user's memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The background process LGWR writes the redo log buffer to the active online redo log file (or group of files) on disk.
The contents of this buffer are flushed:
  • Every three seconds
  • Whenever someone commits a transaction
  • When it gets one-third full or contains 1 MB of cached redo log data
  • When LGWR is asked to switch logs
The initialization parameter LOG_BUFFER determines the size (in bytes) of the redo log buffer. In general, larger values reduce log file I/O, particularly if transactions are long or numerous. The default setting is either 512 KB or 128 KB times the setting of the CPU_COUNT parameter, whichever is greater.
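
One way to see whether the log buffer is adequately sized is to check how often sessions had to wait for space in it (a sketch; a steadily growing value suggests a larger LOG_BUFFER may help):

     SELECT name, value
     FROM   v$sysstat
     WHERE  name = 'redo buffer allocation retries';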
Automatic SGA Memory Management:
10g: SGA_TARGET and PGA_AGGREGATE_TARGET
11g: MEMORY_TARGET or (SGA_TARGET and PGA_AGGREGATE_TARGET)
OLTP systems:
Heavy OLTP: larger SGA, as OLTP transactions need more shared memory
Heavy batch: smaller SGA
Note: To use automatic SGA memory management, the parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. If statistics collection is not enabled, the database will not have the historical information needed to make the necessary sizing decisions.
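
A sketch of enabling automatic SGA memory management on 10g (sizes are illustrative; SGA_TARGET cannot exceed SGA_MAX_SIZE):

     ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL;

     ALTER SYSTEM SET SGA_TARGET = 1G;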


Tuesday 20 November 2012

Types of tables in Oracle


Heap organized tables:

These are normal, standard database tables. Data is managed in a heap-like fashion. As data is added, the first free space found in the segment that can fit the data will be used. As data is removed from the table, it allows space to become available for reuse by subsequent INSERTs and UPDATEs. This is the origin of the name “heap” as it refers to this type of table. A heap is a bunch of space, and it is used in a somewhat random fashion.

Index organized tables:

These tables are stored in an index structure. This imposes physical order on the rows themselves. Whereas in a heap the data is stuffed wherever it might fit, in index-organized tables (IOTs) the data is stored in sorted order, according to the primary key.
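
A minimal IOT sketch (names hypothetical):

CREATE TABLE iot_demo
( id  NUMBER PRIMARY KEY,
  val VARCHAR2(30)
)
ORGANIZATION INDEX;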

Index clustered tables:

Clusters are groups of one or more tables, physically stored on the same database blocks, with all rows that share a common cluster key value being stored physically near each other. Two goals are achieved in this structure. First, many tables may be stored physically joined together. Normally, you would expect data from only one table to be found on a database block, but with clustered tables, data from many tables may be stored on the same block. Second, all data that contains the same cluster key value, such as DEPTNO = 10, will be physically stored together. The data is clustered around the cluster key value. A cluster key is built using a B*Tree index.

Hash clustered tables:

These tables are similar to index clustered tables, but instead of using a B*Tree index to locate the data by cluster key, the hash cluster hashes the key to the cluster to arrive at the database block the data should be on. In a hash cluster, the data is the index (metaphorically speaking). These tables are appropriate for data that is read frequently via an equality comparison on the key.

Sorted hash clustered tables:

This table type is new in Oracle 10g and combines some aspects of a hash-clustered table with those of an IOT. The concept is as follows: you have some key value that rows will be hashed by (say, CUSTOMER_ID), and then a series of records related to that key that arrive in sorted order (timestamp-based records) and are processed in that sorted order. For example, a customer places orders in your order entry system, and these orders are retrieved and processed in a first in, first out (FIFO) manner. In such a system, a sorted hash cluster may be the right data structure for you.

Nested tables: These are part of the object-relational extensions to Oracle. They are simply system-generated and maintained child tables in a parent/child relationship. They work much the same way as EMP and DEPT in the SCOTT schema, with the EMP table playing the role of the nested table: EMP is considered a child of the DEPT table, since the EMP table has a foreign key, DEPTNO, that points to DEPT. The main difference is that they are not stand-alone tables like EMP.

Temporary tables: These tables store scratch data for the life of a transaction or the life of a session. These tables allocate temporary extents, as needed, from the current user’s temporary tablespace. Each session will see only the extents that session allocates; it will never see any of the data created in any other session.
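
A sketch of a global temporary table whose rows last only until the end of the transaction (the name is hypothetical):

CREATE GLOBAL TEMPORARY TABLE gtt_demo
( id NUMBER )
ON COMMIT DELETE ROWS;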

Object tables: These tables are created based on an object type. They have special attributes not associated with non-object tables, such as a system-generated REF (object identifier) for each row. Object tables are really special cases of heap, index organized, and temporary tables, and they may include nested tables as part of their structure as well.

External tables: The data in these tables is not stored in the database itself; rather, it resides outside of the database in ordinary operating system files. External tables in Oracle9i and above give you the ability to query a file residing outside the database as if it were a normal table inside the database. They are most useful as a means of getting data into the database (they are a very powerful data-loading tool). Furthermore, in Oracle 10g, which introduces an external table unload capability, they provide an easy way to move data between Oracle databases without using database links.
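
A minimal external table sketch (it assumes a directory object, e.g. created with CREATE DIRECTORY data_dir AS '/u01/data', and a comma-separated file demo.csv inside it; all names are hypothetical):

CREATE TABLE ext_demo
( id   NUMBER,
  name VARCHAR2(30)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('demo.csv')
);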

To find all system privileges under a role in Oracle 10g

Select privilege from DBA_SYS_PRIVS where grantee = 'RESOURCE' order by privilege;

PRIVILEGE
------------------------
CREATE CLUSTER
CREATE INDEXTYPE
CREATE OPERATOR
CREATE PROCEDURE
CREATE SEQUENCE
CREATE TABLE
CREATE TRIGGER
CREATE TYPE

Thursday 15 November 2012

ORA-19721 by Transportable Tablespace Import (TTS)

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2

On 10.1 and above: when attempting to transfer a datafile as a Transportable Tablespace (TTS), the following error occurs, even though the databases are at the same version:

ORA-19721: Cannot find datafile with absolute file number # in tablespace <tablespace-name>

The issue can be reproduced at will with the following steps:

1. Export the data file as a Transportable Tablespace, from a 10.1 (or above) database
2. Import the data file

Cause:

The file header for the data file has a format that is less than v10.0.

When the source tablespace was set to READ-ONLY in preparation for being transported, it is likely that the COMPATIBLE parameter was set to 9.2, at that time. As per unpublished Bug 2905128, this means that the file header is still version 9.2, even though the database is version 10.1 or above.

So, because the target database has COMPATIBLE set to 10.2, the data file being transported has the wrong file header format, and the ORA-19721 error is reported.

Solution

1. Change the tablespace in the SOURCE database back into Read/Write mode:

connect / as sysdba
ALTER TABLESPACE <tablespace-name> READ WRITE;



2. If the SOURCE database still has COMPATIBLE set to 9.2, change it to 10.2 and bounce the database:

connect / as sysdba
ALTER SYSTEM SET COMPATIBLE = '10.2.0' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;


3. Change the tablespace to Read Only mode, again:

ALTER TABLESPACE <tablespace-name> READ ONLY;

4. Re-run the Export of the data file using the Transportable Tablespace method.

5. Re-run the Import of the datafile using the Transportable Tablespace method.
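
A sketch of the TTS export and import with Data Pump (directory object, dump file name, tablespace name, and datafile path are all hypothetical):

$ expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users_ts

$ impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/u01/oradata/users_ts01.dbf'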

libXp.so.6 missing when installing Oracle on Ubuntu/Red Hat Linux

Problem Description:
Upon starting the installer, I receive the following error:
Error while loading shared libraries: libXp.so.6: cannot open shared object file: No such file or directory

This occurs on newer Red Hat based distributions, where libXp.so.6 has become deprecated. In order to install and operate Oracle, you will need to obtain this library.

In newer distributions, you should be able to obtain this library from the following RPM files:

libXp-1.0.0*.i386 (32-bit)
libXp-1.0.0*.x86_64 (64-bit)

You can also use the yum package tool, if available, to install the libXp.so.6 library:

For Redhat 32 bit:
sudo yum install libXp.so.6

For Redhat 64-bit:
sudo yum install libXp.x86_64

For Ubuntu, try installing the libxp packages:

apt-get install libxp6 libxp-dev

On Red Hat based systems, the equivalent is:

yum install libXp libXp-devel

Friday 26 October 2012

Why can't you use RMAN to back up Online Redo Log Files

The Oracle database uses three types of "live" files during its operation: datafiles, control files, and online redo log files. Of these, RMAN backs up only the datafiles and the control files. You can't use RMAN to back up the online redo log files.
If you are operating in NOARCHIVELOG mode, you won't need the online redo log files, since the database files are always consistent when you back up the database using the only permitted mode of backing up in NOARCHIVELOG mode: closed whole backups.
You won't need online redo log backups if you are operating in ARCHIVELOG mode either, since RMAN continually backs up all your archived redo logs.
However, you must make sure you always multiplex the online redo log, so that you won't lose all the members of a group; a sketch follows.
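
For example, adding a second member to an existing log group (the path and group number are hypothetical):

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/redo01b.log' TO GROUP 1;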

Thursday 25 October 2012

Flash Recovery Area

What is the Flash Recovery Area?

The FRA is a special area on disk that the database uses as a backup location. By default, RMAN creates backups of all types – regular backup sets, image copies, and archived logs – in that area. Since RMAN knows about the existence of this area, it automatically deletes unneeded backups, based on redundancy and retention periods, to make room for new backups.
In addition to backups, the flash recovery area can also store online redo log files, archived redo log files, and control files. Again, these are optional; you can always define the location of those files to be anywhere, not necessarily inside the flash recovery area. Since the flash recovery area is generally used for backup files, you should consider creating it on disks different from your main database disks. Doing so helps protect you from losing both your main database and your backup files from a single failure. You can further take advantage of this probability by putting one member of the redo log group or one control file on the flash recovery area. This reduces the possibility of all members of a redo log group or all control files getting damaged at the same time.
How do you create the FRA?
Things to consider before creating the FRA:
  • FRA size: depends on the size of backups, archived logs, and so on.
  • FRA location: as a best practice, avoid putting the FRA and the database files on the same mount point or disk group (if on ASM).
Having the answers to these questions in mind, you can then use the following process to create the flash recovery area:

Disable the parameters log_archive_dest and log_archive_duplex_dest, if they are set in the database. You can do that by issuing the following commands:

alter system set log_archive_duplex_dest = '';
alter system set log_archive_dest = '';

Log on as a user with the SYSDBA privilege and issue the following commands to size and create the flash recovery area:

alter system set db_recovery_file_dest_size = 4G;

alter system set db_recovery_file_dest = '/home/oracle/flasharea';

The sequence of these commands is important; you have to issue them in that order, not the reverse. However, do replace the size and path name with the values you have chosen for your system. That's it; the flash recovery area is ready for operation.
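
You can verify the settings afterwards with a quick check:

SQL> show parameter db_recovery_file_dest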
What if you don't want archived redo logs in the FRA?

Remember, you can always define a different location for archived redo logs. If you want them outside the flash recovery area, you can't just erase the values of the parameters log_archive_dest and log_archive_duplex_dest, as suggested in the earlier solution:

alter system set log_archive_duplex_dest = '';
alter system set log_archive_dest = '';
To place your log files somewhere other than the flash recovery area, use a different parameter to specify the archived redo log location: log_archive_dest_1 instead of log_archive_dest. Suppose log_archive_dest used to be /dbarch. You can use log_archive_dest_1 to specify that same location for archived redo logs. First, check the value of the parameter log_archive_dest:
SQL> show parameter log_archive_dest

NAME                 TYPE        VALUE
-------------------- ----------- --------
log_archive_dest     string      /dbarch
The current setting of the archived redo log destination is /dbarch. Next, set the log_archive_dest_1 parameter to that location:
SQL> alter system set log_archive_dest_1 = 'location=/dbarch';
Now set log_archive_dest to NULL:
SQL> alter system set log_archive_dest = '';
If you have set the two parameters (log_archive_dest and log_archive_duplex_dest) in the initialization parameter file, you should edit the file to remove these two parameters completely.
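
If you are using an spfile rather than a pfile, the equivalent is to reset the parameters (a sketch; on 10g the SID clause is required):

SQL> alter system reset log_archive_dest scope=spfile sid='*';
SQL> alter system reset log_archive_duplex_dest scope=spfile sid='*';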