
Saturday 24 November 2012

Instance (Crash) Recovery

Crash recovery, or instance recovery, is the automatic recovery of the database by the Oracle server, without any intervention by the DBA. For example, if a power outage brings down your database instance, then when the power supply resumes, you only need to restart the database instance.
The Oracle server will use the information saved in the online redo log files to synchronize the datafiles. Instance recovery involves the following two key operations:

  • Rolling forward: During this operation, the Oracle server updates all datafiles with the information from the redo log files. The online redo log files are always written to before the data is recorded in the datafiles, so an instance failure usually leaves the online redo log files "ahead" of the datafiles.
  • Rolling back: During this operation, uncommitted changes that were added to the datafiles during the roll-forward operation are rolled back. Oracle does this by using the undo tablespace contents to return uncommitted changes to their original states. At the end of the rollback stage, only data that was committed at the time of instance failure is retained in the datafiles.
During the roll-forward process, the database server must apply all transactions between the last checkpoint and the end of the redo log. You use the parameter FAST_START_MTTR_TARGET to specify the number of seconds you want crash recovery to take. Oracle will try to recover the instance as close as possible to the time you specify for the FAST_START_MTTR_TARGET parameter. The maximum value of this parameter is 3600 seconds (one hour).
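For example, to ask Oracle to aim for roughly one minute of crash recovery (the 60-second figure below is only an illustrative value), the parameter can be changed dynamically:

SQL> ALTER SYSTEM SET fast_start_mttr_target=60 SCOPE=BOTH;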



Wednesday 21 November 2012

Memory Notification: Library Cache Object Loaded Into SGA

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 and later
Symptoms:

 The following messages are reported in alert.log after 10g Release 2 is installed.
  • Memory Notification: Library Cache Object loaded into SGA
  •   Heap size 2294K exceeds notification threshold (2048K)
Cause

These are warning messages that should not cause the program responsible for these errors to fail. They appear as a result of the new event messaging mechanism and memory manager in 10g Release 2.

The meaning is that the process is spending a lot of time finding free memory extents during an allocation, because the memory may be heavily fragmented. Fragmentation in memory is impossible to eliminate completely; however, continued messages about large allocations in memory indicate there are tuning opportunities in the application.

The messages do not imply that an ORA-4031 is about to happen.

Solution

In 10g Release 2 there is a new undocumented parameter that sets the KGL heap size warning threshold; this parameter was not present in 10gR1. Warnings are written if a heap size exceeds this threshold.

Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value must be specified in bytes.

If you want to set this to 8 MB (8192 * 1024 = 8388608 bytes) and are using an spfile:

(logged in as "/ as sysdba")

SQL> alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile;
SQL> shutdown immediate 
SQL> startup 

NOTE: The default threshold in 10.2.0.1 is 2M. So these messages could show up frequently in some application environments.

In 10.2.0.2, the threshold was increased to 50MB after regression tests, so this should be a reasonable and recommended value.
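If you first want to check the current value of this hidden parameter, a commonly used query against the undocumented X$ views (run as SYSDBA) looks like the following; treat it as a sketch, since X$ structures can change between versions:

     SELECT a.ksppinm AS name, b.ksppstvl AS value
     FROM   x$ksppi a, x$ksppcv b
     WHERE  a.indx = b.indx
     AND    a.ksppinm = '_kgl_large_heap_warning_threshold';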

Tnsping.exe issue - Entry Point Not Found


Below is the error message the dev team received when trying to connect to the database server through SQL Developer/Toad.

The procedure entry point snlinAddrLocalhost could not be located in the dynamic link library oranl10.dll

Possible reasons: Environment variables are not set properly or are pointing to some other Oracle home. TNS_ADMIN must be set if tnsping.exe is not able to load.

Solution: Set the correct environment variables as shown below:

SET PATH=D:\oracle\product\10.2.0\db_1\bin;%PATH%
SET ORACLE_BASE=D:\oracle\product\10.2.0
SET ORACLE_HOME=D:\oracle\product\10.2.0\db_1
SET ORACLE_SID=orcl
SET TNS_ADMIN=D:\oracle\product\10.2.0\db_1\NETWORK\ADMIN

Restart the machine.
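After the restart, you can verify that tnsping now loads and resolves the service (the service name orcl below simply matches the ORACLE_SID used above):

C:\> tnsping orcl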




The SGA in Oracle


A system global area (SGA) is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, then the data in the instance's SGA is shared among the users. Consequently, the SGA is sometimes called the shared global area.

An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.
The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance's SGA, and several processes write to the SGA during execution of Oracle.
The SGA contains the following data structures:
  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Data dictionary cache
  • Other miscellaneous information
Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access; this is called the fixed SGA. No user data is stored here. The SGA also includes information communicated between processes, such as locking information.
If the system uses shared server architecture, then the request and response queues and some contents of the PGA are in the SGA.

Dynamic SGA

With the dynamic SGA infrastructure, the size of the buffer cache, the shared pool, the large pool, and the process-private memory can be changed without shutting down the instance.
Dynamic SGA allows Oracle to set, at run time, limits on how much virtual memory Oracle uses for the SGA. Oracle can start instances underconfigured and allow the instance to use more memory by growing the SGA components, up to a maximum of SGA_MAX_SIZE. If SGA_MAX_SIZE specified in the initialization parameter file is less than the sum of all components specified or defaulted at initialization time, then the setting in the initialization parameter file is ignored.
For optimal performance in most systems, the entire SGA should fit in real memory. If it does not, and if virtual memory is used to store parts of it, then overall database system performance can decrease dramatically, because portions of the SGA are paged (written to and read from disk) by the operating system. The amount of memory dedicated to all shared areas in the SGA also has performance impact.

Parameter           Description
------------------  -----------------------------------------------------------
DB_CACHE_SIZE       The size of the cache of standard blocks.
LOG_BUFFER          The number of bytes allocated for the redo log buffer.
SHARED_POOL_SIZE    The size in bytes of the area devoted to shared SQL and
                    PL/SQL statements.
LARGE_POOL_SIZE     The size of the large pool; the default is 0.
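With manual SGA management, several of these components can be resized while the instance is running; a minimal sketch (the sizes below are example values only):

SQL> ALTER SYSTEM SET db_cache_size=512M;
SQL> ALTER SYSTEM SET shared_pool_size=256M;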

Database Buffer Cache

The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All user processes concurrently connected to the instance share access to the database buffer cache.
The database buffer cache and the shared SQL cache are logically segmented into multiple sets. This organization into multiple sets reduces contention on multiprocessor systems.

Organization of the Database Buffer Cache

The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed.
When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.
The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.


Checking The Cache Hit Ratio

Oracle maintains statistics of buffer cache hits and misses. The following query will show you the overall buffer cache hit ratio for the entire instance since it was started:
     SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
            AS cache_hit_ratio
     FROM   v$sysstat P1, v$sysstat P2, v$sysstat P3
     WHERE  P1.name = 'db block gets'
     AND    P2.name = 'consistent gets'
     AND    P3.name = 'physical reads';

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers.
If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.
If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk.

The LRU Algorithm and Full Table Scans

When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.
You can control this default behavior of blocks involved in table scans on a table-by-table basis. To specify that blocks of the table are to be placed at the MRU end of the list during a full table scan, use the CACHE clause when creating or altering a table or cluster. You can specify this behavior for small lookup tables or large static historical tables to avoid I/O on subsequent accesses of the table.
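As a sketch, the CACHE attribute can be set when the table is created or afterwards with ALTER TABLE; NOCACHE reverts to the default LRU-end behavior (the table name lookup_codes is hypothetical):

SQL> CREATE TABLE lookup_codes (code NUMBER, descr VARCHAR2(30)) CACHE;
SQL> ALTER TABLE lookup_codes CACHE;
SQL> ALTER TABLE lookup_codes NOCACHE;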

Size of the Database Buffer Cache

Oracle supports multiple block sizes in a database. The standard block size is the default block size, the one used for the SYSTEM tablespace. You specify the standard block size by setting the initialization parameter DB_BLOCK_SIZE. Legitimate values are from 2K to 32K.
The sizes and numbers of non-standard block size buffers are specified by the following parameters:
  • DB_2K_CACHE_SIZE (used with a tablespace block size of 2K)
  • DB_4K_CACHE_SIZE (used with a tablespace block size of 4K)
  • DB_8K_CACHE_SIZE (used with a tablespace block size of 8K)
  • DB_16K_CACHE_SIZE (used with a tablespace block size of 16K)
  • DB_32K_CACHE_SIZE (used with a tablespace block size of 32K)
Each parameter specifies the size of the cache for the corresponding block size.
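For example, to create a tablespace with a non-standard 16K block size, the matching cache must be configured first; a minimal sketch (the tablespace name, file path, and sizes are hypothetical):

SQL> ALTER SYSTEM SET db_16k_cache_size=64M;
SQL> CREATE TABLESPACE hist_data
     DATAFILE '/u01/oradata/orcl/hist_data01.dbf' SIZE 100M
     BLOCKSIZE 16K;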

Multiple Buffer Pools

You can configure the database buffer cache with separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks. Particular schema objects (tables, clusters, indexes, and partitions) can then be assigned to the appropriate buffer pool to control the way their data blocks age out of the cache.
  • The KEEP buffer pool retains the schema object's data blocks in memory.
  • The RECYCLE buffer pool eliminates data blocks from memory as soon as they are no longer needed.
  • The DEFAULT buffer pool contains data blocks from schema objects that are not assigned to any buffer pool, as well as schema objects that are explicitly assigned to the DEFAULT pool.
The initialization parameters that configure the KEEP and RECYCLE buffer pools are DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE.
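A minimal sketch of assigning an object to a pool (the table name is hypothetical, and the KEEP pool must already have been sized with DB_KEEP_CACHE_SIZE):

SQL> ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);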

Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
Redo entries are copied by Oracle server processes from the user's memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The background process LGWR writes the redo log buffer to the active online redo log file (or group of files) on disk.
The contents of this buffer are flushed:
  • Every three seconds
  • Whenever a transaction commits
  • When it gets one-third full or contains 1 MB of cached redo log data
  • When LGWR is asked to switch logs
The initialization parameter LOG_BUFFER determines the size (in bytes) of the redo log buffer. In general, larger values reduce log file I/O, particularly if transactions are long or numerous. The default setting is either 512 kilobytes (KB) or 128 KB times the setting of the CPU_COUNT parameter, whichever is greater.
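You can check the size actually in use from SQL*Plus:

SQL> SHOW PARAMETER log_buffer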

Automatic SGA Memory Management:
10g: SGA_TARGET and PGA_AGGREGATE_TARGET
11g: MEMORY_TARGET, or (SGA_TARGET and PGA_AGGREGATE_TARGET)
OLTP systems:
Heavy OLTP: larger SGA, as more memory is needed for OLTP transactions
Heavy batch: smaller SGA
Note: To use automatic SGA memory management, the parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. If statistics collection is not enabled, the database will not have the historical information needed to make the necessary sizing decisions.
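A minimal sketch of enabling automatic SGA memory management on 10g (the sizes are example values only; SGA_MAX_SIZE takes effect only after a restart):

SQL> ALTER SYSTEM SET sga_max_size=2G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sga_target=1G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP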


Tuesday 20 November 2012

Types of tables in Oracle


Heap organized tables:

These are normal, standard database tables. Data is managed in a heap-like fashion. As data is added, the first free space found in the segment that can fit the data will be used. As data is removed from the table, it allows space to become available for reuse by subsequent INSERTs and UPDATEs. This is the origin of the name “heap” as it refers to this type of table. A heap is a bunch of space, and it is used in a somewhat random fashion.

Index organized tables:

These tables are stored in an index structure. This imposes physical order on the rows themselves. Whereas in a heap the data is stuffed wherever it might fit, in index-organized tables (IOTs) the data is stored in sorted order, according to the primary key.
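A minimal example of creating an IOT (the table and column names are hypothetical); the PRIMARY KEY is mandatory, since it defines the storage order:

SQL> CREATE TABLE iot_example (
       id    NUMBER PRIMARY KEY,
       descr VARCHAR2(30)
     ) ORGANIZATION INDEX;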

Index clustered tables:

Clusters are groups of one or more tables, physically stored on the same database blocks, with all rows that share a common cluster key value being stored physically near each other. Two goals are achieved in this structure. First, many tables may be stored physically joined together. Normally, you would expect data from only one table to be found on a database block, but with clustered tables, data from many tables may be stored on the same block. Second, all data that contains the same cluster key value, such as DEPTNO = 10, will be physically stored together. The data is clustered around the cluster key value. A cluster key is built using a B*Tree index.
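As a brief sketch (all names hypothetical), the cluster is created first, then its B*Tree cluster index, and then the tables that share it:

SQL> CREATE CLUSTER emp_dept_cluster (deptno NUMBER(2)) SIZE 1024;
SQL> CREATE INDEX emp_dept_idx ON CLUSTER emp_dept_cluster;
SQL> CREATE TABLE dept_c (deptno NUMBER(2), dname VARCHAR2(14))
     CLUSTER emp_dept_cluster (deptno);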

Hash clustered tables:

These tables are similar to index clustered tables, but instead of using a B*Tree index to locate the data by cluster key, the hash cluster hashes the key to the cluster to arrive at the database block the data should be on. In a hash cluster, the data is the index (metaphorically speaking). These tables are appropriate for data that is read frequently via an equality comparison on the key.
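A hash cluster version of the same idea replaces the cluster index with a HASHKEYS specification (all names and values below are illustrative):

SQL> CREATE CLUSTER cust_hash_cluster (cust_id NUMBER)
     HASHKEYS 1000 SIZE 8192;
SQL> CREATE TABLE customers_c (cust_id NUMBER, name VARCHAR2(30))
     CLUSTER cust_hash_cluster (cust_id);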

Sorted hash clustered tables:

This table type is new in Oracle 10g and combines some aspects of a hash-clustered table with those of an IOT. The concept is as follows: you have some key value that rows will be hashed by (say, CUSTOMER_ID), and then a series of records related to that key that arrive in sorted order (timestamp-based records) and are processed in that sorted order. For example, a customer places orders in your order entry system, and these orders are retrieved and processed in a first in, first out (FIFO) manner. In such a system, a sorted hash cluster may be the right data structure for you.

Nested tables: These are part of the object-relational extensions to Oracle. They are simply system-generated and maintained child tables in a parent/child relationship. They work much in the same way as EMP and DEPT in the SCOTT schema with the EMP table being the nested table. EMP is considered to be a child of the DEPT table, since the EMP table has a foreign key, DEPTNO, that points to DEPT. The main difference is that they are not stand-alone tables like EMP.

Temporary tables: These tables store scratch data for the life of a transaction or the life of a session. These tables allocate temporary extents, as needed, from the current user’s temporary tablespace. Each session will see only the extents that session allocates; it will never see any of the data created in any other session.
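A minimal sketch (names hypothetical); ON COMMIT DELETE ROWS makes the data transaction-scoped, while ON COMMIT PRESERVE ROWS would make it session-scoped:

SQL> CREATE GLOBAL TEMPORARY TABLE gtt_scratch (
       id      NUMBER,
       payload VARCHAR2(100)
     ) ON COMMIT DELETE ROWS;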

Object tables: These tables are created based on an object type. They have special attributes not associated with non-object tables, such as a system-generated REF (object identifier) for each row. Object tables are really special cases of heap, index organized, and temporary tables, and they may include nested tables as part of their structure as well.

External tables: The data in these tables is not stored in the database itself; rather, it resides outside of the database in ordinary operating system files. External tables in Oracle9i and above give you the ability to query a file residing outside the database as if it were a normal table inside the database. They are most useful as a means of getting data into the database (they are a very powerful data-loading tool). Furthermore, in Oracle 10g, which introduces an external table unload capability, they provide an easy way to move data between Oracle databases without using database links.
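A compact sketch of an external table definition (the directory path, file name, and columns are all hypothetical):

SQL> CREATE DIRECTORY ext_dir AS '/u01/app/ext_data';
SQL> CREATE TABLE emp_ext (empno NUMBER, ename VARCHAR2(10))
     ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY ext_dir
       ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                          FIELDS TERMINATED BY ',')
       LOCATION ('emp.csv')
     );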

To find all Sys Privileges under a role in Oracle 10g

Select privilege from DBA_SYS_PRIVS where grantee = 'RESOURCE' order by privilege;

PRIVILEGE
------------------------
CREATE CLUSTER
CREATE INDEXTYPE
CREATE OPERATOR
CREATE PROCEDURE
CREATE SEQUENCE
CREATE TABLE
CREATE TRIGGER
CREATE TYPE

Thursday 15 November 2012

ORA-19721 by Transportable Tablespace Import (TTS)

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.4 - Release: 10.1 to 10.2

On 10.1 and above: when attempting to transfer a datafile as a Transportable Tablespace (TTS), the following error occurs, even though the databases are at the same version:

ORA-19721: Cannot find datafile with absolute file number # in tablespace <tablespace-name>

The issue can be reproduced at will with the following steps:

1. Export the data file as a Transportable Tablespace, from a 10.1 (or above) database
2. Import the data file

Cause:

The file header for the data file has a format that is less than v10.0.

When the source tablespace was set to READ-ONLY in preparation for being transported, it is likely that the COMPATIBLE parameter was set to 9.2, at that time. As per unpublished Bug 2905128, this means that the file header is still version 9.2, even though the database is version 10.1 or above.

So, because the target database has COMPATIBLE set to 10.2, the data file being transported has the wrong file header format, and the ORA-19721 error is reported.

Solution

1. Change the tablespace in the SOURCE database back into Read/Write mode:

connect / as sysdba
ALTER TABLESPACE <tablespace-name> READ WRITE;



2. If the SOURCE database still has COMPATIBLE set to 9.2, change it to 10.2 and bounce the database:

connect / as sysdba
ALTER SYSTEM SET COMPATIBLE='10.2.0' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;


3. Change the tablespace to Read Only mode, again:

ALTER TABLESPACE <tablespace-name> READ ONLY;

4. Re-run the Export of the data file using the Transportable Tablespace method.

5. Re-run the Import of the datafile using the Transportable Tablespace method.
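Before re-running the export, it can also be worth confirming that the tablespace set is self-contained, using the documented DBMS_TTS package (the tablespace name USERS below is only an example):

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
SQL> SELECT * FROM transport_set_violations;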

libXp.so.6 missing when installing Oracle on Ubuntu/Red Hat Linux

Problem Description:
Upon starting the installer, I receive the following error:
Error while loading shared libraries: libXp.so.6: cannot open shared object file: No such file or directory

This occurs on newer Red Hat based distributions, where libXp.so.6 has been deprecated. In order to install and operate Oracle, you will need to obtain this library.

In newer distributions, you should be able to obtain this library from the following RPM files:

libXp-1.0.0*.i386 (32-bit)
libXp-1.0.0*.x86_64 (64-bit)

You can also use the yum package tool if available to install the libXp.so.6 library. If you have yum installed, run the following command to obtain the library:

For Redhat 32 bit:
sudo yum install libXp.so.6

For Redhat 64-bit:
sudo yum install libXp.x86_64

For Ubuntu:
try installing the libxp6 and libxp-dev packages:

apt-get install libxp6 libxp-dev
-or-
yum install libXp libXp-devel