Friday 24 August 2012

RMAN: ksfqxcre: failure to allocate shared memory

If You Fail to Allocate Shared Memory, Set LARGE_POOL_SIZE

Set this initialization parameter if the database reports an error in the alert.log
stating that it does not have enough memory and that it will not start I/O slaves. The
message should resemble the following:
ksfqxcre: failure to allocate shared memory means sync I/O will be used whenever async I/O to file not supported natively.
When attempting to get shared buffers for I/O slaves, the database does the following:
  • If LARGE_POOL_SIZE is set, then the database attempts to get memory from the large pool. If this value is not large enough, then an error is recorded in the alert log, the database does not try to get buffers from the shared pool, and asynchronous I/O is not used.
  • If LARGE_POOL_SIZE is not set, then the database attempts to get memory from the shared pool.
  • If the database cannot get enough memory, then it obtains I/O buffer memory from the PGA and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.
The memory from the large pool is used for many features, including the shared server (formerly called multi-threaded server), parallel query, and RMAN I/O slave buffers.
Configuring the large pool prevents RMAN from competing with other subsystems for
the same memory.
Requests for contiguous memory allocations from the shared pool are usually small (under 5 KB) in size. However, it is possible that a request for a large contiguous memory allocation can either fail or require significant memory housekeeping to release the required amount of contiguous memory. Although the shared pool may be unable to satisfy this memory request, the large pool is able to do so. The large pool
does not have a least recently used (LRU) list; the database does not attempt to age memory out of the large pool.
Use the LARGE_POOL_SIZE initialization parameter to configure the large pool. To see in which pool (shared pool or large pool) the memory for an object resides, query
V$SGASTAT.POOL.
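
For example, to see what is currently allocated in the large pool (a minimal sketch; the component names in the NAME column vary by version):

SELECT pool, name, bytes
  FROM V$SGASTAT
 WHERE pool = 'large pool'
 ORDER BY bytes DESC;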

The formula for setting LARGE_POOL_SIZE is as follows:

LARGE_POOL_SIZE = number_of_allocated_channels * (16 MB + ( 4 * size_of_tape_buffer ) )
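
As a worked example (the channel count and tape buffer size below are illustrative assumptions, not recommendations): with 4 allocated channels and a 256 KB tape buffer, the formula gives 4 * (16 MB + 4 * 256 KB) = 68 MB, so the large pool could be sized with a little headroom:

-- 4 channels * (16 MB + 4 * 256 KB) = 68 MB, rounded up
ALTER SYSTEM SET LARGE_POOL_SIZE = 80M SCOPE=BOTH;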

Thursday 23 August 2012

Direct and Asynchronous I/O


I/O operations in UNIX and Linux systems typically go through the file system cache. Although this doesn't represent a problem in itself, this extra processing does require resources. Bypassing the file system cache reduces CPU requirements, and frees up the file system cache for other non-database file operations. Operations against raw devices automatically bypass the file system cache.
When a synchronous I/O request is submitted to the operating system, the writing process blocks until the write is complete before continuing processing. With asynchronous I/O, processing continues while the I/O request is submitted and processed. This allows asynchronous I/O to bypass some of the performance bottlenecks associated with I/O operations.
Oracle can take advantage of direct I/O and asynchronous I/O on supported platforms using the FILESYSTEMIO_OPTIONS parameter, whose possible values are listed below.
  • ASYNCH - Enables asynchronous I/O where possible.
  • DIRECTIO - Enables direct I/O where possible.
  • SETALL - Enables both direct I/O and asynchronous I/O where possible.
  • NONE - Disables both direct I/O and asynchronous I/O.
The following example shows how the parameter is set.

SQL> SHOW PARAMETER FILESYSTEMIO_OPTIONS
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none
SQL> ALTER SYSTEM SET FILESYSTEMIO_OPTIONS=SETALL SCOPE=SPFILE;
System altered.
SQL> SHOW PARAMETER FILESYSTEMIO_OPTIONS
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      SETALL
SQL>

Direct path read wait

You see direct path read waits only when you are doing a parallel full scan. When Oracle performs a parallel full-table scan, the database blocks are read directly into the program global area (PGA), bypassing the buffer cache. If the I/O subsystem doesn't support asynchronous I/O, then each wait corresponds to a physical read request.
The direct path read wait occurs during Direct Path operations when the data is asynchronously read directly from the database files into the PGA, bypassing the SGA.
If the I/O subsystem supports asynchronous I/O, then the process overlaps read requests with processing the blocks already in the PGA.
When the process attempts to access a block in the PGA that has not yet been read from disk, it issues a wait call and updates the statistics for this event. So the number of waits is not always the same as the number of read requests.

Direct path write temp wait
The direct path write temp and direct path write wait events can occur when a process is writing buffers directly from the PGA. The process waits on this event for the write call to complete. Operations that can perform direct path writes include sorts that spill to disk, parallel DML operations, direct-path INSERTs, parallel CREATE TABLE AS SELECT, and some LOB operations. As with direct path reads, the number of waits is not the same as the number of write calls issued if the I/O subsystem supports asynchronous writes. The session waits if it has processed all the buffers in the PGA and cannot continue work until an I/O request completes.

Causes for the direct path read temp wait
  • High disk sorts - If the sorts are too large to fit in memory and get sent to disk, this wait can occur.
  • Parallel slaves - Parallel slaves may be used for scanning data, or parallel DML may be used to create and populate objects; these can lead to direct path read waits and direct path write waits, respectively.
  • Direct path loads - The direct path API is used to pass data to the load engine in the server and can cause the related direct path write wait.
  • Server process ahead of I/O - The server process is processing buffers faster than the I/O system can return them. This can indicate an overloaded I/O system.
  • Data Warehouse - Sorts in a data warehouse environment may always go to disk leading to high waits on direct path read temp and/or direct path write temp.
  • Hash area size - For query plans that call for a hash join, excessive I/O could result from having HASH_AREA_SIZE too small.
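
To get a quick picture of how much time the instance has spent on these events since startup, V$SYSTEM_EVENT can be queried, for example (a sketch; TIME_WAITED is in centiseconds):

SELECT event, total_waits, time_waited
  FROM V$SYSTEM_EVENT
 WHERE event IN ('direct path read', 'direct path read temp',
                 'direct path write', 'direct path write temp')
 ORDER BY time_waited DESC;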

PGA Memory Management
The Program Global Area (PGA) is a private memory region containing data and control information for a server process. Access to it is exclusive to that server process and is read and written only by the Oracle code acting on behalf of it. An example of such information is the runtime area of a cursor. Each time a cursor is executed, a new runtime area is created for that cursor in the PGA memory region of the server process executing that cursor.
For complex queries (for example, decision support queries), a big portion of the runtime area is dedicated to work areas allocated by memory intensive operators, such as the following:

  • Sort-based operators, such as ORDER BY, GROUP BY, ROLLUP, and window functions
  • Hash-join
  • Bitmap merge
  • Bitmap create
  • Write buffers used by bulk load operations

A sort operator uses a work area (the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.
The size of a work area can be controlled and tuned. Generally, bigger work areas can significantly improve the performance of a particular operator at the cost of higher memory consumption. Ideally, the size of a work area is big enough that it can accommodate the input data and auxiliary memory structures allocated by its associated SQL operator. This is known as the optimal size of a work area. When the size of the work area is smaller than optimal, the response time increases, because an extra pass is performed over part of the input data. This is known as the one-pass size of the work area. Under the one-pass threshold, when the size of a work area is far too small compared to the input data size, multiple passes over the input data are needed. This could dramatically increase the response time of the operator. This is known as the multi-pass size of the work area. For example, a serial sort operation that needs to sort 10GB of data needs a little more than 10GB to run optimally and at least 40MB to run in one pass. If this sort gets less than 40MB, it must perform several passes over the input data.
The goal is to have most work areas running with an optimal size (for example, more than 90% or even 100% for pure OLTP systems), while a smaller fraction of them are running with a one-pass size (for example, less than 10%). Multi-pass execution should be avoided. Even for DSS systems running large sorts and hash-joins, the memory requirement for the one-pass executions is relatively small. A system configured with a reasonable amount of PGA memory should not need to perform multiple passes over the input data.
Automatic PGA memory management simplifies and improves the way PGA memory is allocated. By default, PGA memory management is enabled. In this mode, Oracle dynamically adjusts the size of the portion of PGA memory dedicated to work areas, within the overall target set by the PGA_AGGREGATE_TARGET initialization parameter. The default value of this target is 20% of the SGA size or 10MB, whichever is greater.
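
To see how work areas are actually executing (optimal, one-pass, or multi-pass), V$SQL_WORKAREA_HISTOGRAM gives a breakdown by work area size, for example (a sketch):

SELECT low_optimal_size/1024 AS low_kb,
       (high_optimal_size+1)/1024 AS high_kb,
       optimal_executions, onepass_executions, multipasses_executions
  FROM V$SQL_WORKAREA_HISTOGRAM
 WHERE total_executions <> 0;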


10.3.2 buffer busy waits
This wait indicates that there are some buffers in the buffer cache that multiple processes are attempting to access concurrently. Query V$WAITSTAT for the wait statistics for each class of buffer. Common buffer classes that have buffer busy waits include data block, segment header, undo header, and undo block.
Check the following V$SESSION_WAIT parameter columns: 
  • P1 - File ID
  • P2 - Block ID
  • P3 - Class ID

10.3.2.1 Causes
To determine the possible causes, first query V$SESSION to identify the value of ROW_WAIT_OBJ# when the session waits for buffer busy waits. For example:
SELECT row_wait_obj#  FROM V$SESSION
 WHERE EVENT = 'buffer busy waits';
To identify the object and object type contended for, query DBA_OBJECTS using the value for ROW_WAIT_OBJ# that is returned from V$SESSION. For example:
SELECT owner, object_name, subobject_name, object_type  FROM DBA_OBJECTS
 WHERE data_object_id = &row_wait_obj;


10.3.2.2 Actions
The action required depends on the class of block contended for and the actual segment.
10.3.2.2.1 segment header
If the contention is on the segment header, then this is most likely free list contention.
Automatic segment-space management in locally managed tablespaces eliminates the need to specify the PCTUSED, FREELISTS, and FREELIST GROUPS parameters. If possible, switch from manual space management to automatic segment-space management (ASSM).
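
A minimal sketch of that switch (the tablespace, datafile path, table, and index names below are made up): ASSM is a tablespace attribute, so the segment has to be moved into an ASSM tablespace and its indexes rebuilt.

CREATE TABLESPACE assm_data
  DATAFILE '/u01/oradata/assm_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

ALTER TABLE scott.orders MOVE TABLESPACE assm_data;
ALTER INDEX scott.orders_pk REBUILD;  -- indexes are left UNUSABLE by the MOVE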

The following information is relevant if you are unable to use automatic segment-space management (for example, because the tablespace uses dictionary space management).
A free list is a list of free data blocks that usually includes blocks existing in a number of different extents within the segment. Free lists are composed of blocks in which free space has not yet reached PCTFREE or used space has shrunk below PCTUSED. Specify the number of process free lists with the FREELISTS parameter. The default value of FREELISTS is one. The maximum value depends on the data block size.
To find the current setting for free lists for that segment, run the following:

SELECT SEGMENT_NAME, FREELISTS
  FROM DBA_SEGMENTS
 WHERE SEGMENT_NAME = '<segment name>'
   AND SEGMENT_TYPE = '<segment type>';
Set free lists, or increase the number of free lists (as sketched below). If adding more free lists does not alleviate the problem, then use free list groups (even in a single instance this can make a difference). If you are using Oracle Real Application Clusters, then ensure that each instance has its own free list group(s).
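
For example (table and index names are illustrative, and this assumes the segments use manual segment space management), the free list count can be raised on an existing segment:

ALTER TABLE scott.orders STORAGE (FREELISTS 4);
ALTER INDEX scott.orders_pk STORAGE (FREELISTS 4);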
10.3.2.2.2 data block
If the contention is on tables or indexes (not the segment header):
Check for right-hand indexes. These are indexes into which many processes insert at the same point, for example, indexes whose key values come from a sequence number generator.
Consider using automatic segment-space management (ASSM), global hash partitioned indexes, or increasing free lists to avoid multiple processes attempting to insert into the same block.
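
As an illustration of the hash-partitioned index option (the object names are hypothetical), a sequence-populated key can be spread across several index segments so that concurrent inserts no longer pile into the same right-hand block:

CREATE INDEX scott.orders_id_ix ON scott.orders (order_id)
  GLOBAL PARTITION BY HASH (order_id) PARTITIONS 8;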
10.3.2.2.3 undo header
For contention on rollback segment header:
If you are not using automatic undo management, then add more rollback segments.
10.3.2.2.4 undo block
For contention on rollback segment block:
If you are not using automatic undo management, then consider making rollback segment sizes larger.
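
A quick check of which undo mode the instance is in before taking either action (a sketch):

SQL> SHOW PARAMETER undo_management
SQL> SHOW PARAMETER undo_tablespace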

10.3.3 db file scattered read

This event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return. A db file scattered read issues a scattered read to read the data into multiple discontinuous memory locations. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.
The db file scattered read wait event identifies that a full scan is occurring. When performing a full scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory. This is why the corresponding wait event is called 'db file scattered read'. Multiblock reads (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) due to full scans into the buffer cache show up as waits for 'db file scattered read'.

Check the following V$SESSION_WAIT parameter columns:

  • P1 - The absolute file number
  • P2 - The block being read
  • P3 - The number of blocks (should be greater than 1)
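
The file and block reported in P1 and P2 can be mapped back to the segment being scanned through DBA_EXTENTS, for example (a sketch; substitute the P1 and P2 values reported for the waiting session):

SELECT owner, segment_name, segment_type
  FROM DBA_EXTENTS
 WHERE file_id = &p1_file_id
   AND &p2_block_id BETWEEN block_id AND block_id + blocks - 1;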




Backup Sets and Image Copies:


The backup command lets you make two types of RMAN backups: backup sets and image
copies. By default, all RMAN backups are in the form of backup sets. Each backup set contains one or more backup pieces, which are files in an RMAN-specific format. Backup sets are the default backup type for both disk- and tape-based backups.

A backup set is a logical structure that consists of a minimum of one backup piece, which is a physical, RMAN-specific format file that actually contains the backed-up data. A backup set can contain data from one or more datafiles, archived redo log files, or control files.

By default, a backup set contains just one backup piece. However, you can limit the size of a backup piece by using the maxpiecesize parameter. If you do this and the backup set size is larger than the backup piece size specified by the maxpiecesize parameter, there’ll be multiple backup pieces within that backup set.

 RMAN determines the number of backup sets for a backup according to an internal algorithm. However, you can limit the size of a backup set by specifying the maxsetsize parameter. You can also indirectly control the number of backup sets made by RMAN for each backup by specifying the filesperset parameter, which limits the number of input files (datafiles, archived redo log files, and so on) that can be backed up into a single backup set.
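
For example (a sketch for a disk backup; the sizes and file count are arbitrary), the piece size is capped on the channel while the set size and input file count are capped on the backup command itself:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
RMAN> BACKUP DATABASE MAXSETSIZE 10G FILESPERSET 4;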

The key difference between an image copy and a backup set is that:
  • RMAN can write blocks from many files into the same backup set (known as multiplexing) but can’t do so in the case of an image copy—an image copy is identical, byte by byte, to the original datafile, control file, or archived redo log file.
  •  An RMAN image copy and a copy you make with an operating system copy command such as dd (which makes image copies) are identical.
  • Since RMAN image copies are identical to copies made with operating system copy commands, you may use user-made image copies for an RMAN restore and recovery operation after first making the copies “known” to RMAN by using the catalog command.
  • You can make image copies only on disk, not on a tape device (for example, backup as copy database;). Therefore, you can use the backup as copy option only for disk backups, and the backup as backupset option is the only option you have for making tape backups.
Once user-made copies are cataloged, there’s no difference between those image copies and the ones made by RMAN. During a restore operation, if you have both image copies and backup sets from the same time period, RMAN prefers to use an image copy over a backup set.
  • This is because there is more overhead involved in sorting through a backup set to get the files to restore.
  • In addition, image copies offer yet another benefit during a restore and recovery operation. If you need to restore a current datafile and happen to have an image copy of that datafile available, you can use the switch command to simply point the database to the replacement copy instead of restoring the original datafile. This eliminates the need to restore the datafile, thus speeding up recovery considerably (see the sketch after this list).
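
A hedged sketch of those two operations (the copy path and datafile number are made up, and the datafile has to be offline before the switch): a user-made copy is cataloged, and the datafile is then pointed at its image copy and recovered instead of being restored.

RMAN> CATALOG DATAFILECOPY '/backup/users01.dbf';
RMAN> SQL 'ALTER DATABASE DATAFILE 4 OFFLINE';
RMAN> SWITCH DATAFILE 4 TO COPY;
RMAN> RECOVER DATAFILE 4;
RMAN> SQL 'ALTER DATABASE DATAFILE 4 ONLINE';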

Deleting data doesn't reduce the size of the backup

You sometimes may have a fairly large database that is taking many hours to back up. You are doing full image backups. You or your users may be tempted to delete data to "reduce the size of the backup," to "reduce the time it takes to back up," or to "reduce the duration that datafiles are in backup mode".
Unfortunately, merely deleting rows (even 10% or 99%) in tables doesn't really reduce the size or duration of most image backups. The delete operation only adds to the total amount of undo and causes a very large increase in redo and archivelog generation, for which you may have to scramble for disk space.
An image backup (whether by "cp", "cpio", or "tar") of the datafiles will still see the datafiles as having the same size, because the delete doesn't shrink the datafiles.
An RMAN backup will still have to back up as many data blocks as before the delete. This is true even if you succeed in deleting rows in such a manner that some or many blocks are 100% empty. Once a data block has been formatted, it will always be backed up by RMAN, whether or not it still contains rows.



ORA-00054: resource busy and acquire with NOWAIT specified

How to check Blocking session /Locks -- oracle 10g

SELECT oracle_username || ' (' || s.osuser || ')' username,
       s.sid || ',' || s.serial# "SID,SERIAL#",
       owner || '.' || object_name OBJECT, object_type,
       DECODE(l.block, 0, 'Not Blocking', 1, 'Blocking', 2, 'Global') status,
       DECODE(v.locked_mode, 0, 'None', 1, 'Null', 2, 'Row-S (SS)', 3, 'Row-X (SX)',
              4, 'Share', 5, 'S/Row-X (SSX)', 6, 'Exclusive', TO_CHAR(v.locked_mode)) mode_held
  FROM v$locked_object v, dba_objects d, v$lock l, v$session s
 WHERE v.object_id = d.object_id
   AND v.object_id = l.id1
   AND v.session_id = s.sid
 ORDER BY oracle_username, session_id;

To identify the locked rows, use the below query:

select do.object_name, row_wait_obj#, row_wait_file#, row_wait_block#
, row_wait_row#, dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#,
ROW_WAIT_BLOCK#, ROW_WAIT_ROW#) ROW_ID
from v$session s, dba_objects do
where s.ROW_WAIT_OBJ# = do.OBJECT_ID
and do.object_name=upper('<object_name_from_above_query>');


select name, open_mode, log_mode, host_name, database_role,
       to_char(startup_time, 'dd/mm/yyyy hh24:mi:ss') startup_time
from v$database, v$instance;


SQL> -- Show all BLOCKERS and who they are blocking(BLOCKEES)...

set lines 110 wrap on echo off
column id1 noprint
column id2 noprint
col TYPE format a10
col blockedby format a10
col Usr format a20
column minheld format 99999.9 heading "Mins|Held"

select /*+ ORDERED */
       '* Blocker' TYPE, rpad(S.OSUSER,8)||'/'||rpad(S.USERNAME,8) Usr,
       l.sid, '         ' blockedby,
       s.serial#, l.type, l.lmode "LMode", l.ctime/60 minheld, l.id1, l.id2,
       substr(s.module,1,30) module
  from v$lock l, v$session s, v$parameter p
 where l.block > 0
   and l.sid = s.sid
   and p.name = 'parallel_server' and p.value = 'FALSE'
union
select /*+ ORDERED */
       '  Blockee' TYPE, rpad(S.OSUSER,8)||'/'||rpad(S.USERNAME,8) Usr,
       l.sid, substr(to_char(b.sid),1,9) blockedby,
       s.serial#, l.type, l.lmode "LMode", l.ctime/60 minheld, l.id1, l.id2,
       substr(s.module,1,30) module
  from v$lock l, v$session s, v$parameter p,
       (select sid, id1, id2 from v$lock where block > 0) b
 where l.block = 0
   and l.sid = s.sid
   and l.id1 = b.id1
   and l.id2 = b.id2
   and p.name = 'parallel_server' and p.value = 'FALSE'
union
select /*+ ORDERED */
       '* Blocker' TYPE, rpad(S.OSUSER,8)||'/'||rpad(S.USERNAME,8) Usr,
       l.sid, '         ' blockedby,
       s.serial#, l.type, l.lmode "LMode", l.ctime/60 minheld, l.id1, l.id2,
       substr(s.module,1,30) module
  from v$lock l, v$session s, v$parameter p
 where l.block > 0
   and l.sid = s.sid
   and l.lmode = 6
   and l.type = 'TX'
   and p.name = 'parallel_server' and p.value = 'TRUE'
union
select /*+ ORDERED */
       '  Blockee', rpad(S.OSUSER,8)||'/'||rpad(S.USERNAME,8) Usr,
       l.sid, substr(to_char(b.sid),1,9) blockedby,
       s.serial#, l.type, l.lmode "LMode", l.ctime/60 minheld, l.id1, l.id2,
       substr(s.module,1,30) module
  from v$lock l, v$session s, v$parameter p,
       (select bl.sid, bl.id1, bl.id2 from v$lock bl
         where bl.block > 0
           and bl.lmode = 6
           and bl.type = 'TX') b
 where l.block > 0
   and l.sid = s.sid
   and l.id1 = b.id1
   and l.id2 = b.id2
   and l.lmode = 0
   and l.type = 'TX'
   and p.name = 'parallel_server' and p.value = 'TRUE'
 order by 1 desc, 8, 9;
--
-- Lock Types:
--   TM(DML)
--   TX(transaction enqueue)
--   UL(user supplied)
-- Lock Modes:
--   0 None
--   1 Null(NULL)
--   2 Row-S(SS)
--   3 Row-X(SX)
--   4 Share(S)
--   5 S/Row-X(SSX)
--   6 Exclusive(X)



Once the blocking session has been identified, it can be killed with:

ALTER SYSTEM KILL SESSION '<sid>,<serial#>';

To simulate the occurrence of this error for testing/learning purposes, do as below in 2 sessions:

Session 1:
SQL> conn scott/tiger;

Connected.

SQL> create table lock_test (x number);

Table created.

SQL> insert into lock_test values(100);

1 row created.

SQL> commit;

Commit complete.

SQL> update lock_test set x=500;

1 row updated.

-- Do not commit.

Session 2:
SQL> conn scott/tiger;

Connected.

declare y number;

begin

select x into y from lock_test for update nowait;

end;

/
ERROR at line 1:

ORA-00054: resource busy and acquire with NOWAIT specified

ORA-06512: at line 4


Another query that lists lock holders and waiters, with the mode held and the mode requested:

column username format a15
column sid format 999990 heading SID
column type format a4
column lmode format 990 heading HELD
column request format 990 heading REQ
column id1 format 99999990
column id2 format 99999990
break on id1 skip 1 dup
select sn.username,
       sn.sid,
       sn.sql_id,
       m.type,
       decode(m.lmode, 0, 'None',
                       1, 'Null',
                       2, 'Row Share',
                       3, 'Row Excl',
                       4, 'Share',
                       5, 'S/Row Excl',
                       6, 'Exclusive',
                       lmode, ltrim(to_char(lmode, '990'))) lmode,
       decode(m.request, 0, 'None',
                         1, 'Null',
                         2, 'Row Share',
                         3, 'Row Excl',
                         4, 'Share',
                         5, 'S/Row Excl',
                         6, 'Exclusive',
                         request, ltrim(to_char(m.request, '990'))) request,
       m.id1 id1,
       m.id2 id2,
       m.block,
       sn.logon_time,
       sn.seconds_in_wait,
       sn.program
  from ops$oracle.v_session sn,
       ops$oracle.v_lock m
 where (sn.sid = m.sid and m.request != 0)
    or (sn.sid = m.sid and m.request = 0 and lmode != 4
        and (id1, id2) in (select s.id1, s.id2
                             from ops$oracle.v_lock s
                            where request != 0
                              and s.id1 = m.id1
                              and s.id2 = m.id2));

Wednesday 22 August 2012

The RMAN Executable

The interactive tool used to manage RMAN is rman.exe on Windows or rman on Unix, located in the ORACLE_HOME/bin directory. This provides a command-line interface to RMAN. It connects to the database via a listener and a server process, prompts for commands from the user, and sends them to the instance for execution. The executable can be run from the same ORACLE_HOME as the database that is being backed up, or it can be run from another computer anywhere on the network, provided that the versions of the RMAN executable and the target database match.

Can we open RMAN even if control files are lost !!

YES !!


RMAN is implemented with the package DBMS_BACKUP_RESTORE. This package is kernelized, meaning that it is part of the instance itself. This is a very significant feature: you do not need to have an open database in order to use RMAN. Normal PL/SQL stored procedures are stored within the data dictionary, so if the database isn't open you can't get to them. Because the RMAN procedures are part of the kernel, they are always available as long as the instance exists in memory. This means that even if the database is so severely damaged that you cannot even mount it, because you have lost all your control files, the RMAN executable will still have access to the procedures it needs to carry out a restore operation.
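
A hedged sketch of what that looks like in practice (control file autobackups are assumed to be configured, and the DBID shown is a placeholder): with every control file lost, RMAN can still start the instance in NOMOUNT and restore a control file using those kernelized procedures.

$ rman target /
RMAN> STARTUP NOMOUNT;
RMAN> SET DBID 1234567890;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;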

Tuesday 21 August 2012

Oracle/PLSQL: ORA-01422 Error

Error: ORA-01422: exact fetch returns more than requested number of rows

Cause: You tried to execute a SELECT INTO statement and more than one row was returned.

Action: The options to resolve this Oracle error are:
  • Rewrite your SELECT INTO statement so that only one row is returned (for example, by using DISTINCT or a more selective WHERE clause).
  • Replace your SELECT INTO statement with a cursor.
For example, if you tried to execute the following SQL statement:

SELECT doc_id
 INTO document_listing
 from documents
 where document_name = 'Testing_Doc';

And there was more than one record in the documents table with the document_name of Testing_Doc, you would receive the ORA-01422 error message.

In this case, it might be more prudent to create a cursor and retrieve each row if you are unsure of how many records you might retrieve.
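
A minimal sketch of the cursor approach against the same hypothetical documents table:

SET SERVEROUTPUT ON
DECLARE
  CURSOR doc_cur IS
    SELECT doc_id
      FROM documents
     WHERE document_name = 'Testing_Doc';
BEGIN
  FOR doc_rec IN doc_cur LOOP
    -- each matching row is handled here instead of expecting exactly one
    DBMS_OUTPUT.PUT_LINE('doc_id: ' || doc_rec.doc_id);
  END LOOP;
END;
/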

Wednesday 8 August 2012

Submit a Task to Run in the Background

The nohup command can be used to submit a task as a background process. This is useful for long-running Oracle jobs, because it frees up your command prompt so that you can do other work. It is especially useful when you are dialed in to an Oracle server using a modem, and you want to free up your terminal session.
Assume that you have a script named run_sql.ksh that executes SQL*Plus commands. The following nohup command can be used to submit that script for background processing:
nohup sh run_sql.ksh > logfile.lst 2>&1 &
There's obviously more to this command than just nohup, and it's important to understand just what each element of the command is doing. For this example, the elements are as follows:
nohup
Submits the task so that it continues to run even after you disconnect your terminal session.
run_sql.ksh
Specifies the Unix shell script that you want to run in the background.
> logfile.lst
Redirects standard output to the specified file.
2>&1
Redirects standard error messages to the standard output device. The 2 represents the standard error device, and 1 represents the standard output device.
&
Runs the task in the background.
The trailing ampersand (&) is what causes the task to run as a background task. The nohup command is frequently used with background tasks, because without it all your background tasks would terminate the moment you logged off of Unix. The nohup command allows a task to continue running long after you've logged off and gone home for the night.

Watch the Execution of a Background Process
If you've redirected the output of a background job to a file, you can monitor the execution of that background process by using the tail -f command. For example:
tail -f logfile.lst
The tail -f command continuously displays new lines as they are written to the output file, allowing you to easily watch the progress of your background task. To exit the tail -f command, enter Ctrl-C at any time.

kill all Oracle processes on Linux

ps -ef|grep "ora_"|grep -v grep|grep $ORACLE_SID|awk '{print $2}'|xargs kill -9

To begin, the Unix kill command is used to kill a process. The basic format of the kill command is as follows:
kill -9 PID1 PID2 PID3...PIDn
PID1 through PIDn represent the list of process IDs for the processes that you want to kill. The -9 option directs Unix to kill the processes immediately.

ps -ef
If you execute ps -ef on your server, you'll see a long list of processes for Oracle and for many other things. However, you want to limit your output to only those processes that are related to the Oracle database. The grep command can be used to do this. Oracle background process names always begin with "ora_", so piping the output of ps -ef through grep "ora_" will remove all but the Oracle background processes. For example:

>ps -ef|grep "ora_"
oracle 13022 1 0 May 07 ... ora_db02_vald
oracle 14796 42726 0 09:00:46 0:00 grep ora_
oracle 17778 1 0 May 07 ... ora_smon_devp
oracle 18134 1 0 May 07 ... ora_snp1_vald
oracle 19516 1 0 May 07 ... ora_db04_prod
oracle 21114 1 0 May 07 ... ora_snp0_devp
oracle 28436 1 0 May 07 ... ora_arch_prod
oracle 17748 1 0 May 07 ... ora_smon_prod
oracle 18134 1 0 May 07 ... ora_snp1_prod
oracle 12516 1 0 May 07 ... ora_pmon_prod
oracle 21714 1 0 May 07 ... ora_reco_prod
oracle 21814 1 0 May 07 ... ora_dbwr_prod

One thing you'll notice about this output is that it includes the process that's running the grep command. Pipe this output through grep -v grep to remove the grep command, so you don't kill your own process. The -v option makes grep work in a way that's opposite its usual manner. Whereas grep finds and includes strings, grep -v excludes strings. In this next example, you'll see that the grep line is now missing from the output:

>ps -ef|grep "ora_"|grep -v grep
oracle 13022 1 0 May 07 ... ora_db02_vald
oracle 17778 1 0 May 07 ... ora_smon_devp
oracle 18134 1 0 May 07 ... ora_snp1_vald
oracle 19516 1 0 May 07 ... ora_db04_prod
oracle 21114 1 0 May 07 ... ora_snp0_devp
oracle 28436 1 0 May 07 ... ora_arch_prod
oracle 17748 1 0 May 07 ... ora_smon_prod
oracle 18134 1 0 May 07 ... ora_snp1_prod
oracle 12516 1 0 May 07 ... ora_pmon_prod
oracle 21714 1 0 May 07 ... ora_reco_prod
oracle 21814 1 0 May 07 ... ora_dbwr_prod

Next, you should filter out all processes except those for the current ORACLE_SID. That way you delete the background processes only for that one instance instead of for all instances. Do that by grepping for the SID name:
>ps -ef|grep "ora_"|grep -v grep|grep $ORACLE_SID
oracle 17748 1 0 May 07 ... ora_smon_prod
oracle 18134 1 0 May 07 ... ora_snp1_prod
oracle 12516 1 0 May 07 ... ora_pmon_prod
oracle 21714 1 0 May 07 ... ora_reco_prod
oracle 21814 1 0 May 07 ... ora_dbwr_prod

Now that you have an accurate list of processes that you want to kill, you can use the awk command to get the process ID (PID) for each of these processes. The PID is in the second column, so use the awk '{print $2}' command to display only that column:

>ps -ef|grep "ora_"|grep -v grep|grep $ORACLE_SID|awk '{ print $2 }'
17748
18134
12516
21714
21814

Now you have a list of process ID numbers for the Oracle background processes. For the last step, you use the xargs command to pipe this list of PIDs to the kill command. For example:

ps -ef|grep "ora_"|grep -v grep|grep $ORACLE_SID|awk '{ print $2 }'|xargs kill -9



Monday 6 August 2012

dg4odbc 100% CPU Issue (96GB RAM)

When you run a query to fetch data from SQL Server into Oracle and that query hangs, it leaves a process called dg4odbc running on your system (even after you kill off the SQL*Plus session). dg4odbc is the database gateway application that allows Oracle to access SQL Server. In my case, my Oracle server had its CPU pegged at 100%, most of it consumed by a single dg4odbc process.

I went ahead and killed off the dg4odbc process, and my CPU went back to normal.

Wednesday 1 August 2012

UTL_MAIL Implementation in Oracle 10g

[oracle@BACKUPSERVER ~]$ echo $ORACLE_HOME
/home/oracle/10.2.0/product/db_1
[oracle@BACKUPSERVER ~]$ cd /home/oracle/10.2.0/product/db_1/rdbms/admin/utlmail.sql
-bash: cd: /home/oracle/10.2.0/product/db_1/rdbms/admin/utlmail.sql: Not a directory
[oracle@BACKUPSERVER ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Aug 1 17:01:10 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> @/home/oracle/10.2.0/product/db_1/rdbms/admin/utlmail.sql

Package created.


Synonym created.

SQL> @/home/oracle/10.2.0/product/db_1/rdbms/admin/prvtmail.plb

Package body created.

No errors.
SQL> alter system set smtp_out_server ='127.0.0.1:25' scope=both;

System altered.

Create and execute the procedure below to test UTL_MAIL:

CREATE OR REPLACE PROCEDURE Sumit_Utl_Mail
AS
  subtext VARCHAR2(20) DEFAULT 'abc';
BEGIN
  Utl_Mail.send(
    sender     => 'test@gmail.com',
    recipients => 'sumit2007tyagi@gmail.com',
    cc         => 'dba.tyagisumit@gmail.com',
    subject    => subtext,
    message    => 'UTL_MAIL test message',  -- message body for the test mail
    mime_type  => 'text/html; charset=us-ascii'
  );
END;

/

To enable other database users to use this functionality, grant execute permission on the UTL_MAIL package, for example:

grant execute on utl_mail to apps;