Friday 26 October 2012

Why can't you use RMAN to back up Online Redo Log Files

The Oracle database uses three types of "live" files during its operation: datafiles, control files, and online redo log files. Of these, RMAN backs up only the datafiles and the control files. You can't use RMAN to back up the online redo log files.
If you are operating in NOARCHIVELOG mode, you won't need the online redo log files, because the only permitted way of backing up a database in NOARCHIVELOG mode is a closed whole backup, and the database files are always consistent in such a backup.
You won't need online redo log backups if you are operating in ARCHIVELOG mode either, since RMAN is continually backing up all your archived redo logs.
However, you must make sure you always multiplex the online redo logs so that you won't lose all the members of a group.
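As a quick sketch (the paths and group numbers below are hypothetical), you let RMAN protect the datafiles, control file, and archived redo logs, and you protect the online redo logs themselves by multiplexing each group onto a second disk:

RMAN> backup database plus archivelog;

SQL> alter database add logfile member '/u02/oradata/ORCL/redo01b.log' to group 1;
SQL> alter database add logfile member '/u02/oradata/ORCL/redo02b.log' to group 2;
SQL> alter database add logfile member '/u02/oradata/ORCL/redo03b.log' to group 3;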

Thursday 25 October 2012

Flash Recovery Area

What is Flash Recovery Area ??

The FRA is a special area on disk that is used by the database as a backup location. By default, RMAN creates backups of all types – regular backup sets, image copies, and archived redo logs – in that area. Since RMAN knows about the existence of this area, it automatically deletes unneeded backups, based on your redundancy and retention settings, to make room for new backups.
In addition to backups, the flash recovery area can also store online redo log files, archived redo log files, and control files. Again, these are optional; you can always define the location of those files to be anywhere, not necessarily inside the flash recovery area. Since the flash recovery area is generally used for backup files, you should consider creating it on disks different from your main database disks. Doing so helps protect you from losing both your main database and your backup files in a single failure. You can take further advantage of this separation by putting one member of each redo log group or one control file in the flash recovery area. This reduces the possibility of all members of a redo log group, or all control files, getting damaged at the same time.
How to create the FRA ??
Things to consider before creating the FRA:
  • FRA size: depends on the size of the backups, archived redo logs, and other files you plan to keep there.
  • FRA location: as a best practice, avoid putting the FRA and the database files on the same mount point or disk group (if on ASM).
Having the answers to these questions in mind, you can then use the following process to create the flash recovery area:

Disable the parameters log_archive_dest and log_archive_duplex_dest, if they are set in the database. You can do that by issuing the following commands:

alter system set log_archive_duplex_dest = '';
alter system set log_archive_dest = '';

Log on as a user with the sysdba role and issue the following commands to size and create the flash recovery area:

alter system set db_recovery_file_dest_size = 4G;

alter system set db_recovery_file_dest = '/home/oracle/flasharea';

The sequence of these commands is important; you have to issue them in that order, not the reverse. However, do replace the size and path name with the values you have chosen for your system. That's it; the flash recovery area is ready for operation.
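As a quick sanity check (a sketch; the exact column list varies slightly by version), you can confirm the flash recovery area settings and space usage from V$RECOVERY_FILE_DEST:

select name, space_limit, space_used, space_reclaimable, number_of_files
  from v$recovery_file_dest;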
What if you don't want archived redo logs in FRA ??

Remember, you can always define a different location for archived redo logs. If you use a different location, then you can’t just erase the values of the parameters log_archive_dest and log_archive_duplex_dest, as suggested in the earlier solution:

alter system set log_archive_duplex_dest = '';
alter system set log_archive_dest = '';
To place your archived redo log files somewhere other than the flash recovery area, you should use a different parameter to specify their location; use log_archive_dest_1 instead of log_archive_dest. Suppose log_archive_dest used to be /dbarch. You can use log_archive_dest_1 to specify the same location for archived redo logs. First, check the value of the parameter log_archive_dest:
SQL>show parameter log_archive_dest
NAME TYPE VALUE
------------------------------------ ----------- ----------------------------
log_archive_dest string /dbarch
The current setting of the archived redo log destination is /dbarch. Next, set the log_archive_dest_1
parameter to that location:
SQL> alter system set log_archive_dest_1 = 'location=/dbarch';
Now set log_archive_dest to NULL:
SQL> alter system set log_archive_dest = '';
If you have set the two parameters—log_archive_dest and log_archive_duplex_dest—in the initialization parameter file, you should edit the file to remove these two parameters completely.

Tuesday 23 October 2012

ORA-00020: maximum number of processes (%s) exceeded

This basically means that you have too many database connections. It can also mean that you, as the DBA, may not even be able to connect to do something about it!

You need to either ask a couple of users to log out, or you need to kill some sessions at the OS level. Once you manage to connect, see if you can spot anyone making an abnormal number of connections.
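A quick way to look for connection hogs is to count sessions per user, machine, and program (a sketch; group by whichever columns suit your environment):

select username, machine, program, count(*)
  from v$session
 group by username, machine, program
 order by count(*) desc;

If you can't find anything, you will need to increase the init parameter 'processes'. Unfortunately, this will require a restart of the database to take effect: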

SQL> show parameter processes

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
processes integer 250

SQL> select count(*) from v$process;

COUNT(*)
----------
  249

SQL> alter system set processes=500 scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

SQL> show parameter processes

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
processes integer 500

Monday 15 October 2012

PGA memory management Oracle 10/11g


A program global area (PGA) is a memory region that contains data and control information for a server process. It is a nonshared memory created by Oracle when a server process is started. Access to it is exclusive to that server process and is read and written only by Oracle code acting on behalf of it. The total PGA memory allocated by each server process attached to an Oracle instance is also referred to as the aggregated PGA memory allocated by the instance.


Starting with Oracle9i Release 1, there are two ways to manage this non-UGA memory in the PGA:
  • Manual PGA memory management, where you tell Oracle how much memory it can use to sort and hash any time it needs to sort or hash in a specific process.
  • Automatic PGA memory management, where you tell Oracle how much memory it should attempt to use system wide.


This setting is controlled by the database initialization parameter WORKAREA_SIZE_POLICY and may be altered at session level.

For automatic PGA memory management: WORKAREA_SIZE_POLICY=AUTO

For manual PGA memory management:    WORKAREA_SIZE_POLICY=MANUAL



Starting in Oracle 11g Release 1, automatic PGA memory management can be implemented using one of two techniques:

  •  By setting the PGA_AGGREGATE_TARGET initialization parameter and telling Oracle how much PGA memory to attempt to use.
  •  By setting the MEMORY_TARGET initialization parameter and telling Oracle how much total memory the database instance should use for both the SGA and the PGA; the PGA size will be figured out by the database itself from this parameter.

MANUAL PGA Memory Management:
  • SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk.
  • SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and SORT_AREA_RETAINED_SIZE is 250KB, your server process would use up to 512KB of memory to sort the data during the initial processing of the query. When the sort is complete, the sorting area would "shrink" to 250KB, and any sorted data that did not fit in that 250KB would be written out to the temporary tablespace.
  • HASH_AREA_SIZE: The amount of memory your server process can use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another one. The smaller of the two sets would be hashed into memory, and anything that didn't fit in the hash area region of memory would be stored in the temporary tablespace. These parameters can be set at the session level, as sketched below.
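A minimal sketch of manual PGA memory management at the session level (the sizes here are arbitrary examples, not recommendations):

SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 1048576;
SQL> alter session set hash_area_size = 1048576;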

Automatic PGA memory management:


Enter automatic PGA memory management. Here, you first simply set up and size the SGA. The SGA is a fixed-size piece of memory, so you can very accurately see how big it is, and that will be its total size (unless and until you change it). You then tell Oracle, "This is how much memory you should try to limit yourself to across all work areas" (a new umbrella term for the sorting and hashing areas you use). Now, you could in theory take a machine with 2GB of physical memory and allocate 768MB of memory to the SGA and 768MB of memory to the PGA, leaving 512MB of memory for the OS and other processes. I say "in theory" because it doesn't work exactly that cleanly, but it's close.

Before I discuss why that's true, let's take a look at how to set up automatic PGA memory management and turn it on. The process of setting this up involves deciding on the proper values for two instance initialization parameters:
  • WORKAREA_SIZE_POLICY: This parameter may be set to either MANUAL, which will use the sort area and hash area size parameters to control the amount of memory allocated, or AUTO, in which case the amount of memory allocated will vary based on the current workload in the database. The default and recommended value is AUTO.
  • PGA_AGGREGATE_TARGET: This parameter controls how much memory the instance should allocate, in total, for all work areas used to sort or hash data. Its default value varies by version and may be set by various tools such as the DBCA. In general, if you are using automatic PGA memory management, you should explicitly set this parameter. A minimal setup is sketched after this list.
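A minimal sketch of enabling automatic PGA memory management instance-wide (the target size is just an example; size it for your own workload):

SQL> alter system set workarea_size_policy = auto;
SQL> alter system set pga_aggregate_target = 512M;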

In Oracle 11g Release 1 and above, instead of setting the PGA_AGGREGATE_TARGET, you can set the MEMORY_TARGET parameter. When the database uses the MEMORY_TARGET parameter, it decides how much memory to allocate to the SGA and PGA respectively. It may also decide to reallocate these memory amounts while the database is up and running. This fact, however, doesn't affect how automatic PGA memory management (described below) works; rather, it just decides the setting for the PGA_AGGREGATE_TARGET.


So, the entire goal of automatic PGA memory management is to maximize the use of RAM while at the same time not using more RAM than you want. Under manual memory management, this was a virtually impossible goal to achieve. If you set SORT_AREA_SIZE to 10MB, when one user was performing a sort operation that user would use up to 10MB for the sort work area. If 100 users were doing the same, they would use up to 1,000MB of memory. If you had 500MB of free memory, the single user performing a sort by himself could have used much more memory, and the 100 users should have used much less. That is what automatic PGA memory management was designed to do. Under a light workload, memory usage can be maximized; as the load on the system increases and more users perform sort or hash operations, the amount of memory allocated to each of them decreases—to reach the goal of using all available RAM, but not attempting to use more than physically exists.

What you’d like to happen is for this memory to be allocated differently as the memory demands on the system grow and shrink. The more users, the less RAM each should use. The fewer users, the more RAM each should use. Setting WORKAREA_SIZE_POLICY = AUTO is just the way to achieve this. The DBA now specifies a single size, PGA_AGGREGATE_TARGET: the maximum amount of PGA memory that the database should strive to use. Oracle then distributes this memory over the active sessions as it sees fit.


Are there times, however, when you won't want to use automatic PGA memory management? Absolutely, but fortunately they seem to be the exception and not the rule. Automatic memory management was designed to be multiuser "fair." In anticipation of additional users joining the system, the automatic memory management will limit the amount of memory allocated as a percentage of the PGA_AGGREGATE_TARGET. But what happens when you don't want to be fair, when you know that you should get all of the memory available? Well, that would be the time to use the ALTER SESSION command to disable automatic memory management in your session (leaving it in place for all others) and to manually set your SORT|HASH_AREA_SIZE as needed. For example, that large batch process that takes place at 2:00 am and does tremendously large hash joins, some index builds, and the like? It should be permitted to use all of the resources on the machine. It does not want to be "fair" about memory use—it wants it all, as it knows it is the only thing happening in the database right now. That batch job can certainly issue the ALTER SESSION command and make use of all resources available.

So, in short, I prefer to use automatic PGA memory management for end-user sessions—for the applications that run day to day against my database. Manual memory management makes sense for large batch jobs that run during periods when they are the only activities in the database.


"How is this memory allocated?" and "What will be the amount of RAM used by my session?"


When using things that begin with "A"—for automatic—you lose a degree of control, as the underlying algorithms decide what to do and how to control things.


We can make some observations based on information from MetaLink note 147806.1:

  • The PGA_AGGREGATE_TARGET is a goal of an upper limit. It is not a value that is preallocated when the database is started up. You can observe this by setting the PGA_AGGREGATE_TARGET to a value much higher than the amount of physical memory you have available on your server. You will not see any large allocation of memory as a result.
  • A serial (nonparallel query) session will use a small percentage of the PGA_AGGREGATE_TARGET, typically about 5 percent or less. So, if you’ve set the PGA_AGGREGATE_TARGET to 100MB, you’d expect to use no more than about 5MB per work area (e.g., the sort or hash work area). You may well have multiple work areas in your session for multiple queries, or more than one sort or hash operation in a single query, but each work area will be about 5 percent or less of the PGA_AGGREGATE_TARGET. Note that this 5 percent is not a hard and fast rule; things change over time, and the automatic algorithms can and will change in the database.
  • As the workload on your server goes up (more concurrent queries, concurrent users), the amount of PGA memory allocated to your work areas will go down. The database will try to keep the sum of all PGA allocations under the threshold set by PGA_AGGREGATE_TARGET. This is analogous to having a DBA sit at a console all day, setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters based on the amount of work being performed in the database.
  • A parallel query may use up to about 30 percent of the PGA_AGGREGATE_TARGET, with each parallel process getting its slice of that 30 percent. That is, each parallel process would be able to use about 0.3 * PGA_AGGREGATE_TARGET / (number of parallel processes).

The PGA_AGGREGATE_TARGET is just that: a target, not a directive. The database can and will exceed this value for various reasons.
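To see how the instance is actually tracking against the target, you can query V$PGASTAT (a sketch; the statistic names below exist in 10g/11g, but the full list varies by version):

select name, value, unit
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'maximum PGA allocated');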


PGA and UGA Wrap-up


So far, we have looked at two memory structures: the PGA and the UGA. You should understand now that the PGA is private to a process. It is the set of variables that an Oracle dedicated or shared server needs to have independent of a session. The PGA is a "heap" of memory in which other structures may be allocated. The UGA is also a heap of memory in which various session-specific structures may be defined. The UGA is allocated from the PGA when you use a dedicated server to connect to Oracle, and from the SGA under a shared server connection. This implies that when using a shared server, you must size your SGA's large pool to have enough space to cater to every possible user that will ever connect to your database concurrently. So, the SGA of a database supporting shared server connections is generally much larger than the SGA for a similarly configured dedicated server mode-only database.


Thursday 4 October 2012

log file sync

If you are faced with log contention, what you might observe is a large wait time on the “log file sync” event and long write times evidenced in the “log file parallel write” event in a Statspack report. If you see this, you may be experiencing contention on the redo logs; they are not being written fast enough. This can happen for many reasons. One application reason (one the DBA can’t fix, but the developer must) is that you are committing too frequently—committing inside of a loop doing INSERTs, for example. Committing too frequently, aside from being a bad programming practice, is a surefire way to introduce lots of log file sync waits.

Assuming all of your transactions are correctly sized (you are not committing more frequently than your business rules dictate), the most common causes for log file waits are as follows:

  • Putting redo on a slow device: The disks are just performing poorly. It is time to buy faster disks.
  • Putting redo on the same device as other files that are accessed frequently: Redo is designed to be written with sequential writes and to be on dedicated devices. If other components of your system—even other Oracle components—are attempting to read and write to this device at the same time as LGWR, you will experience some degree of contention. Here, you want to ensure LGWR has exclusive access to these devices if at all possible.
  • Putting redo on a slow technology, such as RAID-5: RAID-5 is great for reads, but it is generally terrible for writes. As we saw earlier regarding what happens during a COMMIT, we must wait for LGWR to ensure the data is on disk. Using any technology that slows this down is not a good idea.
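To check whether these waits are actually significant on your system, you can look at the cumulative wait statistics (a sketch using V$SYSTEM_EVENT; Statspack/AWR gives you the same data per interval):

select event, total_waits, time_waited, average_wait
  from v$system_event
 where event in ('log file sync', 'log file parallel write');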

Thread 1 cannot allocate new log



Thread 1 cannot allocate new log, sequence 1466

Checkpoint not complete

Current log# 3 seq# 1465 mem# 0: /…/…redo03.log


It might say Archival required instead of Checkpoint not complete, but the effect is pretty much the same. This is really something the DBA should be looking out for. This message will be written to alert.log on the server whenever the database attempts to reuse an online redo log file and finds that it can’t. This happens when DBWR has not yet finished checkpointing the data protected by the redo log or ARCH has not finished copying the redo log file to the archive destination.

 At this point, the database effectively halts as far as the end user is concerned. It stops cold. DBWR or ARCH will be given priority to flush the blocks to disk. Upon completion of the checkpoint or archival, everything goes back to normal.

The reason the database suspends user activity is that there is simply no place to record the changes the users are making. Oracle is attempting to reuse an online redo log file, but because either the file would be needed to recover the database in the event of a failure (Checkpoint not complete), or the archiver has not yet finished copying it (Archival required), Oracle must wait (and the end users will wait) until the redo log file can safely be reused.

If you see that your sessions spend a lot of time waiting on “log file switch,” “log buffer space,” or “log file switch checkpoint or archival incomplete,” you are most likely hitting this. You will notice it during prolonged periods of database modifications if your log files are sized incorrectly, or because DBWR and ARCH need to be tuned by the DBA or system administrator. I frequently see this issue with the “starter” database that has not been customized. The “starter” database typically sizes the redo logs far too small for any significant amount of work (including the initial database build of the data dictionary itself). As soon as you start loading up the database, you will notice that the first 1,000 rows go fast, and then things start going in spurts: 1,000 go fast, then hang, then go fast, then hang, and so on. These are the indications that you are hitting this condition.


There are a couple of things you can do to solve this issue:

  • Make DBWR faster. Have your DBA tune DBWR by enabling ASYNC I/O, using DBWR I/O slaves, or using multiple DBWR processes. Look at the I/O on the system and see if one disk or a set of disks is “hot,” so that you need to spread the data out. The same general advice applies for ARCH as well. The pro of this approach is that you get “something for nothing” here—increased performance without really changing any logic/structures/code. There really are no downsides to this approach.
  • Add more redo log files. This will postpone the Checkpoint not complete in some cases and, after a while, it will postpone the Checkpoint not complete so long that it perhaps doesn’t happen (because you gave DBWR enough breathing room to checkpoint). The same applies to the Archival required message. The benefit of this approach is the removal of the “pauses” in your system. The downside is that it consumes more disk, but the benefit far outweighs any downside here.
  • Re-create the log files with a larger size. This will extend the amount of time between the time you fill the online redo log and the time you need to reuse it. The same applies to the Archival required message, if the redo log file usage is “bursty.” If you have a period of massive log generation (nightly loads, batch processes) followed by periods of relative calm, then having larger online redo logs can buy enough time for ARCH to catch up during the calm periods. The pros and cons are identical to the preceding approach of adding more files. Additionally, it may postpone a checkpoint from happening until later, since checkpoints happen at each log switch (at least), and the log switches will now be further apart. (A sketch of adding larger groups follows this list.)
  • Make checkpointing happen more frequently and more continuously. Use a smaller block buffer cache (not entirely desirable) or various parameter settings such as FAST_START_MTTR_TARGET, LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT. This will force DBWR to flush dirty blocks more frequently. The benefit to this approach is that recovery time from a failure is reduced. There will always be less work in the online redo logs to be applied. The downside is that blocks may be written to disk more frequently if they are modified often. The buffer cache will not be as effective as it could be, and it can defeat the block cleanout mechanism.
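A minimal sketch of adding larger redo log groups and then retiring the small ones (group numbers, sizes, and paths here are hypothetical; only drop a group once it is INACTIVE and, if you are archiving, already archived):

SQL> alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 512m;
SQL> alter database add logfile group 5 ('/u01/oradata/ORCL/redo05.log') size 512m;
SQL> -- switch until the old groups become INACTIVE, then drop them
SQL> alter system switch logfile;
SQL> select group#, status from v$log;
SQL> alter database drop logfile group 1;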


The COMMIT !!

A COMMIT is generally a very fast operation, regardless of the transaction size. You might think that the bigger a transaction (in other words, the more data it affects), the longer a COMMIT would take. This is not true. The response time of a COMMIT is generally “flat,” regardless of the transaction size. This is because a COMMIT does not really have too much work to do, but what it does do is vital.
One of the reasons this is an important fact to understand and embrace is that it will lead you to let your transactions be as big as they should be. Many developers artificially constrain the size of their transactions, committing every so many rows, instead of committing when a logical unit of work has been performed. They do this in the mistaken belief that they are preserving scarce system resources, when in fact they are consuming more of them. If a COMMIT of one row takes X units of time, and the COMMIT of 1,000 rows takes the same X units of time, then performing work in a manner that does 1,000 one-row COMMITs will take an additional 1,000*X units of time to perform. By committing only when you have to (when the logical unit of work is complete), you will not only increase performance, but you’ll also reduce contention for shared resources (log files, various internal latches, and the like).

The more often you commit, the longer you wait (your mileage will vary on this). And the amount of time you wait is more or less directly proportional to the number of times you commit. Remember, this is just a single-user scenario; with multiple users doing the same work, all committing too frequently, the numbers will go up rapidly. We’ve heard the same story, time and time again, with similar situations. We must perform operations only when we need to—a COMMIT is just another such operation. It is best to size our transactions based on business need, not based on misguided attempts to lessen resource usage on the database.

There are two factors contributing to the expense of the COMMIT in this example:
  •  We’ve obviously increased the round-trips to and from the database. If we commit every record, we are generating that much more traffic back and forth. I didn’t even measure that, which would add to the overall runtime.
  • Every time we commit, we must wait for our redo to be written to disk. This will result in a “wait.” In this case, the wait is named “log file sync.”
So, when we committed after every INSERT, we waited every time for a short period of time—and if you wait a little bit of time but you wait often, it all adds up. Fully thirty seconds of our runtime was spent waiting for a COMMIT to complete when we committed 10,000 times—in other words, waiting for LGWR to write the redo to disk. In stark contrast, when we committed once, we didn’t wait very long (not a measurable amount of time, actually). This shows that a COMMIT is a fast operation; we expect the response time to be more or less flat, not a function of the amount of work we’ve done. You can reproduce the same kind of comparison with the sketch below.
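A minimal sketch of that comparison (the table name and row count are arbitrary; the actual timings will vary on your system):

create table commit_test ( x number, y varchar2(100) );

set timing on

-- one COMMIT for the whole logical unit of work
begin
  for i in 1 .. 10000 loop
    insert into commit_test values ( i, rpad('*', 100, '*') );
  end loop;
  commit;
end;
/

-- a COMMIT after every row
begin
  for i in 1 .. 10000 loop
    insert into commit_test values ( i, rpad('*', 100, '*') );
    commit;
  end loop;
end;
/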

So, why is a COMMIT’s response time fairly flat, regardless of the transaction size? It is because before we even go to COMMIT in the database, we’ve already done the really hard work. We’ve already modified the data in the database, so we’ve already done 99.9 percent of the work. For example, operations such as the following have already taken place:
  • Undo blocks have been generated in the SGA.
  • Modified data blocks have been generated in the SGA.
  • Buffered redo for the preceding two items has been generated in the SGA.
  • Depending on the size of the preceding three items and the amount of time spent, some combination of the previous data may be flushed onto disk already.
  • All locks have been acquired.
When we COMMIT, all that is left to happen is the following:
  • A System Change Number (SCN) is generated for our transaction. In case you are not familiar with it, the SCN is a simple timing mechanism Oracle uses to guarantee the ordering of transactions and to enable recovery from failure. It is also used to guarantee read-consistency and checkpointing in the database. Think of the SCN as a ticker; every time someone COMMITs, the SCN is incremented by one (you can watch this with the sketch after this list).
  • LGWR writes all of our remaining buffered redo log entries to disk and records the SCN in the online redo log files as well. This step is actually the COMMIT. If this step occurs, we have committed. Our transaction entry is “removed” from V$TRANSACTION—this shows that we have committed.
  • All locks recorded in V$LOCK held by our session are released, and everyone who was enqueued waiting on locks we held will be woken up and allowed to proceed with their work.
  • Some of the blocks our transaction modified will be visited and “cleaned out” in a fast mode if they are still in the buffer cache. Block cleanout refers to the lock-related information we store in the database block header. Basically, we are cleaning out our transaction information on the block, so the next person who visits the block won’t have to. We are doing this in a way that need not generate redo log information, saving considerable work later.
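A minimal sketch of watching the SCN advance across COMMITs (V$DATABASE.CURRENT_SCN is available in recent releases; any other activity on the instance will also move the SCN, and commit_test is the hypothetical table from the earlier sketch):

SQL> select current_scn from v$database;
SQL> insert into commit_test values ( 1, 'x' );
SQL> commit;
SQL> select current_scn from v$database;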

As you can see, there is very little to do to process a COMMIT. The lengthiest operation is, and always will be, the activity performed by LGWR, as this is physical disk I/O. The amount of time spent by LGWR here will be greatly reduced by the fact that it has already been flushing the contents of the redo log buffer on a recurring basis. LGWR will not buffer all of the work you do for as long as you do it. Rather, it will incrementally flush the contents of the redo log buffer in the background as you are going along.

This is to avoid having a COMMIT wait for a very long time in order to flush all of your redo at once. So, even if we have a long-running transaction, much of the buffered redo log it generates would have been flushed to disk prior to committing. On the flip side is the fact that when we COMMIT, we must typically wait until all buffered redo that has not been written yet is safely on disk. That is, our call to LGWR is by default a synchronous one. While LGWR may use asynchronous I/O to write in parallel to our log files, our transaction will normally wait for LGWR to complete all writes and receive confirmation that the data exists on disk before returning.


Monday 1 October 2012

INSERT allocated storage but ROLLBACK does not release it !!

It is a common misconception that undo is used to restore the database physically to the way it was before the statement or transaction executed, but this is not so. The database is logically restored to the way it was—any changes are logically undone—but the data structures, the database blocks themselves, may well be different after a rollback. The reason for this lies in the fact that, in any multiuser system, there will be tens or hundreds or thousands of concurrent transactions. One of the primary functions of a database is to mediate concurrent access to its data. The blocks that our transaction modifies are, in general, being modified by many other transactions as well. Therefore, we can’t just put a block back exactly the way it was at the start of our transaction—that could undo someone else’s work!

For example, suppose our transaction executed an INSERT statement that caused the allocation of a new extent (i.e., it caused the table to grow). Our INSERT would cause us to get a new block, format it for use, and put some data into it. At that point, some other transaction might come along and insert data into this block. If we roll back our transaction, obviously we can’t unformat and unallocate this block.

Therefore, when Oracle rolls back, it is really doing the logical equivalent of the opposite of what we did
in the first place. For every INSERT, Oracle will do a DELETE. For every DELETE, Oracle will do an INSERT. For every UPDATE, Oracle will do an “anti-UPDATE,” or an UPDATE that puts the row back the way it was prior to our modification.

How can we see this in action? Perhaps the easiest way is to follow these steps:
1. Create an empty table.
2. Full-scan the table and observe the amount of I/O performed to read it.
3. Fill the table with many rows (no commit).
4. Roll back that work and undo it.
5. Full-scan the table a second time and observe the amount of I/O performed.


SQL> create table test as select * from dba_objects where 1=3;

Table created.

And now we’ll query it, with AUTOTRACE enabled in SQL*Plus to measure the I/O.

SQL> select * from test;

no rows selected

SQL> set autotrace traceonly statistics;

SQL> select * from test;

no rows selected

Statistics

----------------------------------------------------------

0 recursive calls

0 db block gets

0 consistent gets

0 physical reads

0 redo size

# The query initially takes no I/Os to full-scan the table (zero consistent gets)

SQL>

SQL> set autotrace off;

SQL> insert into test select * from dba_objects;

50017 rows created.

SQL> rollback;

Rollback complete.

SQL> select * from test;

no rows selected

SQL> set autotrace traceonly statistics;

# Now, if we query the table again, we’ll discover that it takes considerably more I/Os to read the table this time:



SQL> select * from test;

no rows selected

Statistics

---------------------------------------------------------

28 recursive calls

0 db block gets

760 consistent gets

0 physical reads

0 redo size

SQL> set autotrace off;

SQL>


The blocks that our INSERT caused to be added under the table’s high-water mark (HWM) are still there—formatted, but empty. Our full scan had to read them to see if they contained any rows.
Moreover, the first time we ran the query, we observed zero I/Os. That was due to the default mode of table creation in Oracle Database 11g Release 2—using deferred segment creation. When we issued that CREATE TABLE, no storage, not a single extent, was allocated. The segment creation was deferred until the INSERT took place, and when we rolled back, the segment persisted. You can see this easily with a smaller example; I’ll explicitly request deferred segment creation this time, although it is enabled by default in 11g Release 2:


SQL> create table t ( x int ) segment creation deferred;
Table created.
SQL> select extent_id, bytes, blocks from user_extents where segment_name = 'T' order by extent_id;
no rows selected
SQL> insert into t(x) values (1);
1 row created.
SQL> rollback;
Rollback complete.
SQL> select extent_id, bytes, blocks from user_extents where segment_name = 'T' order by extent_id;
EXTENT_ID BYTES BLOCKS
---------- ---------- --------
0         65536         8



As you can see, after the table create there was no allocated storage—no extents were used by this
table. Upon performing an INSERT, followed immediately by ROLLBACK, we can see the INSERT allocated storage—but the ROLLBACK does not “release” it.

Those two things together—that the segment was actually created by the INSERT but not uncreated by the ROLLBACK, and that the new formatted blocks created by the INSERT were scanned the second time around—show that a rollback is a logical “put the database back the way it was” operation. The database will not be exactly the way it was, just logically the same.

enq: TX - row lock contention

This is indicative of a session waiting for a row lock held by another session; the wait time associated with this event can be significant and can be responsible for performance issues observed in the application. A TX enqueue is acquired in exclusive mode when a transaction initiates its first change and is held until the transaction does a COMMIT or ROLLBACK.

There are several situations that produce TX enqueue waits:

Waits for TX in mode 6 occur when a session is waiting for a row-level lock that is already held by another session. This happens when one user is updating or deleting a row that another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.

To resolve this, have the first session, the one already holding the lock, perform a COMMIT or ROLLBACK.

Waits for TX in mode 4 can occur if a session is waiting due to a potential duplicate in a UNIQUE index. If two sessions try to insert the same key value, the second session has to wait to see whether an ORA-00001 should be raised or not. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.

Again, the resolution is to have the first session, the one already holding the lock, perform a COMMIT or ROLLBACK.

Waits for TX in mode 4 are also possible if the session is waiting due to a shared bitmap index fragment. Bitmap indexes index key values and a range of ROWIDs. Each ‘entry’ in a bitmap index can cover many rows in the actual table. If two sessions want to update rows covered by the same bitmap index fragment, the second session waits for the first transaction to either COMMIT or ROLLBACK by waiting for the TX lock in mode 4. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.

To find which SQL is currently waiting on this event:

select sid, sql_text
  from v$session s, v$sql q
 where sid in (select sid
                 from v$session
                where state in ('WAITING')
                  and wait_class != 'Idle'
                  and event = 'enq: TX - row lock contention'
                  and (q.sql_id = s.sql_id or q.sql_id = s.prev_sql_id));

To find the blocking session:

select blocking_session, sid, serial#, wait_class, seconds_in_wait from v$session where blocking_session is not NULL order by blocking_session;
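If the blocking session can't simply be asked to COMMIT or ROLLBACK, the DBA can terminate it (a sketch; the SID and SERIAL# below are placeholders, substitute the values returned by the query above):

SQL> -- 131 and 45982 are hypothetical SID and SERIAL# values
SQL> alter system kill session '131,45982';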