Thursday, 17 January 2013
You suspect your RMAN backup piece integrity!
Problem
You need to perform a restore and recovery, but first
you want to validate only that the backup pieces are available and structurally
sound before you actually restore any datafiles.
Solution
You can use either the restore ... validate or
validate command to verify the availability and integrity of backup pieces
required by RMAN to perform the restore operation. These commands do not
restore datafiles. You can additionally specify the check logical clause to
instruct RMAN to check for logical corruption.
RMAN> restore database validate;
- Verifies the integrity of backups (physical corruption).
- By default, RMAN checks only for physical corruption when validating. Physical corruption is when the block contents don't match the physical format that Oracle expects.
To check for logical corruption, add the check logical clause (a sketch follows below). Logical corruption is when the block is in the correct format but its contents aren't consistent with what Oracle expects. The same information is reflected in the alert.log file and the v$database_block_corruption view. RMAN cannot recover logically corrupt blocks; to recover them, restore the datafile from a backup and perform media recovery.
When RMAN can't find a backup piece or detects corruption, it issues a "failover to previous backup" message and automatically searches for a previously taken backup. It stops when it finds a good backup or when it has searched through all known backups without finding one.
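A minimal sketch of the commands referenced above, assuming you want to validate the whole database and then inspect any corruption recorded in the view (no datafiles are restored):
RMAN> restore database validate check logical;
SQL> select * from v$database_block_corruption;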
You suspect your archived redo log is bad or corrupted!
Problem
You need to perform a database recovery, but you
suspect one of your archived redo log files is bad. You want to perform a test
to see whether all of the redo is available and can be applied.
Solution
The recover ... test command instructs Oracle to apply
the redo necessary to perform recovery but does not make the changes permanent
in the datafiles. When you recover in test mode, Oracle applies the required
redo but rolls back the changes at the end of the process. This example starts
up the database in mount mode, restores the entire database, and then does a
test recovery:
RMAN> connect target /
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database test;
Here is a partial snippet of the output showing that
the test recovery was successful:
ORA-10574: Test recovery did not corrupt any data block
ORA-10573: Test recovery tested redo from change 847960 to 848243
ORA-10570: Test recovery complete
You can test a recovery with most recover commands.
Here are some examples:
RMAN> recover tablespace users, tools test;
RMAN> recover datafile 1 test;
If you’re missing archived redo log files or online
redo log files that are needed for recovery, you’ll receive a message similar
to this:
RMAN-06053: unable to perform media recovery because of missing log
Wednesday, 16 January 2013
What if a table already exists during import
TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]
The possible values have the following effects:
· SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
· APPEND loads rows from the source and leaves existing rows unchanged.
· TRUNCATE deletes existing rows and then loads rows from the source.
· REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
The following considerations apply when you are using these options:
· When you use TRUNCATE or REPLACE, ensure that rows in the affected tables are not targets of any referential constraints.
· When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are not modified. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.
· When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table before performing any action. If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line. If you have data that must be loaded but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints.
· When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may want to compress your data after the load.
When Data Pump detects that the source table and target table do not match (the two tables do not have the same number of columns, or the target table has a column name that is not present in the source table), it compares column names between the two tables. If the tables have at least one column in common, then the data for the common columns is imported into the table (assuming the datatypes are compatible).
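For example, here is a sketch of an import that truncates and reloads an existing table; the directory object and dump file name are assumed for illustration:
> impdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TABLES=employees TABLE_EXISTS_ACTION=TRUNCATE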
Export-Import Subset of Production Data
SAMPLE=[[schema_name.]table_name:]sample_percent
Allows you to
specify a percentage of the data rows to be sampled and unloaded from the
source database.
This parameter allows you to export subsets of data
by specifying the percentage of data to be sampled and exported. The
sample_percent indicates the probability that a row will be selected as part of
the sample. The sample_percent can be applied to specific tables. In the
following example, 50% of the HR.EMPLOYEES table will be exported:
SAMPLE="HR"."EMPLOYEES":50
If you specify a schema, then you must also specify a
table. However, you can specify a table without specifying a schema; the
current user will be assumed. If no table is specified, then the sample_percent
value applies to the entire export job.
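A full command line for the example above might look like this (the directory object and dump file name are assumed; depending on your shell, the quotes may need escaping or the parameter can be placed in a parameter file):
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=emp_sample.dmp TABLES=employees SAMPLE="HR"."EMPLOYEES":50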
Secure Your Production Data during Import
REMAP_DATA
The REMAP_DATA parameter allows you to specify a
remap function that takes as a source the original value of the designated
column and returns a remapped value that will replace the original value in the
dump file. A common use for this option is to mask data when moving from a
production system to a test system. For example, a column of sensitive customer
data such as credit card numbers could be replaced with numbers generated by a
REMAP_DATA function. This would allow the data to retain its essential
formatting and processing characteristics without exposing private data to
unauthorized personnel.
The same function can be applied to multiple columns
being dumped. This is useful when you want to guarantee consistency in
remapping both the child and parent column in a referential constraint.
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
schema -- the schema containing the table to be remapped. By default, this is the schema of the user doing the export.
tablename -- the table whose column will be remapped.
column_name -- the column whose data is to be remapped. The maximum number of columns that can be remapped for a single table is 10.
schema -- the schema containing the PL/SQL package you have created that contains the remapping function. By default, this is the schema of the user doing the export.
pkg -- the name of the PL/SQL package you have created that contains the remapping function.
function -- the name of the function within the PL/SQL package that will be called to remap the data in the designated column for each row of the specified table.
The following example assumes a package named remap
has been created that contains functions named minus10 and plusx
which change the values for employee_id and first_name in the employees table.
> expdp hr DIRECTORY=dpump_dir1
DUMPFILE=remap1.dmp TABLES=employees
REMAP_DATA=hr.employees.employee_id:hr.remap.minus10
REMAP_DATA=hr.employees.first_name:hr.remap.plusx
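The example does not show the package itself, so here is a minimal sketch of what such a remap package might look like; the function bodies are assumptions chosen only to match the function names, and each function must accept and return the same datatype as the column it remaps:
CREATE OR REPLACE PACKAGE remap AS
  FUNCTION minus10 (p_in NUMBER) RETURN NUMBER;
  FUNCTION plusx (p_in VARCHAR2) RETURN VARCHAR2;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap AS
  -- Shift numeric IDs by a constant so the real keys are not exposed
  FUNCTION minus10 (p_in NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_in - 10;
  END;
  -- Append a character so names no longer match the production values
  FUNCTION plusx (p_in VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN p_in || 'X';
  END;
END remap;
/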
REUSE_DUMPFILES
Specifies whether to overwrite a preexisting dump file. Normally, Data Pump Export will return an error if you specify a dump file name that already exists. The REUSE_DUMPFILES parameter allows you to override that behavior and reuse a dump file name.
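For instance, a sketch of an export that overwrites an existing dump file (directory object and file name assumed):
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees REUSE_DUMPFILES=YES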
Kernel Parameters SHMMAX and SHMALL in REDHAT
SHMMAX and SHMALL are two key shared memory parameters that directly impact the way Oracle creates the SGA. Shared memory is part of the Unix IPC (Inter-Process Communication) system maintained by the kernel, where multiple processes share a single chunk of memory to communicate with each other.
While trying to create the SGA during database startup, Oracle chooses one of three memory allocation models:
- one-segment, or
- contiguous multi-segment, or
- non-contiguous multi-segment.
Which model is adopted depends on the size of the SGA and the values defined for the shared memory parameters in the Linux kernel, most importantly SHMMAX.
So what are these parameters -
SHMMAX and SHMALL?
SHMMAX is
the maximum size of a single shared
memory segment set in “bytes”.
sumit-db:~ # cat /proc/sys/kernel/shmmax
536870912
SHMALL is the total size of shared memory segments system-wide, set in "pages".
sumit-db:~ # cat /proc/sys/kernel/shmall
1415577
The key thing to note here is that the value of SHMMAX is set in "bytes" while the value of SHMALL is set in "pages".
What’s the optimal value for SHMALL?
As SHMALL is the system-wide total size of shared memory segments, it should always be less than the physical memory on the system, but it must be large enough to cover the combined SGAs of all the Oracle databases on the server. Once the sum of the SGAs hits this limit, i.e. the value of shmall, any attempt to start a new database (or even an existing database with a resized SGA) will result in an "out of memory" error (below), because there are no more shared memory pages that Linux can allocate for the SGA.
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device.
This can happen for two reasons: either the value of shmall is not set to an optimal value, or you have genuinely reached that threshold on this server.
Setting SHMALL to an optimal value is straightforward. All you need to know is how much physical memory (excluding cache/swap) you have on the system, how much of it should be set aside for the Linux kernel, and how much should be dedicated to the Oracle databases.
For example, say the physical memory of a system is 6 GB, out of which you want to set aside 1 GB for the Linux kernel and OS operations and dedicate the remaining 5 GB to the Oracle databases. Here's how you get the value for SHMALL.
Convert
this 5GB to bytes and divide by page size. Remember SHMALL should be set in
“pages” not “bytes”.
So here
goes the calculation.
First determine the page size. In my case it's 4096, which is the recommended default in most cases and can be kept as is.
sumit-db:~ # getconf PAGE_SIZE
4096
Convert 5 GB into bytes and divide by the page size; I used bc on Linux to do the math.
sumit-db:~ # echo "( 5 * 1024 * 1024 * 1024 ) / 4096 " | bc -l
1310720.00000000000000000000
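To apply the computed value at runtime (this change does not survive a reboot), you could, for example, write it back to the same /proc entry shown above:
sumit-db:~ # echo 1310720 > /proc/sys/kernel/shmall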
What’s the optimal value for SHMMAX?
Oracle uses one of the three memory allocation models to create the SGA during database startup, and it tries them in the following sequence: first Oracle attempts the one-segment model; if that fails, it proceeds with the contiguous multi-segment model; and if that fails too, it falls back to the non-contiguous multi-segment model.
During startup, Oracle looks at the shmmax parameter and compares it with the initialization parameter *.sga_target. If shmmax > *.sga_target, Oracle goes with the one-segment approach, where the entire SGA is created within a single shared memory segment.
The one-segment attempt fails if the SGA size (*.sga_target) is greater than shmmax, in which case Oracle proceeds with the second option, the contiguous multi-segment model. Contiguous allocations, as the name indicates, are a set of shared memory segments that are contiguous in memory; if Oracle can find such a set, the entire SGA is created to fit within it.
If Oracle cannot find a set of contiguous allocations, the last of the three options is chosen: non-contiguous multi-segment allocation, in which Oracle has to grab free memory segments fragmented between used spaces.
So if you know the maximum SGA size of any database on the server stays below 1 GB, you can set shmmax to 1 GB. If your SGA sizes for different databases are spread between 512 MB and 2 GB, set shmmax to 2 GB, and so on.
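To make these settings persistent across reboots, they are typically placed in /etc/sysctl.conf and reloaded with sysctl -p; a sketch using the illustrative figures above (1310720 pages for shmall, 2 GB for shmmax):
# entries in /etc/sysctl.conf
kernel.shmall = 1310720
kernel.shmmax = 2147483648
sumit-db:~ # sysctl -p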
Tuesday, 15 January 2013
Importance of orainstRoot.sh
[oracle@DBBACKUPSERVER oraInventory]$ cat orainstRoot.sh
#!/bin/sh
if [ -d "/etc" ]; then
    chmod 755 /etc;
fi
if [ -f "/home/oracle/10.2.0/product/oraInventory/oraInst.loc" ]; then
    cp /home/oracle/10.2.0/product/oraInventory/oraInst.loc /etc/oraInst.loc;
else
    INVPTR=/etc/oraInst.loc
    INVLOC=/home/oracle/10.2.0/product/oraInventory
    GRP=oinstall
    PTRDIR="`dirname $INVPTR`";
    # Create the software inventory location pointer file
    if [ ! -d "$PTRDIR" ]; then
        mkdir -p $PTRDIR;
    fi
    echo "Creating the Oracle inventory pointer file ($INVPTR)";
    echo inventory_loc=$INVLOC > $INVPTR
    echo inst_group=$GRP >> $INVPTR
    chmod 644 $INVPTR
    # Create the inventory directory if it doesn't exist
    if [ ! -d "$INVLOC" ]; then
        echo "Creating the Oracle inventory directory ($INVLOC)";
        mkdir -p $INVLOC;
    fi
fi
echo "Changing permissions of /home/oracle/10.2.0/product/oraInventory to 770.";
chmod -R 770 /home/oracle/10.2.0/product/oraInventory;
if [ $? != 0 ]; then
    echo "OUI-35086:WARNING: chmod of /home/oracle/10.2.0/product/oraInventory to 770 failed!";
fi
echo "Changing groupname of /home/oracle/10.2.0/product/oraInventory to oinstall.";
chgrp oinstall /home/oracle/10.2.0/product/oraInventory;
if [ $? != 0 ]; then
    echo "OUI-10057:WARNING: chgrp of /home/oracle/10.2.0/product/oraInventory to oinstall failed!";
fi
echo "The execution of the script is complete"