Tuesday 31 March 2015

Increase swap size in Oracle VM Linux

Determine the size of the new swap file in megabytes and multiply by 1024 to determine the number of 1024-byte blocks. For example, the block count for a 2 GB swap file is 2*1024*1024 = 2097152.
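
For example, the count can be computed in the shell (a convenience sketch, not part of the original procedure); the result is the value used for count in the dd command below:

SIZE_GB=2
echo $((SIZE_GB * 1024 * 1024))    # 2097152, the dd count for a 2 GB swap file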

Check the swap size:

[root@node2 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          2006       1940         66          0          8        650
-/+ buffers/cache:       1282        724
Swap:         1799        173       1626


At a shell prompt as root, type the following command with count equal to the desired block count:

[root@node2 ~]# dd if=/dev/zero of=/swapfile bs=1024 count=2097152
2097152+0 records in
2097152+0 records out
2147483648 bytes (2.1 GB) copied, 154.49 s, 13.9 MB/s
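
It is also commonly recommended (not shown in the original transcript) to restrict the swap file's permissions so that only root can read it:

[root@node2 ~]# chmod 600 /swapfile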

Set up the swap file with the command:

[root@node2 ~]# mkswap /swapfile
mkswap: /swapfile: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=be934b65-5665-440d-b761-d8b4674d9161

To enable the swap file immediately but not automatically at boot time:

[root@node2 ~]# swapon /swapfile

[root@node2 ~]# free -m 
             total       used       free     shared    buffers     cached
Mem:          2006       1936         70          0          9        641
-/+ buffers/cache:       1285        721
Swap:         3847        172       3675
[root@node2 ~]# 

To enable it at boot time, edit /etc/fstab to include:

/swapfile               swap                    swap    defaults        0 0

The next time the system boots, it enables the new swap file.

[root@node2 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Mon Dec 15 18:13:22 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_node1-lv_root /                       ext4    defaults        1 1
UUID=aa1f9eb9-7a79-43fc-80e0-fe2b62650201 /boot                   ext4    defaults        1 2
/dev/mapper/vg_node1-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/swapfile               swap                    swap    defaults        0 0

After adding the new swap file and enabling it, verify it is enabled by viewing the output of the command cat /proc/swaps or free.

[root@node2 ~]# cat /proc/swaps
Filename                                Type            Size     Used    Priority
/dev/dm-1                               partition       1843196  173280  -1
/swapfile                               file            2097148  0       -2
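
To back the change out later (a standard teardown, not part of the original post): disable the swap file, remove its /etc/fstab entry, and delete it:

[root@node2 ~]# swapoff /swapfile
[root@node2 ~]# rm /swapfile      # after removing the /swapfile line from /etc/fstab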

Monday 30 March 2015

How to Debug CVU / Collect CVU Trace Generated by RUNCLUVFY.SH

For 11gR2/12c:

Set the environment variable CV_TRACELOC to a directory that is writable by the grid user; traces will be generated there once runcluvfy.sh starts. For example, as the grid user:


rm -rf /tmp/cvutrace
mkdir /tmp/cvutrace
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=1
<STAGE_AREA>/runcluvfy.sh stage -pre crsinst -n <node1>,<node2> -verbose


Note:

A. STAGE_AREA refers to the location where Oracle Clusterware is unzipped.
B. Replace the "runcluvfy.sh stage -pre crsinst -n <node1>,<node2> -verbose" command above if a different stage or component needs to be traced, e.g. "./runcluvfy.sh comp ocr -verbose"
C. For 12.1.0.2, the following can be set for additional tracing from exectask command:

export EXECTASK_TRACE=true

The trace will be in <TMP>/CVU_<version>_<user>/exectask.trc, e.g. /tmp/CVU_12.1.0.2.0_grid/exectask.trc



For 10gR2, 11gR1 or 11gR2:

1. As the crs/grid user, back up runcluvfy.sh. For 10gR2 it is located at <STAGE_AREA>/cluvfy/runcluvfy.sh; for 11gR1 and 11gR2, at <STAGE_AREA>/runcluvfy.sh


cd <STAGE_AREA>
cp runcluvfy.sh runcluvfy.debug

2. Locate the following lines in runcluvfy.debug:


# Cleanup the home for cluster verification software
$RM -rf $CV_HOME

Comment out the remove command so that runtime files, including traces, are not removed once CVU finishes:


# Cleanup the home for cluster verification software
# $RM -rf $CV_HOME

3. As the crs/grid user, set the environment variable CV_HOME to any location that is writable by the crs/grid user and has 400 MB of free space:


mkdir /tmp/cvdebug
CV_HOME=/tmp/cvdebug
export CV_HOME

This step is optional; if CV_HOME is unset, CVU files will be generated in /tmp.

4. As the crs/grid user, execute runcluvfy.debug:


export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=1
cd <STAGE_AREA>
./runcluvfy.debug stage -pre crsinst -n <node1>,<node2> -verbose

Note:

A. SRVM_TRACE_LEVEL is effective for 11gR2 only.
B. Replace the "stage -pre crsinst -n <node1>,<node2> -verbose" arguments above if a different stage or component needs to be traced, e.g. "./runcluvfy.debug comp ocr -verbose"


5. Regardless of whether the above command finishes, CVU trace should be generated in:

10gR2: $CV_HOME/<pid>/cv/log
11gR1: $CV_HOME/bootstrap/cv/log
11gR2: $CV_HOME/bootstrap/cv/log

If CV_HOME is unset, the trace will be in /tmp/<pid>/cv/log or /tmp/bootstrap/cv/log, depending on the CVU version.
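
To quickly spot the newest trace files after a run (path per the 11gR2 layout above; adjust for 10gR2):

ls -ltr $CV_HOME/bootstrap/cv/log | tail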

6. Clean up the temporary files generated by the runcluvfy.debug run above:


rm -rf $CV_HOME/bootstrap

Changing the First Node for Install and Upgrade


If the first node becomes inaccessible, you can force another node to be the first node for installation or upgrade. During installation, if root.sh fails to complete on the first node, run the following command on another node using the -force option:

root.sh -force -first

For upgrade, run the following command:

rootupgrade.sh -force -first

Completing an Oracle Clusterware Upgrade when Nodes Become Unreachable

If some nodes become unreachable in the middle of an upgrade, then you cannot complete the upgrade, because the upgrade script (rootupgrade.sh) did not run on the unreachable nodes. Because the upgrade is incomplete, Oracle Clusterware remains in the previous release. You can confirm that the upgrade is incomplete by entering the command crsctl query crs activeversion.

To resolve this problem, run the rootupgrade command with the -force flag on any of the nodes where the rootupgrade.sh script has already completed as follows:

Grid_home/rootupgrade.sh -force

For example:

# /u01/app/12.1.0/grid/rootupgrade.sh -force

This command forces the upgrade to complete. Verify that the upgrade has completed by using the command crsctl query crs activeversion. The active release should be the upgrade release.
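
For example, as root on any upgraded node (the reported active version should match the new release):

# $GRID_HOME/bin/crsctl query crs activeversion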

The force cluster upgrade has the following limitations:

All active nodes must be upgraded to the newer release.

All inactive nodes (accessible or inaccessible) may be either upgraded or not upgraded.

For inaccessible nodes, after patch set upgrades, you can delete the node from the cluster. If the node becomes accessible later, and the patch version upgrade path is supported, then you can upgrade it to the new patch version.

If the cluster was previously forcibly upgraded, then ensure that all inaccessible nodes have been deleted from the cluster or joined to the cluster before starting the upgrade.

Upgrading Inaccessible Nodes After Forcing an Upgrade


Starting with Oracle Grid Infrastructure 12c, after you complete a force cluster upgrade, you can join inaccessible nodes to the cluster as an alternative to deleting the nodes, which was required in earlier releases. To use this option, Oracle Grid Infrastructure 12c Release 1 (12.1) software must already be installed on the nodes.

To complete the upgrade of nodes that were inaccessible or unreachable:

1. Log in as the Grid user on the node that is to be joined to the cluster.

2. Change directory to the /crs/install directory in the Oracle Grid Infrastructure 12c Release 1 (12.1) Grid home. For example:

$ cd /u01/12.1.0/grid/crs/install

3. Run the following command, where -existingnode is the name of the option and upgraded_node is any node that was successfully upgraded and is currently part of the cluster:

$ rootupgrade.sh -join -existingnode upgraded_node

Note:
The -join operation is not supported for Oracle Clusterware releases earlier than 11.2.0.1.0. In such cases, delete the node and add it to the clusterware using the addNode command.


Friday 27 March 2015

Add a disk to a diskgroup

[root@oracleDBlinux.domain.com ~]# which oracleasm
/usr/sbin/oracleasm
[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm listdisks
DATA001
DATA002
DATA003


[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm listdisks | xargs oracleasm querydisk -p |grep /dev/emcpwr
/dev/emcpwra1: LABEL="DATA001" TYPE="oracleasm"
/dev/emcpwrb1: LABEL="DATA002" TYPE="oracleasm"
/dev/emcpwrc1: LABEL="DATA003" TYPE="oracleasm"

[root@oracleDBlinux.domain.com ~]# lvmdiskscan |grep /dev/emcpwr
  /dev/emcpwra1 [     802.48 GiB]
  /dev/emcpwrb1 [     802.48 GiB]
  /dev/emcpwrc1 [     802.48 GiB]
  /dev/emcpwrd1 [     802.48 GiB]   ----> not yet added to any ASM disk


[root@oracleDBlinux.domain.com ~]#  cat /proc/partitions |grep emcpwr
 120       32  841466880 emcpwrc
 120       33  841460571 emcpwrc1
 120        0  841466880 emcpwra
 120        1  841460571 emcpwra1
 120       16  841466880 emcpwrb
 120       17  841460571 emcpwrb1
 120       48  841466880 emcpwrd
 120       49  841460571 emcpwrd1



[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm createdisk DATA004 /dev/emcpwrd1
Device "/dev/emcpwrd1" is already labeled for ASM disk ""

Solution:

[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm renamedisk /dev/emcpwrd1 DATA004
WARNING: Changing the label of an disk marked for ASM is a very dangerous
         operation.  If this is really what you mean to do, you must
         ensure that all Oracle and ASM instances have ceased using
         this disk.  Otherwise, you may LOSE DATA.

If you are really sure you wish to change the label and are sure that
all of your Oracle and ASM instances have ceased using the disk,
rerun this command with the '-f' option.

[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm renamedisk -f /dev/emcpwrd1 DATA004
Writing disk header: done
Instantiating disk "DATA004": done
[root@oracleDBlinux.domain.com ~]#

[root@oracleDBlinux.domain.com ~]# /usr/sbin/oracleasm listdisks
DATA001
DATA002
DATA003
DATA004

sqlplus / as sysasm 

SQL> ALTER DISKGROUP FRA ADD DISK '/dev/emcpwrd1' REBALANCE POWER 2 WAIT ;
ALTER DISKGROUP FRA ADD DISK '/dev/emcpwrd1' REBALANCE POWER 2 WAIT
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/emcpwrd1' matches no disks
ORA-15025: could not open disk "/dev/emcpwrd1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 42
Additional information: 1
Additional information: 1

Fix the device permissions so the grid user can open the disk:

chmod 660 /dev/emcpwrd1
chown grid:oinstall /dev/emcpwrd1
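
Note that these permissions will not survive a reboot. A udev rule can make them persistent; a minimal sketch, assuming the device name above (the rules file name is illustrative, and the match key must fit your multipath naming):

# /etc/udev/rules.d/99-oracle-asmdevices.rules (hypothetical file name)
KERNEL=="emcpwrd1", OWNER="grid", GROUP="oinstall", MODE="0660"

Reload the rules with: udevadm control --reload-rules && udevadm trigger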


SQL> ALTER DISKGROUP FRA ADD DISK '/dev/emcpwrd1' REBALANCE POWER 2 WAIT ;
ALTER DISKGROUP FRA ADD DISK '/dev/emcpwrd1' REBALANCE POWER 2 WAIT
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/emcpwrd1' matches no disks
ORA-15014: path '/dev/emcpwrd1' is not in the discovery set


SELECT NVL(a.name, '[CANDIDATE]') disk_group_name , b.path disk_file_path , b.name disk_file_name
    FROM v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
    ORDER BY a.name;

DISK_GROUP_NAME                DISK_FILE_PATH              DISK_FILE_NAME
------------------------------ --------------------------- --------------
DATA                           ORCL:DATA001                DATA001
DATA                           ORCL:DATA002                DATA002
FRA                            ORCL:DATA003                DATA003
[CANDIDATE]                    ORCL:DATA004


SQL> create pfile='/home/grid/pfile.ora' from spfile ;

File created.

SQL> ! cat /home/grid/pfile.ora
+ASM.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
*.asm_diskgroups='DATA','FRA'
*.asm_diskstring='ORCL:DATA003,ORCL:DATA002,ORCL:DATA001'
*.asm_power_limit=1
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'


The following attempts to extend asm_diskstring will not work here:

alter system set asm_diskstring='ORCL:DATA003,ORCL:DATA002,ORCL:DATA001,ORCL:DATA004' sid='*' ;   -- fails with ORA-15014 (see below)

alter system set asm_diskstring='/dev/emcpwr*' sid='*' ;   -- also fails


SQL> alter system set asm_diskstring='ORCL:DATA003,ORCL:DATA002,ORCL:DATA001,ORCL:DATA004' sid='*'  ;
alter system set asm_diskstring='ORCL:DATA003,ORCL:DATA002,ORCL:DATA001,ORCL:DATA004' sid='*'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path 'ORCL:DATA001' is not in the discovery set

Adding the disk by its ASMLib label, however, succeeds without changing asm_diskstring:
SQL> ALTER DISKGROUP FRA ADD DISK 'ORCL:DATA004' REBALANCE POWER 2 WAIT ;

Diskgroup altered.

SQL> select name , total_mb/1024 , free_mb/1024 from v$asm_diskgroup ;

NAME                           TOTAL_MB/1024 FREE_MB/1024
------------------------------ ------------- ------------
FRA                               1604.95703   1585.57031
DATA                              1604.95703   577.573242

SQL>
SQL> SELECT NVL(a.name, '[CANDIDATE]') disk_group_name , b.path disk_file_path , b.name disk_file_name
  2  FROM v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
  3  ORDER BY a.name;

Disk Group Name Path              File Name
--------------- ----------------- --------------------
DATA            ORCL:DATA001      DATA001
                ORCL:DATA002      DATA002

FRA             ORCL:DATA003      DATA003
                ORCL:DATA004      DATA004


ASM Disk Report

   SET LINESIZE  145
   SET PAGESIZE  9999
   SET VERIFY    off

   COLUMN disk_group_name        FORMAT a15           HEAD 'Disk Group Name'
   COLUMN disk_file_path         FORMAT a17           HEAD 'Path'
   COLUMN disk_file_name         FORMAT a20           HEAD 'File Name'
   COLUMN disk_file_fail_group   FORMAT a20           HEAD 'Fail Group'
   COLUMN total_mb               FORMAT 999,999,999   HEAD 'File Size (MB)'
   COLUMN used_mb                FORMAT 999,999,999   HEAD 'Used Size (MB)'
   COLUMN pct_used               FORMAT 999.99        HEAD 'Pct. Used'

      break on report on disk_group_name skip 1

   compute sum label ""              of total_mb used_mb on disk_group_name
   compute sum label "Grand Total: " of total_mb used_mb on report

   SELECT
        NVL(a.name, '[CANDIDATE]')                       disk_group_name
      , b.path                                           disk_file_path
      , b.name                                           disk_file_name
      , b.failgroup                                      disk_file_fail_group
      , b.total_mb                                       total_mb
      , (b.total_mb - b.free_mb)                         used_mb
      , b.os_mb                                          os_mb
      , ROUND((1 - (b.free_mb / b.total_mb)) * 100, 2)   pct_used
   FROM
       v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
   ORDER BY
       a.name
   /

Disk Group Name Path              File Name            Fail Group           File Size (MB) Used Size (MB)      OS_MB Pct. Used
--------------- ----------------- -------------------- -------------------- -------------- -------------- ---------- ---------
DATA            ORCL:DATA001      DATA001              DATA001                     821,738        526,024     821738     64.01
                ORCL:DATA002      DATA002              DATA002                     821,738        526,017     821738     64.01
***************                                                             -------------- --------------
                                                                                 1,643,476      1,052,041

FRA             ORCL:DATA003      DATA003              DATA003                     821,738         68,667     821738      8.36
                ORCL:DATA004      DATA004              DATA004                     821,738         68,591     821738      8.35
***************                                                             -------------- --------------
                                                                                 1,643,476        137,258

                                                                            -------------- --------------
Grand Total:                                                                     3,286,952      1,189,299

Thursday 26 March 2015

Root.sh Fails with "Adding Clusterware entries to upstart Failed to start oracle-ohasd, error: Inappropriate ioctl for device"

Error: During a Grid Infrastructure 11.2.0.4 installation on a Linux x86-64 EL6 (or OL6) cluster, root.sh fails with error:

...
Adding Clusterware entries to upstart
Failed to start oracle-ohasd,
error: Inappropriate ioctl for device
INT at /data/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1446.

$GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<node name>.log shows:

2013-12-03 12:42:46: Glob file list =
2013-12-03 12:42:51: Executing cmd: /sbin/restorecon -iF /etc/init/oracle-ohasd.conf
2013-12-03 12:42:51: Executing cmd: /sbin/initctl start oracle-ohasd
2013-12-03 12:42:51: Command output:
>  initctl: Unknown job: oracle-ohasd
>End Command output
2013-12-03 12:42:51: Failed to start oracle-ohasd, error: Inappropriate ioctl for device
2013-12-03 12:42:51: Failed to start service 'ohasd'

Cause:

The OS upstart service fails to reload the oracle-ohasd entry automatically. Upstart (a feature from EL6) is an event-based replacement for the /sbin/init daemon that handles the starting of tasks and services.
The "initctl list" command shows that no entry exists for "oracle-ohasd".

Solution:

1. Force init to reload its configuration via the initctl reload-configuration command:

# initctl reload-configuration
# initctl start oracle-ohasd
# initctl list | grep oracle
oracle-ohasd start/running, process 6855

2. Deconfigure the clusterware:

# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose

(for Oracle Restart, use roothas.pl instead)

3. Rerun root script:

# $GRID_HOME/root.sh


Now root.sh finishes successfully.

Tuesday 24 March 2015

Extract Archive logs from RMAN Backup

DB11G:oracle> rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Tue Mar 24 01:38:40 2015

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: DB11G (DBID=2907657564)

RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 177133  UNTIL SEQUENCE 177293 ;

Starting restore at Mar 24 2015 01:38:49
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1401 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=1766 devtype=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: sid=1595 devtype=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: sid=1286 devtype=DISK

channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_2: starting archive log restore to default destination
channel ORA_DISK_3: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177175
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177236
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177133
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177176
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177237
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177134
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177177
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177238
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177135
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177178
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177239
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177136
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177179
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177240
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177180
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177137
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177181
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177241
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177138
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177182
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177242
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177139
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177183
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177243
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177140
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177184
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177244
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177141
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177185
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177245
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177142
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177186
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177246
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177143
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177187
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177247
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177144
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177188
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177248
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177145
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177189
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177249
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177190
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177250
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177146
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177191
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177251
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177147
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177192
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177252
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177148
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177193
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177253
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177149
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177194
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177254
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177150
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177195
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177255
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177151
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177196
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177256
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177152
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177197
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177257
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177153
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177198
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177258
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177154
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177199
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177259
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177155
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177200
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177260
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177156
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177201
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177261
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177157
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177202
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177262
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177158
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177203
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177263
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177159
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177204
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177264
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177160
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177205
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177265
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177161
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177206
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177266
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177162
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177207
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177267
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177163
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177208
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177268
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177164
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177209
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177269
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177165
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177210
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177270
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177166
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177211
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177271
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177167
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177212
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177272
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177168
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177213
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177273
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177169
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177214
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177274
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177170
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177215
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177275
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177171
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177216
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177276
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177172
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177217
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177277
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177173
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177218
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177278
channel ORA_DISK_3: restoring archive log
archive log thread=1 sequence=177174
channel ORA_DISK_3: reading from backup piece /orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7qq2i5h6_1_1
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177219
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177279
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177220
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177280
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177221
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177281
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177222
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177282
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177223
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177283
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177224
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177284
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177225
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177285
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177226
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177286
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177227
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177287
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177228
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177288
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177229
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177289
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177230
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177290
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177231
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177291
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177232
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177292
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177233
channel ORA_DISK_2: restoring archive log
archive log thread=1 sequence=177293
channel ORA_DISK_2: reading from backup piece /orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7sq2i5hr_1_1
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177234
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=177235
channel ORA_DISK_1: reading from backup piece /orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7lq2i47u_1_1
channel ORA_DISK_2: restored backup piece 1
piece handle=/orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7sq2i5hr_1_1 tag=TAG20150323T130911
channel ORA_DISK_2: restore complete, elapsed time: 00:17:47
channel ORA_DISK_3: restored backup piece 1
piece handle=/orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7qq2i5h6_1_1 tag=TAG20150323T130911
channel ORA_DISK_3: restore complete, elapsed time: 00:18:22
channel ORA_DISK_1: restored backup piece 1
piece handle=/orabackup/DB11G.sumit/RMAN_BACKUP/archivelog_backup_DB11G/DB11G_arch_bkp_23Mar7lq2i47u_1_1 tag=TAG20150323T130911
channel ORA_DISK_1: restore complete, elapsed time: 00:19:27
Finished restore at Mar 24 2015 01:58:24
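
By default the logs are restored to the configured archive destination, as above. To extract them into a different directory instead, SET ARCHIVELOG DESTINATION can be used inside a RUN block; a sketch using the same sequence range (destination path illustrative):

rman target / <<'EOF'
run {
  set archivelog destination to '/tmp/arch_restore';
  restore archivelog from sequence 177133 until sequence 177293;
}
EOF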

Monday 23 March 2015

Deinstalling Oracle EM 12C Agents

1. Check the agent home

You can find the agent home in inventory.xml, which lives under the central inventory.


On Unix/Linux, the location of the oraInventory is defined by the content of oraInst.loc, at:

– /var/opt/oracle/oraInst.loc on Solaris, HP-UX and Tru64
– /etc/oraInst.loc on Linux and AIX

cat /u01/app/oraInventory/ContentsXML/inventory.xml and look for the oracle_home value where the agent is installed. In our case the location is "/u03/agent_12c/core/12.1.0.2.0".
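
For example, the registered homes can be listed with a simple grep (inventory path as above; the home names your inventory shows will differ):

grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml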

2. cd to the perl directory and execute AgentDeinstall.pl:


cd /u03/agent_12c/core/12.1.0.2.0/perl/bin/


./perl /u03/agent_12c/core/12.1.0.2.0/sysman/install/AgentDeinstall.pl -agentHome /u03/agent_12c/core/12.1.0.2.0

Agent Oracle Home: /u03/agent_12c/core/12.1.0.2.0

NOTE: The agent base directory: /u03/agent_12c will be removed after successful deinstallation of agent home.



DetachHome Command executed:/u03/agent_12c/core/12.1.0.2.0/oui/bin/runInstaller -detachHome -force -depHomesOnly -silent ORACLE_HOME=/u03/agent_12c/core/12.1.0.2.0 -waitForCompletion -invPtrLoc /u03/agent_12c/core/12.1.0.2.0/oraInst.loc

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 137503 MB Passed

The inventory pointer is located at /u03/agent_12c/core/12.1.0.2.0/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 137503 MB Passed

The inventory pointer is located at /u03/agent_12c/core/12.1.0.2.0/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.


Deinstall Command executed:/u03/agent_12c/core/12.1.0.2.0/oui/bin/runInstaller -deinstall -silent "REMOVE_HOMES={/u03/agent_12c/core/12.1.0.2.0}" -waitForCompletion -removeAllFiles -invPtrLoc /u03/agent_12c/core/12.1.0.2.0/oraInst.loc

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 137503 MB Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-19_05-17-32PM. Please wait ...Oracle Universal Installer, Version 11.1.0.9.0 Production
Copyright (C) 1999, 2012, Oracle. All rights reserved.

Starting deinstall



Deinstall in progress (Thursday, March 19, 2015 5:18:12 PM EDT)

Configuration assistant "Agent Deinstall Assistant" succeeded
............................................................... 100% Done.

Deinstall successful


End of install phases.(Thursday, March 19, 2015 5:26:00 PM EDT)

End of deinstallations
Please check '/u01/app/oraInventory/logs/silentInstall2015-03-19_05-17-32PM.log' for more details.
Can't locate Carp.pm in @INC (@INC contains: /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0/x86_64-linux-thread-multi /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0 /u03/agent_12c/core/12.1.0.2.0/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u03/agent_12c/core/12.1.0.2.0/perl/lib/site_perl/5.10.0 /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0/x86_64-linux-thread-multi /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0/x86_64-linux-thread-multi /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0 /u03/agent_12c/core/12.1.0.2.0/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /u03/agent_12c/core/12.1.0.2.0/perl/lib/site_perl/5.10.0 /u03/agent_12c/core/12.1.0.2.0/perl/lib/site_perl .) at /u03/agent_12c/core/12.1.0.2.0/perl/lib/5.10.0/File/Path.pm line 32.

The Perl error above can be ignored.

[oracle]/u03/agent_12c/>cat /u01/app/oraInventory/logs/silentInstall2015-03-19_05-17-32PM.log
silentInstall2015-03-19_05-17-32PM.log
Starting deinstall
This deinstallation was successful
Please check '/u01/app/oraInventory/logs/silentInstall2015-03-19_05-17-32PM.log' for more details.
End of deinstallations

To change the agent port in Oracle Enterprise Manager Cloud Control 12c

The agent port is an integral part of the agent target name and of the EMD_URL property of the agent's monitored targets in the repository tables. Every port change leads to duplicate targets in the repository and needs manual correction.

Changing the agent port is one of the least frequent operations you would do. Using emctl setproperty changes the EMD_URL property value; however, the display name remains the same (the old name), while the EMD_URL shows the new port number.

Steps to change the agent port:

1. Stop the agent

$/u03/agent_12c/core/12.1.0.2.0/bin/emctl stop agent

2. Set the EMD_URL property

$/u03/agent_12c/core/12.1.0.2.0/bin/emctl setproperty agent -name EMD_URL -value https://oracleDBlinux.domain.com:3870/emd/main/
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
EMD setproperty succeeded

3. Verify the emd.properties and targets.xml files

$ grep EMD_URL  /u03/agent_12c/core/12.1.0.2.0/sysman/config/emd.properties
EMD_URL=https://oracleDBlinux.domain.com:3870/emd/main/


Verify the modified EMD_URL value in targets.xml:

$ grep -i 'oracle_emd' /u03/agent_12c/core/12.1.0.2.0/sysman/emd/targets.xml
        <Target TYPE="oracle_emd" NAME="oracleDBlinux.domain.com:3870" DISPLAY_NAME="oracleDBlinux.domain.com:3870" ON_HOST="" EMD_URL="https://oracleDBlinux.domain.com:3870/emd/main/" TIMEZONE_REGION="" IDENTIFIER="TARGET_GUID=3472BF9E47A45FGGRGD47AA90073F387"/>

targets.xml is not yet modified.



4. Start agent

$/u03/agent_12c/core/12.1.0.2.0/bin/emctl start agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Starting agent ................... started.

$/u03/agent_12c/core/12.1.0.2.0/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 12.1.0.2.0
OMS Version       : 12.1.0.2.0
Protocol Version  : 12.1.0.1.0
Agent Home        : /u03/agent_12c/agent_inst
Agent Binaries    : /u03/agent_12c/core/12.1.0.2.0/
Agent Process ID  : 8734
Parent Process ID : 8645
Agent URL         : https://oracleDBlinux.domain.com:3870/emd/main/
Repository URL    : https://oracleEM_OMS.domain.com:4902/empbs/upload
Started at        : 2013-06-21 13:49:44
Started by user   : oracle
Last Reload       : (none)
Last successful upload                       : 2013-06-21 13:49:55
Last attempted upload                        : 2013-06-21 13:49:55
Total Megabytes of XML files uploaded so far : 0
Number of XML files pending upload           : 1
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 96.32%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2013-06-21 13:49:54
Last successful heartbeat to OMS             : 2013-06-21 13:49:54
Next scheduled heartbeat to OMS              : 2013-06-21 13:50:54

---------------------------------------------------------------
Agent is Running and Ready

5. Verify the new port number in EMD_URL property

$/u03/agent_12c/core/12.1.0.2.0/bin/emctl getproperty agent -name EMD_URL
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
EMD_URL=https://oracleDBlinux.domain.com:3870/emd/main/

On the console, the agent display name will be shown with the old name, while the EMD_URL will show the new port number.

Editing the targets.xml file

Editing the targets.xml file directly is strictly not recommended: it may block the agent, impact agent functionality, leave the agent out of sync with the repository, or cause upload failures.

Tuesday 17 March 2015

Converting to cluster database and adding instances

Source DB :
=========

DB_NAME=orcl
ORACLE_HOME= /data01/app/oracle/product/11.2.0.4/db_1/
SPFILE/PFILE= /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl.ora
ORACLE_SID=ORCL
Cluster DB=FALSE
HOSTNAME=NODE1

Target DB:
========
DB_NAME=orcl
ORACLE_HOME= /data01/app/oracle/product/11.2.0.4/db_1/
SPFILE/PFILE= +DATA
cluster_database=true
orcl2.instance_number=2
orcl1.instance_number=1
Hostname : node1 , node2

Step 1: Check the cluster database parameter for the source DB and shut down the DB to make changes in the pfile:

SQL> show parameter cluster ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
cluster_interconnects                string
 
Step 2: Set the cluster_database parameter to true in the pfile, and add the other RAC-specific parameters shown at the end of the listing below:

[oracle@node1 trace]$ cat /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl.ora
*.audit_file_dest='/data01/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.compatible='11.2.0.4.0'
*.control_files='+DATA/orcl/controlfile/current.269.874539303','+FRA/orcl/controlfile/current.256.874539311'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=3670016000
*.diagnostic_dest='/data01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=839909376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
orcl1.undo_tablespace='UNDOTBS1'
orcl2.undo_tablespace='UNDOTBS2' 
*.cluster_database=true
orcl2.instance_number=2
orcl1.instance_number=1
orcl2.thread=2
orcl1.thread=1

Step 3: Start up the DB using the modified pfile and create the spfile from the pfile:

SQL> create spfile='+DATA' from pfile ; 

File created.

You can check the location of the created spfile using asmcmd:

ASMCMD> pwd
+DATA/ORCL/PARAMETERFILE
ASMCMD> ls
spfile.264.874545139
ASMCMD>

Step 4: Shut down, then create pfiles for both instances orcl1 and orcl2.
Each pfile should contain only the SPFILE entry shown below; the SPFILE path must be on shared storage, accessible from both node1 and node2:

[oracle@node1 trace]$ mv /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl.ora /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl1.ora

[oracle@node1 trace]$ cat /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl1.ora
SPFILE='+DATA/ORCL/PARAMETERFILE/spfile.264.874545139'
[oracle@node1 trace]$
[oracle@node1 trace]$ cat /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl2.ora
SPFILE='+DATA/ORCL/PARAMETERFILE/spfile.264.874545139'
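
initorcl2.ora must also exist in the dbs directory on node2; a plain copy (paths as above) takes care of it:

scp /data01/app/oracle/product/11.2.0.4/db_1/dbs/initorcl2.ora node2:/data01/app/oracle/product/11.2.0.4/db_1/dbs/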

Step 5: Start up the instance using the SPFILE:

SQL>  startup;
ORACLE instance started.

Total System Global Area  839282688 bytes
Fixed Size    2257880 bytes
Variable Size  608177192 bytes
Database Buffers  226492416 bytes
Redo Buffers    2355200 bytes
Database mounted.
Database opened.


SQL> show parameter spfile ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/orcl/parameterfile/spfile.264.874545139
SQL>

Step 6: Add logfile groups for thread 2 (orcl2) and enable the thread:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 group 5 size 50m ;

Database altered.


SQL> ALTER DATABASE ADD LOGFILE THREAD 2 group  6 size 50m;

Database altered.

SQL> alter database enable public thread 2;

Database altered.
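
The pfile above assigns UNDOTBS2 to orcl2. If that undo tablespace does not exist yet, create it before starting the second instance; a minimal sketch, sizes illustrative:

sqlplus / as sysdba <<'EOF'
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;
EOF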

Step 7: Add the database to the cluster using srvctl:

[oracle@node1 trace]$ which srvctl
/data01/app/oracle/product/11.2.0.4/db_1/bin/srvctl


[oracle@node1 trace]$ srvctl add database -d orcl -n node1 -o /data01/app/oracle/product/11.2.0.4/db_1/ -p +DATA/orcl/parameterfile/spfile.264.874545139

[oracle@node1 trace]$ srvctl config database -d orcl
Database unique name: orcl
Database name: node1
Oracle home: /data01/app/oracle/product/11.2.0.4/db_1/
Oracle user: oracle
Spfile: +DATA/orcl/parameterfile/spfile.264.874545139
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@node1 trace]$

(Note: with srvctl add database, the -n flag specifies the database name rather than a node name; the "Database name: node1" line above reflects that slip, and -n orcl would have been correct.)

Step 8: Add instances orcl1 and orcl2 for database orcl:

[oracle@node1 trace]$ srvctl add instance -d orcl -i orcl1 -n node1
[oracle@node1 trace]$ srvctl add instance -d orcl -i orcl2 -n node2

Step 9: Check the database configuration:

[oracle@node1 trace]$ srvctl config database -d orcl
Database unique name: orcl
Database name: node1
Oracle home: /data01/app/oracle/product/11.2.0.4/db_1/
Oracle user: oracle
Spfile: +DATA/orcl/parameterfile/spfile.264.874545139
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed

Step 10: Start both instances one by one using srvctl and fix any issues encountered:

[oracle@node1 trace]$ srvctl start instance -d orcl -i orcl1
[oracle@node1 trace]$ srvctl start instance -d orcl -i orcl2

Step 11: Stop, then start the entire database using srvctl:

[oracle@node1 trace]$ srvctl stop database  -d orcl
[oracle@node1 trace]$ srvctl start database  -d orcl

[oracle@node1 trace]$ srvctl status database  -d orcl
Instance orcl1 is running on node node1
Instance orcl2 is running on node node2






Tuesday 3 March 2015

SPSET to update the spfile path

If the spfile path in the GPnP profile is incorrect, you can start the ASM instance and fix it as follows:

To check the spfile location in the GPnP profile, use the command below:

[grid@node1 peer]$ /data01/app/11.2.0/grid_11204/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-
+DATA/node-cluster/asmparameterfile/registry.253.866681801


1. Create a pfile with one line in it that identifies the path to the spfile.

Create a pfile $ORACLE_HOME/dbs/pfileasm.ora with the entry:

SPFILE='+DATA/asm/asmparameterfile/asmspfile.ora'

2. Start up the instance using this pfile:

SQL> startup pfile='$ORACLE_HOME/dbs/pfileasm.ora' ;

3. After the instance is started, you can update the spfile location with the SPSET command in ASMCMD:

ASMCMD> spset +DATA/asm/asmparameterfile/asmspfile.ora
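
4. You can confirm the new location was written to the GPnP profile with the spget command; it should echo back the path just set:

ASMCMD> spget
+DATA/asm/asmparameterfile/asmspfile.ora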