
Tuesday 28 April 2015

Which RAC node is the master node?

Importance of master node in a cluster:

- The master node has the lowest node-id in the cluster. Node-ids are assigned in the order the nodes join the cluster, so normally the node that joins the cluster first becomes the master.
- The CRSd process on the master node is responsible for initiating OCR backups as per the backup policy.
- The master node is also responsible for syncing the OCR cache across the nodes.
- The CRSd process on the master node reads from and writes to the OCR on disk.
- In case of node eviction, the cluster is divided into two sub-clusters and the sub-cluster containing fewer nodes is evicted. If both sub-clusters have the same number of nodes, the sub-cluster containing the master node survives and the other sub-cluster is evicted.
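The eviction tie-break above can be illustrated with a small shell sketch. This is for illustration only (the node-ids and the helper function are made up; the real decision is made internally by CSS, never by a user script):

```shell
surviving_subcluster() {
  # $1 / $2: comma-separated node-ids of the two sub-clusters
  a_count=$(printf '%s\n' "$1" | tr ',' '\n' | wc -l | tr -d ' ')
  b_count=$(printf '%s\n' "$2" | tr ',' '\n' | wc -l | tr -d ' ')
  # the larger sub-cluster survives
  if [ "$a_count" -gt "$b_count" ]; then echo A; return; fi
  if [ "$b_count" -gt "$a_count" ]; then echo B; return; fi
  # equal size: the side holding the master (lowest node-id) survives
  a_min=$(printf '%s\n' "$1" | tr ',' '\n' | sort -n | head -1)
  b_min=$(printf '%s\n' "$2" | tr ',' '\n' | sort -n | head -1)
  if [ "$a_min" -lt "$b_min" ]; then echo A; else echo B; fi
}

surviving_subcluster "1,3" "2,4"   # prints "A": equal sizes, master (node-id 1) is in A
```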

1. Check which node performs the OCR backup:

[root@node1 ~]# /data01/app/11204/grid_11204/bin/ocrconfig  -manualbackup

node2     2015/04/28 12:05:53     /data01/app/11204/grid_11204/cdata/node-cluster/backup_20150428_120553.ocr
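The first whitespace-separated field of that backup line names the node that wrote the backup, i.e. the current master. A minimal sketch extracting it (the sample line is hard-coded from the output above; on a live cluster, pipe the real `ocrconfig -manualbackup` output instead):

```shell
# Sample line hard-coded from the output above; on a live cluster pipe the
# real "ocrconfig -manualbackup" output into awk instead of this variable.
sample='node2     2015/04/28 12:05:53     /data01/app/11204/grid_11204/cdata/node-cluster/backup_20150428_120553.ocr'

# First field is the node that performed the backup, i.e. the master node.
master=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "OCR backup performed by: $master"
```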

2. Scan crsd.log for master-change messages:

[grid@node2 crsd]$ cat $GRID_HOME/log/node2/crsd.log | grep -i "New master"  | tail -2
2015-04-28 10:34:31.625: [   CRSSE][1272964864]{2:23089:2} Master Change Event; New Master Node ID:2 This Node's ID:2
2015-04-28 10:34:46.096: [UiServer][1272964864]{2:23089:2} Master change notification has received. New master: 2
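The same check can be scripted; the sketch below parses the log-line format shown above (sample line hard-coded here, so grep the real crsd.log on a live system):

```shell
# Log-line format taken from the crsd.log output above (sample hard-coded;
# on a live system grep the real $GRID_HOME/log/<node>/crsd.log instead).
log_sample="2015-04-28 10:34:31.625: [   CRSSE][1272964864]{2:23089:2} Master Change Event; New Master Node ID:2 This Node's ID:2"

# Extract the most recently reported master node id.
master_id=$(printf '%s\n' "$log_sample" | grep -o 'New Master Node ID:[0-9]*' | tail -1 | cut -d: -f2)
echo "current master node id: $master_id"
```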

Thursday 16 April 2015

How to Deconfigure and Deinstall Grid Infrastructure

[grid@rac12cn1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.asm
               ONLINE  ONLINE       rac12cn1                 Started,STABLE
               ONLINE  ONLINE       rac12cn2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.ons
               ONLINE  ONLINE       rac12cn1                 STABLE
               ONLINE  ONLINE       rac12cn2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.rac12cn1.vip
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.rac12cn2.vip
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac12cn1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac12cn1                 STABLE
--------------------------------------------------------------------------------
[grid@rac12cn1 ~]$

Before deconfiguring a node, ensure it is not pinned:

[grid@rac12cn1 ~]$ olsnodes -s -t
rac12cn1 Active Unpinned
rac12cn2 Active Unpinned
[grid@rac12cn1 ~]$
If a node is pinned, unpin it first as the root user:
<GI_HOME>/bin/crsctl unpin css -n <racnode1>
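A small sketch that finds pinned nodes in `olsnodes -s -t` output and prints the unpin command for each. The sample output is hard-coded, with one node artificially pinned for illustration; run the real olsnodes on a live cluster:

```shell
# Sample "olsnodes -s -t" output, hard-coded with one node artificially
# pinned for illustration; on a live cluster run the real olsnodes instead.
olsnodes_out='rac12cn1 Active Unpinned
rac12cn2 Active Pinned'

# Third column is the pin state; print the unpin command for each pinned node.
pinned=$(printf '%s\n' "$olsnodes_out" | awk '$3 == "Pinned" {print $1}')
for n in $pinned; do
  echo "as root: <GI_HOME>/bin/crsctl unpin css -n $n"
done
```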

If OCR or Voting Disks are on ASM and there is user data in OCR/Voting Disk ASM diskgroup:
If GI version is 11.2.0.3 AND fix for bug 13058611 and bug 13001955 has been applied, or GI version is 11.2.0.3.2 GI PSU (includes both fixes) or higher:
On all remote nodes, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose



[root@rac12cn1 ~]# /data01/app/12.1.0.1.0/grid_121010/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /data01/app/12.1.0.1.0/grid_121010/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rac12cn1
VIP Name: rac12cn1-vip.localdomain
VIP IPv4 Address: 192.168.56.81
VIP IPv6 Address:
VIP exists: network number 1, hosting node rac12cn2
VIP Name: rac12cn2-vip.localdomain
VIP IPv4 Address: 192.168.56.82
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac12cn1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac12cn1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'rac12cn1'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn1'
CRS-2677: Stop of 'ora.asm' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac12cn1'
CRS-2677: Stop of 'ora.net1.network' on 'rac12cn1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac12cn1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12cn1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac12cn1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac12cn1'
CRS-2677: Stop of 'ora.storage' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn1'
CRS-2677: Stop of 'ora.asm' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12cn1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12cn1'
CRS-2677: Stop of 'ora.cssd' on 'rac12cn1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12cn1'
CRS-2677: Stop of 'ora.gipcd' on 'rac12cn1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cn1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2015/04/16 11:43:20 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node


[root@rac12cn1 ~]# /data01/app/12.1.0.1.0/grid_121010/bin/crsctl status res -t
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Status failed, or completed with errors.
[root@rac12cn1 ~]#
[root@rac12cn1 ~]#
[root@rac12cn1 ~]# ssh rac12cn2
root@rac12cn2's password:
Last login: Thu Dec 25 08:21:03 2014 from rac12cn1.localdomain
[root@rac12cn2 ~]#
[root@rac12cn2 ~]#
[root@rac12cn2 ~]#
[root@rac12cn2 ~]# /data01/app/12.1.0.1.0/grid_121010/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details    
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.asm
               ONLINE  ONLINE       rac12cn2                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rac12cn2                 STABLE
ora.ons
               ONLINE  ONLINE       rac12cn2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.rac12cn2.vip
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac12cn2                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac12cn2                 STABLE
--------------------------------------------------------------------------------
[root@rac12cn2 ~]#


Once the above command finishes on all remote nodes, on the local node execute as root:
/data01/app/12.1.0.1.0/grid_121010/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
[root@rac12cn2 ~]# /data01/app/12.1.0.1.0/grid_121010/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
Using configuration parameter file: /data01/app/12.1.0.1.0/grid_121010/crs/install/crsconfig_params
2015/04/16 11:46:01 CLSRSC-332: CRS resources for listeners are still configured

OC4J failed to stop
PRCC-1016 : oc4j was already stopped
PRCR-1005 : Resource ora.oc4j is already stopped
Network 1 exists
Subnet IPv4: 192.168.56.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rac12cn2
VIP Name: rac12cn2-vip.localdomain
VIP IPv4 Address: 192.168.56.82
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac12cn2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac12cn2'
CRS-2677: Stop of 'ora.FRA.dg' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'rac12cn2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn2'
CRS-2677: Stop of 'ora.asm' on 'rac12cn2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac12cn2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn2'
CRS-2677: Stop of 'ora.storage' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12cn2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12cn2'
CRS-2677: Stop of 'ora.cssd' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12cn2'
CRS-2677: Stop of 'ora.gipcd' on 'rac12cn2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cn2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cn2'
CRS-2676: Start of 'ora.evmd' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cn2'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cn2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac12cn2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cn2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cn2'
CRS-2676: Start of 'ora.diskmon' on 'rac12cn2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac12cn2' succeeded
CRS-4611: Successful deletion of voting disk +OCR_VOTE.
ASM de-configuration trace file location: /tmp/asmcadc_clean2015-04-16_11-54-32-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2015-04-16_11-54-32-AM.log for details.

2015/04/16 11:59:19 CLSRSC-170: Failed to deconfigure Oracle ASM (error code 0) -- Can be ignored

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cn2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12cn2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac12cn2'
CRS-2677: Stop of 'ora.ctssd' on 'rac12cn2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12cn2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12cn2'
CRS-2677: Stop of 'ora.cssd' on 'rac12cn2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12cn2'
CRS-2677: Stop of 'ora.gipcd' on 'rac12cn2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cn2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2015/04/16 12:00:16 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node


Grid Infrastructure Deinstall

As the grid user, execute:

$ <$GRID_HOME>/deinstall/deinstall

If there is any error, deconfigure the failed GI with the steps in Section A - C, and deinstall manually with note 1364419.1.

export ORACLE_HOME=<clusterware-home>

## detach ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME

## confirm $ORACLE_HOME is removed from central inventory:
$ORACLE_HOME/OPatch/opatch lsinventory -all

## remove files in ORACLE_HOME manually on all nodes
/bin/rm -rf $ORACLE_HOME               ## if the grid user cannot remove all files, switch to the root user

unset ORACLE_HOME
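The three manual steps can be reviewed as a dry run before executing anything destructive. A sketch (the home path is a placeholder taken from this post; each command is only echoed, nothing is run):

```shell
# Placeholder path from this post; set to your actual Grid home.
ORACLE_HOME=/data01/app/12.1.0.1.0/grid_121010

# The three manual deinstall steps, echoed only; nothing is executed.
steps="$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
$ORACLE_HOME/OPatch/opatch lsinventory -all
/bin/rm -rf $ORACLE_HOME"

printf '%s\n' "$steps" | while IFS= read -r s; do
  echo "would run: $s"
done
```

Drop the `echo "would run: "` prefix (and run the `rm -rf` step as root if needed) only after reviewing the echoed commands.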
If it fails for any reason, execute the following as the clusterware user on all nodes:

export ORACLE_HOME=<clusterware-home>

## detach ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME

## confirm $ORACLE_HOME is removed from central inventory:
$ORACLE_HOME/OPatch/opatch lsinventory -all

## remove files in ORACLE_HOME manually
/bin/rm -rf $ORACLE_HOME               ## if the grid user cannot remove all files, switch to the root user

unset ORACLE_HOME

Reference: How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

How to Deinstall Old Clusterware Home Once Upgrade to Newer Version is Complete

To remove the old home, execute the following as the clusterware user on any node:


## please replace $OLD_HOME with the path of pre-upgrade clusterware home
export ORACLE_HOME=$OLD_HOME

## detach OLD_HOME
$OLD_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$OLD_HOME

## confirm $OLD_HOME is removed from central inventory:
$OLD_HOME/OPatch/opatch lsinventory -all   

## remove files in OLD_HOME manually:
/bin/rm -rf $OLD_HOME

unset ORACLE_HOME

If it fails for any reason, execute the following as the clusterware user on all nodes:

export ORACLE_HOME=$OLD_HOME

## detach OLD_HOME
$OLD_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$OLD_HOME

## confirm $OLD_HOME is removed from central inventory:
$OLD_HOME/OPatch/opatch lsinventory -all   

## remove files in OLD_HOME manually:
/bin/rm -rf $OLD_HOME


unset ORACLE_HOME

Friday 10 April 2015

Check logical corruption details

set lines 200 pages 10000
col segment_name format a30
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#, greatest(e.block_id, c.block#) corr_start_block#
, least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#, least(e.block_id+e.blocks-1, c.block#+c.blocks-1)- greatest(e.block_id, c.block#) + 1 blocks_corrupted, null description
FROM dba_extents e, v$database_block_corruption c
WHERE e.file_id = c.file#
AND e.block_id <= c.block# + c.blocks - 1
AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#, header_block corr_start_block#, header_block corr_end_block#
, 1 blocks_corrupted, 'Segment Header' description FROM dba_segments s, v$database_block_corruption c
WHERE s.header_file = c.file#
AND s.header_block between c.block# and c.block# + c.blocks - 1
UNION
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
, greatest(f.block_id, c.block#) corr_start_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
, least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
- greatest(f.block_id, c.block#) + 1 blocks_corrupted
, 'Free Block' description
FROM dba_free_space f, v$database_block_corruption c
WHERE f.file_id = c.file#
AND f.block_id <= c.block# + c.blocks - 1
AND f.block_id + f.blocks - 1 >= c.block#
ORDER BY file#, corr_start_block#;

Wednesday 8 April 2015

Grid Infrastructure 11.2.0.4 to 12.1.0.2.0 Upgrade for 2 Node RAC

STEP 1:  Back up the Oracle software before the upgrade and check the current cluster details

- Back up your GRID and DB binaries.

Before starting the upgrade, check and spool the existing status of GRID and RDBMS:

crsctl stat res -t | tee /tmp/crsctl_bef_patch.txt   <= Is anything other than gsd OFFLINE?
crsctl query crs activeversion | tee /tmp/crsversion_bef_patch.txt
crsctl query crs softwareversion
crsctl stat res -p | tee /tmp/crs_stat_p_bef_patch.txt
crsctl query css votedisk | tee /tmp/qry_css_bef_patch.txt
ocrcheck | tee /tmp/ocrchk_bef_patch.txt
crsctl check cluster -all
srvctl status database -d orcl
srvctl config database -d orcl

Create Directory Structure for 12c Grid infrastructure

11g details:

ORACLE_BASE=/data01/app/grid
ORACLE_HOME=/data01/app/11.2.0/grid_11204

New ORACLE_HOME for Grid 12c on all cluster nodes:

ORACLE_BASE=/data01/app/grid
ORACLE_HOME=/data01/app/12C/grid_121020

cd /data01/app/
mkdir -p /data01/app/12C/grid_121020
chown -R grid:oinstall 12C/
chmod -R 775 12C/

About Oracle Grid Infrastructure and Oracle ASM Upgrade and Downgrade

  • You can upgrade in rolling or non-rolling mode; we will follow rolling mode. A rolling upgrade involves upgrading individual nodes without stopping Oracle Grid Infrastructure on the other nodes in the cluster.
  • All upgrades are out-of-place upgrades, meaning that the software binaries are placed in a different Grid home from the Grid home used for the prior release.
  • Download the 12c Grid binaries.

Restrictions and Guidelines for Oracle Grid Infrastructure Upgrades




  • When you upgrade from Oracle Grid Infrastructure 11g or Oracle Clusterware and Oracle ASM 10g releases to Oracle Grid Infrastructure 12c Release 1 (12.1), you upgrade to a standard cluster configuration. You can enable Oracle Flex Cluster configuration after the upgrade.
  • If the Oracle Cluster Registry (OCR) and voting file locations for your current installation are on raw or block devices, then you must migrate them to Oracle ASM disk groups or shared file systems before upgrading to Oracle Grid Infrastructure 12c. How to Upgrade to 12c Grid Infrastructure if OCR or Voting File is on Raw/Block Device (Doc ID 1572925.1)
  • If you want to upgrade Oracle Grid Infrastructure releases before Oracle Grid Infrastructure 11g Release 2 (11.2), where the OCR and voting files are on raw or block devices, and you want to migrate these files to Oracle ASM rather than to a shared file system, then you must upgrade to Oracle Grid Infrastructure 11g Release 2 (11.2) before you upgrade to Oracle Grid Infrastructure 12c.
  • To upgrade existing Oracle Clusterware installations to a standard configuration Oracle Grid Infrastructure 12c cluster, your release must be greater than or equal to Oracle Clusterware 10g Release 1 (10.1.0.5), Oracle Clusterware 10g Release 2 (10.2.0.3), Oracle Grid Infrastructure 11g Release 1 (11.1.0.6), or Oracle Grid Infrastructure 11g Release 2 (11.2).
  • To upgrade existing Oracle Grid Infrastructure installations from Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) to a later release, you must apply patch 11.2.0.2.3 (11.2.0.2 PSU 3) or later.
  • Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/OPatch. If you delete the directory, then the Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "'checkdir' error: cannot create Grid_home/OPatch".
  • To upgrade existing Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 12c Release 1 (12.1), you must first verify whether you need to apply any mandatory patches for the upgrade to succeed. We will use CVU to check this below.
  • Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. You cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
  • If the existing Oracle Clusterware home is a shared home, note that you can use a non-shared home for the Oracle Grid Infrastructure for a cluster home for Oracle Clusterware and Oracle ASM 12c Release 1 (12.1).
  • The same user that owned the earlier release Oracle Grid Infrastructure software must perform the Oracle Grid Infrastructure 12c Release 1 (12.1) upgrade. Before Oracle Database 11g, either all Oracle software installations were owned by the Oracle user, typically oracle, or Oracle Database software was owned by oracle, and Oracle Clusterware software was owned by a separate user, typically crs.
  • Oracle ASM and Oracle Clusterware both run in the Oracle Grid Infrastructure home.
  • During a major release upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), the software in the 12c Release 1 (12.1) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the new Grid homes are not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.
  • To manage databases in existing earlier release database homes during the Oracle Grid Infrastructure upgrade, use the srvctl from the existing database homes.
  • You can perform upgrades on a shared Oracle Clusterware home.
  • During Oracle Clusterware installation, if there is a single instance Oracle ASM release on the local node, then it is converted to a clustered Oracle ASM 12c Release 1 (12.1) installation, and Oracle ASM runs in the Oracle Grid Infrastructure home on all nodes.
  • If a single instance (non-clustered) Oracle ASM installation is on a remote node, which is a node other than the local node (the node on which the Oracle Grid Infrastructure installation is being performed), then it will remain a single instance Oracle ASM installation. However, during installation, if you select to place the Oracle Cluster Registry (OCR) and voting files on Oracle ASM, then a clustered Oracle ASM installation is created on all nodes in the cluster, and the single instance Oracle ASM installation on the remote node will become nonfunctional.
  • After completing the force upgrade of a cluster to a release, all inaccessible nodes must be deleted from the cluster or joined to the cluster before starting the cluster upgrade to a later release.


For each node, use the Cluster Verification Utility to ensure that you have completed the preinstallation steps. It can generate fixup scripts to help you prepare the servers. In addition, the installer will help you ensure all required prerequisites are met.

runcluvfy.sh stage -pre crsinst -upgrade [-rolling] -src_crshome src_Gridhome -dest_crshome dest_Gridhome -dest_version dest_release [-fixup][-method {sudo|root} [-location dir_path] [-user user_name]] [-verbose]

./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome /data01/app/11.2.0/grid_11204 -dest_crshome /data01/app/12C/grid_121020 -dest_version 12.1.0.2.0 -fixup -fixupdir /home/grid/logs -verbose | tee /home/grid/logs/runcluvfy.out


OR

Download latest cluvfy and perform the pre checks for grid upgrade :

http://www.oracle.com/technetwork/database/options/clustering/downloads/index.html
Downloaded : cvupack_Linux_x86_64.zip
unzip cvupack_Linux_x86_64.zip -d /home/grid/cvu
[grid@node1 cvu]$  /home/grid/cvu/bin/cluvfy -version
12.1.0.1.0 Build 112713x8664

/home/grid/cvu/bin/cluvfy stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome $ORACLE_HOME -dest_crshome /data01/app/12C/grid_121020/ -dest_version 12.1.0.2.0 -fixup -fixupdir /tmp -verbose | tee /home/grid/11_to_12c_upgrade/cluvfy_pre_upgrade.log



[grid@node1 11_to_12c_upgrade]$ cat cluvfy_pre_upgrade.log |more

Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "node1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  node1                                 yes                     
  node2                                 yes                     
Result: Node reachability check passed from node "node1"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  node2                                 passed                  
  node1                                 passed                  
Result: User equivalence check passed for user "grid"

Checking CRS user consistency
Result: CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  node2                                 passed                  
  node1                                 passed                  

Verification of the hosts config file successful


Interface information for node "node2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.72   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500  
 eth0   192.168.56.82   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500  
 eth0   192.168.56.91   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:ED 1500  
 eth1   192.168.10.2    192.168.10.0    0.0.0.0         UNKNOWN         08:00:27:53:6F:4D 1500  
 eth1   169.254.231.223 169.254.0.0     0.0.0.0         UNKNOWN         08:00:27:53:6F:4D 1500  


Interface information for node "node1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.71   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500  
 eth0   192.168.56.93   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500  
 eth0   192.168.56.92   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500  
 eth0   192.168.56.81   192.168.56.0    0.0.0.0         UNKNOWN         08:00:27:C4:E7:E6 1500  
 eth1   192.168.10.1    192.168.10.0    0.0.0.0         UNKNOWN         08:00:27:53:6F:46 1500  
 eth1   169.254.204.171 169.254.0.0     0.0.0.0         UNKNOWN         08:00:27:53:6F:46 1500  


Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  node2[192.168.56.72]            node2[192.168.56.82]            yes             
  node2[192.168.56.72]            node2[192.168.56.91]            yes             
  node2[192.168.56.72]            node1[192.168.56.71]            yes             
  node2[192.168.56.72]            node1[192.168.56.93]            yes             
  node2[192.168.56.72]            node1[192.168.56.92]            yes             
  node2[192.168.56.72]            node1[192.168.56.81]            yes             
  node2[192.168.56.82]            node2[192.168.56.91]            yes             
  node2[192.168.56.82]            node1[192.168.56.71]            yes             
  node2[192.168.56.82]            node1[192.168.56.93]            yes             
  node2[192.168.56.82]            node1[192.168.56.92]            yes             
  node2[192.168.56.82]            node1[192.168.56.81]            yes             
  node2[192.168.56.91]            node1[192.168.56.71]            yes             
  node2[192.168.56.91]            node1[192.168.56.93]            yes             
  node2[192.168.56.91]            node1[192.168.56.92]            yes             
  node2[192.168.56.91]            node1[192.168.56.81]            yes             
  node1[192.168.56.71]            node1[192.168.56.93]            yes             
  node1[192.168.56.71]            node1[192.168.56.92]            yes             
  node1[192.168.56.71]            node1[192.168.56.81]            yes             
  node1[192.168.56.93]            node1[192.168.56.92]            yes             
  node1[192.168.56.93]            node1[192.168.56.81]            yes             
  node1[192.168.56.92]            node1[192.168.56.81]            yes             
Result: Node connectivity passed for interface "eth0"


Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  node1:192.168.56.71             node2:192.168.56.72             passed          
  node1:192.168.56.71             node2:192.168.56.82             passed          
  node1:192.168.56.71             node2:192.168.56.91             passed      
  node1:192.168.56.71             node1:192.168.56.92             passed          
  node1:192.168.56.71             node1:192.168.56.81             passed          
Result: TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  node2[192.168.10.2]             node1[192.168.10.1]             yes             
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.10.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  node1:192.168.10.1              node2:192.168.10.2              passed          
Result: TCP connectivity check passed for subnet "192.168.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking OCR integrity...

OCR integrity check passed

Checking ASMLib configuration.
  Node Name                             Status                  
  ------------------------------------  ------------------------
  node2                                 passed                  
  node1                                 passed                  
Result: Check for ASMLib configuration passed.

Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         1.9598GB (2055048.0KB)    1.5GB (1572864.0KB)       passed    
  node1         1.9598GB (2055048.0KB)    1.5GB (1572864.0KB)       passed    
Result: Total memory check passed

Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         733.3867MB (750988.0KB)   50MB (51200.0KB)          passed    
  node1         742.1133MB (759924.0KB)   50MB (51200.0KB)          passed    
Result: Available memory check passed

Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed    
  node1         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed    
Result: Swap space check failed
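The failed swap check is a straight numeric comparison of what cluvfy reads from each node. A minimal sketch, using the figures copied from the report above (both nodes show the same values):

```shell
# Values copied from the cluvfy report above (node1/node2 are identical).
required_kb=3082572     # 2.9398GB required
available_kb=1843196    # 1.7578GB available

if [ "$available_kb" -ge "$required_kb" ]; then
    echo "Swap space check: passed"
else
    echo "Swap space check: failed"
fi
```

On a live node, `awk '/^SwapTotal:/ {print $2}' /proc/meminfo` gives the available figure; adding a swap file or extending the swap LV clears the failure on a production system.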

Check: Free disk space for "node2:/data01/app/12C/grid_121020/,node2:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /data01/app/12C/grid_121020/  node2         /             8.8936GB      7.5GB         passed      
  /tmp              node2         /             8.8936GB      7.5GB         passed      
Result: Free disk space check passed for "node2:/data01/app/12C/grid_121020/,node2:/tmp"

Check: Free disk space for "node1:/data01/app/12C/grid_121020/,node1:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /data01/app/12C/grid_121020/  node1         /             6.709GB       7.5GB         failed      
  /tmp              node1         /             6.709GB       7.5GB         failed      
Result: Free disk space check failed for "node1:/data01/app/12C/grid_121020/,node1:/tmp"

Check: User existence for "grid" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  node2         passed                    exists(201)             
  node1         passed                    exists(201)             

Checking for multiple users with UID value 201
Result: Check for multiple users with UID value 201 passed 
Result: User existence check passed for "grid"
Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  node2         passed                    exists                  
  node1         passed                    exists                  
Result: Group existence check passed for "oinstall"

Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             yes           yes           yes           yes           passed      
  node1             yes           yes           yes           yes           passed      
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         5                         3,5                       passed    
  node1         5                         3,5                       passed    
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  node2             hard          65536         65536         passed          
  node1             hard          65536         65536         passed          
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  node1             soft          65536         1024          passed          
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  node2             hard          16384         16384         passed          
  node1             hard          16384         16384         passed          
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  node2             soft          16384         2047          passed          
  node1             soft          16384         2047          passed          
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/data01/app/11.2.0/grid_11204".

There are no oracle patches required for home "/data01/app/12C/grid_121020/".

Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         x86_64                    x86_64                    passed    
  node1         x86_64                    x86_64                    passed    
Result: System architecture check passed

Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed    
  node1         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed

Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             250           250           250           passed          
  node1             250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             32000         32000         32000         passed          
  node1             32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             100           100           100           passed          
  node1             100           100           100           passed          
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             128           128           128           passed          
  node1             128           128           128           passed          
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4294967295    4294967295    1052184576    passed          
  node1             4294967295    4294967295    1052184576    passed          
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4096          4096          4096          passed          
  node1             4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             2097152       2097152       2097152       passed          
  node1             2097152       2097152       2097152       passed          
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             6815744       6815744       6815744       passed          
  node1             6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed          
  node1             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed          
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             262144        262144        262144        passed          
  node1             262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             4194304       4194304       4194304       passed          
  node1             4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             262144        262144        262144        passed          
  node1             262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             1048576       1048576       1048576       passed          
  node1             1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  node2             1048576       1048576       1048576       passed          
  node1             1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed    
  node1         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
  node1         compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
  node1         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed    
  node1         libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed    
  node1         libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed    
  node1         libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
  node1         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         gcc-4.4.7-3.el6           gcc-4.4.4                 passed    
  node1         gcc-4.4.7-3.el6           gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed    
  node1         gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         ksh-20100621-19.el6       ksh-20100621              passed    
  node1         ksh-20100621-19.el6       ksh-20100621              passed    
Result: Package existence check passed for "ksh"

Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         make-3.81-20.el6          make-3.81                 passed    
  node1         make-3.81-20.el6          make-3.81                 passed    
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed    
  node1         glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed    
  node1         glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
  node1         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
  node1         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 

Check: Current group ID 
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  node2                                 passed                  
  node1                                 passed                  

Check for consistency of root user's primary group passed

Check: Package existence for "cvuqdisk" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  node2         cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed    
  node1         cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed    
Result: Package existence check passed for "cvuqdisk"

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

Checking daemon liveness...

Check: Liveness for "ntpd"
  Node Name                             Running?                
  ------------------------------------  ------------------------
  node2                                 yes                     
  node1                                 yes                     
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?     
  ------------------------------------  ------------------------
  node2                                 no                      
  node1                                 no                      
Result: 
NTP daemon slewing option check failed on some nodes
PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"
Result: Clock synchronization check using Network Time Protocol(NTP) failed
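The PRVF-5436 failure means ntpd was started without the slewing flag "-x". On OEL/RHEL 6 the flag lives in /etc/sysconfig/ntpd; a hedged sketch of the check, using a typical default OPTIONS line for illustration:

```shell
# Sample OPTIONS line (illustrative); on a real node read /etc/sysconfig/ntpd.
options='OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"'

case "$options" in
    *"-x"*) echo "slewing option already set" ;;
    *)      echo "slewing option missing: add -x to OPTIONS and restart ntpd" ;;
esac
```

The usual fix is to edit /etc/sysconfig/ntpd on every node so OPTIONS includes `-x`, then run `service ntpd restart`; alternatively, deconfigure NTP entirely and let CTSS run in active mode, as the earlier message suggests.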

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  node2         passed                    does not exist          
  node1         passed                    does not exist          
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  node2         0022                      0022                      passed    
  node1         0022                      0022                      passed    
Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "node2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status                  
  ------------------------------------  ------------------------
  node2                                 failed                  
  node1                                 failed                  
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node2,node1

File "/etc/resolv.conf" is not consistent across nodes
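The resolv.conf consistency check is essentially a diff of the file as fetched from each node. A sketch simulated with two sample copies; the nameserver address is hypothetical, the search domain matches the report above:

```shell
# Hypothetical copies of /etc/resolv.conf as they would be fetched from each node.
printf 'search localdomain\nnameserver 192.168.56.1\n' > /tmp/resolv.node1
printf 'search localdomain\nnameserver 192.168.56.1\n' > /tmp/resolv.node2

if diff -q /tmp/resolv.node1 /tmp/resolv.node2 >/dev/null; then
    echo "resolv.conf consistent across nodes"
else
    echo "resolv.conf differs between nodes"
fi
```

Note that the PRVF-5636 failure is separate: the DNS response-time check times out when the configured nameserver does not answer (or answers slowly) for non-existent hosts, which is common and ignorable on test boxes without a real DNS server.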

UDev attributes check for OCR locations started...
Result: UDev attributes check passed for OCR locations 


UDev attributes check for Voting Disk locations started...
Result: UDev attributes check passed for Voting Disk locations 

Check: Time zone consistency 
Result: Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed

Starting check for Reverse path filter setting ...
Reverse path filter setting is correct for all private interconnect network interfaces on node "node2.localdomain".
Reverse path filter setting is correct for all private interconnect network interfaces on node "node1.localdomain".

Check for Reverse path filter setting passed

Pre-check for cluster services setup was unsuccessful on all the nodes. 

In our case the failed checks can be ignored: this is a demo installation on a test server, so the swap space, free disk space and NTP slewing failures are acceptable. On a production system each of them should be fixed before starting the upgrade.

[grid@node1 11_to_12c_upgrade]$ cat cluvfy_pre_upgrade.log |grep -i failed
  node2         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed    
  node1         1.7578GB (1843196.0KB)    2.9398GB (3082572.0KB)    failed    
Result: Swap space check failed
  /data01/app/12C/grid_121020/  node1         /             6.709GB       7.5GB         failed      
  /tmp              node1         /             6.709GB       7.5GB         failed      
Result: Free disk space check failed for "node1:/data01/app/12C/grid_121020/,node1:/tmp"
NTP daemon slewing option check failed on some nodes
Result: Clock synchronization check using Network Time Protocol(NTP) failed
  node2                                 failed                  
  node1                                 failed                  

[root@node1 data01]# cat /proc/sys/kernel/panic_on_oops
1
[root@node1 data01]# ssh node2 cat /proc/sys/kernel/panic_on_oops
root@node2's password: 
1
[root@node1 data01]# 
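12c GI expects kernel.panic_on_oops = 1, which the commands above confirm on both nodes. A small sketch that reads the live value and reports the sysctl fix when it is not set:

```shell
# Read the live kernel setting; fall back to "unknown" if /proc is unavailable.
val=$(cat /proc/sys/kernel/panic_on_oops 2>/dev/null || echo unknown)

if [ "$val" = "1" ]; then
    echo "panic_on_oops OK"
else
    echo "set kernel.panic_on_oops = 1 in /etc/sysctl.conf and run: sysctl -p"
fi
```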


Unset Oracle Environment Variables
Known Issue :Environment Variable ORA_CRS_HOME MUST be UNSET in 11gR2/12c GI (Doc ID 1502996.1)

For the installation owner running the upgrade, unset any environment variables that point at the existing installation (such as $ORACLE_HOME, $ORACLE_BASE, $ORACLE_SID and $ORA_CRS_HOME), as these variables are used during the upgrade. For example:

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID
unset ORA_CRS_HOME

If you have set ORA_CRS_HOME as an environment variable, following instructions from Oracle Support, then unset it before starting an installation or upgrade. You should never use ORA_CRS_HOME as an environment variable except under explicit direction from Oracle Support.
Check to ensure that installation owner login shell profiles (for example, .profile or .cshrc) do not have ORA_CRS_HOME set.
If you have had an existing installation on your system, and you are using the same user account to install this installation, then unset the following environment variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN; and any other environment variable set for the Oracle installation user that is connected with Oracle software homes.
Also, ensure that the $ORACLE_HOME/bin path is removed from your PATH environment variable.
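A quick pre-flight loop can flag any of the variables discussed above that are still set in the installation owner's session (the function name is just for illustration):

```shell
# Warn about Oracle-related variables that must be unset before the installer runs.
check_oracle_env() {
    for v in ORACLE_HOME ORACLE_BASE ORACLE_SID ORA_CRS_HOME ORA_NLS10 TNS_ADMIN; do
        eval "val=\${$v:-}"
        [ -n "$val" ] && echo "WARNING: $v is still set ($val)"
    done
    return 0
}

# Example: simulate a leftover variable, then check and clean it.
ORACLE_SID=orcl1
check_oracle_env        # warns about ORACLE_SID
unset ORACLE_SID
```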


Check for some known issues :

NOTE:1917543.1 - FAILS TO START ORA.CSSD WHEN UPGRADING GRID 12C ON NODE 1
Environment Variable ORA_CRS_HOME MUST be UNSET in 11gR2/12c GI (Doc ID 1502996.1)
NOTE:1918426.1 - 12.1.0.2 root script fails to start ora.ctssd if nodes name length are not the same
NOTE:1922908.1 - 12.1.0.2 GI: oratab being wrongly modified after instance restarts
NOTE:19185876.8 - Bug 19185876 - ORA-600 [kjshash:!mhint] from ASM LMON process during rolling upgrade from 11.2 to 12c
NOTE:1917917.1 - 12c GI rootupgrade.sh Fails on First Node With ORA-01034 if Node Number Starts From 0
NOTE:1580360.1 - GI 12c/12.1.0.x rootupgrade.sh fails: PRCR-1065 : Failed to stop resource ora.gsd
CLSRSC-507: The root script cannot proceed on this node <node-n> because either the first-node operations have not completed on node <node-1> or there was an error in obtaining the status of the first-node operations. (Doc ID 1919825.1)
GI Upgrade from 11.2.0.3.6+ to 11.2.0.4 or 12.1.0.1 Fails with User(qosadmin) is deactivated. AUTH FAILURE. (Doc ID 1577072.1)


START THE RUNINSTALLER:

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID
unset ORA_CRS_HOME


[grid@node1 grid]$ ./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB.   Actual 6544 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1519 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-03-31_09-51-28AM. Please wait ...

Execute rootupgrade.sh on node1 and verify that the Grid home reports the upgraded version. Check that all services are up and running on node1.

Then execute rootupgrade.sh on node2 and confirm that the cluster version is upgraded:

[grid@node2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node2] is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs softwareversion node1
Oracle Clusterware version on node [node1] is [12.1.0.2.0]
[grid@node2 ~]$ crsctl query crs activeversion node1
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
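Pulling the release out of the crsctl output lets a post-upgrade script assert that the upgrade finished. A sketch, using the exact line printed above:

```shell
# Sample line copied from the crsctl output above.
line='Oracle Clusterware active version on the cluster is [12.1.0.2.0]'

# Extract the version string between the brackets.
ver=$(echo "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')

if [ "$ver" = "12.1.0.2.0" ]; then
    echo "cluster active version is $ver - upgrade complete"
fi
```

The active version only moves to the new release once rootupgrade.sh has completed on every node, so checking it (rather than softwareversion) is the reliable "upgrade done" signal.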