Saturday 30 July 2016

ERROR OGG-01232 Receive TCP params error: TCP/IP error 104 (Connection reset by peer), endpoint: node2:7819.

GGSCI (node1.localdomain) 2> view report dphr01 , detail


***********************************************************************
                 Oracle GoldenGate Capture for Oracle
   Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
   Linux, x64, 64bit (optimized), Oracle 11g on Aug  7 2014 09:31:26

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2016-07-30 06:55:59
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Fri Feb 22 18:16:18 PST 2013, Release 2.6.39-400.17.1.el6uek.x86_64
Node: node1.localdomain
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 8384

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2016-07-30 06:55:59  INFO    OGG-03059  Operating system character set identified as UTF-8.

2016-07-30 06:55:59  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.
EXTRACT DPHR01
PASSTHRU
RMTHOST node2 , MGRPORT 7809
RMTTRAIL dirdat/hr
TABLE HR.* ;

2016-07-30 06:55:59  INFO    OGG-01851  filecaching started: thread ID: 139969427404544.

2016-07-30 06:55:59  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /data01/oracle/ggs/dirtmp.

CACHEMGR virtual memory values (may have been adjusted)
CACHESIZE:                               64G
CACHEPAGEOUTSIZE (default):               8M
PROCESS VM AVAIL FROM OS (min):         128G
CACHESIZEMAX (strict force to disk):     96G

2016-07-30 06:56:04  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).

2016-07-30 06:56:04  INFO    OGG-01052  No recovery is required for target file dirdat/hr000000, at RBA 0 (file not opened).

2016-07-30 06:56:04  INFO    OGG-01478  Output file dirdat/hr is using format RELEASE 12.1.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

Opened trail file dirdat/hr000000 at 2016-07-30 06:56:04

Switching to next trail file dirdat/hr000001 at 2016-07-30 06:56:04 due to EOF, with current RBA 1411
Opened trail file dirdat/hr000001 at 2016-07-30 06:56:04


Switching to next trail file dirdat/hr000002 at 2016-07-30 06:56:04 due to EOF, with current RBA 1472
Opened trail file dirdat/hr000002 at 2016-07-30 06:56:04


Source Context :
  SourceModule            : [er.extrout]
  SourceID                : [/scratch/aime1/adestore/views/aime1_adc4150384/oggcore/OpenSys/src/app/er/extrout.c]
  SourceFunction          : [complete_tcp_msg]
  SourceLine              : [1663]
  ThreadBacktrace         : [17] elements
                          : [/data01/oracle/ggs/libgglog.so(CMessageContext::AddThreadContext()+0x1e) [0x7f4d37239dce]]
                          : [/data01/oracle/ggs/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...)+0x340) [0x7f4d37234ae0]]
                          : [/data01/oracle/ggs/libgglog.so(_MSG_ERR_TCP_RECEIVE_PARAMS_ERROR(CSourceContext*, char const*, CMessageFactory::MessageDisposition)+0x31) [
0x7f4d3721642b]]
                          : [/data01/oracle/ggs/extract(complete_tcp_msg(extract_def*)+0x711) [0x55540d]]
                          : [/data01/oracle/ggs/extract(flush_tcp(extract_def*, int)+0x23a) [0x555660]]
                          : [/data01/oracle/ggs/extract(flush_buffers(int)+0x5c) [0x54f4fc]]
                          : [/data01/oracle/ggs/extract(checkpoint_extract_info(chkpt_context_t*)+0x20) [0x54f520]]
                          : [/data01/oracle/ggs/extract(checkpoint_status(short, short, long)+0x179) [0x5a93d9]]
                          : [/data01/oracle/ggs/extract(check_messages(chkpt_context_t*, short, char const*, bool)+0x1197) [0x58a5c7]]
                          : [/data01/oracle/ggs/extract(process_extract_loop()+0x19a4) [0x5b9854]]
                          : [/data01/oracle/ggs/extract(extract_main(int, char**)+0x418) [0x5b6aa8]]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::MainThread::ExecMain()+0x4f) [0x6a1e2f]]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*)+0x104) [0x6a2334]
]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::MainThread::Run(int, char**)+0x8b) [0x6a26fb]]
                          : [/data01/oracle/ggs/extract(main+0x3f) [0x5b643f]]
                          : [/lib64/libc.so.6(__libc_start_main+0xfd) [0x3a6181ecdd]]
                          : [/data01/oracle/ggs/extract() [0x52ad09]]

2016-07-30 07:06:51  ERROR   OGG-01232  Receive TCP params error: TCP/IP error 104 (Connection reset by peer), endpoint: node2:7819.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************


Report at 2016-07-30 07:06:51 (activity since 2016-07-30 06:55:59)
Output to dirdat/hr:
No records extracted.
Last log location read:
     FILE:      dirdat/hr000002
     SEQNO:     2
     RBA:       1472
     TIMESTAMP: Not Available
     EOF:       YES
     READERR:   400

2016-07-30 07:06:51  ERROR   OGG-01668  PROCESS ABENDING.

Cause :

The /etc/hosts file on the target site contains an invalid entry for localhost.

Solution :

1. Check /etc/hosts on the target and remove the line highlighted below :

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6   ---> Remove this entry for localhost 

192.168.10.1 node1-priv.localdomain.com node1-priv
192.168.10.2 node2-priv.localdomain.com node2-priv

192.168.56.71 node1.localdomain node1
192.168.56.72 node2.localdomain node2

192.168.56.81 node1-vip.localdomain node1-vip
192.168.56.82 node2-vip.localdomain node2-vip

192.168.56.91 node-scan.localdomain node-scan
192.168.56.92 node-scan.localdomain node-scan
192.168.56.93 node-scan.localdomain node-scan
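As a quick sanity check before restarting MGR, the offending mapping can be detected programmatically. A minimal Python sketch (illustrative parsing logic, not an Oracle utility) that flags IPv6 loopback entries for localhost:

```python
def localhost_ipv6_entries(hosts_text):
    """Return /etc/hosts lines that map the IPv6 loopback (::1) to localhost.

    Per the cause above, such an entry on the target can make 'localhost'
    resolve to ::1 and lead MGR to reset the Extract's TCP connection.
    """
    flagged = []
    for line in hosts_text.splitlines():
        stripped = line.split("#", 1)[0].strip()   # drop comments
        if not stripped:
            continue
        fields = stripped.split()
        addr, names = fields[0], fields[1:]
        if addr == "::1" and any(n.startswith("localhost") for n in names):
            flagged.append(line)
    return flagged

sample = """127.0.0.1   localhost localhost.localdomain localhost4
::1         localhost localhost.localdomain localhost6
192.168.56.71 node1.localdomain node1
"""
print(localhost_ipv6_entries(sample))
```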

2. Restart MGR on target node

3. Start the Extract process and check the status :
GGSCI (node1.localdomain) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     DPHR01      00:00:00      00:00:01  
EXTRACT     RUNNING     EXHR01      05:01:28      00:00:01  

Wednesday 27 July 2016

RMAN TRACING

C:\>rman target / log=rmanLog.txt trace=rmanTrace.txt
RMAN> debug on;
RMAN> restore datafile 1;
RMAN> debug off;
RMAN> exit;

rmanLog.txt
===========
RMAN-03090: Starting restore at 07-SEP-2007 21:07:14
RMAN-06009: using target database control file instead of recovery catalog
RMAN-08030: allocated channel: ORA_DISK_1
RMAN-08500: channel ORA_DISK_1: sid=49 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/07/2007 21:08:34
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 1 found to restore

rmanTrace.txt
=============
(Search the debug trace for Checkpoint SCN: "360352" of datafile #1 in backupset #3)
 ...
 DBGRCVMAN:  CheckRecAction called 08/19/07 22:31:27; rlgscn=289857
 DBGRCVMAN: CheckRecAction: inc=1,toscn=360352exceeds 360002
 DBGRCVMAN: CheckRecAction:belongs to orphan branch of this incarnation:
 ...

RMAN Recovery Script

col "Restore Command" for a100
col "Applied Logs" for a100
col "Catalog Logs" for a100
col "Recover Command" for a80

select ' restore archivelog from logseq ' || applied_arc.startNo ||
       ' until logseq ' || catalog_arc.endNo || ' thread=' ||
       catalog_arc.thread# || ';' "Restore Command"
from
  --(select thread#, max(sequence#) + 1 startNo from gv$archived_log
  -- where applied='YES' group by thread#) applied_arc,
  (select thread#, max(sequence#) startNo from gv$archived_log
   where applied='YES' group by thread#) applied_arc,
  (select thread#, max(sequence#) endNo from v$backup_archivelog_details
   group by thread#) catalog_arc
where applied_arc.thread# = catalog_arc.thread#;

prompt '=========== Archive Log Info ============='

select distinct 'Thread ' || thread# || ': last applied archive log '
       || sequence# || ' at ' || to_char(next_time, 'MON/DD/YYYY HH24:MI:SS')
       || ' next change# ' || next_change# "Applied Logs"
from v$archived_log
where thread# || '_' || sequence# in
      (select thread# || '_' || max(sequence#) from v$archived_log
       where applied='YES' group by thread#)
--and applied='YES'
;

select 'Thread ' || thread# || ': last cataloged archive log '
       || sequence# || ' at ' || to_char(next_time, 'MON/DD/YYYY HH24:MI:SS')
       || ' next change# ' || next_change# "Catalog Logs"
from v$backup_archivelog_details
where thread# || '_' || sequence# in
      (select thread# || '_' || max(sequence#) from
       v$backup_archivelog_details group by thread#)
;

prompt '=========== recover point ================'

--select 'recover database until sequence ' || seq# || ' thread '
--       || thread# || ' delete archivelog maxsize 4000g; ' Content
select 'set until sequence ' || seq# || ' thread ' || thread#
       || '; ' || chr(13) || chr(10)
       || 'recover database delete archivelog maxsize 4000g; ' "Recover Command"
from (
  select * from (
    select thread#, sequence# + 1 seq#, next_change#
    from (
      select * from v$backup_archivelog_details
      where thread# || '_' || sequence# in
            (select thread# || '_' || max(sequence#) from
             v$backup_archivelog_details group by thread#)
    )
    order by next_change#
  )
  where rownum = 1
);

Reference : https://weidongzhou.wordpress.com/2014/09/20/script-to-identify-the-restore-and-recover-point-for-archive-logs/
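The script's logic — restore from the last applied sequence up to the last cataloged sequence per thread, then recover until sequence+1 of the cataloged row with the smallest next_change# — can be sketched in Python. The data below is hypothetical; only the column meanings are taken from the SQL above:

```python
def restore_and_recover_plan(applied, cataloged):
    """applied / cataloged: dicts of thread# -> (last sequence#, next_change#).

    Mirrors the SQL above: restore archivelogs from the last applied
    sequence up to the last cataloged sequence per thread, then recover
    until sequence+1 of the cataloged row with the lowest next_change#.
    """
    restores = [
        f"restore archivelog from logseq {applied[t][0]} "
        f"until logseq {cataloged[t][0]} thread={t};"
        for t in sorted(applied) if t in cataloged
    ]
    # recover point: the thread whose last cataloged log has the lowest SCN
    t, (seq, _scn) = min(cataloged.items(), key=lambda kv: kv[1][1])
    recover = f"set until sequence {seq + 1} thread {t};"
    return restores, recover

# hypothetical thread# -> (sequence#, next_change#) samples
applied = {1: (120, 360002), 2: (98, 360010)}
cataloged = {1: (130, 360352), 2: (105, 360200)}
restores, recover = restore_and_recover_plan(applied, cataloged)
```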

Saturday 23 July 2016

ETA for a long-running Rollback

set serveroutput on
DECLARE
  type t_undoblocks is table of number index by varchar2(100);
  type t_ublk is table of number index by varchar2(100);
  v_undoblocks t_undoblocks;
  v_ublk t_ublk;
  v_eta number;
  v_sleep number := 3;
BEGIN
  for r in (SELECT cast(b.XID as varchar2(100)) xid, b.used_urec FROM v$transaction b)
  LOOP
     v_ublk(r.xid) := r.used_urec;
  end loop;
  dbms_output.put_line('Checking if SMON is recovering any transactions');
  for r in (select cast(XID as varchar2(100)) xid, state,undoblocksdone,undoblockstotal,RCVSERVERS from V$FAST_START_TRANSACTIONS where state<>'RECOVERED')
  LOOP
    v_undoblocks(r.xid) := r.undoblocksdone;
    dbms_output.put_line(rpad('TransactionID',25) || rpad('state',15) || rpad('recover_servers',20) || rpad('undo_blocks_total',20) || rpad('undo_blocks_done',20));
    dbms_output.put_line(rpad(r.XID,25) || rpad(r.state,25) || rpad(to_char(r.RCVSERVERS),20) || rpad(to_char(r.undoblockstotal),20) || rpad(to_char(r.undoblocksdone),20));
  end loop;

  dbms_output.put_line(chr(10) ||'Sleep '||v_sleep||' seconds to check again...');
  dbms_lock.sleep(v_sleep);

  for r in (select cast(XID as varchar2(100)) xid, state,undoblocksdone,undoblockstotal,RCVSERVERS from V$FAST_START_TRANSACTIONS where state<>'RECOVERED')
  LOOP
    if v_undoblocks.exists(r.xid) then
       if r.undoblocksdone > v_undoblocks(r.xid) then
         v_eta := round((r.undoblockstotal-r.undoblocksdone)*v_sleep/60/(r.undoblocksdone-v_undoblocks(r.xid)),1);
         dbms_output.put_line('SMON is rolling back '||r.xid||'...'||r.undoblocksdone||' out of '||r.undoblockstotal||' blocks are done...ETA is '||v_eta||' minutes');
       else
         dbms_output.put_line('SMON is rolling back '||r.xid||'...'||r.undoblocksdone||' out of '||r.undoblockstotal||' blocks are done...ETA is unknown, pls try again');
       end if;
    end if;
  end loop;

  dbms_output.put_line(chr(10) ||'Checking if any transaction is rolling back by itself');
  for r in (SELECT a.sid, cast(b.XID as varchar2(100)) xid, b.used_urec FROM v$session a, v$transaction b WHERE a.saddr = b.ses_addr)
  LOOP
      if v_ublk.exists(r.xid) then
         if v_ublk(r.xid) > r.used_urec THEN
            v_eta := round(r.used_urec * v_sleep/60/(v_ublk(r.xid) - r.used_urec), 1);
            dbms_output.put_line('SID,XID : '||r.sid||','||r.xid||' is rolling back...'||r.used_urec||' blocks to go...ETA is '||v_eta||' minutes');
         end if;
      end if;
  end loop;
end;
/
OUTPUT
------------------
Checking if SMON is recovering any transactions

TransactionID                        state          recover_servers     undo_blocks_total   undo_blocks_done
000A0006077F6G56         RECOVERING     0                   4093224             117092

Sleep 3 seconds to check again...
SMON is rolling back 000A0006077F6G56...117278 out of 4093224 blocks are done...ETA is 1068.8 minutes

Checking if any transaction is rolling back by itself

PL/SQL procedure successfully completed.
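The ETA arithmetic the block uses is simply the remaining work divided by the rate observed over the sleep interval. A standalone Python version reproducing the numbers from the sample output above:

```python
def rollback_eta_minutes(total_blocks, done_now, done_before, interval_sec):
    """ETA in minutes: remaining blocks / observed blocks-per-second,
    matching the PL/SQL expression
    (total - done) * interval / 60 / (done_now - done_before)."""
    rate = (done_now - done_before) / interval_sec   # blocks per second
    return round((total_blocks - done_now) / rate / 60, 1)

# numbers from the sample run: 117092 -> 117278 blocks done in 3 seconds
eta = rollback_eta_minutes(4093224, 117278, 117092, 3)
print(eta)   # -> 1068.8, as in the output above
```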

Reference : https://alexzeng.wordpress.com/2011/09/11/how-to-check-rollback-transaction/

Saturday 9 July 2016

RAC Rolling Restart during maintenance activity

1. Check the server status :

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl status server -f
NAME=node1
STATE=ONLINE
ACTIVE_POOLS=Generic ora.racdb
STATE_DETAILS=

NAME=node2
STATE=ONLINE
ACTIVE_POOLS=Generic ora.racdb
STATE_DETAILS=

2. Check your cluster name :

[root@node1 ~]# /data01/app/11204/grid_11204/bin/cemutlo -n
node-cluster
[root@node1 ~]#

3. Check status of all nodeapps :
----------------------------------------

[oracle@node1 ~]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2

Check SCAN and SCAN listener status :
=====================================

[oracle@node1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
[oracle@node1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node1
[oracle@node1 ~]$

Check the status of CRS on specific node:
===================================================
[grid@node1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@node1 ~]$ crsctl check cluster
===========================================
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@node1 ~]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@node1 ~]$ srvctl status database -d racdb -v
Instance racdb1 is running on node node1. Instance status: Open.
Instance racdb2 is running on node node2. Instance status: Open.

How to check the status of the complete clusterware stack on all nodes:
=======================================================================

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_NEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.FRA.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRNEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRVOTE.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCR_VOTE.dg
               ONLINE  OFFLINE      node1                                      
               ONLINE  OFFLINE      node2                                      
ora.asm
               ONLINE  ONLINE       node1                    Started            
               ONLINE  ONLINE       node2                    Started            
ora.gsd
               OFFLINE OFFLINE      node1                                      
               OFFLINE OFFLINE      node2                                      
ora.net1.network
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.ons
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.registry.acfs
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                                      
ora.cvu
      1        ONLINE  ONLINE       node2                                      
ora.node1.vip
      1        ONLINE  ONLINE       node1                                      
ora.node2.vip
      1        ONLINE  ONLINE       node2                                      
ora.oc4j
      1        ONLINE  OFFLINE                                                  
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open              
      2        ONLINE  ONLINE       node2                    Open              
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                      
ora.scan2.vip
      1        ONLINE  ONLINE       node1                                      
ora.scan3.vip
      1        ONLINE  ONLINE       node1                            


[oracle@node1 ~]$ ps -ef | grep d.bin
root      2579     1  1 02:07 ?        00:00:19 /data01/app/11204/grid_11204/bin/ohasd.bin reboot
grid      2857     1  0 02:07 ?        00:00:00 /data01/app/11204/grid_11204/bin/mdnsd.bin
grid      2868     1  0 02:07 ?        00:00:02 /data01/app/11204/grid_11204/bin/gpnpd.bin
grid      2878     1  0 02:07 ?        00:00:14 /data01/app/11204/grid_11204/bin/gipcd.bin
root      2892     1  2 02:07 ?        00:00:44 /data01/app/11204/grid_11204/bin/osysmond.bin
grid      2948     1  0 02:07 ?        00:00:14 /data01/app/11204/grid_11204/bin/ocssd.bin
root      3145     1  0 02:08 ?        00:00:07 /data01/app/11204/grid_11204/bin/octssd.bin reboot
grid      3177     1  0 02:08 ?        00:00:05 /data01/app/11204/grid_11204/bin/evmd.bin
root      4426     1  1 02:14 ?        00:00:13 /data01/app/11204/grid_11204/bin/crsd.bin reboot
oracle    9034  8298  0 02:37 pts/2    00:00:00 grep d.bin
[oracle@node1 ~]$

How to stop CRS on a specific node:
===================================
[oracle@node1 ~]$ srvctl stop instance -d racdb -i racdb1

[oracle@node1 ~]$ srvctl status  database -d racdb -v
Instance racdb1 is not running on node node1
Instance racdb2 is running on node node2. Instance status: Open.

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.OCRNEW.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA_NEW.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2677: Stop of 'ora.scan2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'node2'
CRS-2677: Stop of 'ora.scan3.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'node2' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.OCRNEW.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA_NEW.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@node2 ~]# /data01/app/11204/grid_11204/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_NEW.dg
               ONLINE  ONLINE       node2                                      
ora.FRA.dg
               ONLINE  ONLINE       node2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       node2                                      
ora.OCRNEW.dg
               ONLINE  ONLINE       node2                                      
ora.OCRVOTE.dg
               ONLINE  ONLINE       node2                                      
ora.OCR_VOTE.dg
               ONLINE  OFFLINE      node2                                      
ora.asm
               ONLINE  ONLINE       node2                    Started            
ora.gsd
               OFFLINE OFFLINE      node2                                      
ora.net1.network
               ONLINE  ONLINE       node2                                      
ora.ons
               ONLINE  ONLINE       node2                                      
ora.registry.acfs
               ONLINE  ONLINE       node2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.cvu
      1        ONLINE  ONLINE       node2                                      
ora.node1.vip
      1        ONLINE  INTERMEDIATE node2                    FAILED OVER        
ora.node2.vip
      1        ONLINE  ONLINE       node2                                      
ora.oc4j
      1        ONLINE  ONLINE       node2                                      
ora.racdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown  
      2        ONLINE  ONLINE       node2                    Open              
ora.scan1.vip
      1        ONLINE  ONLINE       node2                                      
ora.scan2.vip
      1        ONLINE  ONLINE       node2                                      
ora.scan3.vip
      1        ONLINE  ONLINE       node2                                

[oracle@node2 ~]$ srvctl status database -d racdb -v
Instance racdb1 is not running on node node1
Instance racdb2 is running on node node2. Instance status: Open.

[oracle@node2 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node2
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node2

[oracle@node2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node2
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node2

======================================================================
                   COMPLETE THE MAINTENANCE ACTIVITY ON NODE1 AND
                                         RESTART RESOURCES ON NODE1 
======================================================================

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@node1 ~]# ps -ef | grep d.bin
root     14259     1  1 03:11 ?        00:00:02 /data01/app/11204/grid_11204/bin/ohasd.bin reboot
grid     14430     1  0 03:11 ?        00:00:00 /data01/app/11204/grid_11204/bin/mdnsd.bin
grid     14441     1  0 03:11 ?        00:00:00 /data01/app/11204/grid_11204/bin/gpnpd.bin
grid     14452     1  1 03:11 ?        00:00:01 /data01/app/11204/grid_11204/bin/gipcd.bin
root     14466     1  1 03:11 ?        00:00:01 /data01/app/11204/grid_11204/bin/osysmond.bin
grid     14511     1  1 03:11 ?        00:00:01 /data01/app/11204/grid_11204/bin/ocssd.bin
root     14627     1  0 03:11 ?        00:00:00 /data01/app/11204/grid_11204/bin/octssd.bin reboot
grid     14647     1  0 03:11 ?        00:00:00 /data01/app/11204/grid_11204/bin/evmd.bin
root     14911     1  2 03:12 ?        00:00:00 /data01/app/11204/grid_11204/bin/crsd.bin reboot
root     15215 10009  0 03:13 pts/2    00:00:00 grep d.bin

[oracle@node1 ~]$ srvctl start instance -d racdb -i racdb1

[oracle@node1 ~]$ srvctl status database -d racdb -v
Instance racdb1 is running on node node1. Instance status: Open.
Instance racdb2 is running on node node2. Instance status: Open.
[oracle@node1 ~]$
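The complete per-node sequence demonstrated above can be encoded as an ordered checklist. A minimal Python sketch (command strings are illustrative; the grid home path is the one used in this cluster) whose point is the ordering — drain the instance before stopping CRS, and reverse on the way back up:

```python
GRID_HOME = "/data01/app/11204/grid_11204"   # grid home from this cluster

def rolling_restart_steps(node, db, inst):
    """Ordered maintenance steps for one RAC node, as shown above."""
    return [
        f"srvctl stop instance -d {db} -i {inst}",   # as oracle: drain instance
        f"{GRID_HOME}/bin/crsctl stop crs",          # as root, on the node
        f"# perform maintenance on {node}",
        f"{GRID_HOME}/bin/crsctl start crs",         # as root
        f"srvctl start instance -d {db} -i {inst}",  # as oracle
        f"srvctl status database -d {db} -v",        # verify both instances open
    ]

steps = rolling_restart_steps("node1", "racdb", "racdb1")
```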

[oracle@node1 ~]$ /data01/app/11204/grid_11204/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_NEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.FRA.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRNEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRVOTE.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCR_VOTE.dg
               ONLINE  OFFLINE      node1                                      
               ONLINE  OFFLINE      node2                                      
ora.asm
               ONLINE  ONLINE       node1                    Started            
               ONLINE  ONLINE       node2                    Started            
ora.gsd
               OFFLINE OFFLINE      node1                                      
               OFFLINE OFFLINE      node2                                      
ora.net1.network
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.ons
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.registry.acfs
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.cvu
      1        ONLINE  ONLINE       node2                                      
ora.node1.vip
      1        ONLINE  ONLINE       node1                                      
ora.node2.vip
      1        ONLINE  ONLINE       node2                                      
ora.oc4j
      1        ONLINE  ONLINE       node2                                      
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open              
      2        ONLINE  ONLINE       node2                    Open              
ora.scan1.vip
      1        ONLINE  ONLINE       node1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       node2                                      
ora.scan3.vip
      1        ONLINE  ONLINE       node2                                      
[oracle@node1 ~]$

========================================================================
Follow the same process on node2:

[oracle@node2 ~]$ srvctl stop instance -d racdb -i racdb2

[root@node2 ~]# /data01/app/11204/grid_11204/bin/crsctl stop crs

======================================================================
          COMPLETE THE MAINTENANCE ACTIVITY ON NODE2, THEN
                    RESTART THE RESOURCES ON NODE2
======================================================================


[root@node2 ~]# /data01/app/11204/grid_11204/bin/crsctl start crs
[oracle@node2 ~]$ srvctl start instance -d racdb -i racdb2
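The rolling sequence on each node can be wrapped in a pair of helper functions. This is a minimal sketch only: the function names (`rolling_stop`, `rolling_start`) are hypothetical, the database name `racdb` and the Grid home path are taken from this environment, and `crsctl` must run as root (here via sudo) while `srvctl` runs as the oracle user.

```shell
# Sketch only: wraps the rolling-maintenance commands shown above.
# rolling_stop/rolling_start are hypothetical helper names.
GRID_HOME=/data01/app/11204/grid_11204

rolling_stop() {
  local inst="$1"                               # e.g. racdb1 or racdb2
  srvctl stop instance -d racdb -i "$inst"      # stop this node's instance (as oracle)
  sudo "$GRID_HOME/bin/crsctl" stop crs         # stop clusterware on this node (as root)
}

rolling_start() {
  local inst="$1"
  sudo "$GRID_HOME/bin/crsctl" start crs        # bring clusterware back (as root)
  srvctl start instance -d racdb -i "$inst"     # restart the instance (as oracle)
}
```

On node2 the cycle would be `rolling_stop racdb2`, perform the maintenance, then `rolling_start racdb2`.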

======================================================================
To stop both nodes at the same time (non-rolling), use the steps below:
======================================================================

[oracle@node1 ~]$ srvctl stop database -d racdb

[oracle@node1 ~]$ srvctl status database -d racdb -v
Instance racdb1 is not running on node node1
Instance racdb2 is not running on node node2
[oracle@node1 ~]$
[oracle@node1 ~]$ /data01/app/11204/grid_11204/bin/crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_NEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.FRA.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRNEW.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCRVOTE.dg
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.OCR_VOTE.dg
               ONLINE  OFFLINE      node1                                      
               ONLINE  OFFLINE      node2                                      
ora.asm
               ONLINE  ONLINE       node1                    Started            
               ONLINE  ONLINE       node2                    Started            
ora.gsd
               OFFLINE OFFLINE      node1                                      
               OFFLINE OFFLINE      node2                                      
ora.net1.network
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.ons
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
ora.registry.acfs
               ONLINE  ONLINE       node1                                      
               ONLINE  ONLINE       node2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node2                                      
ora.cvu
      1        ONLINE  ONLINE       node2                                      
ora.node1.vip
      1        ONLINE  ONLINE       node1                                      
ora.node2.vip
      1        ONLINE  ONLINE       node2                                      
ora.oc4j
      1        ONLINE  ONLINE       node2                                      
ora.racdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown  
      2        OFFLINE OFFLINE                               Instance Shutdown  
ora.scan1.vip
      1        ONLINE  ONLINE       node1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       node2                                      
ora.scan3.vip
      1        ONLINE  ONLINE       node2                      

With the database down, the CRS-managed stack on both nodes can be stopped with a single command from one node:

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.OCRNEW.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.DATA_NEW.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.FRA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA_NEW.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.OCRNEW.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'node1' succeeded
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.OCRNEW.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.DATA_NEW.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node2'
CRS-2673: Attempting to stop 'ora.cvu' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'node2'
CRS-2677: Stop of 'ora.cvu' on 'node2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.OCRNEW.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA_NEW.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded

Note that crsctl stop cluster -all stops only the CRS-managed resources; Oracle High Availability Services (ohasd, mdnsd, gpnpd, gipcd, osysmond) is still running, so it must be checked and stopped separately on each node:

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online

[root@node1 ~]# ps -ef | grep d.bin
root     14259     1  0 03:11 ?        00:00:10 /data01/app/11204/grid_11204/bin/ohasd.bin reboot
grid     14430     1  0 03:11 ?        00:00:00 /data01/app/11204/grid_11204/bin/mdnsd.bin
grid     14441     1  0 03:11 ?        00:00:01 /data01/app/11204/grid_11204/bin/gpnpd.bin
grid     14452     1  0 03:11 ?        00:00:08 /data01/app/11204/grid_11204/bin/gipcd.bin
root     14466     1  1 03:11 ?        00:00:19 /data01/app/11204/grid_11204/bin/osysmond.bin

[root@node1 ~]# /data01/app/11204/grid_11204/bin/crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

NODE 2:

[root@node2 ~]# /data01/app/11204/grid_11204/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online
[root@node2 ~]# ps -ef | grep d.bin
root      7169     1  0 02:59 ?        00:00:14 /data01/app/11204/grid_11204/bin/ohasd.bin reboot
grid      7303     1  0 02:59 ?        00:00:00 /data01/app/11204/grid_11204/bin/mdnsd.bin
grid      7315     1  0 02:59 ?        00:00:01 /data01/app/11204/grid_11204/bin/gpnpd.bin
grid      7326     1  0 02:59 ?        00:00:10 /data01/app/11204/grid_11204/bin/gipcd.bin
root      7340     1  1 02:59 ?        00:00:38 /data01/app/11204/grid_11204/bin/osysmond.bin
root     14636  5007  0 03:41 pts/0    00:00:00 grep d.bin

[root@node2 ~]# /data01/app/11204/grid_11204/bin/crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.crf' on 'node2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.crf' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node2'
CRS-2677: Stop of 'ora.mdnsd' on 'node2' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node2'
CRS-2677: Stop of 'ora.gpnpd' on 'node2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node2 ~]#
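Once the non-rolling maintenance is finished, the whole stack comes back in the reverse order: start HAS on each node, then start the database in one step. A sketch only; `non_rolling_start` is a hypothetical helper name, it assumes root access to each node over ssh, and the node names, database name, and Grid home path are taken from this environment.

```shell
# Sketch only: restart everything after non-rolling maintenance.
# non_rolling_start is a hypothetical helper name.
GRID_HOME=/data01/app/11204/grid_11204

non_rolling_start() {
  for node in node1 node2; do
    # start Oracle High Availability Services on each node (as root)
    ssh root@"$node" "$GRID_HOME/bin/crsctl" start has
  done
  # once clusterware is up, start both instances with one command (as oracle)
  srvctl start database -d racdb
  srvctl status database -d racdb -v
}
```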

Tuesday 5 July 2016

Send a logfile as an attachment using mailx

$ uuencode test.log test.log | mailx -s "Send a logfile as attachment using mailx" xyz@gmail.com
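The one-liner generalises into a small helper. A sketch, assuming `uuencode` (from the sharutils package) and `mailx` are installed; `send_log` is a hypothetical name. Some mailx implementations (e.g. Heirloom mailx, s-nail) also support `-a file` for a proper MIME attachment, which mail clients render more reliably than uuencoded bodies.

```shell
# Sketch only: mail a logfile as a uuencoded attachment.
# send_log is a hypothetical helper name; requires uuencode (sharutils).
send_log() {
  local file="$1" recipient="$2"
  [ -r "$file" ] || { echo "cannot read $file" >&2; return 1; }
  uuencode "$file" "$(basename "$file")" \
    | mailx -s "Logfile attached: $(basename "$file")" "$recipient"
}
```

Usage: `send_log test.log xyz@gmail.com`.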