
Sunday 20 December 2015

METHOD_OPT: 'FOR ALL INDEXED COLUMNS' vs 'FOR ALL COLUMNS SIZE AUTO'

Create a table with three columns populated with randomly generated numbers. Below are some of the attributes of the table's columns:

SQL> desc TEST
 Name              Type
 ----------------- ----------
 PRIMKEY           NUMBER
 COL2              NUMBER
 COL3              NUMBER


  • Total rows in table = 800000
  • Column PRIMKEY has 800000 distinct values
  • Column COL2 has only 4 distinct values
  • Column COL3 has only 3 distinct values
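
The post does not show the exact DDL, so here is a minimal sketch that reproduces the same row count and value distribution (the literal values 1000-4000 are assumptions, chosen so the counts match the test queries used later):

create table TEST as
select rownum as primkey,
       -- 4 distinct values, heavily skewed: 4000 appears in 660000 rows
       case when rownum <= 660000 then 4000
            else 1000 + 1000 * mod(rownum, 3) end as col2,
       -- 3 distinct values; 30000 of the COL2 = 4000 rows also get COL3 = 3000
       case when rownum <= 30000  then 3000
            when rownum <= 660000 then 1000 + 1000 * mod(rownum, 2)
            else 1000 + 1000 * mod(rownum, 3) end as col3
  from dual
connect by level <= 800000;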


begin
  dbms_stats.gather_table_stats(
    ownname          => 'SYS',
    tabname          => 'TEST',
    estimate_percent => 100,
    cascade          => true);
end;
/

Check column statistics:

SQL> select COLUMN_NAME COL_NAME, NUM_DISTINCT NDV, HISTOGRAM, NUM_BUCKETS,
            to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
     from user_tab_col_statistics where table_name = 'TEST'
     /

COL_NAME      NDV      HISTOGRAM    NUM_BUCKETS  LAST_ANALYZED
---------- --------- ------------- ------------ --------------------
PRIMKEY       800000  NONE                    1  2015-19-12 16:12:22
COL2               4  FREQUENCY               4  2015-19-12 16:12:22
COL3               3  NONE                    1  2015-19-12 16:12:22


Create a unique index on PRIMKEY:

create unique index PK_T1 on TEST(PRIMKEY);

Gather stats: method_opt => 'for all indexed columns'

If you are using FOR ALL INDEXED COLUMNS as part of your METHOD_OPT, you probably should not be. FOR ALL INDEXED COLUMNS tells DBMS_STATS to gather statistics only on columns that participate in an index; no statistics are calculated for the non-indexed columns of the table.

SQL> begin
       dbms_stats.delete_table_stats(user, 'TEST');
       dbms_stats.gather_table_stats(
         ownname          => user,
         tabname          => 'TEST',
         estimate_percent => 100,
         method_opt       => 'for all indexed columns',
         cascade          => true);
     end;
     /

PL/SQL procedure successfully completed.


SQL> select COLUMN_NAME COL_NAME, NUM_DISTINCT NDV, HISTOGRAM, NUM_BUCKETS,
            to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
     from user_tab_col_statistics where table_name = 'TEST'
     /

COL_NAME      NDV      HISTOGRAM        NUM_BUCKETS  LAST_ANALYZED
---------- --------- ---------------- ------------ --------------------
PRIMKEY       800000  HEIGHT BALANCED            75  2015-19-12 16:13:36

Notice that only column PRIMKEY now has statistics; columns COL2 and COL3 have none at all. Also note that when no SIZE clause is specified it defaults to 75 buckets, which is why PRIMKEY received a 75-bucket height-balanced histogram.

It is wasteful to create a histogram on a column that is never referenced by a SQL statement.  It is the SQL workload that determines if a column needs histograms.

Gather stats: method_opt => 'for all indexed columns size auto'

Using METHOD_OPT => 'FOR ALL INDEXED COLUMNS SIZE AUTO' tells DBMS_STATS: "gather stats only on columns that participate in an index, and based on the data distribution and workload of those indexed columns, decide whether a histogram should be created and how many buckets it should contain".

'FOR ALL COLUMNS SIZE AUTO' is the Oracle default, and this option builds histograms on what the optimizer considers to be "important columns". The optimizer decides this by examining the column usage your workload has generated: when you run DBMS_STATS, it checks which columns have appeared in SQL predicates to see which ones might benefit from a histogram.
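
That column-usage information is recorded in the internal (undocumented) dictionary table sys.col_usage$. A quick way to see what Oracle has tracked for our table (a sketch, to be run as a privileged user) is:

select o.name  table_name,
       c.name  column_name,
       u.equality_preds,
       u.equijoin_preds,
       u.range_preds
  from sys.col_usage$ u, sys.obj$ o, sys.col$ c
 where u.obj#    = o.obj#
   and u.obj#    = c.obj#
   and u.intcol# = c.intcol#
   and o.name    = 'TEST';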

SQL> begin
   dbms_stats.delete_table_stats(user,'TEST');

   dbms_stats.gather_table_stats(
     ownname => user ,
     tabname => 'TEST' ,
     estimate_percent => 100 ,
     method_opt => 'for all indexed columns size auto' ,
     cascade => true);
 end;
 /

PL/SQL procedure successfully completed.


SQL>select COLUMN_NAME COL_NAME, NUM_DISTINCT NDV, HISTOGRAM, NUM_BUCKETS, to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
   from user_tab_col_statistics where table_name='TEST'
    /

COL_NAME        NDV     HISTOGRAM    NUM_BUCKETS     LAST_ANALYZED
------------------------------ ------------ --------------- ----------- -----------------------------------------
PRIMKEY            800000        NONE                        1                     2015-19-12 16:14:39
 

Impact of METHOD_OPT on cardinality estimates and access path


CASE 1: method_opt => 'for all indexed columns size auto' with a single equality predicate in the query

SQL> select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 ;

  COUNT(*)
----------
    660000

SQL>  select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
SQL_ID 288h7750wj3z1, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from test where COL2=4000
Plan hash value: 1950795681
-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.27 |    1856 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.27 |    1856 |
|*  2 |   TABLE ACCESS FULL| TEST |      1 |   8000 |    660K|00:00:00.18 |    1856 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
  2 - filter("COL2"=4000)


Notice the E-Rows estimate for TEST: the optimizer estimates 8000 rows when in reality there are 660000. The estimate is off by more than an order of magnitude (about 80x).

Normally the cardinality calculation for a single-table equality predicate would be: number of rows in TEST * 1/NDV = 800000 * 1/4 = 200K, yet the estimate is 8K. Because there are no column statistics for TEST.COL2, the optimizer cannot use that formula and instead falls back on a fixed default selectivity of 1%, giving 800000 * 0.01 = 8000.

CASE 2: Now that we have seen how poor the cardinality estimate was with a single equality predicate, let's see what two equality predicates give us.

SQL> select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 and col3=3000 ;

  COUNT(*)
----------
     30000

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID 216fcwk49zkn2, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 and col3=3000 ;

Plan hash value: 1950795681

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.03 |    1856 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.03 |    1856 |
|*  2 |   TABLE ACCESS FULL| TEST |      1 |     80 |  30000 |00:00:00.03 |    1856 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("COL2"=4000 AND "COL3"=3000))


This time the estimate is 80 rows against an actual 30000: with no statistics on either column, the optimizer simply multiplies its two 1% default selectivities (800000 * 0.01 * 0.01 = 80). Hopefully you can see where this is going. If you don't, here is the all too common chain of events:

Non-representative (or missing) statistics lead to:

---> Poor cardinality estimates, which lead to
---> Poor access path selection, which leads to
---> Poor join method selection, which leads to
---> Poor join order selection, which leads to
---> Poor SQL execution times

CASE 3: method_opt => 'for all columns size auto' with a single equality predicate

In this case statistics are calculated not only for the indexed columns but for all table columns.
Now let us check the cardinality estimate with a single predicate in the query.
This time the query gets a spot-on cardinality estimate; see the details below:

SQL> begin
       dbms_stats.delete_table_stats(user, 'TEST');
       dbms_stats.gather_table_stats(
         ownname          => user,
         tabname          => 'TEST',
         estimate_percent => 100,
         degree           => 8,
         method_opt       => 'for all columns size auto',
         cascade          => true);
     end;
     /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME COL_NAME, NUM_DISTINCT NDV, HISTOGRAM, NUM_BUCKETS,
            to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
     from user_tab_col_statistics where table_name = 'TEST'
     /

COL_NAME      NDV      HISTOGRAM    NUM_BUCKETS  LAST_ANALYZED
---------- --------- ------------- ------------ --------------------
PRIMKEY       800000  NONE                    1  2015-19-12 16:34:58
COL2               4  FREQUENCY               4  2015-19-12 16:34:58
COL3               3  FREQUENCY               3  2015-19-12 16:34:58


SQL> select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 ;

  COUNT(*)
----------
    660000

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID 288h7750wj3z1, child number 1
-----------------------------------------------------------------------------------
select /*+ gather_plan_statistics */ count(*) from test where COL2=4000

Plan hash value: 1950795681
-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.14 |    1856 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.14 |    1856 |
|*  2 |   TABLE ACCESS FULL| TEST |      1 |   660K |   660K |00:00:00.09 |    1856 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("COL2"=4000)

With the frequency histogram in place the optimizer knows exactly how many rows carry the value 4000, so the 660K estimate matches the actual row count.

CASE 4: method_opt => 'for all columns size auto' with two equality predicates

In this case statistics are again calculated for all table columns.
Now let us check the cardinality estimate with multiple predicates in the query.
Once more the query gets a spot-on cardinality estimate; see the details below:

SQL> begin
       dbms_stats.delete_table_stats(user, 'TEST');
       dbms_stats.gather_table_stats(
         ownname          => user,
         tabname          => 'TEST',
         estimate_percent => 100,
         degree           => 8,
         method_opt       => 'for all columns size auto',
         cascade          => true);
     end;
     /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME COL_NAME, NUM_DISTINCT NDV, HISTOGRAM, NUM_BUCKETS,
            to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
     from user_tab_col_statistics where table_name = 'TEST'
     /

COL_NAME      NDV      HISTOGRAM    NUM_BUCKETS  LAST_ANALYZED
---------- --------- ------------- ------------ --------------------
PRIMKEY       800000  NONE                    1  2015-19-12 16:34:58
COL2               4  FREQUENCY               4  2015-19-12 16:34:58
COL3               3  FREQUENCY               3  2015-19-12 16:34:58


SQL> select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 and col3=3000 ;

  COUNT(*)
----------
     30000

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID 216fcwk49zkn2, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from test where COL2=4000 and col3=3000

Plan hash value: 1950795681

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.03 |    1856 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.03 |    1856 |
|*  2 |   TABLE ACCESS FULL| TEST |      1 |  30000 |  30000 |00:00:00.03 |    1856 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("COL3"=3000 AND "COL2"=4000))

Thursday 17 December 2015

Oracle Clustering factor

The clustering factor statistic is used by the optimizer during query optimization to determine the relative efficiency of an index. In short, the index clustering factor is a measure of how many I/Os the database would perform if it were to read every row in the table via the index, in index order.

If the rows of a table on disk are sorted in about the same order as the index keys, the database will perform a minimum number of I/Os on the table to read the entire table via the index. That is because the next row needed from an index key would likely be the next row in the table block as well. The query would not be skipping all over the table to find row after row—they are naturally next to each other on the block.

Conversely, if the rows in the table are not in the same order on disk as the index keys—if the data is scattered—the query will tend to perform the maximum number of I/Os on the table, as many as one I/O for every row in the table. That is because as the database scans through the index, the next row needed will probably not be on the same block as the last row. The database will have to discard that block and get another block from the buffer cache. The query will end up reading every block from the buffer as many times as it has rows on it.

So if a table and an index key are in about the same order, the clustering factor will be near the number of blocks in the table and the index will be useful for very large index range scans and for retrieving numerous rows from the table. 

On the other hand, if the data is randomly scattered, the clustering factor will be near the number of rows in the table, and given that the number of rows in a table is usually at least an order of magnitude more than the number of blocks, the index will be less efficient for returning numerous rows.

Example of clustering factor and query cost :
================================

---> Create a table with data sorted on column object_name :

SQL> create table GoodCF as select x.* from ( select * from all_objects order by object_name)x ;

Table created.

---> Create a table with disorganized/scattered data, using dbms_random to order the rows randomly:

SQL> create table Scattered as select x.* from ( select * from all_objects order by dbms_random.random)x ;

Table created.

---> Create an index on the sorted column:

SQL> create index GoodCF_IDX on GoodCF(object_name) ;

Index created.

---> Create an index on the scattered data column:

SQL> create index Scattered_IDX on Scattered(object_name) ;

Index created.

---> Gather stats on table GOODCF:

SQL> begin dbms_stats.gather_table_stats ( 'SYS', 'GOODCF',estimate_percent => 100,method_opt=>'for all indexed columns size 254'); end ;
  /
PL/SQL procedure successfully completed.

----> Gather stats on table SCATTERED:

SQL> begin dbms_stats.gather_table_stats ( 'SYS', 'SCATTERED',estimate_percent => 100,method_opt=>'for all indexed columns size 254'); end ;
    /
PL/SQL procedure successfully completed.

----> Let's check the number of blocks and rows for both tables. They hold the same data and have the same number of blocks:

SQL> select table_name, blocks, num_rows from dba_tables where table_name in ('GOODCF','SCATTERED') order by 1;

TABLE_NAME     BLOCKS   NUM_ROWS
------------ -------- ----------
GOODCF           1020      71713
SCATTERED        1020      71714

----> Let's check the CLUSTERING_FACTOR for both indexes. There is a large difference between the two: the GOODCF_IDX index has a clustering factor very near the number of blocks in the table, whereas the SCATTERED_IDX index has a clustering factor near the number of rows in the table.

Again, this clustering factor metric is a measure of how many I/Os the database will perform against the table in order to read every row via the index. We can verify this with the example below:

SQL> select table_name, index_name, clustering_factor from dba_indexes where table_name in ('GOODCF','SCATTERED') order by 1;

TABLE_NAME    INDEX_NAME       CLUSTERING_FACTOR
------------- --------------- ------------------
GOODCF        GOODCF_IDX                    1020
SCATTERED     SCATTERED_IDX                71640
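
Conceptually, the clustering factor is derived by walking the index in key order and counting how often the table block changes from one index entry to the next. We can approximate that calculation ourselves; the query below is an illustrative sketch (it ignores the relative file number and assumes non-null keys), not the exact internal algorithm:

select sum(block_changed) + 1 as approx_clustering_factor
  from (select case
                 when dbms_rowid.rowid_block_number(rowid) <>
                      lag(dbms_rowid.rowid_block_number(rowid))
                        over (order by object_name, rowid)
                 then 1 else 0
               end as block_changed
          from GOODCF
         where object_name is not null);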

SQL> show parameter user_dump_dest

NAME             TYPE VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest   string /data01/app/oracle/diag/rdbms/ggtrg/ggtrg/trace

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever,level 12';

Session altered.

SQL> ALTER SESSION SET tracefile_identifier='MYTRACE_GoodCF' ;

Session altered.

SQL> select /*+ index( GOODCF GOODCF_IDX) */ count(subobject_name) from GOODCF;

COUNT(SUBOBJECT_NAME)
---------------------
 457

SQL> ALTER SESSION SET EVENTS '10046 trace name context off';

Session altered.

========================================================================
I/O performed against the index   = 357 I/Os
Total I/Os performed by the query = 1377 I/Os
Note : 1377 - 357 = 1020  ( equal to the clustering factor of the index GOODCF_IDX )
========================================================================

PARSING IN CURSOR #1 len=73 dep=0 uid=0 oct=3 lid=0 tim=1450372523290892 hv=966910787 ad='a266a890' sqlid='9d45484wu3su3'
select /*+ index( GOODCF GOODCF_IDX) */ count(subobject_name) from GOODCF
END OF STMT
PARSE #1:c=0,e=69,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2836254517,tim=1450372523290891
EXEC #1:c=0,e=39,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2836254517,tim=1450372523290955
WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1450372523291015
*** 2015-12-17 12:15:23.334
FETCH #1:c=38994,e=43776,p=0,cr=1377,cu=0,mis=0,r=1,dep=0,og=1,plh=2836254517,tim=1450372523334808
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=1377 pr=0 pw=0 time=0 us)'
STAT #1 id=2 cnt=71713 pid=1 pos=1 obj=74669 op='TABLE ACCESS BY INDEX ROWID GOODCF (cr=1377 pr=0 pw=0 time=37284 us cost=1378 size=1219121 card=71713)'
STAT #1 id=3 cnt=71713 pid=2 pos=1 obj=74671 op='INDEX FULL SCAN GOODCF_IDX (cr=357 pr=0 pw=0 time=11189 us cost=358 size=0 card=71713)'
WAIT #1: nam='SQL*Net message from client' ela= 193 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1450372523335123
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=2836254517,tim=1450372523335144
WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1450372523335155
*** 2015-12-17 12:15:33.973
WAIT #1: nam='SQL*Net message from client' ela= 10637912 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1450372533973100
CLOSE #1:c=1000,e=763,dep=0,type=0,tim=1450372533973915
========================================================================

SQL> show parameter user_dump_dest

NAME             TYPE VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest   string /data01/app/oracle/diag/rdbms/ggtrg/ggtrg/trace


SQL> ALTER SESSION SET EVENTS '10046 trace name context forever,level 12';

Session altered.

SQL> ALTER SESSION SET tracefile_identifier='MYTRACE_SCATTERED' ;

Session altered.

SQL> select /*+ index(SCATTERED SCATTERED_IDX) */ count(subobject_name) from SCATTERED;

COUNT(SUBOBJECT_NAME)
---------------------
 457

SQL> ALTER SESSION SET EVENTS '10046 trace name context off';

Session altered.

========================================================================
I/O performed against the index   = 357 I/Os
Total I/Os performed by the query = 71997 I/Os
Note : 71997 - 357 = 71640  ( equal to the clustering factor of the index SCATTERED_IDX )
========================================================================

PARSING IN CURSOR #1 len=81 dep=0 uid=0 oct=3 lid=0 tim=1450373201636876 hv=15917473 ad='a24955d0' sqlid='8fzruxc0g5sd1'
select /*+ index(SCATTERED SCATTERED_IDX) */ count(subobject_name) from SCATTERED
END OF STMT
FETCH #1:c=117983,e=124417,p=1377,cr=71997,cu=0,mis=0,r=1,dep=0,og=1,plh=300164232,tim=1450373201761488
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=71997 pr=1377 pw=0 time=0 us)'
STAT #1 id=2 cnt=71714 pid=1 pos=1 obj=74670 op='TABLE ACCESS BY INDEX ROWID SCATTERED (cr=71997 pr=1377 pw=0 time=116666 us cost=72014 size=1219138 card=71714)'
STAT #1 id=3 cnt=71714 pid=2 pos=1 obj=74672 op='INDEX FULL SCAN SCATTERED_IDX (cr=357 pr=357 pw=0 time=19344 us cost=358 size=0 card=71714)'
WAIT #1: nam='SQL*Net message from client' ela= 154 driver id=1650815232 #bytes=1 p3=0 obj#=74672 tim=1450373201761751
FETCH #1:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=300164232,tim=1450373201761780
WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=74672 tim=1450373201761790
========================================================================

Saturday 12 December 2015

AWR : Response Time Per Txn

select min(begin_time), max(end_time),
       sum(case metric_name when 'Response Time Per Txn' then average end) ResponseTimePerTxn,
       snap_id
from dba_hist_sysmetric_summary
group by snap_id
order by snap_id;
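
AWR retention can span weeks, so it is often handy to restrict the output. A variant limited to the last seven days (the window size is an arbitrary choice):

select min(begin_time), max(end_time),
       sum(case metric_name when 'Response Time Per Txn' then average end) ResponseTimePerTxn,
       snap_id
from dba_hist_sysmetric_summary
where begin_time > sysdate - 7
group by snap_id
order by snap_id;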


TPS Historical

WITH hist_snaps
     AS (SELECT instance_number,
                snap_id,
                round(begin_interval_time,'MI') datetime,
                  (  begin_interval_time
                   + 0
                   - LAG (
                        begin_interval_time + 0)
                     OVER (PARTITION BY dbid, instance_number
                           ORDER BY snap_id))
                * 86400      -- seconds per day
                   diff_time -- calculate difference in time from
                             -- previous snapshot
                   FROM dba_hist_snapshot )
   , hist_stats
     AS (SELECT dbid,
                instance_number,
                snap_id,
                stat_name,
                  VALUE
                - LAG (
                     VALUE)
                  OVER (PARTITION BY dbid,instance_number,stat_name
                        ORDER BY snap_id)
                  delta_value -- difference in value from previous
                              -- snapshot for each stat_name
           FROM dba_hist_sysstat
          WHERE stat_name IN ('user commits', 'user rollbacks'))
  SELECT to_char(cast(datetime as date ), 'YYYY/MM/DD HH24:MI'),
         ROUND (SUM (delta_value) / 3600, 2) "Transactions/s"
     FROM hist_snaps sn, hist_stats st
   WHERE     st.instance_number = sn.instance_number
         AND st.snap_id = sn.snap_id
         AND diff_time IS NOT NULL
GROUP BY datetime
ORDER BY 1 desc;
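
Note that the final SELECT divides by a hard-coded 3600, which assumes hourly AWR snapshots. Since diff_time (the actual number of seconds between snapshots) is already computed in hist_snaps, a sketch of a more robust final SELECT would be:

SELECT to_char(cast(datetime as date), 'YYYY/MM/DD HH24:MI'),
       ROUND (SUM (delta_value) / MAX (diff_time), 2) "Transactions/s"
  FROM hist_snaps sn, hist_stats st
 WHERE st.instance_number = sn.instance_number
   AND st.snap_id = sn.snap_id
   AND diff_time IS NOT NULL
GROUP BY datetime
ORDER BY 1 desc;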

Tuesday 1 December 2015

How Do Indexes Become Unusable

Maintenance operations causing index partitions to become INDEX UNUSABLE (IU):

   1. IMPORT PARTITION or conventional path SQL*Loader.

   2. Direct-path SQL*Loader leaves affected local index partitions and  global indexes in an IU state if it does not complete successfully.

   3. Partition maintenance operations like ALTER TABLE MOVE PARTITION.

   4. Partition maintenance operations like ALTER TABLE TRUNCATE PARTITION.

   5. Partition maintenance operations like ALTER TABLE SPLIT PARTITION.

   6. Index maintenance operations like ALTER INDEX SPLIT PARTITION.

   7. Adding a Partition to a Hash-Partitioned Table

Adding a Partition to a Hash-Partitioned Table

When you add a partition to a hash-partitioned table, the database populates the new partition with rows rehashed from an existing partition (selected by the database) as determined by the hash function. Consequently, if the table contains data, then it may take some time to add a hash partition.

The following statements show two ways of adding a hash partition to table scubagear. Choosing the first statement adds a new hash partition whose partition name is system generated, and which is placed in the default tablespace. The second statement also adds a new hash partition, but that partition is explicitly named p_named and is created in tablespace gear5.

ALTER TABLE scubagear ADD PARTITION;

ALTER TABLE scubagear ADD PARTITION p_named TABLESPACE gear5;

Indexes may be marked UNUSABLE as explained in the following table:

Table Type        Index Behavior
----------------  --------------------------------------------------------------
Regular (Heap)    Unless you specify UPDATE INDEXES as part of the ALTER TABLE
                  statement:
                  - The local indexes for the new partition, and for the existing
                    partition from which rows were redistributed, are marked
                    UNUSABLE and must be rebuilt.
                  - All global indexes, or all partitions of partitioned global
                    indexes, are marked UNUSABLE and must be rebuilt.

Index-organized   For local indexes, the behavior is identical to heap tables.
                  All global indexes remain usable.

Each of these operations may cause index partitions to be marked IU which will require the index partitions to be rebuilt.

Version 10.2
=============
By default, many table maintenance operations on partitioned tables invalidate (mark UNUSABLE) the corresponding indexes or index partitions. You must then
rebuild the entire index or, in the case of a global index, each of its partitions. The database lets you override this default behavior if you specify
UPDATE INDEXES in your ALTER TABLE statement for the maintenance operation. Specifying this clause tells the database to update the index at the time it
executes the maintenance operation DDL statement.

The following operations support the UPDATE INDEXES clause:
    *  ADD PARTITION | SUBPARTITION
    *  COALESCE PARTITION | SUBPARTITION
    *  DROP PARTITION | SUBPARTITION
    *  EXCHANGE PARTITION | SUBPARTITION
    *  MERGE PARTITION | SUBPARTITION
    *  MOVE PARTITION | SUBPARTITION
    *  SPLIT PARTITION | SUBPARTITION
    *  TRUNCATE PARTITION | SUBPARTITION
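
For example, here is a sketch against a hypothetical range-partitioned table named SALES, showing a maintenance operation with and without UPDATE INDEXES, plus the queries to find and rebuild anything left UNUSABLE (all object names are illustrative):

-- without UPDATE INDEXES, this marks dependent indexes/partitions UNUSABLE
ALTER TABLE sales MOVE PARTITION p2015 TABLESPACE new_ts;

-- the same operation, keeping the indexes usable (slower, but no rebuild needed)
ALTER TABLE sales MOVE PARTITION p2015 TABLESPACE new_ts UPDATE INDEXES;

-- locate anything that was invalidated
SELECT index_name, status FROM user_indexes WHERE status = 'UNUSABLE';
SELECT index_name, partition_name, status
  FROM user_ind_partitions WHERE status = 'UNUSABLE';

-- rebuild as needed
ALTER INDEX sales_gidx REBUILD;
ALTER INDEX sales_lidx REBUILD PARTITION p2015;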

Reference : My Oracle Support (Metalink)

Saturday 28 November 2015

-bash: /usr/bin/find: Argument list too long



[oracle@test  adump]$

 find /u01/app/oracle/admin/prod/adump/prod_ora_*.aud -mtime +365 -exec rm {} \;

-bash: /usr/bin/find: Argument list too long

The error comes from the shell, not from find: the unquoted wildcard prod_ora_*.aud is expanded by bash before find ever runs, and the resulting argument list exceeds the kernel's limit. The fix is to pass only the directory to find and let find do the matching itself via a quoted -name pattern:

Solution :
------------

[oracle@test  adump]$

find /u01/app/oracle/admin/prod/adump/ -name 'prod_ora_*.aud'  -mtime +365  -exec /bin/rm {} \;




Tuesday 27 October 2015

Install MongoDB on Redhat 6

Compiled versions of MongoDB for Linux provide a simple option for installing MongoDB for other Linux systems without supported packages.

Install for 64-bit Linux

1. Download the binary files for the desired release of MongoDB from https://www.mongodb.org/downloads.

2. Extract the files from the downloaded archive.

[oracle@node1 mongoDB]$ tar -zxvf mongodb-linux-x86_64-rhel62-3.0.7.tgz
mongodb-linux-x86_64-rhel62-3.0.7/README
mongodb-linux-x86_64-rhel62-3.0.7/THIRD-PARTY-NOTICES
mongodb-linux-x86_64-rhel62-3.0.7/GNU-AGPL-3.0
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongodump
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongorestore
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongoexport
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongoimport
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongostat
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongotop
mongodb-linux-x86_64-rhel62-3.0.7/bin/bsondump
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongofiles
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongooplog
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongoperf
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongod
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongos
mongodb-linux-x86_64-rhel62-3.0.7/bin/mongo


3. Copy the extracted archive to the target directory.
Copy the extracted folder to the location from which MongoDB will run.

mkdir -p  /data01/mongoDB/mongodb/
cp -R -n mongodb-linux-x86_64-rhel62-3.0.7/ /data01/mongoDB/mongodb/


4. Ensure the location of the binaries is in the PATH variable.
The MongoDB binaries are in the bin/ directory of the archive. To ensure that the binaries are in your PATH, you can modify your PATH.

For example, you can add the following line to your shell’s rc file (e.g. ~/.bashrc):

export PATH=<mongodb-install-directory>/bin:$PATH
Replace <mongodb-install-directory> with the path to the extracted MongoDB archive.


export PATH=/data01/mongoDB/mongodb/bin:$PATH


Run MongoDB

1. Create the data directory.

Before you start MongoDB for the first time, create the directory to which the mongod process will write data. By default, the mongod process uses the /data/db directory. If you create a directory other than this one, you must specify that directory in the dbpath option when starting the mongod process later in this procedure.

The following example command creates a non-default data directory (the default location is /data/db):

mkdir -p /data01/mongoDB/db/

2. Set permissions for the data directory.

Before running mongod for the first time, ensure that the user account running mongod has read and write permissions for the directory.
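
For example (assuming, as elsewhere in this post, that the oracle user in the oinstall group will run mongod):

chown oracle:oinstall /data01/mongoDB/db
chmod 770 /data01/mongoDB/db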

3. Run MongoDB.
To run MongoDB, run the mongod process at the system prompt. If necessary, specify the path of the mongod or the data directory. See the following examples.

Run without specifying paths:
If your system PATH variable includes the location of the mongod binary and you use the default data directory (i.e., /data/db), simply enter mongod at the system prompt:

mongod

Specify the path of the mongod binary:
If your PATH does not include the location of the mongod binary, enter the full path to the mongod binary at the system prompt:

<path to binary>/mongod

[oracle@node1 ~]$ which mongod

/data01/mongoDB/mongodb/bin/mongod


Specify the path of the data directory
If you do not use the default data directory (i.e., /data/db), specify the path to the data directory using the --dbpath option:

[oracle@node1 ~]$ mongod --dbpath /data01/mongoDB/db/

mongod: /usr/lib64/libssl.so.10: no version information available (required by mongod)
mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)
mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)
2015-10-27T01:26:59.213-0400 I JOURNAL  [initandlisten] journal dir=/data01/mongoDB/db/journal
2015-10-27T01:26:59.213-0400 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
2015-10-27T01:26:59.583-0400 I JOURNAL  [initandlisten] preallocateIsFaster=true 4.26
2015-10-27T01:26:59.854-0400 I JOURNAL  [initandlisten] preallocateIsFaster=true 3
2015-10-27T01:27:01.562-0400 I JOURNAL  [initandlisten] preallocateIsFaster=true 10.6
2015-10-27T01:27:01.562-0400 I JOURNAL  [initandlisten] preallocating a journal file /data01/mongoDB/db/journal/prealloc.0
2015-10-27T01:27:08.844-0400 I -        [initandlisten]   File Preallocator Progress: 471859200/1073741824 43%
2015-10-27T01:27:11.207-0400 I -        [initandlisten]   File Preallocator Progress: 681574400/1073741824 63%
2015-10-27T01:27:14.075-0400 I -        [initandlisten]   File Preallocator Progress: 775946240/1073741824 72%
2015-10-27T01:27:17.134-0400 I -        [initandlisten]   File Preallocator Progress: 859832320/1073741824 80%
2015-10-27T01:27:21.530-0400 I -        [initandlisten]   File Preallocator Progress: 933232640/1073741824 86%
2015-10-27T01:27:26.223-0400 I -        [initandlisten]   File Preallocator Progress: 996147200/1073741824 92%
2015-10-27T01:27:37.468-0400 I JOURNAL  [initandlisten] preallocating a journal file /data01/mongoDB/db/journal/prealloc.1
2015-10-27T01:27:40.330-0400 I -        [initandlisten]   File Preallocator Progress: 471859200/1073741824 43%
2015-10-27T01:27:43.282-0400 I -        [initandlisten]   File Preallocator Progress: 566231040/1073741824 52%
2015-10-27T01:27:47.742-0400 I -        [initandlisten]   File Preallocator Progress: 608174080/1073741824 56%
2015-10-27T01:27:50.298-0400 I -        [initandlisten]   File Preallocator Progress: 744488960/1073741824 69%
2015-10-27T01:27:53.087-0400 I -        [initandlisten]   File Preallocator Progress: 817889280/1073741824 76%
2015-10-27T01:27:56.464-0400 I -        [initandlisten]   File Preallocator Progress: 901775360/1073741824 83%
2015-10-27T01:27:59.441-0400 I -        [initandlisten]   File Preallocator Progress: 964689920/1073741824 89%
2015-10-27T01:28:02.206-0400 I -        [initandlisten]   File Preallocator Progress: 1038090240/1073741824 96%
2015-10-27T01:28:17.693-0400 I JOURNAL  [initandlisten] preallocating a journal file /data01/mongoDB/db/journal/prealloc.2
2015-10-27T01:28:20.164-0400 I -        [initandlisten]   File Preallocator Progress: 492830720/1073741824 45%
2015-10-27T01:28:23.287-0400 I -        [initandlisten]   File Preallocator Progress: 566231040/1073741824 52%
2015-10-27T01:28:26.361-0400 I -        [initandlisten]   File Preallocator Progress: 671088640/1073741824 62%
2015-10-27T01:28:29.421-0400 I -        [initandlisten]   File Preallocator Progress: 765460480/1073741824 71%
2015-10-27T01:28:32.484-0400 I -        [initandlisten]   File Preallocator Progress: 849346560/1073741824 79%
2015-10-27T01:28:35.216-0400 I -        [initandlisten]   File Preallocator Progress: 954204160/1073741824 88%
2015-10-27T01:28:38.353-0400 I -        [initandlisten]   File Preallocator Progress: 1038090240/1073741824 96%
2015-10-27T01:28:54.789-0400 I JOURNAL  [durability] Durability thread started
2015-10-27T01:28:54.789-0400 I JOURNAL  [journal writer] Journal writer thread started
2015-10-27T01:28:54.815-0400 I CONTROL  [initandlisten] MongoDB starting : pid=3131 port=27017 dbpath=/data01/mongoDB/db/ 64-bit host=node1.localdomain
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] db version v3.0.7
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] allocator: tcmalloc
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] options: { storage: { dbPath: "/data01/mongoDB/db/" } }
2015-10-27T01:28:54.922-0400 I INDEX    [initandlisten] allocating new ns file /data01/mongoDB/db/local.ns, filling with zeroes...
2015-10-27T01:28:55.376-0400 I STORAGE  [FileAllocator] allocating new datafile /data01/mongoDB/db/local.0, filling with zeroes...
2015-10-27T01:28:55.376-0400 I STORAGE  [FileAllocator] creating directory /data01/mongoDB/db/_tmp
2015-10-27T01:28:55.554-0400 I STORAGE  [FileAllocator] done allocating datafile /data01/mongoDB/db/local.0, size: 64MB,  took 0.156 secs
2015-10-27T01:28:55.569-0400 I NETWORK  [initandlisten] waiting for connections on port 27017

[oracle@node1 sf_SOFTWARES]$ ps -ef| grep mongo
oracle    3131  2884  2 01:26 pts/0    00:00:03 mongod --dbpath /data01/mongoDB/db/


4. Begin using MongoDB.

[oracle@node1 bin]$ ./mongo

./mongo: /usr/lib64/libssl.so.10: no version information available (required by ./mongo)
./mongo: /usr/lib64/libcrypto.so.10: no version information available (required by ./mongo)
./mongo: /usr/lib64/libcrypto.so.10: no version information available (required by ./mongo)
MongoDB shell version: 3.0.7
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-10-27T01:28:54.823-0400 I CONTROL  [initandlisten]
>
> db
test

Saturday 10 October 2015

Configure DDL replication for GoldenGate

[oracle@node1 ggs]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (node1.localdomain) 1> edit params ./GLOBALS
GGSCHEMA ggadmin

SQL> alter system set recyclebin=off deferred ;

System altered.


SQL> grant restricted session to ggadmin ;

Grant succeeded.

[oracle@node1 ggs]$ pwd
/data01/oracle/ggs
[oracle@node1 ggs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Oct 10 03:34:49 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @marker_setup.sql

Marker setup script

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGADMIN


Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGADMIN

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK

Script complete.


SQL> @ddl_setup.sql

Oracle GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGADMIN

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...

Check complete.



Using GGADMIN as a Oracle GoldenGate schema name.

Working, please wait ...

DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGADMIN

CLEAR_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CREATE_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

TRACE_PUT_LINE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

INITIAL_SETUP STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL IGNORE TABLE
-----------------------------------
OK

DDL IGNORE LOG TABLE
-----------------------------------
OK

DDLAUX PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLAUX PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SYS.DDLCTXINFO PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

SYS.DDLCTXINFO PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
-----------------------------------
ENABLED

STAYMETADATA IN TRIGGER
-----------------------------------
OFF

DDL TRIGGER SQL TRACING
-----------------------------------
0

DDL TRIGGER TRACE LEVEL
-----------------------------------
0

LOCATION OF DDL TRACE FILE
------------------------------------------------------------------------------------------------------------------------
/data01/app/oracle/diag/rdbms/ggsrc/ggsrc/trace/ggs_ddl_trace.log

Analyzing installation status...


STATUS OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.
SQL>

SQL> @role_setup.sql

GGS Role setup script

This script will drop and recreate the role GGS_GGSUSER_ROLE
To use a different role name, quit this script and then edit the params.sql script to change the gg_role parameter to the preferred name. (Do not run the script.)

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGADMIN
Wrote file role_setup_set.txt

PL/SQL procedure successfully completed.


Role setup script complete

Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

where <loggedUser> is the user assigned to the GoldenGate processes.

SQL> grant GGS_GGSUSER_ROLE to HR ;

Grant succeeded.

SQL> revoke GGS_GGSUSER_ROLE from ggadmin ;

Revoke succeeded.

SQL> @ddl_enable.sql

Trigger altered.

Oracle is now ready to replicate DDL operations along with DML operations. You can now configure the extract process with the DDLOPTIONS parameter to control DDL replication for that extract.
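
For reference, a minimal sketch of what such an extract parameter file might look like (the group name, trail, and the specific DDL options chosen here are illustrative, not taken from this post):

EXTRACT EXHR01
USERID ggadmin, PASSWORD ggadmin
DDL INCLUDE MAPPED
DDLOPTIONS ADDTRANDATA, REPORT
EXTTRAIL ./dirdat/hr
TABLE HR.*;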

Create dblink for a user

declare
  l_uid   number;
  sqltext varchar2(1000) := 'CREATE DATABASE LINK TEST CONNECT TO "TEST_USER" IDENTIFIED BY "TestPWD" USING ''TEST_TNS''' ;
  myint   integer;
begin
  -- look up the user that should own the database link
  select user_id into l_uid from all_users where username = 'MYUSER';
  myint := sys.dbms_sys_sql.open_cursor();
  -- parsing DDL with dbms_sys_sql executes it as the target user,
  -- so no explicit execute call is needed
  sys.dbms_sys_sql.parse_as_user(myint, sqltext, dbms_sql.native, l_uid);
  sys.dbms_sys_sql.close_cursor(myint);
end;
/


SQL> select host, db_link, owner from dba_db_links where db_link like '%TEST%' ;

HOST         DB_LINK      OWNER
------------ ------------ ------------
TEST_TNS     TEST         MYUSER

Sunday 27 September 2015

Stop oem agent and oms from autostartup on reboot

For the OEM agent, the location of the home-list file (the EM equivalent of oratab) depends on the OS.

The general EM startup script is /etc/rc.d/init.d/gcstartup.

From that script you can find the path of the home-list file used by the EM agent:
-------------------------------------------------------------------------------

if [ -f /etc/oragchomelist ]
then
     oratab="/etc/oragchomelist"
else
oratab="/var/opt/oracle/oragchomelist"
fi

Just comment out the agent home and OMS home paths in the file /etc/oragchomelist to prevent them from auto-starting at server reboot:

[root@grid12c ~]# cat /etc/oragchomelist

#/u01/app/oracle/oms12cr2/oms
#/u01/app/oracle/agent12cr2/core/12.1.0.2.0

Sunday 20 September 2015

GoldenGate Extract Process failed with : OGG-00717 Found unsupported in-memory undo record

GGSCI (node1.localdomain as ggadmin@ggsrc) 4> start extract exhr01

Sending START request to MANAGER ...
EXTRACT EXHR01 starting


GGSCI (node1.localdomain as ggadmin@ggsrc) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                        
EXTRACT     ABENDED     EXHR01      00:28:25      00:00:03  


GGSCI (node1.localdomain as ggadmin@ggsrc) 6> view report exhr01


***********************************************************************
                 Oracle GoldenGate Capture for Oracle
   Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
   Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:40:21

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2015-09-20 12:26:29
***********************************************************************

Operating System Version:
Linux
Version #1 SMP Fri Feb 22 18:16:18 PST 2013, Release 2.6.39-400.17.1.el6uek.x86_64
Node: node1.localdomain
Machine: x86_64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :    unlimited    unlimited
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 5706

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2015-09-20 12:26:29  INFO    OGG-03059  Operating system character set identified as UTF-8.

2015-09-20 12:26:29  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.
EXTRACT EXHR01
SETENV (ORACLE_SID=ggsrc)

2015-09-20 12:26:29  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=ggsrc.
USERID ggadmin , PASSWORD *******

2015-09-20 12:26:30  INFO    OGG-03522  Setting session time zone to source database time zone 'GMT'.
EXTTRAIL ./dirdat/hr
TABLE HR.* ;

2015-09-20 12:26:30  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /data01/oracle/ggs/BR/EXHR01.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR      = /data01/oracle/ggs

2015-09-20 12:26:30  INFO    OGG-01851  filecaching started: thread ID: 140605740996352.

2015-09-20 12:26:30  INFO    OGG-01815  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /data01/oracle/ggs/dirtmp.

CACHEMGR virtual memory values (may have been adjusted)
CACHESIZE:                               64G
CACHEPAGEOUTSIZE (default):               8M
PROCESS VM AVAIL FROM OS (min):         128G
CACHESIZEMAX (strict force to disk):     96G

Database Version:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
PL/SQL Release 12.1.0.1.0 - Production
CORE 12.1.0.1.0 Production
TNS for Linux: Version 12.1.0.1.0 - Production
NLSRTL Version 12.1.0.1.0 - Production

Database Language and Character Set:
NLS_LANGUAGE     = "AMERICAN"
NLS_TERRITORY    = "AMERICA"
NLS_CHARACTERSET = "WE8MSWIN1252"

2015-09-20 12:26:30  INFO    OGG-01513  Positioning to Sequence 9, RBA 5464080, SCN 0.0.

2015-09-20 12:26:30  INFO    OGG-01516  Positioned to Sequence 9, RBA 5464080, SCN 0.0, Sep 20, 2015 11:58:05 AM.

2015-09-20 12:26:30  INFO    OGG-01055  Recovery initialization completed for target file ./dirdat/hr000001, at RBA 1474.

2015-09-20 12:26:30  INFO    OGG-01478  Output file ./dirdat/hr is using format RELEASE 12.1.

2015-09-20 12:26:30  INFO    OGG-01026  Rolling over remote file ./dirdat/hr000001.

2015-09-20 12:26:30  INFO    OGG-01053  Recovery completed for target file ./dirdat/hr000002, at RBA 1474.

2015-09-20 12:26:30  INFO    OGG-01057  Recovery completed for all targets.

***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2015-09-20 12:26:30  INFO    OGG-01517  Position of first record processed Sequence 9, RBA 5464080, SCN 0.1863330, Sep 20, 2015 11:59:57 AM.

Source Context :
  SourceModule            : [er.redo.ora.rdr]
  SourceID                : [/scratch/aime1/adestore/views/aime1_adc4150384/oggcore/OpenSys/src/app/er/redo/oracle/redoorardr.c]
  SourceFunction          : [REDOORARDR_get_next_record]
  SourceLine              : [6988]
  ThreadBacktrace         : [14] elements
                          : [/data01/oracle/ggs/libgglog.so(CMessageContext::AddThreadContext()+0x1e) [0x7fe160262dae]]
                          : [/data01/oracle/ggs/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...)+0x340) [0x7fe16025dac0]]
                          : [/data01/oracle/ggs/libgglog.so(_MSG_ERR_ORACLE_IN_MEMORY_UNDO_UNHANDLED(CSourceContext*, unsigned int, long, char const*, CMessageFactory::MessageDisposition)+0x41) [0x7fe160245c85]]
                          : [/data01/oracle/ggs/extract(REDOORARDR_get_next_record(log_context_t*, log_cursor_t*, unsigned int, int, char*, unsigned int*, unsigned int*, long*, rec_position_t*, char (&) [2048])+0x1bba) [0x7ae66a]]
                          : [/data01/oracle/ggs/extract(REDO_read(char*, int*, long*, char*, short*, int, char (&) [2048])+0x50e) [0x7586ee]]
                          : [/data01/oracle/ggs/extract(READ_EXTRACT_RECORD(chkpt_context_t*, short, char*, unsigned long, __std_rec_hdr*, int*, int*, long*, short, char*, short*, char*, short*, long*, char (&) [2048])+0xe9) [0x53ed19]]
                          : [/data01/oracle/ggs/extract(process_extract_loop()+0x7f6) [0x5c1c96]]
                          : [/data01/oracle/ggs/extract(extract_main(int, char**)+0x418) [0x5c0098]]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::MainThread::ExecMain()+0x4f) [0x6ab4df]]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*)+0x104) [0x6ab9e4]]
                          : [/data01/oracle/ggs/extract(ggs::gglib::MultiThreading::MainThread::Run(int, char**)+0x8b) [0x6abdab]]
                          : [/data01/oracle/ggs/extract(main+0x3f) [0x5bfa2f]]
                          : [/lib64/libc.so.6(__libc_start_main+0xfd) [0x34a3e1ecdd]]
                          : [/data01/oracle/ggs/extract() [0x534319]]

2015-09-20 12:26:30  ERROR   OGG-00717  Found unsupported in-memory undo record in sequence 9, at RBA 5474956, with SCN 0.1863343 (1863343) ... Minimum supplemental logging must be enabled to prevent data loss.

***********************************************************************
*                   ** Run Time Statistics **                         *
***********************************************************************


Report at 2015-09-20 12:26:30 (activity since 2015-09-20 12:26:29)

Output to ./dirdat/hr:

No records extracted.


2015-09-20 12:26:30  ERROR   OGG-01668  PROCESS ABENDING.

CACHE OBJECT MANAGER statistics

CACHE MANAGER VM USAGE
vm current     =      0    vm anon queues =      0
vm anon in use =      0    vm file        =      0
vm used max    =      0    ==> CACHE BALANCED

CACHE CONFIGURATION
cache size            =  64G   cache force paging =  96G
buffer min            =  64K   buffer max (soft)  =   8M
pageout eligible size =   8M

================================================================================
RUNTIME STATS FOR SUPERPOOL

CACHE Transaction Stats
trans active    =      0    max concurrent =      1

Solution :
=======
SQL> alter database add supplemental log data ; 
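
To confirm that minimum supplemental logging is now enabled, check v$database (SUPPLEMENTAL_LOG_DATA_MIN should now return YES):

SQL> select supplemental_log_data_min from v$database ;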

Enable schema-level supplemental logging using GGSCI:

GGSCI> add schematrandata HR
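
You can verify it with INFO SCHEMATRANDATA (this assumes an active DBLOGIN session, as in the prompts above):

GGSCI> info schematrandata HR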

Alter the extract to begin capturing from the point at which supplemental logging was enabled:

GGSCI (node1.localdomain as ggadmin@ggsrc) 7> alter extract exhr01, begin now
EXTRACT altered.

GGSCI (node1.localdomain as ggadmin@ggsrc) 8> start extract exhr01

Sending START request to MANAGER ...
EXTRACT EXHR01 starting


GGSCI (node1.localdomain as ggadmin@ggsrc) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           

EXTRACT     RUNNING     EXHR01      00:00:00      00:00:01    

GGSCI (node1.localdomain as ggadmin@ggsrc) 11> info extract exhr01

EXTRACT    EXHR01    Last Started 2015-09-20 12:28   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           5735
Log Read Checkpoint  Oracle Redo Logs
                     2015-09-20 12:46:51  Seqno 10, RBA 802304
                     SCN 0.1867251 (1867251)


GGSCI (node1.localdomain as ggadmin@ggsrc) 12> info extract exhr01 , detail
EXTRACT    EXHR01    Last Started 2015-09-20 12:28   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           5735
Log Read Checkpoint  Oracle Redo Logs
                     2015-09-20 12:46:51  Seqno 10, RBA 802304
                     SCN 0.1867251 (1867251)

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type
  ./dirdat/hr                                          3       1474        100 EXTTRAIL

  Extract Source                          Begin             End            
  /data01/app/oracle/oradata/ggsrc/redo01.log  2015-09-20 12:28  2015-09-20 12:46
  /data01/app/oracle/oradata/ggsrc/redo03.log  * Initialized *   2015-09-20 12:28
  /data01/app/oracle/oradata/ggsrc/redo03.log  2015-09-20 11:58  2015-09-20 11:58
  /data01/app/oracle/oradata/ggsrc/redo03.log  2015-09-20 11:58  2015-09-20 11:58
  /data01/app/oracle/oradata/ggsrc/redo03.log  2015-09-20 11:58  2015-09-20 11:58
  Not Available                           * Initialized *   2015-09-20 11:58
  Not Available                           * Initialized *   2015-09-20 11:58

Current directory    /data01/oracle/ggs

Report file          /data01/oracle/ggs/dirrpt/EXHR01.rpt
Parameter file       /data01/oracle/ggs/dirprm/exhr01.prm
Checkpoint file      /data01/oracle/ggs/dirchk/EXHR01.cpe
Process file         /data01/oracle/ggs/dirpcs/EXHR01.pce
Error log            /data01/oracle/ggs/ggserr.log
GGSCI (node1.localdomain as ggadmin@ggsrc) 13>
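
With the extract running, STATS EXTRACT can be used to confirm that DML against the HR schema is actually being captured into the trail :

GGSCI> stats extract exhr01, totalsonly *.*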

Wednesday 2 September 2015

ASM backup strategies

With 11g R2, Oracle introduced two new commands, md_backup and md_restore, which can be used within the ASMCMD utility to back up and restore disk group metadata.

Back up the metadata of a single disk group using the -G option :

[grid@node1 ~]$ asmcmd md_backup /tmp/asmbackups/databkps2 -G DATA

Disk group metadata to be backed up: DATA
Current alias directory path: _MGMTDB/TEMPFILE
Current alias directory path: _MGMTDB
Current alias directory path: RACDB/ONLINELOG
Current alias directory path: RACDB/PARAMETERFILE
Current alias directory path: node-cluster/ASMPARAMETERFILE
Current alias directory path: node-cluster
Current alias directory path: RACDB/CONTROLFILE
Current alias directory path: RACDB
Current alias directory path: ASM/PASSWORD
Current alias directory path: RACDB/TEMPFILE
Current alias directory path: _MGMTDB/CONTROLFILE
Current alias directory path: _MGMTDB/PARAMETERFILE
Current alias directory path: RACDB/PASSWORD
Current alias directory path: RACDB/DATAFILE
Current alias directory path: _MGMTDB/DATAFILE
Current alias directory path: node-cluster/OCRFILE
Current alias directory path: _MGMTDB/ONLINELOG
Current alias directory path: ASM

[grid@node1 ~]$

[grid@node1 ~]$ ls -l  /tmp/asmbackups/databkps2
-rw-r--r-- 1 grid oinstall 19098 Sep  2 02:59 /tmp/asmbackups/databkps2



When no disk group is specified with -G, md_backup backs up the metadata of all mounted disk groups :

[grid@node1 ~]$ asmcmd md_backup /tmp/asmbackups/datapbkps

Disk group metadata to be backed up: OCRVOTE
Disk group metadata to be backed up: DATA
Current alias directory path: node-cluster/ASMPARAMETERFILE
Current alias directory path: node-cluster
Current alias directory path: node-cluster/OCRFILE
Current alias directory path: _MGMTDB/TEMPFILE
Current alias directory path: _MGMTDB
Current alias directory path: RACDB/ONLINELOG
Current alias directory path: RACDB/PARAMETERFILE
Current alias directory path: node-cluster/ASMPARAMETERFILE
Current alias directory path: node-cluster
Current alias directory path: RACDB/CONTROLFILE
Current alias directory path: RACDB
Current alias directory path: ASM/PASSWORD
Current alias directory path: RACDB/TEMPFILE
Current alias directory path: _MGMTDB/CONTROLFILE
Current alias directory path: _MGMTDB/PARAMETERFILE
Current alias directory path: RACDB/DATAFILE
Current alias directory path: RACDB/PASSWORD
Current alias directory path: _MGMTDB/DATAFILE
Current alias directory path: node-cluster/OCRFILE
Current alias directory path: _MGMTDB/ONLINELOG
Current alias directory path: ASM
[grid@node1 ~]$


[grid@node1 ~]$ ls -lrth /tmp/asmbackups/datapbkps
-rw-r--r-- 1 grid oinstall 32K Sep  2 03:01 /tmp/asmbackups/datapbkps
[grid@node1 ~]$


md_restore :

To re-create the DATA disk group and restore its metadata :

md_restore --full -G data /tmp/asmbackups/databkps

To restore only the metadata into an existing DATA disk group (the disk group itself is not re-created) :

md_restore --nodg -G data /tmp/asmbackups/databkps

To re-create the DATA disk group under a new name :

md_restore --newdg -o 'data:data2' /tmp/asmbackups/databkps

This example re-creates the full DATA disk group as a new disk group named DATA2.
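
After a restore, the result can be confirmed with asmcmd lsdg, which lists mounted disk groups with their redundancy and size (disk group name taken from the example above) :

[grid@node1 ~]$ asmcmd lsdg data2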





Create Diskgroup in silent mode

asmca -silent -createDiskGroup -diskGroupName DG_DATA -disk '/dev/sdf1' \
      -redundancy EXTERNAL -compatible.asm='11.2' -compatible.rdbms='11.2'
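
To confirm the disk group was created and mounted, query v$asm_diskgroup from the ASM instance :

SQL> select name, state, type, total_mb from v$asm_diskgroup where name = 'DG_DATA' ;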

Thursday 27 August 2015

To find the complete path of all ASM disks

[grid@node1 ~]$  kfod disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group  
================================================================================
   1:       5914 Mb /dev/oracleasm/disks/DATA                grid     oinstall
   2:       6345 Mb /dev/oracleasm/disks/DATA2               grid     oinstall
   3:       3176 Mb /dev/oracleasm/disks/DATA3               grid     oinstall
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                                        
================================================================================
     +ASM2 /data01/app/11204/grid_11204                                        
     +ASM1 /data01/app/11204/grid_11204                                        
[grid@node1 ~]$ 
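
The same information is also available from the ASM instance through v$asm_disk, which is handy when kfod is not on the PATH :

SQL> select path, os_mb, header_status from v$asm_disk order by path ;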

Sunday 9 August 2015

Tablespace growth Report

First, confirm the tablespace exists and note its TS#, which the AWR history views use to identify it :

SQL> select TS# from v$tablespace where name='ABC' ;

       TS#
----------
         6

set serverout on
set verify off
set lines 200
set pages 2000
DECLARE
-- Reports current usage and historical growth for a tablespace using the
-- AWR history in dba_hist_tbspc_space_usage, then projects 30/60/90-day growth.
v_ts_id number;
not_in_awr EXCEPTION;
v_ts_name varchar2(200) := UPPER('&Tablespace_Name');
v_ts_block_size number;
v_begin_snap_id number;
v_end_snap_id number;
v_begin_snap_date date;
v_end_snap_date date;
v_numdays number;
v_ts_begin_size number;
v_ts_end_size number;
v_ts_growth number;
v_count number;
v_ts_begin_allocated_space number;
v_ts_end_allocated_space number;
BEGIN
SELECT ts# into v_ts_id FROM v$tablespace where name = v_ts_name;
SELECT count(*) INTO v_count FROM dba_hist_tbspc_space_usage where tablespace_id=v_ts_id;
IF v_count = 0 THEN
RAISE not_in_awr;
END IF ;
SELECT block_size into v_ts_block_size FROM dba_tablespaces where tablespace_name = v_ts_name;
SELECT min(snap_id), max(snap_id), min(trunc(to_date(rtime,'MM/DD/YYYY HH24:MI:SS'))), max(trunc(to_date(rtime,'MM/DD/YYYY HH24:MI:SS')))
into v_begin_snap_id,v_end_snap_id, v_begin_snap_date, v_end_snap_date from dba_hist_tbspc_space_usage where tablespace_id=v_ts_id;
v_numdays := v_end_snap_date - v_begin_snap_date;

SELECT round(max(tablespace_size)*v_ts_block_size/1024/1024,2) into v_ts_begin_allocated_space from dba_hist_tbspc_space_usage where tablespace_id=v_ts_id and snap_id = v_begin_snap_id;
SELECT round(max(tablespace_size)*v_ts_block_size/1024/1024,2) into v_ts_end_allocated_space from dba_hist_tbspc_space_usage where tablespace_id=v_ts_id and snap_id = v_end_snap_id;
SELECT round(max(tablespace_usedsize)*v_ts_block_size/1024/1024,2) into v_ts_begin_size from dba_hist_tbspc_space_usage where tablespace_id=v_ts_id and snap_id = v_begin_snap_id;
SELECT round(max(tablespace_usedsize)*v_ts_block_size/1024/1024,2) into v_ts_end_size from dba_hist_tbspc_space_usage where tablespace_id=v_ts_id and snap_id = v_end_snap_id;
v_ts_growth := v_ts_end_size - v_ts_begin_size;
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('Tablespace Block Size: '||v_ts_block_size);
DBMS_OUTPUT.PUT_LINE('---------------------------');
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('Summary');
DBMS_OUTPUT.PUT_LINE('========');
DBMS_OUTPUT.PUT_LINE('1) Allocated Space: '||v_ts_end_allocated_space||' MB'||' ('||round(v_ts_end_allocated_space/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('2) Used Space: '||v_ts_end_size||' MB'||' ('||round(v_ts_end_size/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('3) Used Space Percentage: '||round(v_ts_end_size/v_ts_end_allocated_space*100,2)||' %');
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('History');
DBMS_OUTPUT.PUT_LINE('========');
DBMS_OUTPUT.PUT_LINE('1) Allocated Space on '||v_begin_snap_date||': '||v_ts_begin_allocated_space||' MB'||' ('||round(v_ts_begin_allocated_space/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('2) Current Allocated Space on '||v_end_snap_date||': '||v_ts_end_allocated_space||' MB'||' ('||round(v_ts_end_allocated_space/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('3) Used Space on '||v_begin_snap_date||': '||v_ts_begin_size||' MB'||' ('||round(v_ts_begin_size/1024,2)||' GB)' );
DBMS_OUTPUT.PUT_LINE('4) Current Used Space on '||v_end_snap_date||': '||v_ts_end_size||' MB'||' ('||round(v_ts_end_size/1024,2)||' GB)' );
DBMS_OUTPUT.PUT_LINE('5) Total growth during last '||v_numdays||' days between '||v_begin_snap_date||' and '||v_end_snap_date||': '||v_ts_growth||' MB'||' ('||round(v_ts_growth/1024,2)||' GB)');
IF (v_ts_growth <= 0 OR v_numdays <= 0) THEN
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('!!! NO DATA GROWTH WAS FOUND FOR TABLESPACE '||V_TS_NAME||' !!!');
ELSE
DBMS_OUTPUT.PUT_LINE('6) Per day growth during last '||v_numdays||' days: '||round(v_ts_growth/v_numdays,2)||' MB'||' ('||round((v_ts_growth/v_numdays)/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('Expected Growth');
DBMS_OUTPUT.PUT_LINE('===============');
DBMS_OUTPUT.PUT_LINE('1) Expected growth for next 30 days: '|| round((v_ts_growth/v_numdays)*30,2)||' MB'||' ('||round(((v_ts_growth/v_numdays)*30)/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('2) Expected growth for next 60 days: '|| round((v_ts_growth/v_numdays)*60,2)||' MB'||' ('||round(((v_ts_growth/v_numdays)*60)/1024,2)||' GB)');
DBMS_OUTPUT.PUT_LINE('3) Expected growth for next 90 days: '|| round((v_ts_growth/v_numdays)*90,2)||' MB'||' ('||round(((v_ts_growth/v_numdays)*90)/1024,2)||' GB)');
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('!!! TABLESPACE DOES NOT EXIST !!!');
WHEN NOT_IN_AWR THEN
DBMS_OUTPUT.PUT_LINE(CHR(10));
DBMS_OUTPUT.PUT_LINE('!!! TABLESPACE USAGE INFORMATION NOT FOUND IN AWR !!!');

END;
/
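
Run the block in SQL*Plus with server output enabled; it prompts for the tablespace name and produces a report like the one below.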


Summary
========
1) Allocated Space: 2048 MB (2 GB)
2) Used Space: 1558.44 MB (1.52 GB)
3) Used Space Percentage: 76.1 %


History
========
1) Allocated Space on 06-DEC-14: 2048 MB (2 GB)
2) Current Allocated Space on 10-JAN-15: 2048 MB (2 GB)
3) Used Space on 06-DEC-14: 1273 MB (1.24 GB)
4) Current Used Space on 10-JAN-15: 1558.44 MB (1.52 GB)
5) Total growth during last 35 days between 06-DEC-14 and 10-JAN-15: 285.44 MB (.28 GB)
6) Per day growth during last 35 days: 8.16 MB (.01 GB)


Expected Growth
===============
1) Expected growth for next 30 days: 244.66 MB (.24 GB)
2) Expected growth for next 60 days: 489.33 MB (.48 GB)
3) Expected growth for next 90 days: 733.99 MB (.72 GB)

PL/SQL procedure successfully completed.



Wednesday 5 August 2015

Monthly Growth of the Database

You can use the following query to track the growth of the database on a monthly basis. The data comes from the controlfile (datafile creation times) :

select to_char(creation_time, 'RRRR Month') "Month",
sum(bytes)/1024/1024 "Growth in Meg"
from sys.v_$datafile
where creation_time > SYSDATE-365
group by to_char(creation_time, 'RRRR Month');
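
A minor variation using a sortable month format ('RRRR-MM') returns the rows in chronological order :

select to_char(creation_time, 'RRRR-MM') "Month",
sum(bytes)/1024/1024 "Growth in Meg"
from sys.v_$datafile
where creation_time > SYSDATE-365
group by to_char(creation_time, 'RRRR-MM')
order by 1;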


Growth of a specific segment over the past N days, based on AWR snapshots :

column "Percent of Total Disk Usage" justify right format 999.99
column "Space Used (MB)" justify right format 9,999,999.99
column "Total Object Size (MB)" justify right format 9,999,999.99
set linesize 150
set pages 80
set feedback off
select * from (select to_char(end_interval_time, 'Mon/DD/YYYY') mydate, sum(space_used_delta) / 1024 / 1024 "Space used (MB)", avg(c.bytes) / 1024 / 1024 "Total Object Size (MB)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where begin_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and c.segment_name = '&segment_name'
group by to_char(end_interval_time, 'Mon/DD/YYYY'))
order by to_date(mydate, 'Mon/DD/YYYY');
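
When executed, SQL*Plus prompts for &days_back (how many days of AWR history to scan) and &segment_name (the segment name as stored in dba_segments, normally uppercase).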


Growth of a specific database schema over the past N days, based on AWR snapshots :

set feedback off
set pages 80
set linesize 150
ttitle "Total Disk Used"
select sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Schema Size (M)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where end_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and c.owner = '&schema_name'
and space_used_delta > 0;



Growth of a specific database schema per object type over the past N days, based on AWR snapshots :

ttitle "Total Disk Used by Object Type"
select c.segment_type, sum(space_used_delta) / 1024 / 1024 "Space used (M)", sum(c.bytes) / 1024 / 1024 "Total Space (M)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) || '%' "Percent of Total Disk Usage"
from
dba_hist_snapshot sn,
dba_hist_seg_stat a,
dba_objects b,
dba_segments c
where end_interval_time > trunc(sysdate) - &days_back
and sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
and space_used_delta > 0
and c.owner = '&schema_name'
group by rollup(segment_type);


Friday 31 July 2015

OEM12c: List of targets not uploading

The following query against the OMS repository lists targets whose agent has not uploaded metric data within the last two hours (the 120-minute threshold in the last predicate can be adjusted) :

SELECT c.target_name Agent,a.target_type,a.target_name Target,
      to_char(alt.last_load_time,'yyyy-mm-dd hh24:mi:ss') Last_load_time ,
      to_char(sysdate,'yyyy-mm-dd hh24:mi:ss') Current_Time
 FROM sysman.mgmt_targets a, sysman.mgmt_targets_load_times alt,
      sysman.mgmt_current_availability b,
      sysman.mgmt_targets c
WHERE c.target_guid in (select target_guid from sysman.mgmt_targets where
target_type='oracle_emd')
  AND c.emd_url = a.emd_url
  AND a.target_guid = b.target_guid
  AND alt.target_guid = a.target_guid
  AND b.current_status = 1
  AND a.broken_reason = 0
  AND a.target_type NOT IN ('oracle_emd','oracle_beacon','oracle_emrep')
  AND alt.last_load_time < ((mgmt_global.sysdate_tzrgn(a.timezone_region)) - (120 / (24 * 60))) order by 3,4;


AGENT           TARGET_TYPE          TARGET               LAST_LOAD_TIME      CURRENT_TIME
--------------- -------------------- -------------------- ------------------- ---------------
abcdb02:3872    rac_database       PRDOAU          2015-07-30 05:54:54 2015-07-31 10:31:47
abcdb02:3872    rac_database       PRDSPOT         2015-07-30 05:55:19 2015-07-31 10:31:47
abcdb02:3872    rac_database       PRDTAX8         2015-07-30 05:54:36 2015-07-31 10:31:47
abcdb02:3872    cluster            abcdb-cluster01 2015-07-30 05:52:48 2015-07-31 10:31:47
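
Once the lagging targets are identified, a common first step is to force a fresh upload from the affected agent host (run as the agent owner, with the agent home's bin directory in the PATH) :

$ emctl clearstate agent
$ emctl upload agent
$ emctl status agent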

Wednesday 29 July 2015

Verify SCAN configuration on all nodes


mybox.linux.com:/app/grid/product/12.1.0/grid/bin>cluvfy comp scan

Verifying SCAN

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN listeners...
TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "mybox-scan01.linux.com"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup passed

Verification of SCAN was successful.
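
The SCAN definition reported by cluvfy can be cross-checked with srvctl :

$ srvctl config scan
$ srvctl status scan_listener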