Oracle DBA & Data Science Enthusiast
Saturday 19 May 2018
Jupyter Notebook - how to enable Intellisense
At the top of your notebook, add this line:
%config IPCompleter.greedy=True
Then, when you have an object, for example numpy (np), type:
np.
and press [TAB] after the dot; it will show you all the available methods.
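Outside the notebook, you can get a similar listing programmatically with Python's built-in dir(). A minimal sketch (shown here with the standard-library json module so it runs anywhere; with numpy installed, dir(np) works the same way):

```python
import json

# dir() lists an object's attribute names -- the same names [TAB] completion shows.
methods = [name for name in dir(json) if not name.startswith("_")]
print(methods)  # includes 'dumps', 'loads', ...
```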
Friday 18 May 2018
Documentation
https://pandas.pydata.org/pandas-docs/stable/tutorials.html
https://docs.scipy.org/doc/numpy/user/quickstart.html
Data Science
Essential skills required for Data Science:
1. Extract and clean data using Python/R
2. Analyse data using statistics
3. Present data using Python (numpy/pandas) or tools like Tableau
4. Build predictive models using machine learning algorithms
You should know:
1. Python
2. R
3. Statistics
4. Machine learning algorithms like Linear Regression, Logistic Regression, etc.
5. Tools like Tableau
You can use platforms like Kaggle (https://www.kaggle.com/) to work on Data Science projects.
https://www.analyticsvidhya.com/blog/2017/01/the-most-comprehensive-data-science-learning-plan-for-2017/
3.2: Basics of Mathematics and Statistics
Time suggested: 8 weeks (February 2017 – March 2017)
Topics to be covered:
- Descriptive Statistics – 1 week
- Probability – 2 weeks
- Inferential Statistics – 2 weeks
- Linear Algebra – 1 week
- Structured Thinking – 2 weeks
Descriptive Statistics – 1 week
- Course (mandatory) – Descriptive Statistics from Udacity is a basic, must-do course to get started.
- Books (optional) – Supplement your online course with an online stats book. A good book for anyone looking to learn basic statistics.
Probability – 2 weeks
- Course (mandatory) – Introduction to probability – The science of uncertainty is an excellent course on edX to learn concepts of probability like conditional probability and probability distributions.
- Books (optional) – The textbook Introduction to Probability – Berkeley's Stats 134 standard textbook – will supplement the course above and can be used as good reference material.
Inferential Statistics – 2 weeks
- Course (mandatory) – Intro to Inferential Statistics from Udacity – Once you have gone through the descriptive statistics course, this course will take you through statistical modeling techniques and advanced statistics.
- Books (optional) – Online Stats Book – This online book can be used for a quick reference for inference tasks.
Linear Algebra – 1 week
- Course (mandatory)
- Linear Algebra – Khan Academy: This concise and excellent course on Khan Academy will equip you with the skills necessary for Data Science and Machine Learning.
- Books (optional)
- Linear Algebra / Levandosky – This book is often recommended to Stanford students for Linear Algebra.
- The Manga Guide to Linear Algebra – This is a fun-filled Linear Algebra book that keeps Machine Learning in context. You will never forget these algebra lessons for sure.
Structured Thinking – 2 weeks
- Articles (mandatory): These articles will guide you to structure your thinking process to approach problems in a better way so as to improve your efficiency.
- Competitions (mandatory): No amount of theory can beat practice. This is a strategic thinking problem which will test you on your thinking process. Also, keep an eye on business case studies as they help in structuring your thoughts tremendously.
3.3: Introducing the tool – R / Python
Time suggested: 8 weeks (April 2017 – May 2017)
Topics to be covered:
- Tools (R/Python) – 4 weeks
- Exploration and Visualization (R/Python) – 4 weeks
- Feature Selection/ Engineering
Tools
1. R
- Course – Interactive Intro to R Programming Language by DataCamp – An excellent course by DataCamp to give you hands-on experience in R. The course includes interactive examples; you will never feel bored while learning R.
- Books – R for Data Science – This is your one-stop solution for referencing basic materials on R.
- Blogs/Articles
- This article will serve as a great starting point for collating the entire process of model building, from the installation of RStudio/R onwards.
- R-bloggers – This is one of the most recommended blogs for R users. It has some of the most effective and practical R tutorials. Bookmark it now.
2. Python
- Course (mandatory) – Intro to Python for Data Science – An interactive course developed by DataCamp to facilitate Data Science learning using Python.
- Books (mandatory) – Python for Data Analysis – This book covers various aspects of Data Science, including loading, manipulating, processing, cleaning and visualizing data. A must-keep reference guide for pandas users.
- Blogs/Articles (optional)
- A Complete Tutorial to Learn Data Science with Python from Scratch: This article will serve as a quick guide to learning Data Science using Python.
Exploration and Visualization
1. R
- Course
- Exploratory Data Analysis – This is an awesome course by Johns Hopkins University on Coursera. You will need no other course to perform visualization and exploratory work in R.
- Blogs/Articles
- Comprehensive guide to Data Exploration in R – This is a one-stop article that I suggest you go through carefully, following every step, because the steps mentioned are the same steps you will use when solving any data problem or hackathon problem.
- Cheat sheet – Data Exploration in R – This cheat sheet contains all the steps in data exploration with code. I suggest you print it out and paste it on your wall for quick reference.
2. Python
- Course (optional)
- Intro to Data Analysis – This is an excellent course by Udacity on Data Exploration using Numpy and Pandas.
- Blogs/Articles (mandatory)
- Comprehensive guide to Data Exploration using Python NumPy, Matplotlib and Pandas – This is a comprehensive article that uses the most popular Python libraries for exploration and visualization.
- 9 popular ways to perform Data Visualization in Python – This article presents the most commonly used graphs and plots in Data Exploration, along with Python code. A must-bookmark article for people working in Data Science with Python.
- Books (optional) – Python for Data Analysis – A one-stop solution for your Data Exploration and Visualization in Python.
Feature Selection/ Engineering
- Blog – A Comprehensive Guide to Data Exploration: This article explains the underlying techniques of feature engineering and the different methods for feature creation.
- Books (optional) – Mastering Feature Engineering: This book is a masterpiece for learning feature engineering. Not only will you learn how to implement feature engineering in a systematic way, you will also learn the different methods involved.
3.4: Basic & Advanced machine learning tools
Time suggested: 12 weeks (June 2017 – August 2017)
Topics to be covered (June 2017 – July 2017):
- Basic Machine Learning Algorithms.
- Linear Regression
- Logistic Regression
- Decision Trees
- KNN (K- Nearest Neighbours)
- K-Means
- Naïve Bayes
- Dimensionality Reduction
- Advanced algorithms (August 2017)
- Random Forests
- Dimensionality Reduction Techniques
- Support Vector Machines
- Gradient Boosting Machines
- XGBOOST
Linear Regression
- Course
- Machine Learning by Andrew Ng – There is no better resource to learn Linear Regression than this course. It will give you a thorough understanding of linear regression and there is a reason why Andrew Ng is considered the rockstar of Machine Learning.
- Blogs/Articles
- Books
- The Elements of Statistical Learning – This book is sometimes considered the holy grail of Machine Learning and Data Science. It explains Machine Learning concepts mathematically from a Statistics perspective.
- Machine Learning with R – This is a book I personally use to have a brief understanding of Machine Learning algorithms along with their implementation code.
- Practice
- Black Friday – Like I already said – No amount of theory can beat practice. Here is a regression problem that you can try your hands on for a deeper understanding.
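As a companion to the resources above, here is closed-form simple linear regression (one feature) in plain Python — a minimal sketch of what these courses teach, with made-up data, not the approach any particular course mandates:

```python
# Ordinary least squares for one feature, using the closed-form
# slope/intercept formulas: slope = cov(x, y) / var(x).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]          # exactly y = 2x, so the fit is exact
slope, intercept = fit_line(xs, ys)
print(slope, intercept)        # -> 2.0 0.0
```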
Logistic Regression
- Course (mandatory)
- Machine Learning by Andrew Ng – Week 3 of this course will give you a deeper understanding of one of the most widely used classification algorithms.
- Machine Learning: Classification – Weeks 1 and 2 of this practice-oriented Specialization course, which uses Python, will satiate your thirst for knowledge about Logistic Regression.
- Blogs/Articles (optional)
- Logistic Regression by Machine Learning Mastery – This is an excellent non-code-based approach to Logistic Regression to deepen your knowledge. I suggest you have a look at it.
- Books (optional)
- Introduction to Statistical Learning – This is an excellent book with quality content on Logistic Regression's underlying assumptions, statistical nature and mathematical linkage.
- Practice (mandatory)
- Loan Prediction – This is an excellent competition to practice and test your new Logistic Regression skills, predicting whether a person's loan was approved or not.
Decision Trees
- Course (mandatory)
- Machine Learning: Classification – Weeks 3 and 4 of this course cover the working of decision trees, preventing overfitting, and handling missing values.
- Blogs/Articles (mandatory)
- Technical overview of decision trees – This is a quick overview of decision trees and a must-read for anyone new to them.
- Complete tutorial on tree-based modeling – This is a Python-based tutorial on decision trees. For decision trees, read only sections 1-6 of this article.
- Books (mandatory)
- Introduction to Statistical Learning – Sections 8.1 and 8.3 explain the basics of decision trees through theory and practical examples.
- Machine Learning with R – Chapter 5 of this book provides one of the best explanations of Machine Learning algorithms available. Decision trees are explained in an extremely non-intimidating, easy style.
- Practice (mandatory)
- Loan Prediction – This is an excellent competition to practice and test your new decision-tree skills, predicting whether a person's loan was approved or not.
KNN (K- Nearest Neighbors)
- Course (mandatory)
- Machine Learning – Clustering and Retrieval: Week 2 of this course progresses from 1-nearest neighbor to k-nearest neighbors and also describes the best ways to approximate the nearest neighbors. It explains all the concepts of KNN using Python.
- Blogs/Articles (mandatory)
- Introduction to k-nearest neighbors: simplified – This basic article describes when to use KNN, how k can be chosen, and how the KNN algorithm works.
- Learning the KNN algorithm using R – This article is a comprehensive guide to learning KNN, with hands-on code for future reference.
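The core idea the resources above describe fits in a few lines of plain Python — a minimal sketch with made-up toy data (no library KNN implementation is used):

```python
# k-nearest-neighbours classification: predict the majority label among
# the k training points closest to the query (Euclidean distance).
from collections import Counter
import math

def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
print(knn_predict(train, (1, 1)))   # -> a
print(knn_predict(train, (5, 5)))   # -> b
```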
K-Means
- Course
- Machine Learning Course – Unsupervised Learning with the K-means algorithm: Week 8 of this course discusses how the K-means algorithm is used for handling unstructured data.
- Blog
- An Introduction to Clustering and different methods of clustering: In this article, you will learn what k-means clustering is and the intricacies involved. It gives you a step-by-step view of how the K-means algorithm works.
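As a complement to the course and article above, here is what the algorithm's loop looks like in plain Python (the data and starting centroids are made up for illustration):

```python
# k-means in plain Python: assign each point to its nearest centroid,
# recompute each centroid as the mean of its cluster, repeat until stable.
import math

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
print(kmeans(pts, [(0, 0), (10, 10)]))  # two centroids near each cloud
```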
Naive Bayes
- Course
- Intro to Machine Learning: Take this course to see Naive Bayes in action. In this course, Sebastian Thrun has explained Naive Bayes in Simple English.
- Blog / Article
- 6 Easy Steps to Learn Naive Bayes Algorithm (with code in Python): This article takes you through the Naive Bayes algorithm in detail. You will learn how the algorithm works, its applications, and more. It also gives you hands-on experience building a model using Naive Bayes.
- Naive Bayes for Machine Learning: This is one of the most comprehensive articles I have come across. Go through it for a complete understanding of why the Naive Bayes algorithm is important for machine learning.
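To make the idea concrete, here is a minimal Naive Bayes sketch in plain Python on made-up categorical data. The Laplace-smoothing denominator of +2 assumes two possible values per feature — an illustration-only simplification, not how a production implementation handles it:

```python
# Naive Bayes: P(class | features) is proportional to
# P(class) * product over features of P(feature value | class).
from collections import Counter, defaultdict

def train_nb(rows, labels):
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)   # (feature index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            value_counts[(i, y)][v] += 1
    return class_counts, value_counts

def predict_nb(model, row):
    class_counts, value_counts = model
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for y, ny in class_counts.items():
        p = ny / total                                   # prior P(class)
        for i, v in enumerate(row):
            p *= (value_counts[(i, y)][v] + 1) / (ny + 2)  # Laplace smoothing
        if p > best_p:
            best, best_p = y, p
    return best

rows = [('sunny', 'hot'), ('sunny', 'mild'), ('rainy', 'mild'), ('rainy', 'cool')]
labels = ['no', 'no', 'yes', 'yes']
model = train_nb(rows, labels)
print(predict_nb(model, ('rainy', 'mild')))   # -> yes
```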
Dimensionality Reduction
- Course
- Machine Learning – Dimensionality Reduction: Week 8 of this course will walk you through dimensionality reduction and how Principal Components Analysis can be used for data compression of complex data.
- Blog / Article
- Beginners Guide To Learn Dimension Reduction Techniques: In this article, you will learn why dimension reduction is important in machine learning and the various techniques of dimension reduction.
Random Forests
- Videos (mandatory)
- How Random Forest algorithm works? – Watch this video to have a visual perspective of how the Random Forest algorithm works.
- Books (optional)
- Introduction to Statistical Learning – Section 8 explains the basics of Random Forests including bagging and boosting through theory and practical examples.
- Applied predictive modeling – Chapter 8
- Blogs/Articles (mandatory)
- A tutorial on tree-based modeling from scratch – This is an excellent article on tree-based modeling using Python. I suggest you bookmark it right now.
- Random Forests – This blog explains the entire working, nuts and bolts of Random Forest.
Gradient Boosting Machines
- Blogs/Articles (mandatory)
- Presentation (mandatory): Here is an excellent presentation on GBM. It covers the prominent features of GBM and the advantages and disadvantages of using it to solve real-world problems. It is a must-see for anybody trying to understand GBM.
XGBOOST
- Blogs /Articles (mandatory)
- Official introduction to XGBOOST – Read the documentation of this hackathon-winning algorithm. It is an improvement over GBM and is currently the most widely used algorithm for winning competitions.
- Using XGBOOST in R – An excellent article on deploying XGBOOST in R using a practical problem at hand.
- XGBOOST for applied Machine Learning – An article by Machine Learning Mastery to evaluate the performance of XGBOOST over other algorithms.
Support Vector Machines
- Course (mandatory)
- Machine Learning by Andrew Ng – Week 7 of this course is an interesting place to start your SVM journey.
- Books (mandatory)
- Introduction to Statistical Learning – Chapter 9 of the book contains a detailed discussion of SVMs and the ways to deploy them.
- Blogs/Articles (optional)
- Understanding support vector machines – This is an excellent article to understand an algorithm practically using examples.
- SVM by Machine Learning Mastery – This article discusses the different types of kernels employed in SVM and their uses.
3.5: Building your profile
Time suggested: 8 weeks (September 2017 – October 2017)
Topics to be covered:
- GitHub Profile Building
- Practice via competitions
- Discussion Portals
GitHub Profile Building (mandatory)
It is very important for a Data Scientist to have a GitHub profile to host the code for the projects he/she has undertaken. Potential employers can see not only what you have done, but how you have coded and how frequently and how long you have been practicing data science.
Also, code on GitHub opens up avenues for open-source projects, which can greatly boost your learning. If you don't know how to use Git, you can learn from the Git and GitHub course on Udacity. It is one of the best and easiest courses for learning to manage repositories through the terminal.
Practice via competitions (mandatory)
Time and again, I have stressed the fact that practice beats theory. Moreover, coding in hackathons brings you closer to developing data products that solve real-world problems. Below are the most popular platforms for participating in Data Science / Machine Learning competitions.
Discussion Forums (optional)
Discussions are a great way to learn in a peer-to-peer setup, from finding an answer to a question you are stuck on to providing answers to someone else's questions. Below are some discussion-rich platforms you should keep a tab on to clear your doubts.
3.6: Apply for Jobs & Internships
Time suggested: 8 weeks (November 2017 – December 2017)
Topics to be covered: Jobs / Internships
If you are here after diligently following the above steps, then you can be sure that you are ready for a job or internship position at any Data Science / Analytics or Machine Learning firm. But it can be quite difficult to identify the right jobs, so to save you the trouble, I have created a list of portals that list Data Science / Machine Learning jobs and internships.
In order to prepare for these interviews, you should go through this Damn Good Hiring Guide.
Sunday 18 June 2017
Creation of a standby using RMAN DUPLICATE
192.168.56.75 node1.localdomain node1 -- Primary
192.168.56.76 node2.localdomain node2 -- Physical Standby
192.168.56.73 node3.localdomain node3 -- Cascaded physical standby
On Primary :
-----------------
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL_node1,ORCL_node2)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORCL_node2 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_node2';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=10;
ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
ALTER SYSTEM SET FAL_SERVER=ORCL_node2;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
orapwd file=$ORACLE_HOME/dbs/orapwORCL password=sysadmin force=y
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/orcl_stndby.ctl' ;
SQL> create pfile='/tmp/pfile.ora' from spfile ;
Amend the PFILE making the entries relevant for the standby database.
*.db_unique_name='ORCL_node2'
*.fal_server='ORCL_node1'
*.log_archive_dest_2='SERVICE=ORCL_node1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_node1'
On standby :
$ mkdir -p /data01/app/oracle/oradata/orcl/ --> Datafiles
$ mkdir -p /data01/app/oracle/arch --> Archive
$ mkdir -p /data01/app/oracle/admin/orcl/adump --> audit_file_dest
Copy the files from the primary to the standby server:
# Standby controlfile to all locations.
scp /tmp/orcl_stndby.ctl oracle@node2:/data01/app/oracle/oradata/orcl/control01.ctl
scp /tmp/orcl_stndby.ctl oracle@node2:/data01/app/oracle/oradata/orcl/control02.ctl
$ # Parameter file.
$ scp /tmp/pfile.ora oracle@node2:/tmp/pfile_orcl_node2.ora
$ # Remote login password file.
scp /data01/app/oracle/product/11.2.0.3/db_1/dbs/orapwORCL oracle@node2:/data01/app/oracle/product/11.2.0.3/db_1/dbs/orapwORCL
Start Listener on standby :
Create a static listener entry and start it on the standby:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /data01/app/oracle/product/11.2.0.3/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = orcl)
(ORACLE_HOME = /data01/app/oracle/product/11.2.0.3/db_1)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2.localdomain)(PORT = 1521))
)
)
Create Standby Redo Logs on Primary Server :
ALTER DATABASE ADD STANDBY LOGFILE ('/data01/app/oracle/oradata/orcl/standby_redo01.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/data01/app/oracle/oradata/orcl/standby_redo02.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/data01/app/oracle/oradata/orcl/standby_redo03.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE ('/data01/app/oracle/oradata/orcl/standby_redo04.log') SIZE 50M;
On standby :
----------------
export ORACLE_SID=orcl
SQL> startup nomount pfile='/tmp/pfile_orcl_node2.ora' ;
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2233000 bytes
Variable Size 494931288 bytes
Database Buffers 339738624 bytes
Redo Buffers 2379776 bytes
[oracle@node2 ~]$ rman target sys/sysadmin@ORCL_node1 AUXILIARY sys/sysadmin@ORCL_node2
Recovery Manager: Release 11.2.0.3.0 - Production on Sun Jun 18 04:58:16 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1470927095)
connected to auxiliary database: ORCL (not mounted)
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER
SPFILE
SET db_unique_name='ORCL_node2'
SET LOG_ARCHIVE_DEST_2='SERVICE=ORCL_node1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_node1'
SET FAL_SERVER='ORCL_node1'
NOFILENAMECHECK;
Starting Duplicate Db at 18-JUN-17
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=17 device type=DISK
contents of Memory Script:
{
backup as copy reuse
targetfile '/data01/app/oracle/product/11.2.0.3/db_1/dbs/orapworcl' auxiliary format
'/data01/app/oracle/product/11.2.0.3/db_1/dbs/orapworcl' targetfile
'/data01/app/oracle/product/11.2.0.3/db_1/dbs/spfileorcl.ora' auxiliary format
'/data01/app/oracle/product/11.2.0.3/db_1/dbs/spfileorcl.ora' ;
sql clone "alter system set spfile= ''/data01/app/oracle/product/11.2.0.3/db_1/dbs/spfileorcl.ora''";
}
executing Memory Script
Starting backup at 18-JUN-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Finished backup at 18-JUN-17
sql statement: alter system set spfile= ''/data01/app/oracle/product/11.2.0.3/db_1/dbs/spfileorcl.ora''
contents of Memory Script:
{
sql clone "alter system set db_unique_name =
''ORCL_node2'' comment=
'''' scope=spfile";
sql clone "alter system set LOG_ARCHIVE_DEST_2 =
''SERVICE=ORCL_node1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_node1'' comment=
'''' scope=spfile";
sql clone "alter system set FAL_SERVER =
''ORCL_node1'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
sql statement: alter system set db_unique_name = ''ORCL_node2'' comment= '''' scope=spfile
sql statement: alter system set LOG_ARCHIVE_DEST_2 = ''SERVICE=ORCL_node1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_node1'' comment= '''' scope=spfile
sql statement: alter system set FAL_SERVER = ''ORCL_node1'' comment= '''' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 839282688 bytes
Fixed Size 2233000 bytes
Variable Size 536874328 bytes
Database Buffers 297795584 bytes
Redo Buffers 2379776 bytes
contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '/data01/app/oracle/oradata/orcl/control01.ctl';
restore clone controlfile to '/data01/app/oracle/oradata/orcl/control02.ctl' from
'/data01/app/oracle/oradata/orcl/control01.ctl';
}
executing Memory Script
Starting backup at 18-JUN-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/data01/app/oracle/product/11.2.0.3/db_1/dbs/snapcf_orcl.f tag=TAG20170618T050116 RECID=3 STAMP=946962085
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 18-JUN-17
Starting restore at 18-JUN-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=18 device type=DISK
channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 18-JUN-17
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
RMAN-05538: WARNING: implicitly using DB_FILE_NAME_CONVERT
contents of Memory Script:
{
set newname for tempfile 1 to
"/data01/app/oracle/oradata/orcl/temp01.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/data01/app/oracle/oradata/orcl/system01.dbf";
set newname for datafile 2 to
"/data01/app/oracle/oradata/orcl/sysaux01.dbf";
set newname for datafile 3 to
"/data01/app/oracle/oradata/orcl/undotbs01.dbf";
set newname for datafile 4 to
"/data01/app/oracle/oradata/orcl/users01.dbf";
set newname for datafile 5 to
"/data01/app/oracle/oradata/orcl/example01.dbf";
set newname for datafile 6 to
"/data01/app/oracle/oradata/orcl/ggtbs.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/data01/app/oracle/oradata/orcl/system01.dbf" datafile
2 auxiliary format
"/data01/app/oracle/oradata/orcl/sysaux01.dbf" datafile
3 auxiliary format
"/data01/app/oracle/oradata/orcl/undotbs01.dbf" datafile
4 auxiliary format
"/data01/app/oracle/oradata/orcl/users01.dbf" datafile
5 auxiliary format
"/data01/app/oracle/oradata/orcl/example01.dbf" datafile
6 auxiliary format
"/data01/app/oracle/oradata/orcl/ggtbs.dbf" ;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to /data01/app/oracle/oradata/orcl/temp01.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 18-JUN-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/data01/app/oracle/oradata/orcl/system01.dbf
output file name=/data01/app/oracle/oradata/orcl/system01.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:55
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/data01/app/oracle/oradata/orcl/sysaux01.dbf
output file name=/data01/app/oracle/oradata/orcl/sysaux01.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:02:05
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/data01/app/oracle/oradata/orcl/example01.dbf
output file name=/data01/app/oracle/oradata/orcl/example01.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/data01/app/oracle/oradata/orcl/ggtbs.dbf
output file name=/data01/app/oracle/oradata/orcl/ggtbs.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/data01/app/oracle/oradata/orcl/undotbs01.dbf
output file name=/data01/app/oracle/oradata/orcl/undotbs01.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/data01/app/oracle/oradata/orcl/users01.dbf
output file name=/data01/app/oracle/oradata/orcl/users01.dbf tag=TAG20170618T050153
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 18-JUN-17
sql statement: alter system archive log current
contents of Memory Script:
{
backup as copy reuse
archivelog like "/data01/app/oracle/arch/ORCL_NODE1/archivelog/2017_06_18/o1_mf_1_10_dndjq5b9_.arc" auxiliary format
"/data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_%u_.arc" ;
catalog clone recovery area;
switch clone datafile all;
}
executing Memory Script
Starting backup at 18-JUN-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=10 RECID=6 STAMP=946962469
output file name=/data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_08s73019_.arc RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 18-JUN-17
searching for all files in the recovery area
List of Files Unknown to the Database
=====================================
File Name: /data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_08s73019_.arc
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_08s73019_.arc
datafile 1 switched to datafile copy
input datafile copy RECID=3 STAMP=946962481 file name=/data01/app/oracle/oradata/orcl/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=4 STAMP=946962482 file name=/data01/app/oracle/oradata/orcl/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=946962482 file name=/data01/app/oracle/oradata/orcl/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=6 STAMP=946962483 file name=/data01/app/oracle/oradata/orcl/users01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=7 STAMP=946962484 file name=/data01/app/oracle/oradata/orcl/example01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=8 STAMP=946962485 file name=/data01/app/oracle/oradata/orcl/ggtbs.dbf
contents of Memory Script:
{
set until scn 1155236;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script
executing command: SET until clause
Starting recover at 18-JUN-17
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 10 is already on disk as file /data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_08s73019_.arc
archived log file name=/data01/app/oracle/arch/ORCL_NODE2/archivelog/2017_06_18/o1_mf_1_10_08s73019_.arc thread=1 sequence=10
media recovery complete, elapsed time: 00:00:06
Finished recover at 18-JUN-17
Finished Duplicate Db at 18-JUN-17
SQL> select process, sequence#, status from v$managed_standby ;
PROCESS SEQUENCE# STATUS
--------- ---------- ------------
ARCH 0 CONNECTED
ARCH 0 CONNECTED
ARCH 0 CONNECTED
ARCH 0 CONNECTED
ARCH 0 CONNECTED
ARCH 11 CLOSING
RFS 0 IDLE
RFS 12 IDLE
RFS 0 IDLE
MRP0 12 APPLYING_LOG
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
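With MRP0 running, a quick sanity check on the new standby is to compare the last redo sequence received with the last one applied. A sketch using the standard v$archived_log view (sequence numbers will differ in your environment):

```sql
-- On the standby (ORCL_node2): last received vs last applied redo sequence
SELECT thread#,
       MAX(sequence#)                                    AS last_received,
       MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
FROM   v$archived_log
GROUP  BY thread#;
```

If last_applied lags last_received by more than one sequence while MRP0 shows APPLYING_LOG, check for an archive gap before proceeding.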
Primary Site:
==============
DB Name : ORCL
DB Unique name : ORCL_node1
Hostname : node1
Cascading Site:
===============
DB Name : ORCL
DB Unique name : ORCL_node2
Hostname : node2
Cascaded Site:
================
DB Name : ORCL
DB Unique name : ORCL_node3
Hostname : node3
Make sure that TNS entries for ORCL_node1/ORCL_node2/ORCL_node3 exist on each of the three nodes; copy the tnsnames.ora file to all nodes.
Add a static entry for the ORCL_node3 instance to the listener.ora file on the cascaded standby host "node3" and start the listener.
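As an illustration, the tnsnames.ora entry for the cascaded standby might look like this (hostname node3.localdomain and SID orcl are taken from the listener.ora shown on node3; the ORCL_node1 and ORCL_node2 entries follow the same pattern with their own hosts):

```
ORCL_node3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node3.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = orcl)
    )
  )
```

Connecting by SID (rather than SERVICE_NAME) matches the static SID_LIST_LISTENER registration, which is what allows connections while the standby is only mounted.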
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /data01/app/oracle/product/11.2.0.3/db_1)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = orcl)
(ORACLE_HOME = /data01/app/oracle/product/11.2.0.3/db_1)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node3.localdomain)(PORT = 1521))
)
)
--> Copy the password file from "ORCL_node2" to the cascaded standby site "ORCL_node3":
scp /data01/app/oracle/product/11.2.0.3/db_1/dbs/orapwORCL oracle@node3:/data01/app/oracle/product/11.2.0.3/db_1/dbs/orapwORCL
Create a copy of the pfile from node1, remove all standby-related parameters, and scp it to node3:
------------------------------------------------------------------------------------
pfile with all standby parameters removed:
[oracle@node3 admin]$ cat /tmp/pfile_standby_cascade.ora
*.audit_file_dest='/data01/app/oracle/admin/orcl/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/data01/app/oracle/oradata/orcl/control01.ctl','/data01/app/oracle/oradata/orcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_recovery_file_dest_size=21474836480
*.db_recovery_file_dest='/data01/app/oracle/arch'
*.db_unique_name='ORCL_node3'
*.diagnostic_dest='/data01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=10
*.memory_target=839909376
*.open_cursors=300
*.processes=150
*.recyclebin='OFF'
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
--> Create a standby controlfile on the cascading standby and mount ORCL_node3 in physical standby mode
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/orcl_node3.ctl' ;
Database altered.
scp /tmp/orcl_node3.ctl oracle@node3:/data01/app/oracle/oradata/orcl/control01.ctl
scp /tmp/orcl_node3.ctl oracle@node3:/data01/app/oracle/oradata/orcl/control02.ctl
[oracle@node3 admin]$ export ORACLE_SID=orcl
SQL> startup mount pfile='/tmp/pfile_standby_cascade.ora' ;
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2233000 bytes
Variable Size 494931288 bytes
Database Buffers 339738624 bytes
Redo Buffers 2379776 bytes
Database mounted.
SQL> select database_role from v$database ;
DATABASE_ROLE
----------------
PHYSICAL STANDBY
RMAN> restore database ; #### Restore from full backup of primary
Starting restore at 18-JUN-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /data01/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /data01/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /data01/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /data01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /data01/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: restoring datafile 00006 to /data01/app/oracle/oradata/orcl/ggtbs.dbf
channel ORA_DISK_1: reading from backup piece /data01/app/oracle/rman/rman_bkp_new_0bs735tk_1_1
channel ORA_DISK_1: piece handle=/data01/app/oracle/rman/rman_bkp_new_0bs735tk_1_1 tag=TAG20170618T064820
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
Finished restore at 18-JUN-17
Now that the cascaded standby database is restored, set the parameters on the primary and the cascading standby databases accordingly.
================================================================================================================================
=======
Node1
=======
alter system set log_archive_config='DG_CONFIG=(ORCL_node1,ORCL_node2,ORCL_node3)' sid='*' ;
=======
Node2
=======
alter system set log_archive_config='DG_CONFIG=(ORCL_node1,ORCL_node2,ORCL_node3)' sid='*' ;
alter system set log_archive_dest_3='service=ORCL_node3 valid_for=(all_logfiles,all_roles) db_unique_name=ORCL_node3' sid='*' ;
========
Node3
========
alter system set log_archive_config='DG_CONFIG=(ORCL_node1,ORCL_node2,ORCL_node3)' sid='*' ;
alter system set standby_file_management=AUTO sid='*' ;
alter system set fal_server='ORCL_node2' sid='*' ;
alter system set fal_client='ORCL_node3' sid='*' ;
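After setting these parameters, you can optionally confirm on the cascading standby (ORCL_node2) that the new destination is shipping without errors. A sketch using the standard v$archive_dest_status view:

```sql
-- On ORCL_node2: log_archive_dest_3 should show status VALID and no error
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  dest_id = 3;
```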
Now start the redo apply on the cascaded standby:
===============================================
SQL> alter database recover managed standby database disconnect from session using current logfile;
Database altered.
SQL> select process, status, sequence# from v$managed_standby ;
PROCESS STATUS SEQUENCE#
--------- ------------ ----------
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
ARCH CONNECTED 0
RFS IDLE 0
RFS IDLE 0
MRP0 APPLYING_LOG 19
RFS IDLE 0
RFS IDLE 0