Travel time can be writing time, and while sitting in the departure lounge waiting for my flight I am using the opportunity to add part 3 of the series. In the previous two parts you could read how to add a second SCAN and the necessary infrastructure to the cluster. Now it is time to create the standby database. It is assumed that a RAC One Node database has already been created on the primary cluster and is in archivelog mode.
Static Registration with the Listeners
The first step is to statically register the databases with their respective listeners. The example below is for the primary database; the standby is registered in exactly the same way. The registration is needed during switchover operations, when the broker restarts databases as needed. Without static registration you cannot connect to a database remotely while it is shut down.
The static listener registration has to be performed on each cluster node, primary and standby cluster alike.
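A minimal sketch of such a static entry, added to the listener.ora of the Data Guard listener (listener_dg) on the first primary node, could look like the following. The db_unique_name RON with instance RON_1 is an assumption; adjust the names and the Oracle home to your environment:

SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = RON_DGMGRL)
      (SID_NAME = RON_1)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/dbhome_1)
    )
  )

On the standby cluster the entries use the corresponding names, for example GLOBAL_DBNAME = sby_DGMGRL and SID_NAME = sby_1 (sby_2 on the second node). The _DGMGRL suffix is the static service name the broker expects by default. Reload each listener afterwards, for example with lsnrctl reload listener_dg.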
Database Duplication: Creating the Physical Standby
With the listener registration complete you can start the duplication. I am using backup-based duplication here; you could equally go for duplication from active database. To be able to perform the duplication you have to have a backup first. I created it on the primary:
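A sketch of such a backup, taken with RMAN on the primary; the backup destination /u01/oraback is an assumption, use a location that can be made available on the standby host as well:

RMAN> backup database format '/u01/oraback/%U' plus archivelog format '/u01/oraback/%U';
RMAN> backup current controlfile for standby format '/u01/oraback/ctl_%U';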
Don’t forget to back up the current controlfile for a standby database: that’s immensely important for backup-based duplication, but it will be taken care of by the duplication “from active database”. Once the backup is created, ensure it is available on the standby host. In my case I am using the same backup location on the primary as well as on the standby. Being a good citizen, and because I want to test active duplication from backup later, I created a new TNS entry in /u01/app/oracle/product/12.1.0.1/dbhome_1/network/admin/tnsnames.ora
This is needed since there won’t be a cross-registration of the new SCAN listener with the listener_dg:
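The entry could look similar to this; the host name stby1 and port 1522 are assumptions, the important part is that it resolves to the statically registered standby instance via listener_dg:

sby_dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stby1)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = sby_1)
    )
  )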
If you are not using active duplication you also need to take care of the password file. After a lot of trial and error it became apparent that a) you can’t have the password file in ASM and b) you need to copy the original password file from the primary database. If the source password file is in ASM you can use asmcmd cp +data/RON/orapwRON.ora /tmp to copy it out of ASM. When I tried to create a password file locally I could not connect: every time I tried I got an ORA-01033 “ORACLE initialization or shutdown in progress” error. For the purpose of the duplication I am using node 1 on the standby cluster. Later on you can extend the database to both nodes.
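In practical terms that means getting the file out of ASM on the primary and placing it in $ORACLE_HOME/dbs on the standby host, named after the auxiliary SID. The host names used here are assumptions:

[oracle@ron1 ~]$ asmcmd cp +data/RON/orapwRON.ora /tmp/orapwRON.ora
[oracle@ron1 ~]$ scp /tmp/orapwRON.ora stby1:/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/orapwsby_1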
You will also need a parameter file (pfile) to start the standby instance; here is what I used:
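A minimal sketch of that pfile, saved as $ORACLE_HOME/dbs/initsby.ora; db_name RON, db_unique_name sby and the +DATA/+RECO disk groups are assumptions based on the names used so far:

*.db_name='RON'
*.db_unique_name='sby'
*.sga_target=1G
*.db_create_file_dest='+DATA'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=10G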
Since the SIDs will be sby_1 and sby_2 I created symlinks for these, pointing to the pfile. With everything in place it was time to duplicate!
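Here is a sketch of the backup-based duplication, run from the first standby node with the backup pieces available in the assumed /u01/oraback location:

[oracle@stby1 ~]$ export ORACLE_SID=sby_1
[oracle@stby1 ~]$ rman auxiliary /

RMAN> startup clone nomount;
RMAN> duplicate database for standby backup location '/u01/oraback' nofilenamecheck;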
After the duplication has finished, make sure you switch to using an spfile instead of the pfile. You also need to add the *.control_files parameter to the (s)pfile. You could also create the spfile in ASM, like I did here:
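The sequence could look roughly like this; the spfile location in ASM is an assumption:

-- with the duplicated database still mounted from the pfile, find the controlfile names
SQL> select name from v$controlfile;

-- add the returned names to the pfile as *.control_files, then create the spfile in ASM
SQL> create spfile='+DATA/sby/spfilesby.ora' from pfile='/u01/app/oracle/product/12.1.0.1/dbhome_1/dbs/initsby.ora';

The spfile location is then passed to srvctl when registering the database in the next step.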
Registration with Clusterware
With the duplication complete you can register the database and a service with Clusterware:
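In 12.1 srvctl syntax this could look roughly as follows; the database name sby, the candidate servers stby1 and stby2, the spfile location and the service name sby_svc are all assumptions:

srvctl add database -db sby -oraclehome /u01/app/oracle/product/12.1.0.1/dbhome_1 \
  -dbtype RACONENODE -server stby1,stby2 -instance sby \
  -dbname RON -role PHYSICAL_STANDBY -startoption MOUNT \
  -spfile '+DATA/sby/spfilesby.ora' -diskgroup DATA,RECO

srvctl add service -db sby -service sby_svc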
You don’t need to start that service now; it will be started when the database is opened:
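With the names assumed above, starting the standby in mount state (it was registered with -startoption MOUNT) and checking the result could look like this:

srvctl start database -db sby
srvctl status database -db sby
srvctl status service -db sby -service sby_svc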
Done for now
This concludes the third part of this article. We are getting closer! There is a primary and a standby database now, and all that remains to be done is the creation of the Data Guard configuration to start log shipping. You can read about that in part 4. The final part will also include demonstrations of Data Guard behaviour when you relocate the instance from node 1 to node 2. I will also show you how to perform a switchover operation.