
Seamless Conversion: Switching an Oracle RAC Physical Standby to Snapshot Standby and Vice Versa

Oracle Data Guard allows database administrators to maintain a physical standby database for high availability and disaster recovery. In some scenarios, however, it is useful to temporarily convert the physical standby into a snapshot standby: a fully updatable copy of the primary that can be used for testing or development and then converted back, discarding all changes made while it was open. This guide provides step-by-step instructions for performing the conversion on an Oracle RAC standby and for reverting it to a physical standby.
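Under the hood, the conversion relies on Flashback Database: Oracle creates an implicit guaranteed restore point at conversion time and flashes the database back to it when you convert to a physical standby again. As an illustrative check (not part of the original procedure), you can inspect that restore point on the snapshot standby:

```sql
-- The implicit restore point is typically named SNAPSHOT_STANDBY_REQUIRED_<timestamp>.
SELECT name, guarantee_flashback_database, time
FROM   v$restore_point;
```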


Converting Physical Standby to Snapshot Standby

DATABASE NAME: STANDBY
INSTANCES: STANDBY1, STANDBY2

Step 1: Check the Current Standby Status

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE      OPEN_MODE
------------ ---------------- ------------------ ----------
OPEN         standby1         PHYSICAL STANDBY   MOUNTED
OPEN         standby2         PHYSICAL STANDBY   MOUNTED

Run the following queries to check the last received and applied sequence numbers, along with the managed recovery process (MRP) status:


Query 1: Check the MRP status:

SELECT process, sequence#, status FROM gv$managed_standby;

Query 2: Check the time up to which the standby (DR) database is in sync with the primary:

SELECT sequence#, thread#,
       TO_CHAR(first_time, 'dd-mm-yyyy hh24:mi:ss') "SYNC TIME"
FROM   v$log_history
WHERE  sequence# IN (SELECT MAX(sequence#) FROM gv$log_history GROUP BY thread#)
ORDER  BY thread#;
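As a supplementary check (not in the original procedure), you can also ask the standby whether it has detected a redo gap. The query returns rows only when archived logs are missing; no rows means no detected gap:

```sql
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
```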

Step 2: Cancel Managed Recovery

Cancel the managed recovery process on the standby database:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Verify that the recovery process has stopped:

SELECT process, sequence#, thread#, status, blocks, block#, inst_id FROM gv$managed_standby WHERE process='MRP0';

Step 3: Stop the Standby Database

Check the database configuration:

srvctl config database                    ## List databases registered with Clusterware

srvctl status database -d standby -v      ## Show the status of each instance

Stop the standby database:

srvctl stop database -d standby -o immediate

Step 4: Convert to Snapshot Standby

Start the instance in mount mode:

srvctl start instance -d standby -i standby1 -o mount

Connect to the instance and convert the database to a snapshot standby:

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

ALTER DATABASE OPEN;

Verify the conversion:

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE      OPEN_MODE
------------ ---------------- ------------------ ----------
OPEN         standby1         SNAPSHOT STANDBY   READ WRITE

Start the second instance:

srvctl start instance -d standby -i standby2 -o open

Verify the database status again:

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE      OPEN_MODE
------------ ---------------- ------------------ ----------
OPEN         standby1         SNAPSHOT STANDBY   READ WRITE
OPEN         standby2         SNAPSHOT STANDBY   READ WRITE
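The conversion steps above can be strung together in a small script. The sketch below is illustrative and not part of the original guide; the names (standby, standby1, standby2) match this walkthrough, and DRYRUN=1 (the default) only prints each command so the sequence can be reviewed before running it for real.

```shell
#!/bin/sh
# Hedged automation sketch of the physical -> snapshot conversion.
# DRYRUN=1 prints commands instead of executing them.
DRYRUN=${DRYRUN:-1}

run() {
  if [ "${DRYRUN}" = "1" ]; then
    echo "WOULD RUN: $1"
  else
    eval "$1"
  fi
}

convert_to_snapshot() {
  # Cancel managed recovery before shutting the database down.
  run "echo 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;' | sqlplus -S / as sysdba"
  run "srvctl stop database -d standby -o immediate"
  run "srvctl start instance -d standby -i standby1 -o mount"
  run "echo 'ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;' | sqlplus -S / as sysdba"
  run "echo 'ALTER DATABASE OPEN;' | sqlplus -S / as sysdba"
  run "srvctl start instance -d standby -i standby2 -o open"
}

convert_to_snapshot
```

Review the printed sequence, then rerun with DRYRUN=0 to execute it for real.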


Converting Snapshot Standby Back to Physical Standby

Step 1: Verify Snapshot Standby Status

Check the current status of the snapshot standby database:

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE      OPEN_MODE
------------ ---------------- ------------------ ----------
OPEN         standby1         SNAPSHOT STANDBY   READ WRITE
OPEN         standby2         SNAPSHOT STANDBY   READ WRITE


Step 2: Stop the Database

Stop the snapshot standby database:

srvctl stop database -d standby


Step 3: Start in Mount Mode and Convert to Physical Standby

Start the instance in mount mode:

srvctl start instance -d standby -i standby1 -o mount

Verify that it is in the correct mode:

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

Convert back to physical standby:

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;


Step 4: Restart Standby and Resume Recovery

The conversion dismounts the first instance, so restart it in mount mode (for example, srvctl stop instance -d standby -i standby1 followed by srvctl start instance -d standby -i standby1 -o mount), then start the second standby instance:

srvctl start instance -d standby -i standby2 -o mount

Verify the status:

SELECT status, instance_name, database_role, open_mode FROM v$database, gv$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE      OPEN_MODE
------------ ---------------- ------------------ ----------
OPEN         standby1         PHYSICAL STANDBY   MOUNTED
OPEN         standby2         PHYSICAL STANDBY   MOUNTED

Start managed recovery:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;


Step 5: Verify Standby Synchronization

Check the MRP process and sequence synchronization:

SELECT process, sequence#, status FROM gv$managed_standby;

SELECT sequence#, thread#,
       TO_CHAR(first_time, 'dd-mm-yyyy hh24:mi:ss') "SYNC TIME"
FROM   v$log_history
WHERE  sequence# IN (SELECT MAX(sequence#) FROM gv$log_history GROUP BY thread#)
ORDER  BY thread#;
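If the configuration is managed by the Data Guard broker, the whole conversion in each direction collapses into a single DGMGRL command that handles the restarts for you. A sketch, assuming the broker database name is standby:

```
DGMGRL> CONVERT DATABASE 'standby' TO SNAPSHOT STANDBY;
DGMGRL> CONVERT DATABASE 'standby' TO PHYSICAL STANDBY;
```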

