
Oracle GoldenGate Bi-directional Replication Implementation Guide


Oracle GoldenGate (OGG) is a comprehensive software package for real-time data integration and replication in heterogeneous IT environments. Bi-directional replication enables organizations to maintain synchronized data across multiple data centers, providing high availability, disaster recovery, and load distribution capabilities. This detailed guide provides step-by-step instructions for implementing Oracle GoldenGate bi-directional replication between two Oracle databases.

Architecture Overview

In this setup, we'll configure:

  • TestDC1: Primary data center with TestDB1
  • TestDC2: Secondary data center with TestDB2
  • Bi-directional sync: Changes flow in both directions with conflict resolution

Step 1: Software Installation

⚠️ SERVER EXECUTION: Perform these steps on BOTH TestDC1 and TestDC2 servers

Step 1.1: Download and Prepare Software

First, create the necessary directory structure and prepare for installation:

# Create directory for the software
mkdir -p /data01/ogg_setup
cd /data01/ogg_setup

# Copy the Oracle GoldenGate software zip to this directory and extract it
# Example binary: 193000_fbo_ggs_Linux_x64_Oracle_shiphome.zip
unzip 193000_fbo_ggs_Linux_x64_Oracle_shiphome.zip

Why this matters: Having a dedicated setup directory keeps installation files organized and provides a central location for patches and upgrades.

Step 1.2: Create GoldenGate Home Directory

Establish the GoldenGate installation directory with proper ownership:

# Create directory for GoldenGate home
mkdir -p /opt/app/ogg21c
chown -R testuser:oinstall /opt/app/ogg21c

Important: The oinstall group is Oracle's standard installation group, and proper ownership prevents permission issues during installation and operation.
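
Before running the installer, a quick sanity check confirms the directory ownership and that the installing OS user (testuser in this guide) belongs to the oinstall group:

# Verify group membership and directory ownership
id testuser
ls -ld /opt/app/ogg21c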

Step 1.3: Configure Response File

Edit the response file (fbo_ggs_Linux_x64_Oracle_shiphome/Disk1/response/oggcore.rsp in the extracted software) with these specific values:

INSTALL_OPTION=ORA21c
SOFTWARE_LOCATION=/opt/app/ogg21c
START_MANAGER=FALSE
MANAGER_PORT=7809
DATABASE_LOCATION=/opt/oracle/product/19.3.0/db_1
INVENTORY_LOCATION=/opt/app/oraInventory
UNIX_GROUP_NAME=oinstall

Configuration Notes:

  • INSTALL_OPTION=ORA21c: Specifies Oracle 21c compatibility
  • START_MANAGER=FALSE: We'll start the manager manually after configuration
  • MANAGER_PORT=7809: Standard GoldenGate manager port

Step 1.4: Execute Silent Installation

Run the installer from the extracted Disk1 directory, using the response file:

cd /data01/ogg_setup/fbo_ggs_Linux_x64_Oracle_shiphome/Disk1
./runInstaller -silent -showprogress -waitforcompletion -ignoreSysPrereqs -responseFile /data01/ogg_setup/fbo_ggs_Linux_x64_Oracle_shiphome/Disk1/response/oggcore.rsp

Parameter Explanation:

  • -silent: Runs without user interaction
  • -showprogress: Displays installation progress
  • -waitforcompletion: Waits for installation to finish
  • -ignoreSysPrereqs: Bypasses system prerequisite checks

Step 1.5: Verify Installation

Confirm successful installation:

cd /opt/app/ogg21c/
./ggsci

GGSCI> INFO ALL


Step 2: Patching Process

⚠️ SERVER EXECUTION: Perform these steps on BOTH TestDC1 and TestDC2 servers

Step 2.1: Download Required Patches

Create a patches directory and obtain necessary files:

# Create patch directory
mkdir /data01/ogg_setup/patches

# Download and copy patch files:
# - p37777869_2118000OGGRU_Linux-x86-64.zip (GoldenGate patch)
# - p6880880_190000_Linux-x86-64.zip (OPatch utility)

Patch Importance: Patches contain security fixes, bug fixes, and performance improvements critical for production environments.

Step 2.2: Update OPatch Utility

Replace the existing OPatch with the latest version:

cd /opt/app/ogg21c
mv OPatch OPatch_bkp
unzip /data01/ogg_setup/patches/p6880880_190000_Linux-x86-64.zip

export ORACLE_HOME=/opt/app/ogg21c/
export PATH=$PATH:$ORACLE_HOME/OPatch:$ORACLE_HOME/bin

Why Update OPatch: The latest OPatch version ensures compatibility with newer patches and provides enhanced functionality.
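
A quick version check confirms the newly extracted OPatch is the one being picked up:

$ORACLE_HOME/OPatch/opatch version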

Step 2.3: Check for Patch Conflicts

Before applying patches, verify compatibility:

# Extract the patch, then check the inventory and test for conflicts
cd /data01/ogg_setup/patches
unzip p37777869_2118000OGGRU_Linux-x86-64.zip -d temp
cd temp/37777869
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Conflict Detection: This prevents installation failures and identifies any existing patches that might conflict.

Step 2.4: Apply the Patch

Apply the GoldenGate patch:

$ORACLE_HOME/OPatch/opatch apply

Follow the prompts and verify successful patch application.
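
A post-apply check against the inventory confirms the patch is now registered in the GoldenGate home:

# The patch number matches the one downloaded in Step 2.1
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i 37777869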

Step 3: Database Configuration

⚠️ SERVER EXECUTION: Perform these steps on BOTH database servers (TestDB1 and TestDB2)

Step 3.1: Create GoldenGate Database User

📍 Execute on: Both TestDB1 and TestDB2 databases

Connect to both databases and create the GoldenGate user:

-- Drop existing user if present
DROP USER gguser CASCADE;

-- Create the GoldenGate user (the GG_DATA_01 tablespace must already exist)
CREATE USER gguser IDENTIFIED BY "ggpassword"
    DEFAULT TABLESPACE GG_DATA_01
    TEMPORARY TABLESPACE TEMP;

ALTER USER gguser DEFAULT ROLE ALL;

User Purpose: The gguser account will be used by GoldenGate processes to connect to the database and perform replication operations.
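
A quick check against DBA_USERS (run as a DBA on each database) confirms the account was created with the expected defaults:

SELECT username, default_tablespace, temporary_tablespace, account_status
  FROM dba_users
 WHERE username = 'GGUSER';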

Step 3.2: Grant Comprehensive Permissions

📍 Execute on: Both TestDB1 and TestDB2 databases

Grant all necessary privileges to the GoldenGate user:

-- Transaction and session management
GRANT SELECT ANY TRANSACTION TO gguser;
GRANT CREATE SESSION TO gguser;
GRANT ALTER SESSION TO gguser;

-- Job and scheduling permissions
GRANT CREATE JOB TO gguser;

-- Table and index operations
GRANT ALTER ANY INDEX TO gguser;
GRANT CREATE ANY INDEX TO gguser;
GRANT DROP ANY TABLE TO gguser;
GRANT ALTER ANY TABLE TO gguser;
GRANT CREATE ANY TABLE TO gguser;
GRANT CREATE TABLE TO gguser;

-- Data manipulation
GRANT INSERT ANY TABLE TO gguser;
GRANT UPDATE ANY TABLE TO gguser;
GRANT DELETE ANY TABLE TO gguser;

-- System and dictionary access
GRANT SELECT ANY DICTIONARY TO gguser;
GRANT SELECT ANY TABLE TO gguser;
GRANT FLASHBACK ANY TABLE TO gguser;
GRANT ALTER SYSTEM TO gguser;
GRANT LOCK ANY TABLE TO gguser;

-- Standard roles and tablespace
GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO gguser;
ALTER USER gguser QUOTA UNLIMITED ON GG_DATA_01;

-- Package execution rights
GRANT EXECUTE ON DBMS_FLASHBACK TO gguser;
GRANT EXECUTE ON UTL_FILE TO gguser;

-- GoldenGate administrative privileges
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGUSER');

Permission Rationale: These extensive privileges allow GoldenGate to read transaction logs, manipulate data, and maintain its internal structures.
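
After granting, it is worth verifying what the account actually holds. A simple data dictionary check (run as a DBA) might look like this; the DBA_GOLDENGATE_PRIVILEGES view is populated by DBMS_GOLDENGATE_AUTH in recent releases:

-- Direct system privileges and roles held by GGUSER
SELECT privilege FROM dba_sys_privs WHERE grantee = 'GGUSER' ORDER BY privilege;
SELECT granted_role FROM dba_role_privs WHERE grantee = 'GGUSER';

-- Users registered as GoldenGate administrators
SELECT * FROM dba_goldengate_privileges;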

Step 3.3: Enable Database Logging

📍 Execute on: Both TestDB1 and TestDB2 databases

Configure the database for optimal GoldenGate operation:

-- Check current logging status
SELECT supplemental_log_data_min, force_logging FROM v$database;

-- Enable necessary logging options
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify the changes
SELECT supplemental_log_data_min, force_logging FROM v$database;
ALTER SYSTEM SWITCH LOGFILE;

-- Enable GoldenGate-specific parameters
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH sid='*';
ALTER SYSTEM SET STREAMS_POOL_SIZE=1024M SCOPE=BOTH sid='*';

Logging Configuration Explained:

  • FORCE LOGGING: Ensures all operations are logged
  • SUPPLEMENTAL LOG DATA: Provides additional information in redo logs
  • ENABLE_GOLDENGATE_REPLICATION: Optimizes database for GoldenGate
  • STREAMS_POOL_SIZE: Allocates memory for replication processes
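
To confirm that the parameter changes above took effect, the values can be read back from V$PARAMETER on each instance:

-- Verify the GoldenGate-related instance parameters
SELECT name, value
  FROM v$parameter
 WHERE name IN ('enable_goldengate_replication', 'streams_pool_size');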

Step 4: GoldenGate Basic Configuration

⚠️ SERVER EXECUTION: Perform these steps on BOTH TestDC1 and TestDC2 servers

Step 4.1: Configure the Manager Process

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Start GGSCI and create the basic directory structure:

cd /opt/app/ogg21c
./ggsci

GGSCI> CREATE SUBDIRS

Create the manager parameter file:

GGSCI> EDIT PARAM MGR

Add the following content:

PORT 7809                                     
DYNAMICPORTLIST 7810-7899

AUTOSTART EXTRACT  EXT_*
AUTOSTART EXTRACT  PMP_*
AUTOSTART REPLICAT R*

COMMENT AUTORESTART EXTRACT  *, WAITMINUTES 1, RETRIES 5
COMMENT AUTORESTART REPLICAT *, WAITMINUTES 1, RETRIES 5

Start the manager:

GGSCI> START MANAGER
GGSCI> INFO MANAGER

Manager Role: The Manager process controls and monitors all other GoldenGate processes, handles port allocation, and manages process restarts.

Step 4.2: Configure TNS Entries

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Before setting up the credential store, configure database connectivity by adding entries to the TNS names file:

cd /opt/app/ogg21c/instantclient
vi tnsnames.ora

Add the following database connection entries:

TESTDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testdb1-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdb1)
    )
  )

TESTDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testdb2-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdb2)
    )
  )

TNS Configuration Importance: These entries must exist before the credential store is created, because the credential store aliases reference these TNS names when connecting to the databases.

Test the TNS connectivity:

# Test connection to TestDB1
sqlplus gguser/ggpassword@testdb1

# Test connection to TestDB2  
sqlplus gguser/ggpassword@testdb2

Step 4.3: Create Trail File Directories

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Create directories for trail files on both servers:

mkdir -p /data01/ogg_trails/testdc1
mkdir -p /data01/ogg_trails/testdc2
chown -R testuser:oinstall /data01/ogg_trails

Trail Files: These files store captured transactions before they're applied to target databases, providing a buffer for data transfer.

Step 4.4: Configure Credential Store

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Set up secure credential management:

GGSCI> ADD CREDENTIALSTORE

GGSCI> ALTER CREDENTIALSTORE ADD USER gguser@testdb1 ALIAS gg_testdb1
Password: ********

GGSCI> ALTER CREDENTIALSTORE ADD USER gguser@testdb2 ALIAS gg_testdb2
Password: ********

GGSCI> INFO CREDENTIALSTORE

Security Benefit: Credential store eliminates hardcoded passwords in parameter files and provides centralized credential management.

Step 4.5: Create Global Parameters

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Configure global settings:

GGSCI> EDIT PARAMS ./GLOBALS

Add the following content:

ALLOWOUTPUTDIR /data01/ogg_trails/testdc1/
ALLOWOUTPUTDIR /data01/ogg_trails/testdc2/
ENABLEMONITORING
GGSCHEMA gguser
CHECKPOINTTABLE gguser.CKPTAB
ENABLE_HEARTBEAT_TABLE

Global Parameters Explained:

  • ALLOWOUTPUTDIR: Specifies allowed directories for trail files
  • ENABLEMONITORING: Enables GoldenGate monitoring features
  • GGSCHEMA: Default schema for GoldenGate objects
  • CHECKPOINTTABLE: Table for storing checkpoint information
  • ENABLE_HEARTBEAT_TABLE: Automatically creates and maintains heartbeat tables used to measure end-to-end replication lag

Step 4.6: Create Checkpoint Tables

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Create the checkpoint table in each database (it needs to exist only once per database):

GGSCI> DBLOGIN USERIDALIAS gg_testdb1
GGSCI> ADD CHECKPOINTTABLE gguser.CKPTAB

GGSCI> DBLOGIN USERIDALIAS gg_testdb2
GGSCI> ADD CHECKPOINTTABLE gguser.CKPTAB

Checkpoint Tables: Store process position information, enabling recovery from specific points in case of failures.
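
A quick data dictionary check on each database confirms the checkpoint table exists (GoldenGate may create an additional supporting table alongside it, so a wildcard is used):

-- Run as a DBA (or gguser) on TestDB1 and TestDB2
SELECT owner, table_name
  FROM dba_tables
 WHERE owner = 'GGUSER'
   AND table_name LIKE 'CKPTAB%';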

Step 5: Implement Bi-directional Replication

Step 5.1: TestDC1 to TestDC2 Setup

⚠️ SERVER EXECUTION: Perform these steps ONLY on TestDC1 server

Step 5.1.1: Enable Schema Trandata on TestDC1

📍 Execute on: TestDC1 GoldenGate server (connecting to TestDB1)

GGSCI> DBLOGIN USERIDALIAS gg_testdb1
GGSCI> ADD SCHEMATRANDATA test_schema

Trandata: ADD SCHEMATRANDATA enables schema-level supplemental logging, so the key columns GoldenGate needs are captured for all current and future tables in the schema.

Step 5.1.2: Create Extract Process on TestDC1

📍 Execute on: TestDC1 GoldenGate server

Create the Extract parameter file:

GGSCI> EDIT PARAMS EXT_DC1

Add the following configuration:

EXTRACT ext_dc1

SETENV (TNS_ADMIN="/opt/app/ogg21c/instantclient")
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)

USERIDALIAS gg_testdb1 DOMAIN OracleGoldenGate

EXTTRAIL /data01/ogg_trails/testdc1/et

WARNLONGTRANS 5m, CHECKINTERVAL 300
CHECKPOINTSECS 1
TRANLOGOPTIONS _LCRPINGFREQUENCY 1
TRANLOGOPTIONS EXCLUDEUSER gguser
TRANLOGOPTIONS INTEGRATEDPARAMS(_CKPT_RETENTION_CHECK_FREQ 300)
TRANLOGOPTIONS INCLUDEREGIONID
TRANLOGOPTIONS INTEGRATEDPARAMS (_LOGMINER_READ_BUFFERS 256)

LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
REPORTCOUNT EVERY 15 MINUTES, RATE

TABLE test_schema.*;
TABLEEXCLUDE test_schema.TEMP_DATA;

Add the Extract process:

GGSCI> ADD EXTRACT EXT_DC1, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /data01/ogg_trails/testdc1/et, EXTRACT EXT_DC1, MEGABYTES 250
GGSCI> REGISTER EXTRACT EXT_DC1, DATABASE

Extract Configuration Details:

  • INTEGRATED TRANLOG: Uses integrated capture for better performance
  • WARNLONGTRANS 5m: Warns about long-running transactions
  • EXCLUDEUSER gguser: Prevents recursive replication
  • LOGALLSUPCOLS: Logs all supplemental columns

Step 5.1.3: Create Data Pump on TestDC1

📍 Execute on: TestDC1 GoldenGate server

Create the Data Pump parameter file:

GGSCI> EDIT PARAMS PMP_DC2

Add the configuration:

EXTRACT pmp_dc2

RMTHOST testdc2.example.com, MGRPORT 7809, TCPBUFSIZE 1000000, TCPFLUSHBYTES 1000000
RMTTRAIL /data01/ogg_trails/testdc1/rt
PASSTHRU
REPORTROLLOVER AT 00:00
REPORTCOUNT EVERY 15 MINUTES, RATE
WILDCARDRESOLVE IMMEDIATE

TABLE test_schema.*;

Add the Data Pump:

GGSCI> ADD EXTRACT PMP_DC2, EXTTRAILSOURCE /data01/ogg_trails/testdc1/et
GGSCI> ADD RMTTRAIL /data01/ogg_trails/testdc1/rt, EXTRACT PMP_DC2, MEGABYTES 250

Data Pump Purpose: Transfers trail files across the network to the remote site with buffering and error handling.

Step 5.1.4: Start Processes on TestDC1

📍 Execute on: TestDC1 GoldenGate server

GGSCI> START EXTRACT EXT_DC1
GGSCI> START EXTRACT PMP_DC2

Step 5.2: TestDC2 to TestDC1 Setup

⚠️ SERVER EXECUTION: Perform these steps ONLY on TestDC2 server

Step 5.2.1: Enable Schema Trandata on TestDC2

📍 Execute on: TestDC2 GoldenGate server (connecting to TestDB2)

GGSCI> DBLOGIN USERIDALIAS gg_testdb2
GGSCI> ADD SCHEMATRANDATA test_schema

Step 5.2.2: Create Extract Process on TestDC2

📍 Execute on: TestDC2 GoldenGate server

GGSCI> EDIT PARAMS EXT_DC2

Add the same configuration as EXT_DC1, but with the appropriate database alias and local trail location:

EXTRACT ext_dc2

SETENV (TNS_ADMIN="/opt/app/ogg21c/instantclient")
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)

USERIDALIAS gg_testdb2 DOMAIN OracleGoldenGate

EXTTRAIL /data01/ogg_trails/testdc2/et

WARNLONGTRANS 5m, CHECKINTERVAL 300
CHECKPOINTSECS 1
TRANLOGOPTIONS _LCRPINGFREQUENCY 1
TRANLOGOPTIONS EXCLUDEUSER gguser
TRANLOGOPTIONS INTEGRATEDPARAMS(_CKPT_RETENTION_CHECK_FREQ 300)
TRANLOGOPTIONS INCLUDEREGIONID
TRANLOGOPTIONS INTEGRATEDPARAMS (_LOGMINER_READ_BUFFERS 256)

LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
REPORTCOUNT EVERY 15 MINUTES, RATE

TABLE test_schema.*;
TABLEEXCLUDE test_schema.TEMP_DATA;

Add the Extract process:

GGSCI> ADD EXTRACT EXT_DC2, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /data01/ogg_trails/testdc2/et, EXTRACT EXT_DC2, MEGABYTES 250
GGSCI> REGISTER EXTRACT EXT_DC2, DATABASE

Step 5.2.3: Create Data Pump on TestDC2

📍 Execute on: TestDC2 GoldenGate server

GGSCI> EDIT PARAMS PMP_DC1

Configure for DC1 destination:

EXTRACT pmp_dc1

RMTHOST testdc1.example.com, MGRPORT 7809, TCPBUFSIZE 1000000, TCPFLUSHBYTES 1000000
RMTTRAIL /data01/ogg_trails/testdc2/rt
PASSTHRU
REPORTROLLOVER AT 00:00
REPORTCOUNT EVERY 15 MINUTES, RATE
WILDCARDRESOLVE IMMEDIATE

TABLE test_schema.*;

Add the Data Pump:

GGSCI> ADD EXTRACT PMP_DC1, EXTTRAILSOURCE /data01/ogg_trails/testdc2/et
GGSCI> ADD RMTTRAIL /data01/ogg_trails/testdc2/rt, EXTRACT PMP_DC1, MEGABYTES 250

Step 5.2.4: Start Processes on TestDC2

📍 Execute on: TestDC2 GoldenGate server

GGSCI> START EXTRACT EXT_DC2
GGSCI> START EXTRACT PMP_DC1

Step 5.3: Create Replicat on TestDC1 (Receiving from TestDC2)

📍 Execute on: TestDC1 GoldenGate server (applying changes to TestDB1)

Create the Replicat parameter file:

GGSCI> EDIT PARAMS RDC2DC1

Add the configuration with conflict detection:

REPLICAT rdc2dc1

SETENV (TNS_ADMIN="/opt/app/ogg21c/instantclient")
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)

USERIDALIAS gg_testdb1

BATCHSQL BATCHESPERQUEUE 1000, BATCHTRANSOPS 100000, OPSPERBATCH 100000, OPSPERQUEUE 5000
REPERROR DEFAULT, DISCARD
DISCARDFILE ./dirrpt/RDC2DC1.dsc, APPEND, MEGABYTES 100
-- Allow the same source/target pair in more than one MAP statement
ALLOWDUPTARGETMAP
REPORTROLLOVER AT 00:00
REPORTCOUNT EVERY 15 MINUTES, RATE

-- Conflict detection for test_table1
MAP test_schema.test_table1, TARGET test_schema.test_table1, &
    SQLEXEC (ID lookuptable1, ON UPDATE, &
        QUERY 'SELECT COUNT(1) conflict FROM dual WHERE EXISTS(SELECT 1 FROM test_schema.test_table1 WHERE id = :p1 AND last_updated > :p2)', &
        PARAMS (p1 = id, p2 = last_updated), &
        BEFOREFILTER, ERROR REPORT, TRACE ALL), &
    FILTER (@GETVAL(lookuptable1.conflict) = 0), &
    REPERROR (21000, DISCARD);

-- Apply inserts and deletes here; updates are handled by the conflict-checked MAP above
GETINSERTS
GETDELETES
IGNOREUPDATES
MAP test_schema.test_table1, TARGET test_schema.test_table1;

Add and start the Replicat:

GGSCI> ADD REPLICAT RDC2DC1, INTEGRATED, EXTTRAIL /data01/ogg_trails/testdc2/rt, BEGIN NOW
GGSCI> START REPLICAT RDC2DC1

Conflict Detection Explained:

  • The SQLEXEC clause checks if a record exists with a newer timestamp
  • If a conflict is detected, the operation is filtered out
  • This prevents overwriting newer changes with older ones
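
The filter above works only if every row carries a reliably populated last_updated timestamp. If the application does not already maintain one, a trigger along the following lines is one option; the trigger name trg_test_table1_ts is hypothetical, and on the apply side you would normally prevent it from firing for replicated changes (for example with DBOPTIONS SUPPRESSTRIGGERS in the Replicat) so that replication itself does not advance the timestamp:

-- Hypothetical sketch: keep last_updated current on every local insert or update.
-- Assumes test_table1 already has a last_updated DATE (or TIMESTAMP) column.
CREATE OR REPLACE TRIGGER test_schema.trg_test_table1_ts
BEFORE INSERT OR UPDATE ON test_schema.test_table1
FOR EACH ROW
BEGIN
  :NEW.last_updated := SYSDATE;
END;
/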

Step 5.4: Create Replicat on TestDC2 (Receiving from TestDC1)

📍 Execute on: TestDC2 GoldenGate server (applying changes to TestDB2)

GGSCI> EDIT PARAMS RDC1DC2

Add similar configuration for TestDC2:

REPLICAT rdc1dc2

SETENV (TNS_ADMIN="/opt/app/ogg21c/instantclient")
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)

USERIDALIAS gg_testdb2

BATCHSQL BATCHESPERQUEUE 1000, BATCHTRANSOPS 100000, OPSPERBATCH 100000, OPSPERQUEUE 5000
REPERROR DEFAULT, DISCARD
DISCARDFILE ./dirrpt/RDC1DC2.dsc, APPEND, MEGABYTES 100
-- Allow the same source/target pair in more than one MAP statement
ALLOWDUPTARGETMAP
REPORTROLLOVER AT 00:00
REPORTCOUNT EVERY 15 MINUTES, RATE

-- Conflict detection for test_table1
MAP test_schema.test_table1, TARGET test_schema.test_table1, &
    SQLEXEC (ID lookuptable1, ON UPDATE, &
        QUERY 'SELECT COUNT(1) conflict FROM dual WHERE EXISTS(SELECT 1 FROM test_schema.test_table1 WHERE id = :p1 AND last_updated > :p2)', &
        PARAMS (p1 = id, p2 = last_updated), &
        BEFOREFILTER, ERROR REPORT, TRACE ALL), &
    FILTER (@GETVAL(lookuptable1.conflict) = 0), &
    REPERROR (21000, DISCARD);

-- Apply inserts and deletes here; updates are handled by the conflict-checked MAP above
GETINSERTS
GETDELETES
IGNOREUPDATES
MAP test_schema.test_table1, TARGET test_schema.test_table1;

Add and start the Replicat:

GGSCI> ADD REPLICAT RDC1DC2, INTEGRATED, EXTTRAIL /data01/ogg_trails/testdc1/rt, BEGIN NOW
GGSCI> START REPLICAT RDC1DC2

Step 6: Monitoring and Verification

⚠️ SERVER EXECUTION: These monitoring steps can be performed on BOTH servers

Step 6.1: Check Overall Process Status

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Monitor all processes across both sites:

GGSCI> INFO ALL

This command shows the status of all Extract, Data Pump, and Replicat processes.

Step 6.2: Monitor Process Lag

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Check replication lag to ensure timely data delivery:

GGSCI> LAG EXTRACT *
GGSCI> LAG REPLICAT *

Lag Monitoring: High lag values indicate performance issues or bottlenecks requiring investigation.
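
Because lag spikes need to be caught quickly, these checks are often scripted. A minimal sketch (assuming the GoldenGate home used in this guide) pipes the commands into ggsci so it can be run from cron or a monitoring agent:

#!/bin/bash
# Hypothetical scripted lag check: prints lag and status for every Extract and Replicat group
cd /opt/app/ogg21c
printf 'LAG EXTRACT *\nLAG REPLICAT *\nINFO ALL\n' | ./ggsci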

Step 6.3: View Detailed Process Statistics

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Get comprehensive statistics for each process:

GGSCI> STATS EXTRACT EXT_DC1
GGSCI> STATS REPLICAT RDC2DC1

Statistics include records processed, processing rates, and error counts.

Step 6.4: Monitor Error Logs and Reports

📍 Execute on: Both TestDC1 and TestDC2 GoldenGate servers

Regularly check log files for errors and performance metrics:

# View extract report file
tail -100 /opt/app/ogg21c/dirrpt/EXT_DC1.rpt

# View replicat report file
tail -100 /opt/app/ogg21c/dirrpt/RDC2DC1.rpt

# Check for discarded operations
cat /opt/app/ogg21c/dirrpt/RDC2DC1*.dsc

Log Analysis: Report files contain detailed processing information, error messages, and performance statistics essential for troubleshooting.

Step 6.5: Test Bi-directional Replication

Test Basic Replication

📍 Execute on: TestDC1 database server (TestDB1)

Insert a test record in TestDC1:

INSERT INTO test_schema.test_table1 (id, name, value, last_updated) 
VALUES (1, 'Test from DC1', 100, SYSDATE);
COMMIT;

Verify the record appears in TestDC2:

📍 Execute on: TestDC2 database server (TestDB2)

SELECT * FROM test_schema.test_table1 WHERE id = 1;

Test Reverse Replication

📍 Execute on: TestDC2 database server (TestDB2)

Update the record in TestDC2:

UPDATE test_schema.test_table1 
SET value = 200, last_updated = SYSDATE 
WHERE id = 1;
COMMIT;

Verify the update appears in TestDC1:

📍 Execute on: TestDC1 database server (TestDB1)

SELECT * FROM test_schema.test_table1 WHERE id = 1;

Test Conflict Resolution

Simultaneously update the same record on both databases:

📍 Execute on TestDC1 (TestDB1):

UPDATE test_schema.test_table1 
SET value = 300, last_updated = SYSDATE 
WHERE id = 1;
COMMIT;

📍 Execute on TestDC2 (TestDB2) - within seconds:

UPDATE test_schema.test_table1 
SET value = 400, last_updated = SYSDATE 
WHERE id = 1;
COMMIT;

Check which update wins based on the timestamp-based conflict resolution logic.
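
To see the outcome, compare the surviving row and its timestamp on both databases, and review the Replicat discard files for the update that was filtered out:

-- Run on both TestDB1 and TestDB2 and compare the results
SELECT id, value, TO_CHAR(last_updated, 'YYYY-MM-DD HH24:MI:SS') AS last_updated
  FROM test_schema.test_table1
 WHERE id = 1;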

Troubleshooting Common Issues

Process Status Issues

If processes show ABENDED status:

  1. Check the process report file for error details
  2. Review database connectivity and credentials
  3. Verify trail file permissions and disk space
  4. Check for database parameter changes

High Lag Situations

When lag increases significantly:

  1. Check system resources (CPU, memory, I/O)
  2. Review batch size and processing parameters
  3. Monitor network connectivity between sites
  4. Analyze database load and performance

Conflict Resolution Problems

If conflicts aren't resolving correctly:

  1. Verify timestamp columns exist and are populated
  2. Check the conflict detection SQL syntax
  3. Review discard files for filtered operations
  4. Validate the conflict resolution logic

Performance Optimization Tips

Extract Process Optimization

  • Adjust CHECKPOINTSECS based on transaction volume
  • Use TRANLOGOPTIONS INTEGRATEDPARAMS for better performance
  • Monitor and tune _LOGMINER_READ_BUFFERS parameter

Replicat Process Optimization

  • Optimize BATCHSQL parameters for your workload
  • Use BULK operations for large data volumes
  • Consider parallel replicat for high-throughput scenarios

Network Optimization

  • Tune TCPBUFSIZE and TCPFLUSHBYTES for network conditions
  • Use compression for slow network links
  • Consider multiple data pump processes for large volumes

Best Practices and Recommendations

Security Considerations

  1. Use Credential Store: Avoid hardcoded passwords in parameter files
  2. Encrypt Connections: Configure SSL/TLS for network communication
  3. Limit Privileges: Grant only necessary database privileges
  4. Monitor Access: Regularly audit GoldenGate user activities

Operational Excellence

  1. Regular Monitoring: Implement automated monitoring for lag and errors
  2. Backup Strategy: Include checkpoint tables and parameter files in backups
  3. Documentation: Maintain up-to-date configuration documentation
  4. Testing: Regularly test failover and recovery procedures

Capacity Planning

  1. Trail File Management: Implement automatic trail file cleanup (see the Manager parameter sketch after this list)
  2. Storage Monitoring: Monitor disk space for trail file directories
  3. Performance Baselines: Establish normal operating parameters
  4. Scalability Planning: Plan for future volume growth
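
The trail-file cleanup mentioned in item 1 is typically handled by the Manager process itself. A hedged sketch of the relevant MGR parameter follows; the directories match this guide's layout, and the 3-day retention is only an illustrative value:

-- Purge trail files that all readers have processed, keeping at least 3 days
PURGEOLDEXTRACTS /data01/ogg_trails/testdc1/*, USECHECKPOINTS, MINKEEPDAYS 3
PURGEOLDEXTRACTS /data01/ogg_trails/testdc2/*, USECHECKPOINTS, MINKEEPDAYS 3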

This comprehensive guide provides the detailed steps necessary to implement Oracle GoldenGate bi-directional replication. The configuration includes timestamp-based conflict detection and resolution mechanisms that help keep data consistent across both data centers.

Key success factors include:

  1. Thorough Planning: Understand your data patterns and conflict scenarios
  2. Proper Configuration: Follow each step carefully and validate configurations
  3. Continuous Monitoring: Implement robust monitoring and alerting
  4. Regular Testing: Test the complete setup including disaster recovery scenarios
  5. Documentation: Maintain detailed documentation of all configurations and procedures

With proper implementation and ongoing maintenance, Oracle GoldenGate bi-directional replication provides a robust foundation for high availability, disaster recovery, and load distribution across multiple data centers. The investment in setup complexity pays dividends in operational reliability and business continuity capabilities.


Oracle Real Application Clusters (RAC) provide high availability, scalability, and manageability for databases. One of the most powerful tools for managing RAC databases is srvctl , a command-line utility that allows administrators to control various database services. This blog explores essential srvctl commands to help you efficiently manage Oracle RAC environments. 1. Checking Database Configuration and Status  List all available databases on the host:                  srvctl config database   Check the status of a specific database and its instances:                    srvctl status database -d <database_name>   Retrieve detailed status information about a database, including its instances and states:                    srvctl status database -d <database_name> -v 2. Stopping and Starting Databases   ...