
Posts

Installing pgBackRest on Linux (RHEL 9 / Rocky / AlmaLinux)

When you work with PostgreSQL in real environments, one thing becomes very clear very quickly: backups are not optional. It doesn’t matter how stable your database is; failures happen:
Disks crash
Files get corrupted
Someone runs the wrong delete query
A patch goes wrong
A standby needs rebuilding
So having a proper backup strategy is not just good practice, it’s survival. That’s exactly why tools like pgBackRest are so widely used in PostgreSQL production systems. In this post, I’ll walk through a simple installation of pgBackRest on a Linux server (RHEL 9 / Rocky Linux / AlmaLinux).
Why pgBackRest? PostgreSQL already provides tools like:
pg_dump
pg_basebackup
And yes, they work. But once your database grows, or you start managing HA setups, these tools start feeling limited. pgBackRest is built for serious PostgreSQL backup requirements. Some of the reasons it stands out:
Supports full, differential, and incremental backups
Built-in compre...
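The excerpt is cut off above, but to give a rough sense of what the install looks like, here is a minimal sketch assuming an EL 9 x86_64 host with outbound internet access and the standard PGDG repository layout (the repository URL and package name are my assumptions, not steps quoted from the post):

# Add the PGDG repository (assumed URL for EL 9 x86_64)
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# Install pgBackRest itself
sudo dnf install -y pgbackrest

# Confirm the binary is on the PATH and report its version
pgbackrest version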
Recent posts

Why Your Bulk Data Load Slows Down Over Time: The Hidden Cost of Index Maintenance

Introduction
Have you ever started a bulk data load that ran lightning-fast initially, only to watch it grind to a crawl hours later? You're not alone. This is one of the most common, and most misunderstood, performance issues in Oracle databases. In this post, I'll share a real-world case study where a data load that started at 550,000 rows/hour degraded to just 40,000 rows/hour, a 93% performance drop. More importantly, I'll explain exactly why this happens and how to fix it.
The Scenario
Our team was loading data into a staging table as part of a nightly ETL process. The setup looked like this:
Table Configuration:
CREATE TABLE STG_CUSTOMER_EXTRACT (
    CUSTOMER_ID    NUMBER,
    ACCOUNT_NUMBER VARCHAR2(50),
    BILLING_ID     VARCHAR2(30),
    REGION_CODE    VARCHAR2(10),
    LOAD_DATE      DATE
);
-- Three indexes to support downstream queries
CREATE INDEX IDX_CUSTOMER_ID ON STG_CUSTOMER_EXTRACT(CUSTOMER_ID);
CREATE INDEX IDX_ACCOUNT_NUM ON STG_CUSTOMER...
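The post's own fix is truncated above, but one common remedy for this pattern is to take the indexes out of play during the load and rebuild them once at the end, so each row insert no longer pays for per-row index maintenance. A sketch against the table and index names shown in the excerpt (not necessarily the approach the full post settles on):

-- Mark the indexes unusable so the bulk load skips per-row index maintenance
ALTER INDEX IDX_CUSTOMER_ID UNUSABLE;
ALTER INDEX IDX_ACCOUNT_NUM UNUSABLE;
-- (repeat for the third index, whose name is cut off in the excerpt)

-- Recent Oracle versions skip unusable indexes for DML by default; set it explicitly if needed
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... run the bulk load into STG_CUSTOMER_EXTRACT here ...

-- Rebuild each index in a single pass after the load completes
ALTER INDEX IDX_CUSTOMER_ID REBUILD;
ALTER INDEX IDX_ACCOUNT_NUM REBUILD;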

How to Enable Oracle Golden Gate Trail File Encryption (Bidirectional Setup)

Introduction: Why Your GoldenGate Trails Need Encryption
If you're running Oracle GoldenGate in production, your trail files are essentially the heartbeat of your data replication. They contain captured transactions, metadata, and complete before/after images of your data changes. Without encryption, these files are vulnerable, whether sitting on shared storage, captured in backups, or transmitted across networks. In this comprehensive guide, I'll walk you through implementing AES256 encryption for Oracle GoldenGate trail files in a bidirectional replication setup. We'll cover both local trail storage encryption and network transmission security between two hosts.
What You'll Achieve
By the end of this tutorial, you'll have:
Encrypted trail files written to disk (data at rest protection)
Encrypted network transmission between source and target systems
Bidirectional encryption working seamlessly in both directions
Minimal performance impact while mainta...
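The excerpt stops before the configuration itself, but the wallet-based setup generally comes down to a few parameter-file lines per direction. The fragments below are a sketch only; the process names, host, port, trail path, and exact option spellings are placeholders to verify against the GoldenGate reference for your release, not the post's files:

-- One-time master-key setup in GGSCI on each host before the parameters below:
--   CREATE WALLET
--   ADD MASTERKEY

-- Fragment of the source-side Extract parameter file
EXTRACT EXTSRC
-- Encrypt trail data at rest; applies to trails specified after this line
ENCRYPTTRAIL AES256
-- Encrypt the TCP stream to the remote host as well
RMTHOST target-host, MGRPORT 7809, ENCRYPT AES256
RMTTRAIL ./dirdat/ab

-- Fragment of the target-side Replicat parameter file
REPLICAT REPTGT
-- Decrypt the incoming trail (with the wallet/master-key method some releases
-- handle this automatically; check the documentation for yours)
DECRYPTTRAIL

For the bidirectional part, the same pair of fragments is simply mirrored in the reverse direction between the two hosts.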

How to Move PostgreSQL Temporary Files to a Dedicated Disk for Faster Queries

Welcome back to my PostgreSQL optimization series! In previous posts, I covered installing PostgreSQL 18 with an optimized disk layout and relocating WAL files to a dedicated disk. Today, I'm tackling another critical performance optimization: moving temporary files to their own disk volume. If you've ever run complex queries involving large sorts or joins and watched your database slow to a crawl, this optimization is for you.
Understanding PostgreSQL Temporary Files
Before we dive into the how, let's understand the what and why. PostgreSQL creates temporary files when performing operations that exceed available memory. These files support operations like:
Large sorts - When ORDER BY clauses process more data than fits in work_mem
Hash joins - When joining large tables that exceed hash_mem limits
Aggregations - GROUP BY operations on massive datasets
Window functions - Complex analytical queries with partitioning
Here's what makes temporary files unique:...
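The excerpt ends before the implementation, but one common way to route temporary files to their own volume is a dedicated temp tablespace. A minimal sketch, assuming a volume is already mounted at /pgtemp and owned by the postgres OS user (the path and tablespace name are made up for illustration, not taken from the post):

-- Create a tablespace on the dedicated volume (path is an assumption)
CREATE TABLESPACE temp_space LOCATION '/pgtemp';

-- Send temporary files for sorts, hash joins, etc. to that tablespace
ALTER SYSTEM SET temp_tablespaces = 'temp_space';
SELECT pg_reload_conf();

-- Confirm the new setting took effect
SHOW temp_tablespaces;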

How to Move PostgreSQL WAL Files to a Dedicated Disk for Better Performance (Step-by-Step Guide)

Moving PostgreSQL WAL Files to a Dedicated Disk: A Complete Guide
In my previous post, I covered installing PostgreSQL 18 with a performance-optimized disk layout. Today, I'm diving deeper into one of the most impactful optimizations you can make: relocating your Write-Ahead Log (WAL) files to a dedicated disk volume. This seemingly simple change can dramatically reduce I/O contention and boost your database performance, especially for write-heavy workloads. Let me show you exactly how to do it safely.
What Are WAL Files and Why Do They Matter?
Write-Ahead Logs are the backbone of PostgreSQL's durability guarantees. They're sequential transaction logs that record every change made to your database before those changes hit the actual data files. Think of them as a detailed journal of everything happening in your database. WAL files serve two critical purposes:
Crash Recovery: If PostgreSQL crashes unexpectedly, WAL files allow the database to replay transactions an...
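Again the excerpt is cut short, but the usual shape of a safe WAL relocation is stop, move, symlink, restart. A minimal sketch assuming PGDG-packaged PostgreSQL 18 with its default data directory and a dedicated volume already mounted at /pgwal (the service name and paths are assumptions, not necessarily the post's):

# Stop the cluster so pg_wal is quiescent
sudo systemctl stop postgresql-18

# Move the WAL directory onto the dedicated disk and keep ownership intact
sudo mv /var/lib/pgsql/18/data/pg_wal /pgwal/pg_wal
sudo chown -R postgres:postgres /pgwal/pg_wal

# Leave a symlink behind so PostgreSQL finds WAL at its expected path
sudo ln -s /pgwal/pg_wal /var/lib/pgsql/18/data/pg_wal

# Start the cluster and confirm it comes back cleanly
sudo systemctl start postgresql-18
sudo systemctl status postgresql-18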