Complete Guide to PostgreSQL 18 Source Installation with Performance Optimization

PostgreSQL 18 brings powerful new features and performance improvements that make it an excellent choice for modern database workloads. In this comprehensive guide, I'll walk you through installing PostgreSQL 18 from source code while implementing a strategic disk layout that maximizes performance.

Why Install from Source?

While package managers offer convenience, building PostgreSQL from source gives you complete control over configuration and optimization options. This flexibility allows you to tailor the database precisely to your hardware and workload requirements.

Prerequisites

Before we begin, ensure you're working with a RHEL, CentOS, or Fedora-based system. We'll be installing several development tools and libraries needed for compilation.
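
Note that on RHEL 8 and later, yum is simply an alias for dnf, so the commands below work unchanged on modern releases. You can confirm your distribution and version up front:

cat /etc/os-release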

Installation Process

1. Download PostgreSQL 18 Source Code

First, grab the latest PostgreSQL 18 source tarball:

wget https://ftp.postgresql.org/pub/source/v18.0/postgresql-18.0.tar.gz
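
It's worth verifying the download before building. Mirrors publish a SHA-256 checksum file alongside the tarball (the URL below assumes the same path as the tarball; adjust if your mirror differs):

wget https://ftp.postgresql.org/pub/source/v18.0/postgresql-18.0.tar.gz.sha256
sha256sum -c postgresql-18.0.tar.gz.sha256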

2. Install Required Dependencies

Install all necessary development packages:

yum install -y readline-devel zlib-devel gcc make libicu-devel \
    bison flex perl-FindBin perl-core
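
If you plan to enable optional features at configure time (see step 5), install the matching development headers now as well. Two common examples, assuming you want SSL and LZ4 support:

yum install -y openssl-devel   # for --with-ssl=openssl
yum install -y lz4-devel       # for --with-lz4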

3. Extract and Explore Configuration Options

Extract the source archive and review available configuration options:

tar -xvf postgresql-18.0.tar.gz
cd postgresql-18.0
./configure --help

4. Create PostgreSQL User and Directory Structure

This is where our performance optimization begins. We'll create a dedicated PostgreSQL user and establish separate directories across different disk volumes:

useradd -m -d /home/postgres postgres
passwd postgres
id postgres

mkdir -p /u01/app/18/init
mkdir -p /u02/app/18/data
mkdir -p /u03/app/18/wal_files 
mkdir -p /u04/app/18/archive_logs
mkdir -p /u05/app/18/temp_files

Each directory serves a specific purpose and will reside on its own physical disk for optimal I/O distribution.
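
Before going further, confirm that each mount point really sits on its own device (this assumes the volumes are already formatted and mounted; a sample fstab layout appears later in this post):

df -h /u01 /u02 /u03 /u04 /u05
lsblk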

5. Configure the Build

Configure PostgreSQL with your desired installation prefix:

cd postgresql-18.0   # skip this if you're still inside the source directory from step 3
./configure --prefix=/u01/app/18/init
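
The minimal invocation above is all this guide requires (ICU support is enabled by default, which is why libicu-devel was installed in step 2). If you installed the optional development packages, a more fully featured build might look like this sketch:

./configure --prefix=/u01/app/18/init \
            --with-ssl=openssl \
            --with-lz4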

6. Compile PostgreSQL

Build the PostgreSQL binaries. You have several options depending on what you need:

make

# Alternative options:
# make world (includes contrib modules, documentation, and man pages)
# make world-bin (everything except documentation)
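
Compilation is CPU-bound and can take several minutes. On a multi-core machine, you can speed it up considerably with parallel jobs:

make -j"$(nproc)"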

7. Install the Binaries

Install PostgreSQL to your configured prefix:

make install

# Alternative options:
# make install-world
# make install-world-bin
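
Optionally, you can run PostgreSQL's regression test suite against the build. Note that the tests start a temporary server, so they must be run as a non-root user:

make check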

8. Build and Install Contrib Modules

PostgreSQL's contrib modules provide additional functionality. Let's include them:

cd contrib
make
make install
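
Once the server is running and your environment is set up (steps 12 and 13), you can confirm the contrib modules are usable by creating any contrib extension, for example pg_buffercache:

psql -c "CREATE EXTENSION pg_buffercache;"
psql -c "SELECT count(*) FROM pg_buffercache;"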

9. Verify Installation

Confirm everything installed correctly:

cd /u01/app/18/init/bin
./pg_config

10. Set Proper Ownership

Ensure the postgres user owns all PostgreSQL directories:

chown -R postgres:postgres /u01/app/18/init
chown -R postgres:postgres /u02/app/18/data
chown -R postgres:postgres /u03/app/18/wal_files 
chown -R postgres:postgres /u04/app/18/archive_logs
chown -R postgres:postgres /u05/app/18/temp_files

11. Initialize the Database Cluster

Switch to the postgres user and initialize the data directory:

sudo su - postgres
cd /u01/app/18/init/bin
./initdb -D /u02/app/18/data
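
Two things worth knowing here: PostgreSQL 18's initdb enables data checksums by default, and initdb can place the WAL directory on its dedicated volume right at init time using -X (I'll cover relocating WAL on an existing cluster in an upcoming post). That variant would look like:

./initdb -D /u02/app/18/data -X /u03/app/18/wal_files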

12. Start PostgreSQL

Launch your database server:

cd /u02/app/18/data
ls -ltr
cd /u01/app/18/init/bin
./pg_ctl -D /u02/app/18/data start
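
By default, pg_ctl sends server output to your terminal. In practice you'll usually want a log file, and pg_ctl can also report server status (the log location here is just an example):

./pg_ctl -D /u02/app/18/data -l /u02/app/18/data/server.log start
./pg_ctl -D /u02/app/18/data status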

13. Configure Environment and Connect

Set up your environment variables and connect to PostgreSQL:

export LD_LIBRARY_PATH=/u01/app/18/init/lib:$LD_LIBRARY_PATH
export PATH=/u01/app/18/init/bin:$PATH
psql
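
To make these settings survive logouts, append them to the postgres user's profile. Setting PGDATA as well saves you from typing -D on every pg_ctl invocation:

cat >> ~/.bash_profile <<'EOF'
export PATH=/u01/app/18/init/bin:$PATH
export LD_LIBRARY_PATH=/u01/app/18/init/lib:$LD_LIBRARY_PATH
export PGDATA=/u02/app/18/data
EOF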

The Power of Disk Segregation

Now that PostgreSQL is installed, let's discuss why we created that specific directory structure across five separate disk volumes.

Our Strategic Layout

  • /u01/app/18/init (xvdb): Binary files and executables
  • /u02/app/18/data (xvdc): Database data files
  • /u03/app/18/wal_files (xvdd): Write-Ahead Log files
  • /u04/app/18/archive_logs (xvde): Archive logs for point-in-time recovery
  • /u05/app/18/temp_files (xvdf): Temporary files for sorts and joins
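
For reference, the corresponding /etc/fstab entries might look like the sketch below (the XFS filesystem and mount options are assumptions; adjust to your environment):

/dev/xvdb  /u01  xfs  defaults,noatime  0 0
/dev/xvdc  /u02  xfs  defaults,noatime  0 0
/dev/xvdd  /u03  xfs  defaults,noatime  0 0
/dev/xvde  /u04  xfs  defaults,noatime  0 0
/dev/xvdf  /u05  xfs  defaults,noatime  0 0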

Performance Benefits Explained

1. Parallel I/O Operations

When components live on separate physical disks, PostgreSQL can perform multiple I/O operations simultaneously. Data reads happen independently from WAL writes, and temporary file operations don't compete with either. This parallelism translates directly into better throughput.

2. Respecting Different I/O Patterns

Each PostgreSQL component has unique I/O characteristics. Data files exhibit random read and write patterns as users query different parts of your database. WAL files, on the other hand, require sequential writes with high durability guarantees. Archive logs need sequential writes for backup purposes, while temporary files handle high-volume storage for complex query operations like sorts and joins.

When these components share a disk, their conflicting I/O patterns create interference that degrades overall performance.

3. Eliminating Disk Contention

During peak load, PostgreSQL must juggle transaction processing on data files, ensure durability through WAL writes, support complex queries using temporary files, and maintain backups via archive logs. Without proper segregation, these competing demands create serious disk bottlenecks that limit your database's scalability.

4. Enhanced Disaster Recovery

Physical separation provides an additional layer of protection. If your data disk fails, having WAL files and archive logs on separate disks significantly improves your recovery options and reduces potential data loss.

5. Smart Hardware Utilization

Different storage types can be matched to different components based on their access patterns. Fast SSDs can be used for data files and WAL files where performance matters most. Standard storage works fine for archive logs that are written once and rarely accessed. High-capacity drives can accommodate temporary files that need space more than speed.

This approach delivers cost-effective performance optimization by using the right storage tier for each workload.
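
As a preview of how these directories get wired into PostgreSQL itself, here is a sketch (the tablespace name temp_space is just an example): temporary files are directed to /u05 via a tablespace, and archiving to /u04 via archive_command:

-- run in psql as a superuser
CREATE TABLESPACE temp_space LOCATION '/u05/app/18/temp_files';
ALTER SYSTEM SET temp_tablespaces = 'temp_space';
ALTER SYSTEM SET archive_mode = 'on';   -- takes effect after a server restart
ALTER SYSTEM SET archive_command = 'cp %p /u04/app/18/archive_logs/%f';
SELECT pg_reload_conf();                -- applies the reloadable settings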

Real-World Impact

In production environments, proper disk segregation typically delivers measurable results: twenty to forty percent higher transaction throughput compared to a single-disk configuration, noticeably better response times under mixed workloads, and consistent performance even during backup operations. Most importantly, you'll achieve better recovery point objectives during failures, meaning less potential data loss when things go wrong.

What's Next?

By following this installation approach with segregated disks, you've positioned your PostgreSQL 18 installation for optimal performance, especially under demanding production workloads.

In upcoming posts, I'll cover how to move existing WAL files and temporary files to your desired locations, allowing you to optimize existing installations or adjust your configuration as requirements change.

Stay tuned for more PostgreSQL performance optimization techniques!

