Resolving OPatch Error OUI-67064: OPatchSession Cannot Load Inventory

When applying patches using OPatch on Oracle Database or Grid Infrastructure, you may encounter error OUI-67064. This post explains the troubleshooting steps to resolve this common issue.

The Error

OUI-67064: OPatchSession cannot load inventory for the given Oracle Home /opt/oracle/product/19.3.0/db_1. 
Possible causes are:
- No read or write permission to ORACLE_HOME/.patch_storage
- Central Inventory is locked by another OUI instance
- No read permission to Central Inventory
- The lock file exists in ORACLE_HOME/.patch_storage
- The Oracle Home does not exist in Central Inventory

This error indicates that OPatch is unable to read or interact with the Oracle Inventory, which is critical for patching operations.
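
Before changing anything, it helps to confirm which inventory OPatch is looking at and whether the current user can actually read it. A minimal first-pass check, using the paths from the error above (adjust for your environment):

# Locate the central inventory pointer (on Linux; Solaris uses /var/opt/oracle/oraInst.loc)
cat /etc/oraInst.loc

# Confirm the current user can read the inventory and the home's patch storage
ls -ld /opt/app/oraInventory
ls -ld /opt/oracle/product/19.3.0/db_1/.patch_storage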

Root Cause Analysis

Examining the logs and directory structure revealed an ownership mismatch inside the central inventory directory /opt/app/oraInventory.

The directory listing revealed:

drwxr-xr-x 6 grid oinstall 103 Apr 28 02:04 oraInventory
-rwxrwx--- 1 grid oinstall 1617 Aug 13 2024 orainstRoot.sh
-rw-rw---- 1 grid oinstall 56 Aug 13 2024 oraInst.loc
drwxrwx--- 2 ggudb oinstall 22 Mar 4 10:20 oui    <-- incorrect owner!
drwxrwx--- 9 grid oinstall 4096 Apr 14 08:11 backup
drwxrwx--- 3 grid oinstall 8192 Apr 23 06:12 logs
drwxrwx--- 2 grid oinstall 81 Apr 24 06:42 ContentsXML

The directory oui was owned by ggudb, not grid, which prevented proper access to the inventory.
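
A mismatch like this can be caught without reading the listing by eye. A one-line check, assuming grid:oinstall is the intended owner of the entire inventory tree:

# Report anything under the inventory not owned by grid or not in group oinstall
find /opt/app/oraInventory \( ! -user grid -o ! -group oinstall \) -ls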

Resolution Steps

1. Change Ownership of the 'oui' Directory

Run this as root (or via sudo), since the directory is currently owned by another user:

chown -R grid:oinstall /opt/app/oraInventory/oui

2. Ensure Proper Permissions on Inventory

chmod -R 775 /opt/app/oraInventory
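
After both changes, a quick re-check confirms the fix took effect; the ownership scan from the root-cause section should now come back empty:

ls -ld /opt/app/oraInventory/oui
# drwxrwxr-x 2 grid oinstall 22 Mar 4 10:20 oui   (expected after the fix)
find /opt/app/oraInventory \( ! -user grid -o ! -group oinstall \) -ls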

3. Remove Any Stale Lock Files

Stale .lock files inside .patch_storage can block patching operations:

find /opt/oracle/product/19.3.0/db_1/.patch_storage -name '*.lock'
rm -f /opt/oracle/product/19.3.0/db_1/.patch_storage/*.lock
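
Note that the rm above only clears lock files at the top level of .patch_storage; if the find reports locks in subdirectories, a recursive delete covers those as well (review the find output before deleting):

# Remove every *.lock found anywhere under .patch_storage
find /opt/oracle/product/19.3.0/db_1/.patch_storage -name '*.lock' -delete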

4. Verify oraInst.loc and Inventory Path

Check the contents of /etc/oraInst.loc:

inventory_loc=/opt/app/oraInventory
inst_group=oinstall

This confirms that the inventory path is correct and that the install group (oinstall) matches the group owning the inventory directories.
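
Since one of the listed causes is the Oracle Home missing from the central inventory, it is also worth confirming the home is registered in ContentsXML/inventory.xml. A quick check (the HOME name in the expected output is illustrative):

grep 'db_1' /opt/app/oraInventory/ContentsXML/inventory.xml
# <HOME NAME="OraDB19Home1" LOC="/opt/oracle/product/19.3.0/db_1" TYPE="O" IDX="1"/>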

5. Check for Running OUI or OPatch Sessions

To avoid conflicts:

ps -ef | grep -i oui
ps -ef | grep -i opatch

Terminate any zombie or leftover processes if found.
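
If a leftover session shows up, note its PID from the ps output and terminate it; the PID below is a placeholder:

kill 12345          # try a plain SIGTERM first
kill -9 12345       # only if the process refuses to exit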

6. Retry the Patch

Once all the above steps are completed, re-run OPatch:

$ORACLE_HOME/OPatch/opatch lsinventory

This command should now execute successfully, confirming the issue is resolved.
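
With the inventory readable again, the patch itself can be applied. A minimal sequence, assuming the patch was unzipped to a staging directory (the path below is hypothetical):

cd /tmp/patches/<patch_number>
$ORACLE_HOME/OPatch/opatch apply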

Best Practices

  • Maintain consistent ownership and permissions across Oracle inventory directories
  • Clean up any stale processes or lock files before patching (see the pre-patch check sketch after this list)
  • Establish standard operating procedures for which user accounts should perform patching operations
  • When managing Oracle patches in a Grid Infrastructure or RAC setup, use the appropriate user (oracle vs grid) for applying patches
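
The checks referenced above can be bundled into a small pre-patch script. A sketch, assuming the paths and grid:oinstall ownership used throughout this post:

#!/bin/bash
# Pre-patch sanity check -- adjust INV/OH and the owning user/group as needed
INV=/opt/app/oraInventory
OH=/opt/oracle/product/19.3.0/db_1

# 1. Inventory files should all be owned by grid:oinstall
find "$INV" \( ! -user grid -o ! -group oinstall \) -ls

# 2. No stale lock files should remain under .patch_storage
find "$OH/.patch_storage" -name '*.lock' 2>/dev/null

# 3. No other OUI/OPatch sessions should be running
ps -ef | grep -iE 'oui|opatch' | grep -v grep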


