The past four days have found me very frustrated and at my wits' end while testing upgrades of standalone Oracle Grid Infrastructure (ASM) from 11.2.0.1 to 11.2.0.2 on RHEL/OEL 5 VMs. The upgrade would seem to go fine, but after rebooting, I would see ASM and the LISTENER running out of the old (11.2.0.1) grid home again.
Looking at /etc/oratab, I saw this:
$ grep -i asm /etc/oratab
+ASM:/u01/app/grid/product/11.2.0/grid_1:N # line added by Agent
grid_1 is the old grid home; I expected to see grid_2. The comment about the line being added by Agent led me down a path where I eventually took a look at /etc/init.d/ohasd, which is basically the master script that starts everything up. I noticed that this file hadn't been updated as part of the patching, and contained this:
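Roughly speaking, the giveaway is a hard-coded reference to the old grid home near the top of the script, something like the following. This is an illustrative excerpt only, not the exact contents of my file; the variable name (ORA_CRS_HOME) is an assumption, so look for whatever your copy actually uses.

```shell
# Illustrative excerpt from /etc/init.d/ohasd (variable name assumed).
# The real script is much longer; the point is that the embedded home
# still says grid_1 instead of grid_2 after the upgrade.
ORA_CRS_HOME=/u01/app/grid/product/11.2.0/grid_1
```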
I then ran some web searches for "oracle upgrade 11.2.0.2 ohasd" and found a blog post describing the same problem. Searching My Oracle Support then turned up Doc ID 1233183.1, titled "Standalone GI: init.ohasd/ohasd not updated after 11201 to 11202 upgrade".
The bug is basically what the title says: those files are not updated during upgrades of standalone Grid Infrastructure, due to a logic bug in roothas.pl. I suggest reading the document for details.
The workaround is to manually copy those two files after the upgrade finishes. First, back up the old files:
$ mkdir ~/ohasd_init_backup
$ cd ~/ohasd_init_backup
$ ls /etc/init.d/*ohasd*
/etc/init.d/init.ohasd  /etc/init.d/ohasd
$ cp /etc/init.d/*ohasd* .
$ ls
init.ohasd  ohasd
Then copy the 11.2.0.2 files into place (as root):
# cd /etc/init.d/
# cp /u01/app/grid/product/11.2.0/grid_2/crs/init/init.ohasd .
cp: overwrite `./init.ohasd'? y
# cp /u01/app/grid/product/11.2.0/grid_2/crs/init/ohasd .
cp: overwrite `./ohasd'? y
Obviously, substitute "/u01/app/grid/product/11.2.0/grid_2" with the path to your 11.2.0.2 grid home. NOTE: SLES users need to copy ohasd.sles rather than ohasd; see the MOS document for details.
Now, a quick check to make sure the proper home is used:
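For example, grep both scripts for the grid home they embed; after the copy, everything should point at the new home (grid_2 in my paths) and nothing at the old one. A minimal sketch, demonstrated on a scratch copy so the pattern is clear; for the real check, run the greps against /etc/init.d/init.ohasd and /etc/init.d/ohasd. The ORA_CRS_HOME variable name here is an assumption.

```shell
# Scratch copy standing in for /etc/init.d/ohasd (variable name assumed).
dir=$(mktemp -d)
echo 'ORA_CRS_HOME=/u01/app/grid/product/11.2.0/grid_2' > "$dir/ohasd"
# The scripts should mention the new home...
grep -H 'grid_2' "$dir/ohasd"
# ...and nothing should still reference the old one
# (grep exits non-zero when there is no match).
grep -q 'grid_1' "$dir/ohasd" || echo 'no stale grid_1 references'
```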
Now ensure that the two new scripts have the same ownership and permissions as the old ones, then reboot to make sure everything takes effect. After the server comes back up, verify that all services and oratab point to the new grid home: check "srvctl config" for the ASM and listener services, and check the paths in /etc/oratab.
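One quick way to match the old files' mode (and, as root, ownership) is the --reference flag in GNU coreutils, which is available on RHEL/OEL. A sketch on scratch files; for the real thing, reference the backups in ~/ohasd_init_backup and target /etc/init.d/init.ohasd and /etc/init.d/ohasd:

```shell
# Sketch: copy permissions from a reference file (GNU coreutils).
cd "$(mktemp -d)"
touch backup_ohasd new_ohasd
chmod 755 backup_ohasd   # stand-in for the backed-up original
chmod 644 new_ohasd      # freshly copied file with the wrong mode
chmod --reference=backup_ohasd new_ohasd
# chown --reference=backup_ohasd new_ohasd   # needs root against the real files
stat -c '%a' new_ohasd   # now reports 755
```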
This is the second bug I've encountered in the 11.2.0.2 upgrade process. The first requires patching the 11.2.0.1 Grid Infrastructure with the July 2010 PSU just to be able to upgrade to 11.2.0.2. Let's hope for some stronger QA in the future.