DB NAME: RAC
INSTANCE NAME: RAC1 RAC2
HOSTNAME: oracle1 oracle2

Pre 11gR2 upgrade steps:

  • Make a backup of the CRS home, RDBMS home, and CRS-related files (see the sketch after this list)
  • Save the crontab entries on all the db and apps nodes and disable them until the upgrade is completed
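A minimal sketch of these two items (the 10g home paths and the backup destination are illustrative, not taken from this doc):

$ tar -cvf /backup/crs_home_10g.tar /u01/app/oracle/product/10.2.0/crs     # CRS home backup
$ tar -cvf /backup/rdbms_home_10g.tar /u01/app/oracle/product/10.2.0/db    # RDBMS home backup
# cp -rp /etc/oracle /backup/etc_oracle                                    # CRS-related OS files, copied as root
$ crontab -l > $HOME/crontab.oracle.save                                   # save the crontab on each db/apps node
$ crontab -r                                                               # disable it until the upgrade is complete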

Install 11gR2 CRS Home:

  • Make sure the owner of the /racp02/u01/app/oracle dir is oracle
  • Make sure the owner of the /racp02/u01/app/oracle/crs dir is oracle
  • Make sure the /tmp dir has more than 1 GB of free space on all the nodes
  • Run /gp04/u17/oradata/11gr2_stage/aix.ppc64_11gR2_grid/rootpre.sh on all the database nodes
  • Stop listeners on all the database nodes (see the sketch after this list)
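A sketch of these checks on AIX (run on every node; the listener name follows this doc's 10g naming):

$ ls -ld /racp02/u01/app/oracle /racp02/u01/app/oracle/crs         # both should be owned by oracle
$ df -g /tmp                                                       # more than 1 GB free
# /gp04/u17/oradata/11gr2_stage/aix.ppc64_11gR2_grid/rootpre.sh    # as root, on each database node
$ lsnrctl stop LISTENER_oracle1                                    # stop the 10g listener on each node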

Verify clock synchronization is set up correctly:

  • Option I: If you choose to use a vendor time sync service (like NTP), make sure it is configured AND running.
  • Option II: If you choose to let CTSSD handle time synchronization, de-configure the vendor time sync service. For example, for NTP you may need to move or remove /etc/ntp.conf or /etc/xntp.conf, as sketched below.
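On AIX, Option II might look like the following sketch (run as root; the exact config file name depends on your setup):

# stopsrc -s xntpd                      # stop the NTP daemon
# mv /etc/ntp.conf /etc/ntp.conf.bak    # move the config aside so CTSSD runs in active mode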

If you keep NTP (Option I), use the steps below to restart xntpd with slewing (-x) enabled:

$ ps -ef|grep ntpd
$ stopsrc -s xntpd
$ startsrc -s xntpd -a "-x"
  • If clock synchronization is set up correctly, the command below will complete successfully during the 11gR2 CRS home installation. Note that it can be run manually only after the 11gR2 CRS home is installed.
$ ./cluvfy comp clocksync
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
  • Log in as oracle and run runInstaller from /gp04/u17/oradata/11gr2_stage/aix.ppc64_11gR2_grid
$ ./runInstaller
Select Installation Option: Upgrade Grid Infrastructure
Select Product Languages: English
Grid Infrastructure SCAN Information
SCAN Name: ecspat-scan
SCAN Port: 1525
Privileged Operating System Groups
ASM Database Administrator (OSDBA) Group: dba
ASM Instance Administration Operator (OSOPER) Group: dba
ASM Instance Administrator (OSASM) Group: dba
[INS-41808] Possible invalid choice for OSASM Group.
[INS-41809] Possible invalid choice for OSDBA Group.
[INS-41810] Possible invalid choice for OSOPER Group.
[INS-41813] OSDBA, OSOPER and OSASM are the same OS group.
Are you sure you want to continue? Yes
Specify Installation Location
Oracle Base: /racp04/u01/app/oracle/orabase
Software Location: /racp04/u01/app/oracle/crs/11gR2
Perform Prerequisite Checks:
Ignore the following checks:
Node Connectivity
OS Patch:IZ41855
OS Patch:IZ51456
OS Patch:IZ52319

Note: if /tmp does not have enough free space DO NOT continue.
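The prerequisite checks can also be run from the staging area with cluvfy, outside the installer; a sketch using the node names from this doc:

$ cd /gp04/u17/oradata/11gr2_stage/aix.ppc64_11gR2_grid
$ ./runcluvfy.sh stage -pre crsinst -n oracle1,oracle2 -verbose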

  • Run rootupgrade.sh as root on all the nodes from /racp04/u01/app/oracle/crs/11gR2 (see the sketch below)
  • Per Oracle, ignore the following error while running rootupgrade.sh:
Start upgrade invoked..
CRS-4000: Command startupgrade failed, or completed with errors.
Command return code of 1 (256) from command: /racp04/u01/app/oracle/crs/11gR2/bin/crsctl startupgrade
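A sketch of the rootupgrade.sh run (as root, one node at a time, oracle1 first and then oracle2):

# cd /racp04/u01/app/oracle/crs/11gR2
# ./rootupgrade.sh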

Install 11gR2 RDBMS Home

  • Prepare to create the 11.2.0 Oracle home; it must be installed on the database server node in a different directory than the current ORACLE_HOME
  • Install the base 11.2.0 software
  1. /tmp mount points on all the database nodes should have more than 1 GB of free space.
  2. The mount point for the ORACLE_HOME should have at least 7 GB of free space.
  3. During the 11g software install, if the following 3 OS patches are reported as missing, ignore the message: IZ41855, IZ51456, IZ52319
  • Install Oracle Database 11g Products from the 11g Examples CD
  • Create nls/data/9idata directory

On the database server node, as the owner of the Oracle RDBMS file system and database instance, run the $ORACLE_HOME/nls/data/old/cr9idata.pl script to create the $ORACLE_HOME/nls/data/9idata directory.  After creating the directory, make sure that the ORA_NLS10 environment variable is set to the full path of the 9idata directory whenever you enable the 11g Oracle home.
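A sketch of those two steps, run as the RDBMS owner with the 11g environment set:

$ perl $ORACLE_HOME/nls/data/old/cr9idata.pl        # creates $ORACLE_HOME/nls/data/9idata
$ export ORA_NLS10=$ORACLE_HOME/nls/data/9idata     # set whenever the 11g Oracle home is enabled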

Create the time zone file links

$ cd $ORACLE_HOME/oracore/zoneinfo
$ ln -s timezone_11.dat timezone.dat
$ ln -s timezlrg_11.dat timezlrg.dat

Apply additional 11.2.0.1 RDBMS patches to the 11.2.0 Oracle home

Note: Do not apply any of the patch special instructions, the scripts are run as part of the upgrade.

8897784   8964142   8796511   8772028   8771297   8761974   8685327 8570322
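Each patch is applied with OPatch from the new 11.2 home; a sketch for the first patch in the list (the patch staging directory is an assumption):

$ export ORACLE_HOME=/racp04/u01/app/oracle/racdb/11gR2
$ cd /gp04/u17/oradata/11gr2_stage/patches/8897784      # unzipped patch directory (illustrative)
$ $ORACLE_HOME/OPatch/opatch apply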

Steps to be executed on 10g RDBMS

  • Copy the Pre-Upgrade Information Tool utlu112i.sql from the Oracle Database 11g Release 2 (11.2) ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the temporary directory on your system.
  • Log in to the system as the owner of the Oracle home directory of the database to be upgraded.
  • Change directory to where you copied the utlu112i.sql file in step 1.
  • Connect to the database instance as a user with SYSDBA privileges.
  • Set the system to spool results to a log file for later analysis:

SQL> spool upgrade_info.log
SQL> @utlu112i.sql
SQL> spool off
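Pulled together, steps 1 through 4 above might look like this before the spool commands are run (sketch; the working directory is an assumption):

$ mkdir -p /tmp/preupgrade                                                # scratch directory outside the Oracle home
$ cp /racp04/u01/app/oracle/racdb/11gR2/rdbms/admin/utlu112i.sql /tmp/preupgrade
$ cd /tmp/preupgrade
$ sqlplus "/ as sysdba"                                                   # connect with SYSDBA privileges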

Check for any errors in the upgrade logfile.

Drop SYS.ENABLED$INDEXES (conditional), if the SYS.ENABLED$INDEXES table exists.

SQL>drop table sys.enabled$indexes;

Upgrade the database instance

  • We recommend that you use 500 MB as the SYSAUX tablespace size; set autoextend on for the SYSAUX tablespace.
  • When upgrading all statistics tables, note that Oracle E-Business Suite has only one statistics table, APPLSYS.FND_STATTAB, that needs to be upgraded (see the sketch after this list).
  • If you plan to change the PL/SQL compilation mode, disable the compilation of objects.
  • Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2) Oracle home directory
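A sketch of the SYSAUX and statistics-table items above (the SYSAUX datafile name is illustrative; check DBA_DATA_FILES for the real one, and note that the statistics table is upgraded once the database itself is on 11.2):

SQL> alter database datafile '/racdata/rac/sysaux01.dbf' autoextend on;
SQL> exec dbms_stats.upgrade_stat_table('APPLSYS', 'FND_STATTAB');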

$ cd $ORACLE_HOME/rdbms/admin

Connect to the database instance as a user with SYSDBA privileges.

SQL> STARTUP UPGRADE
SQL> @catupgrd.sql

Note: catupgrd.sql shuts the database down when it completes; restart the instance before running the post-upgrade scripts.

SQL> STARTUP
SQL> @utlu112s.sql
SQL> @catuppst.sql
SQL> @utlrp.sql

Install Oracle Data Mining and OLAP (we did not do these steps because the ODM and AMD schemas already exist)

Verify that Oracle Data Mining and OLAP are installed in your database
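The verification query itself is not shown here; a common check (sketch) is to look for the ODM and AMD components in DBA_REGISTRY. If ODM is missing, install Data Mining with dminst.sql (the next command); if AMD is missing, install OLAP as shown further below.

SQL> select comp_id, version, status from dba_registry where comp_id in ('ODM', 'AMD');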

SQL> connect / as sysdba;

SQL> @$ORACLE_HOME/rdbms/admin/dminst.sql SYSAUX TEMP

If the query does not return AMD, then you do not have OLAP installed.
To install OLAP, use SQL*Plus to connect to the database as SYSDBA and run the following command:

SQL> connect / as sysdba;
SQL> @$ORACLE_HOME/olap/admin/olap.sql SYSAUX TEMP
  • Shut down instance RAC1 or RAC2 after the 11g upgrade

Prepare sqlnet files

On oracle1, set the 11gR2 environment using $HOME/11gtemp.env (custom created)

$ mkdir $ORACLE_HOME/network/admin/rac1_oracle1
$ cd $ORACLE_HOME/network/admin/rac1_oracle1

Copy the listener.ora and tnsnames.ora from the 10gR2 ORACLE_HOME and place them under the 11gR2 ORACLE_HOME

  • Make sure the tnsnames.ora file has only one alias for rac1; check the entries above extproc_connection_data in the tnsnames.ora file.
  • Edit listener.ora and replace all occurrences of LISTENER_oracle1 with LISTENER_rac

Create $ORACLE_HOME/network/admin/listener.ora with just an ifile entry:

ifile=/racp04/u01/app/oracle/racdb/11gR2/network/admin/rac1_oracle1/listener.ora

Create $ORACLE_HOME/network/admin/tnsnames.ora with just an ifile entry:

ifile=/racp04/u01/app/oracle/racdb/11gR2/network/admin/rac1_oracle1/tnsnames.ora

  • On oracle2, set the 11gR2 environment using $HOME/11gtemp.env (custom created)
  • Follow the same steps as on node 1 (oracle1)

Remove the listener created by the CRS install and add a new listener from the RDBMS home

Log in as oracle and source the RDBMS home environment

$ srvctl status listener
$ srvctl disable listener -l LISTENER
$ srvctl remove listener -l LISTENER
$ srvctl add listener -l LISTENER_rac
$ srvctl setenv listener -l LISTENER_rac  -T TNS_ADMIN=$ORACLE_HOME/network/admin
$ srvctl setenv database -d rac -T TNS_ADMIN=$ORACLE_HOME/network/admin
$ srvctl start listener -l LISTENER_rac
$ lsnrctl stat LISTENER_rac

Register database into 11gR2 CRS

From the 10g ORACLE_HOME, remove the database registration:

$ srvctl remove database -d rac

From 11g home on oracle1

Make sure the pfile contains only an spfile entry pointing to the shared location.
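For example, $ORACLE_HOME/dbs/initrac1.ora on oracle1 (and initrac2.ora on oracle2) would contain nothing but the pointer; the shared path shown here is illustrative:

spfile='/shared_oradata/rac/spfilerac.ora'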

$ srvctl add database -d rac -o /racp04/u01/app/oracle/racdb/11gR2
$ srvctl add instance -d rac -i rac1 -n oracle1
$ srvctl add instance -d rac -i rac2 -n oracle2

Start the database

$ srvctl start database -d rac

Updated Initialization parameters

Disabled the following deprecated init parameters:

USER_DUMP_DEST
PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
PLSQL_NATIVE_LIBRARY_DIR
BACKGROUND_DUMP_DEST

Set the following parameters:

compatible=11.2.0
rac1.core_dump_dest='/racp04/u01/app/oracle/racdb/11gR2/log/diag/rdbms/rac/rac1/cdump'
rac2.core_dump_dest='/racp04/u01/app/oracle/racdb/11gR2/log/diag/rdbms/rac/rac2/cdump'
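A sketch of making these changes in the shared spfile from one instance (parameter names and values taken from the list above; the database must be restarted for compatible to take effect):

SQL> alter system reset user_dump_dest scope=spfile sid='*';
SQL> alter system reset background_dump_dest scope=spfile sid='*';
SQL> alter system reset plsql_native_library_dir scope=spfile sid='*';
SQL> alter system reset plsql_native_library_subdir_count scope=spfile sid='*';
SQL> alter system set compatible='11.2.0' scope=spfile sid='*';
SQL> alter system set core_dump_dest='/racp04/u01/app/oracle/racdb/11gR2/log/diag/rdbms/rac/rac1/cdump' scope=spfile sid='rac1';
SQL> alter system set core_dump_dest='/racp04/u01/app/oracle/racdb/11gR2/log/diag/rdbms/rac/rac2/cdump' scope=spfile sid='rac2';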

References
http://download.oracle.com/docs/cd/E11882_01/server.112/e17222.pdf
http://download.oracle.com/docs/cd/E11882_01/server.112/e10819.pdf
http://download.oracle.com/docs/cd/E11882_01/install.112/e10813.pdf
http://www.oracle.com/pls/db112/homepage
http://blogs.oracle.com/stevenChan/2009/10/db11gr2_certified_ebs11i.html

Complete checklist to upgrade the database to 11g R2 using DBUA [ID 870814.1]
Complete Checklist for Upgrades to 11gR1 using DBUA [ID 556477.1]
Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]
