Veritas Storage Foundation Oracle RAC 5.0MP1+e1221809a for 11gR1

Article: TECH60344  |  Created: 2008-01-13  |  Updated: 2009-01-15  |  Article URL: http://www.symantec.com/docs/TECH60344
Article Type: Technical Solution


Environment

Issue



Veritas Storage Foundation Oracle RAC 5.0MP1+e1221809a for 11gR1

Solution






PATCH for SF Oracle RAC 5.0MP1 for 11gR1 support:


This README is included in the download.

BEFORE GETTING STARTED
======================
This patch is only for SF Oracle RAC 5.0MP1 on Solaris 10 or Solaris 9.
Make sure that you are running one of these supported configurations before
applying this patch.
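
As a quick check before you begin (an optional step, not part of the original
procedure), you can confirm the OS release and the installed VRTSdbac build:

       # uname -r
       # pkginfo -l VRTSdbac | grep PSTAMP

On a supported configuration, uname -r reports 5.9 (Solaris 9) or 5.10
(Solaris 10), and the PSTAMP line should show a 5.0MP1 build stamp.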


FIXES INCLUDED IN THIS PATCH
============================
- Support for Oracle 11gR1
- Multiprivnic support


PACKAGES AFFECTED BY THIS PATCH
===============================
VRTSdbac


INSTALLING THIS PATCH
=====================

Stopping the services
---------------------
The following steps should be run on each node in the cluster, one at a time:

1. If Oracle and associated processes are active outside of VCS
  control, stop them.


2. Shut down Oracle instances on all systems of the cluster.

  a) If the database instances are under VCS control, offline the corresponding
      VCS service groups. As "root" user, enter:

       # hagrp -offline <oracle-grp> -sys <node_name>
       
      Make sure the <oracle-grp> is offline.
        From any one node of the cluster enter:

       # hagrp -state <oracle-grp>

       Output should look like:

       #Group       Attribute             System             Value
       <oracle-grp>  State                <galaxy>           |OFFLINE|
       <oracle-grp>  State                <nebula>           |OFFLINE|

  b) If the database instances are under CRS control, run the following on any
      one system in the cluster (as "oracle" user):

       $ srvctl stop database -d <database_name>

  c) Stop all resources configured under CRS control. As "oracle" user, enter:

       $ srvctl stop nodeapps -n <sys_name>

3. Change the <oracle-grp>'s AutoStart attribute to zero, so that the
   <oracle-grp> group does not online automatically on reboot.  On any one node
   execute the following:

       # /opt/VRTSvcs/bin/haconf -makerw
       # /opt/VRTSvcs/bin/hagrp -modify <oracle-grp> AutoStart 0
       # /opt/VRTSvcs/bin/haconf -dump -makero

4. Stop CRS on all systems in the cluster.

  a) If CRS is under VCS control, as "root" user on each system in the cluster,
      offline the "cssd" resource:

       # hares -offline cssd -sys <sys_name>

  b) If CRS is not under VCS control, as "root" user on each system in the
      cluster, enter:

        # /etc/init.crs stop

5. Stop VCS on all nodes of the cluster.
   For example, from any node of the cluster, enter:

       # /opt/VRTSvcs/bin/hastop -all
       
       Make sure VCS is down. From each node enter:

       # hastatus

       Output should resemble:

       attempting to connect....not available; will retry

6. On each node of the cluster unconfigure and unload vcsmm:

       # /sbin/vcsmmconfig -U

       Verify that port 'o' has been closed:

       # /sbin/gabconfig -a

       The display should not have port 'o' listed.

        If port 'o' is still listed, make sure that all Oracle instances are
        offline. Then unload the vcsmm module, using the module ID reported by
        modinfo (250 in this example):

      # modinfo | grep vcsmm
      250 7b392000  3ad00 294   1  vcsmm (VRTSvcsmm '5.0MP1')
      # modunload -i 250


7. On each node of the cluster, unconfigure and unload lmx:

       # /etc/init.d/lmx stop

      # modinfo | grep lmx
      251 7b3c4000  15860 295   1  lmx (LLT Mux '5.0MP1')
      # modunload -i 251
      Feb 28 16:05:02 thor220 lmx: LMX Multiplexor unavailable
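
   As an optional check (not part of the original steps), confirm that neither
   module remains loaded before applying the patch:

       # modinfo | egrep "vcsmm|lmx"

   No output means both vcsmm and lmx have been unloaded.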



Installing this patch
---------------------

1. Change directory to the patch location.

2. Install the patch as below:
   
  a) On Sol9:

       # patchadd 137332-01

        Verify that the patch is added.

        # showrev -p | grep 137332-01
        Output should show 137332-01

  b) On Sol10:

       # patchadd 137333-01

        Verify that the patch is added.

        # showrev -p | grep 137333-01

        Output should show 137333-01.

      # pkginfo -l VRTSdbac
         PKGINST:  VRTSdbac
            NAME:  Veritas Oracle Real Application Cluster Support Package by Symantec
        CATEGORY:  system
            ARCH:  sparc
         VERSION:  5.0
         BASEDIR:  /
          VENDOR:  Symantec Corporation
            DESC:  Veritas Oracle Real Application Cluster Support Package by Symantec
          PSTAMP:  Veritas-5.0MP1+e1221809-02/27/08-17:21:39
        INSTDATE:  Feb 27 2008 18:47
          STATUS:  completely installed
           FILES:      201 installed pathnames
                        26 shared pathnames
                         8 linked files
                        44 directories
                       123 executables
                     17786 blocks used (approx)
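
      As an additional quick check, the PSTAMP field of the package should now
      include the e1221809 build string, as in the sample output above:

      # pkginfo -l VRTSdbac | grep PSTAMP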

   
Starting the services
---------------------

1. On each node of the cluster, configure and load lmx:

       # /etc/init.d/lmx start
      # modinfo | grep lmx
      251 7bf5a000  15948 295   1  lmx (LLT Mux '5.0MP1+e1221809')

2. On each node of the cluster, configure and load vcsmm:

      # /etc/init.d/vcsmm start
      # modinfo | grep vcsmm
      250 7be28000  3b078 294   1  vcsmm (VRTSvcsmm '5.0MP1+e1221809')

3. Relink with the new libraries.

  If Oracle and associated processes are active outside of VCS
  control, stop them before relinking.
      
   a) $ cd $ORACLE_HOME/lib
      $ mv $ORACLE_HOME/lib/libodm11.so libodm11.so.oracle
      $ ln -s /usr/lib/sparcv9/libodm.so libodm11.so

   b) $ cd /opt/VRTSvcs/rac/lib
      $ cp libskgxn2_64.so /opt/ORCLcluster/lib/libskgxn2.so
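
   As an optional sanity check (not part of the original steps), confirm that
   the ODM link now points to the Veritas library:

      $ ls -l $ORACLE_HOME/lib/libodm11.so

   The listing should show libodm11.so as a symbolic link to
   /usr/lib/sparcv9/libodm.so.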

4. On each node of the cluster start VCS:

      # /etc/rc3.d/S99vcs start

  On each node, verify that the cvm group is online. For example:

      # hagrp -state cvm

  Output should list cvm group as ONLINE.

5. Change the <oracle-grp>'s AutoStart attribute to one, so that the <oracle-grp>
  group goes online automatically on reboot or when had is started.

   On any one node execute:

      # /opt/VRTSvcs/bin/haconf -makerw
      # /opt/VRTSvcs/bin/hagrp -modify <oracle-grp> AutoStart 1
      # /opt/VRTSvcs/bin/haconf -dump -makero

6. Online the <oracle-grp> group. From each node of the cluster enter:

       # /opt/VRTSvcs/bin/hagrp -online <oracle-grp> -sys <node-name>

       Verify that the <oracle-grp>  group is online.
        For example on any one node enter:

       # hagrp -state <oracle-grp>

       Output should list the <oracle-grp> group as ONLINE on all nodes of the cluster.

7. Start CRS on all systems in the cluster.

  a) If CRS is under VCS control, as "root" user on each system in the cluster,
      online the "cssd" resource:

       # hares -online cssd -sys <sys_name>

  b) If CRS is not under VCS control, as "root" user on each system in the
      cluster, enter:

        # /etc/init.crs start
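
   Optionally, after CRS has been started you can confirm that the clusterware
   stack is healthy on each node (the exact output wording depends on the CRS
   version):

        # $ORA_CRS_HOME/bin/crsctl check crs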


UNINSTALLING THIS PATCH
=======================

Stopping the services
---------------------

Run the following steps on each node in the cluster, one at a time:

1. If Oracle and associated processes are active outside of VCS
  control, stop them.

2. Shut down Oracle instances on all systems of the cluster.

  a) If the database instances are under VCS control, offline the corresponding
      VCS service groups. As "root" user, enter:

       # hagrp -offline <oracle-grp> -sys <node_name>
       
       Make sure the <oracle-grp> is offline.
        On any one node in the cluster, enter:

       # hagrp -state <oracle-grp>

       Output should resemble:

       #Group       Attribute             System           Value
       <oracle-grp>  State                <galaxy>           |OFFLINE|
       <oracle-grp>  State                <nebula>           |OFFLINE|

  b) If the database instances are under CRS control, run the following on any
      one system in the cluster (as "oracle" user):

       $ srvctl stop database -d <database_name>

  c) Stop all resources configured under CRS control. As "oracle" user, enter:

       $ srvctl stop nodeapps -n <sys_name>

3. Change the <oracle-grp>'s AutoStart attribute to zero, so that the
   <oracle-grp> group does not online automatically on reboot. On any one node
   execute the following:

       # /opt/VRTSvcs/bin/haconf -makerw
       # /opt/VRTSvcs/bin/hagrp -modify <oracle-grp> AutoStart 0
       # /opt/VRTSvcs/bin/haconf -dump -makero

4. Stop CRS on all systems in the cluster.

  a) If CRS is under VCS control, as "root" user on each system in the cluster,
      offline the "cssd" resource:

       # hares -offline cssd -sys <sys_name>

  b) If CRS is not under VCS control, as "root" user on each system in the
      cluster, enter:

        # /etc/init.crs stop

5. Stop VCS on all nodes of the cluster.
   For example, from any node of the cluster, enter:

       # /opt/VRTSvcs/bin/hastop -all
       
       Make sure VCS is down. From each node enter:

       # hastatus

       Output should resemble:

       attempting to connect....not available; will retry

6. On each node of the cluster unconfigure and unload vcsmm:

       # /sbin/vcsmmconfig -U

       Verify that port 'o' has been closed:

       # /sbin/gabconfig -a

       The display should not have port 'o' listed.

       If port 'o' is still listed, make sure that all Oracle instances are
       offline, then unload the vcsmm module:

       # modinfo | grep vcsmm
       # modunload -i <vcsmm module id>


Uninstalling this patch
-----------------------

1. On each node of the cluster, remove the patch:

    a) On Sol9:

       # patchrm 137332-01

        Verify that the patch is removed.

        # showrev -p | grep 137332-01

        Output should not show 137332-01

   b) On Sol10:

       # patchrm 137333-01

        Verify that the patch is removed.

        # showrev -p | grep 137333-01

        Output should not show 137333-01


Starting the services
---------------------

1. Relink with the Veritas libraries.

  If Oracle and associated processes are active outside of VCS
  control, stop them before relinking.

      a) $ cd $ORACLE_HOME/lib
         $ mv $ORACLE_HOME/lib/libodm11.so libodm11.so.oracle
         $ ln -s /usr/lib/sparcv9/libodm.so libodm11.so

      b) $ cd /opt/VRTSvcs/rac/lib
         $ cp libskgxn2_64.so /opt/ORCLcluster/lib/libskgxn2.so

2. On each node of the cluster start VCS

      # /etc/rc3.d/S99vcs start

  On each node, verify that the cvm group is online. For example:

       # hagrp -state cvm

   Output should list cvm group as ONLINE.

3. Change the <oracle-grp>'s AutoStart attribute to one, so that the <oracle-grp>
   group goes online automatically on reboot or when had is started.
   On any one node, execute the following:

       # /opt/VRTSvcs/bin/haconf -makerw
       # /opt/VRTSvcs/bin/hagrp -modify <oracle-grp> AutoStart 1
       # /opt/VRTSvcs/bin/haconf -dump -makero

4. Online the <oracle-grp> group. From each node of the cluster enter:

       # /opt/VRTSvcs/bin/hagrp -online <oracle-grp> -sys <node-name>

       
  Verify that the <oracle-grp>  group is online. For example on any one
   node enter:

       # hagrp -state <oracle-grp>

   Output should list the <oracle-grp> group as ONLINE on all nodes of the cluster.


CONFIGURING THE MULTIPRIVNIC RESOURCE
=====================================

You need to configure the MultiPrivNIC resource for the Oracle UDP cluster interconnect and the CRS private IP. The MultiPrivNIC resource monitors multiple links and fails over the IP addresses configured on those links.

Configuring the MultiPrivNIC resource in the main.cf
----------------------------------------------------

1. Copy the MultiPrivNIC.cf file to the configuration directory:

    # cp /etc/VRTSvcs/conf/MultiPrivNIC.cf /etc/VRTSvcs/conf/config/MultiPrivNIC.cf

2. Back up the main.cf file:

    # cp /etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config/main.cf.backup1

3. Using vi or another text editor, edit the main.cf file:
 
    a)  If there are any PrivNIC resources, remove them.

       For example:

        PrivNIC udp_priv (
               Critical = 0
               Device @thor88 = { bge2 = 0, bge3 = 1 }
               Device @thor89 = { bge2 = 0, bge3 = 1 }
               Device @thor90 = { bge2 = 0, bge3 = 1 }
               Device @thor91 = { bge2 = 0, bge3 = 1 }
               Address @thor88 = "1.1.1.88"
               Address @thor89 = "1.1.1.89"
               Address @thor90 = "1.1.1.90"
               Address @thor91 = "1.1.1.91"
               NetMask = "255.255.255.0"
               )
     
   b)  Add the MultiPrivNIC resource entry to the end of the main.cf
      configuration using the appropriate node names, private network link
      names, and IP addresses for your configuration.

      For example:

       CVMVxconfigd cvm_vxconfigd (
               Critical = 0
               CVMVxconfigdArgs = { syslog }
               )

       MultiPrivNIC udp_priv (
               Critical = 0
               Device @thor88 = { bge2 = 0, bge3 = 1 }
               Device @thor89 = { bge2 = 0, bge3 = 1 }
               Device @thor90 = { bge2 = 0, bge3 = 1 }
               Device @thor91 = { bge2 = 0, bge3 = 1 }
               Address @thor88 = { "1.1.1.88" = 0, "192.168.5.1" = 1 }
               Address @thor89 = { "1.1.1.89" = 0, "192.168.5.2" = 1 }
               Address @thor90 = { "1.1.1.90" = 0, "192.168.5.3" = 1 }
               Address @thor91 = { "1.1.1.91" = 0, "192.168.5.4" = 1 }
               NetMask = "255.255.255.0"
               )

       cssd requires oracle_cfs
       cvm_clus requires cvm_vxconfigd
       oracle_cfs requires oracle_vol
       oracle_vol requires vxfsckd
       vxfsckd requires cvm_clus

      Note: You should configure both the CRS private IP and the cluster
      interconnect IPs using the MultiPrivNIC resource.

   c) Add the dependency of the cssd resource on the MultiPrivNIC resource.

      For example:

       cssd requires oracle_cfs
       cssd requires udp_priv
       cvm_clus requires cvm_vxconfigd
       oracle_cfs requires oracle_vol
       oracle_vol requires vxfsckd
       vxfsckd requires cvm_clus

   d) Include MultiPrivNIC.cf at the beginning of main.cf, as shown below:

      include "types.cf"
      include "CFSTypes.cf"
      include "CVMTypes.cf"
      include "MultiPrivNIC.cf"
      include "OracleTypes.cf"
      include "PrivNIC.cf"

4. Stop CRS on all systems in the cluster.

  a) If CRS is under VCS control, as "root" user on each system in the cluster,
      offline the "cssd" resource:

       # hares -offline cssd -sys <sys_name>

  b) If CRS is not under VCS control, as "root" user on each system in the
      cluster, enter:

        # /etc/init.crs stop

5. Stop VCS on all nodes of the cluster. For example, from any node of the
   cluster, enter:

       # /opt/VRTSvcs/bin/hastop -all
       
   Make sure VCS is down. From each node enter:

       # hastatus

   Output should resemble:

       attempting to connect....not available; will retry

6. Start VCS on all nodes of the cluster so that the MultiPrivNIC resource
   comes online.

   Note: Start VCS first on the node where the main.cf was modified in the
   previous steps.

  For example, on each node of the cluster, enter:

      # /etc/rc3.d/S99vcs start

  On each node, verify that the cvm group is online. For example:

       # hagrp -state cvm

       Output should list cvm group as ONLINE.
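
   You can also verify that the MultiPrivNIC resource itself is online on all
   nodes (an example using the udp_priv resource name from the sample
   configuration above):

       # hares -state udp_priv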


Configuring the cluster interconnect
------------------------------------

1. Install 11gR1 CRS and Database binaries and any required patches.

2. Do not relink Oracle, since you will want to use Oracle UDP. Relink the ODM
   library as instructed in the earlier section of this README.

3. Create database using DBCA or manually using scripts.

4. Start the database using srvctl or manually on all nodes.

5. On each database instance, log in as sysdba and run the following
   statements.

   For example, on node1 for instance 1:

   alter system set cluster_interconnects='1.1.1.88:192.168.5.1'
     scope=spfile sid='sid1';

   On node2 for instance 2:

   alter system set cluster_interconnects='1.1.1.89:192.168.5.2'
     scope=spfile sid='sid2';

   On node3 for instance 3:

   alter system set cluster_interconnects='1.1.1.90:192.168.5.3'
     scope=spfile sid='sid3';

   On node4 for instance 4:

   alter system set cluster_interconnects='1.1.1.91:192.168.5.4'
     scope=spfile sid='sid4';

6. Restart the database.

7. You can verify the cluster_interconnects setting on each system using the
   following SQL query:

   select * from v$configured_interconnects where IS_PUBLIC='NO';
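
   As an additional, optional check, the interconnects actually in use by each
   running instance can be listed with the following query (this assumes the
   gv$cluster_interconnects view, available in Oracle 10g and later):

   select inst_id, name, ip_address from gv$cluster_interconnects;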


MANDATORY ORACLE PATCHES
------------------------

The following Oracle patches are required for both Solaris 9 and Solaris 10, and for both single instance Oracle and Oracle RAC installations:

Patch 6849184: TEST HANGS W/ VERITAS ODM. DMON PROCESS READS CONFIGURATION DATA, IN TIGHT LOOP

Patch 6442900: DISM PROCESS DOES NOT COME UP WHEN SGA_MAX_SIZE IS SET
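
To check whether these patches are already applied (an optional step; opatch
output varies slightly between versions), query the Oracle inventory as the
"oracle" user:

       $ $ORACLE_HOME/OPatch/opatch lsinventory | grep 6849184
       $ $ORACLE_HOME/OPatch/opatch lsinventory | grep 6442900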


OVERVIEW OF 11gR1 INSTALL
=========================

After setting up Veritas Storage Foundation for Oracle RAC 5.0MP1 + PP (see
the sections above for installing the PP), prepare to install the Oracle 11g
RAC software. You can install the Oracle 11g RAC software on shared storage or
locally on each node.

Note: It is important to review the Oracle 11g RAC installation manuals before installing Oracle 11g RAC.

Installing Oracle in an SFRAC 5.0 MP1 Solaris environment requires performing
the following tasks:

Tasks before installing Oracle

1.  Create the Oracle user and groups
2.  Set up the shared memory
3.  Set up the remote access
4.  Configure the private IPs
5.  Configure the public IP
6.  Create disk groups, volumes, and mount points for:
      CRS HOME (Oracle and Symantec recommend local installation)
      ORACLE HOME (Oracle and Symantec recommend local installation)
      OCR and Vote disk
7.  Add created volumes and disk groups to the VCS configuration to make them
    highly available
8.  Copy the membership library

Tasks for installing Oracle

Install the CRS and database binaries

Tasks after installing Oracle

1. Link the ODM library
2. Create the database
3. Configure the cluster-interconnect

Before installing Oracle
---------------------------

1. Create Oracle user and groups.

2. Set up the shared memory parameters.
  Edit the /etc/system file and set the shared memory parameters on the nodes
  within the cluster. Refer to the latest Oracle documentation for information
   about setting shared memory parameters. Restart the nodes for the new values
   to take effect.
   
3. The Oracle installation process requires ssh or rsh permission to be set for
  the Oracle user. If the ssh or rsh verification fails on any nodes, enable
   ssh or rsh access for those nodes.

4. Configure CRS and UDP private IP addresses for failover. (See the section
   "CONFIGURING THE MULTIPRIVNIC RESOURCE" of this README for configuration
   steps.)

  * You must use all LLT links for MultiPrivNIC. Non-LLT links cannot be used.
  * The IP addresses used here must be manually added to /etc/hosts on all
    nodes (see the example entries after this list).
  * The UDP IP addresses must be added to the Oracle initialization file as
    the cluster_interconnects parameter.
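
  For illustration only, /etc/hosts entries for the private addresses used in
  the MultiPrivNIC example earlier in this README might look like the
  following (the -priv host names are hypothetical placeholders):

      1.1.1.88      thor88-priv1
      192.168.5.1   thor88-priv2
      1.1.1.89      thor89-priv1
      192.168.5.2   thor89-priv2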

5. Identify the public virtual IP addresses for use by Oracle. Using vi or
   another text editor, add an entry for the virtual IP address and virtual
   public name in the /etc/hosts file. Each node must have a separate public
   virtual IP address. The following is an example of an entry:

      10.182.13.92 galaxy-vip
      10.182.13.93 nebula-vip

  This procedure must be performed on all nodes.

6. Create Oracle disk groups, volumes, and mount points. To create disk
   groups, volumes, and mount points for Oracle, review the following
   guidelines.

   Before you install the Oracle Cluster Ready Services (CRS) and Oracle 11g
   binaries, you must create storage space for these installations. You will
   need to provide storage for the following directories and files: home
   directories, CRS_HOME for the CRS binaries, and ORACLE_HOME for the Oracle
   binaries.

   The same procedure can be followed for CRS_HOME (if it is on shared
   storage) and for ORACLE_HOME. For example:

  To create a file system on local storage for Oracle/CRS binaries (/app)

   a)  As root user, create a VxVM local diskgroup, orabindg_hostname:

      # vxdg init orabindg_galaxy Disk_1

   b)  Create a volume, orabinvol_hostname:

      # vxassist -g orabindg_galaxy make orabinvol_galaxy 12G

   c)  On each node, create a directory, /app:

      # mkdir /app

      Repeat this step on all nodes.

   d)  Create a filesystem with this volume, orabinvol_hostname:

      # mkfs -F vxfs /dev/vx/rdsk/orabindg_galaxy/orabinvol_galaxy

   e)  Mount /app:

      # mount -F vxfs /dev/vx/dsk/orabindg_galaxy/orabinvol_galaxy /app

   f)  On each node, add an entry for this file system. For example, edit the
       /etc/vfstab file, list the new file system, and specify "yes" in the
       mount-at-boot column:

       #device to mount                              device to fsck                                 mount point  FS type  fsck pass  mount at boot  mount options
       /dev/vx/dsk/orabindg_galaxy/orabinvol_galaxy  /dev/vx/rdsk/orabindg_galaxy/orabinvol_galaxy  /app         vxfs     1          yes            -

  The CRS installation requires predefined locations for the Oracle Cluster
   Registry (OCR) and VOTE-disk components. This installation is always on shared
   storage.

  To create a filesystem for OCR and VOTE disks (/ocrvote):

   a)  Determine the CVM master by issuing the following command:

      # vxdctl -c mode


   b)  As root user, from the CVM master, create a shared VxVM diskgroup by
      issuing the following command:

      # vxdg -s init ocrvotedg c4t0d1 c4t0d2


   c)  As root user, from the CVM master, create a mirrored volume, ocrvotevol:

      # vxassist -g ocrvotedg make ocrvotevol 1G nmirrors=2


   d)  As root user, from CVM master, create a filesystem with the volume,
      ocrvotevol.

      # mkfs -F vxfs /dev/vx/rdsk/ocrvotedg/ocrvotevol


   e)  On each system, create a directory, /ocrvote:

      # mkdir /ocrvote


   f)  On each system, mount /ocrvote:

      # mount -F vxfs -o cluster /dev/vx/dsk/ocrvotedg/ocrvotevol /ocrvote


   g)  As root user, from any system, change the ownership of /ocrvote:

      # chown -R oracle:oinstall /ocrvote
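
   As an optional check (not in the original steps), confirm on each node that
   /ocrvote is mounted and owned by the oracle user:

      # mount | grep ocrvote
      # ls -ld /ocrvote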

7. Whether you create volumes or file system directories, you can add
   them to the VCS configuration to make them highly available:

   a)  Log in to one system as root.
   b)  Save your existing configuration to prevent any changes while you
       modify main.cf:

       # haconf -dump -makero

       If your configuration is not writable, a warning appears: "Cluster not
       writable". You may safely ignore the warning.

   c)  Make sure that VCS is not running while you edit the main.cf by using
      the hastop command. This command stops the VCS engine on all systems
      and leaves the resources available.

      # hastop -all -force

   d)  Make a backup copy of the main.cf file:

      # cd /etc/VRTSvcs/conf/config
      # cp main.cf main.orig

   e)  Using vi or another text editor, edit the main.cf file, modifying the
       cvm service group. Specifically, add the stanzas for the ocrvote_mnt
       and ocrvote_voldg resources to main.cf:

      CFSMount ocrvote_mnt (
      Critical = 0
      MountPoint = "/ocrvote"
      BlockDevice = "/dev/vx/dsk/ocrvotedg/ocrvotevol"
      )

      CVMVolDg ocrvote_voldg (
      Critical = 0
      CVMDiskGroup = ocrvotedg
      CVMVolume = { ocrvotevol }
      CVMActivation = sw
      )

      ocrvote_mnt requires ocrvote_voldg
      ocrvote_mnt requires vxfsckd
      ocrvote_voldg requires cvm_clus

   f)  Save and close the main.cf file, then verify the syntax of the
       /etc/VRTSvcs/conf/config/main.cf file:

      # cd /etc/VRTSvcs/conf/config
      # hacf -verify .

   g)  Use the following command to start the VCS engine on one system:

      # hastart

   h)  Enter the following command:

      # hastatus

   i)  When "LOCAL_BUILD" is listed in the message column, start VCS on the
      other system with the following command:

      # hastart

   j)  Verify that the service group resources are brought online. On one
      system, enter the following command:

      # hagrp -display

   k)  Enter the following command to check the status of the cvm group:

       # hagrp -state cvm

   l)  Enter the following command to check the status of the resources:

       # hares -state

   m)  Restart VCS on all the nodes.

8.  Copy the membership library on all nodes:

      $ cd /opt/VRTSvcs/rac/lib
      $ cp libskgxn2_64.so /opt/ORCLcluster/lib/libskgxn2.so
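
    You can confirm the copy on each node (an optional check):

      $ ls -l /opt/ORCLcluster/lib/libskgxn2.so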


Installing Oracle
-----------------

Install Oracle's Clusterware binaries and database binaries for 11g.

See Oracle documentation for detailed steps.


After installing Oracle
-----------------------

1. Replace Oracle's ODM library with the Veritas ODM library (the same
   relinking performed earlier in this README):

      $ mv $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so.oracle
      $ cp /usr/lib/sparcv9/libodm.so $ORACLE_HOME/lib/libodm11.so

2. Create a database with the "dbca" wizard on the Cluster File System that
   you initially mounted.

3. Once the database is created, configure cluster_interconnects as described
   in the earlier section of this document: "CONFIGURING THE MULTIPRIVNIC
   RESOURCE."


Overview of UPGRADING from 10gR2 to 11gR1
=========================================

The migration procedure assumes you are starting with the following up and
running on the cluster nodes:

Storage Foundation for Oracle RAC 5.0 MP1
Oracle 10gR2
Oracle 10g database

Tasks for migration:

Preparing to upgrade to Oracle 11gR1
Upgrading to Oracle 11g RAC
Migrating the existing Oracle 10g R2 database to Oracle 11g R1
Performing post-upgrade tasks


Preparing to upgrade to Oracle 11gR1
------------------------------------

Before migrating from Oracle 10gR2 to Oracle 11gR1, complete the following pre-upgrade tasks:

1. Upgrade the OS and install any patches, if required.

2. Take a hot or cold backup of the existing database.

3. Take a backup of the existing Oracle home and central inventory.

4. Shut down the Oracle instances.

   If Oracle is under VCS control, freeze the oracle group:

      # haconf -makerw
      # hagrp -freeze <oracle group> -persistent
      # haconf -dump -makero

  Use Oracle commands to shut down Oracle.

5. Shut down CRS.

  If CRS is under VCS control, freeze the cssd group:

      # haconf -makerw
      # hagrp -freeze <cssd group> -persistent
      # haconf -dump -makero

  Stop CRS:

      # /etc/init.d/init.crs stop
      # /etc/init.crs stop


Upgrading to Oracle 11g RAC
---------------------------

After completing the pre-upgrade tasks, complete the upgrade procedure from Oracle 10gR2 to Oracle 11.1.0.6:

1. See the Oracle documentation for upgrade procedures.

2. Install the 11gR1 CRS. (See "OVERVIEW OF 11gR1 INSTALL" section of
   this document.)

3. Make sure 11gR1 CRS is running.

  To list the version of CRS software installed:

      # $ORA_CRS_HOME/bin/crsctl query crs softwareversion

  To list the CRS software operating version:

      # $ORA_CRS_HOME/bin/crsctl query crs activeversion

4. Install the 11gR1 RDBMS. (See "OVERVIEW OF 11gR1 INSTALL" section of
   this document.)


Migrating the existing Oracle 10g R2 database to Oracle 11g R1
--------------------------------------------------------------

Upgrade the database to Oracle 11gR1.

For details, see the Oracle documentation.


Performing post-upgrade tasks
-----------------------------

If CRS and Oracle are under VCS control, perform the following post-upgrade
tasks to unfreeze the service groups:

      # haconf -makerw
      # hagrp -unfreeze <cssd group> -persistent
      # hagrp -unfreeze <oracle group> -persistent
      # haconf -dump -makero
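
To confirm that the groups are no longer frozen (an optional check), display
the Frozen attribute for each group:

      # hagrp -display <oracle group> -attribute Frozen
      # hagrp -display <cssd group> -attribute Frozen

A value of 0 indicates that the group is unfrozen.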
   

Attachments

5.0MP1_e1221809a_303872.tar (19.3 MBytes)

Supplemental Materials

Source: ETrack
Value: 1221809
Description: Oracle 11g Support on SFRAC 5.0MP1


Legacy ID



303872

