Deploying SF Oracle RAC and SFCFSHA in the same cluster

Article:HOWTO83475  |  Created: 2013-01-28  |  Updated: 2013-01-28  |  Article URL http://www.symantec.com/docs/HOWTO83475
Article Type
How To


Customer scenario

You want to deploy SF Oracle RAC and SFCFSHA in the same cluster. Veritas Storage Foundation Cluster File System High Availability (SFCFSHA) supports concurrent data sharing in a storage area network (SAN) environment. It provides automation and intelligent management of high availability and performance. Veritas Storage Foundation for Oracle RAC (SF Oracle RAC) leverages proprietary storage management and high availability technologies to enable robust, manageable, and scalable deployment of Oracle RAC on UNIX platforms.

The deployment scenario considers the following mixed configuration setups:

  • Scenario 1: SF Oracle RAC is installed on all nodes in a cluster. Some of the nodes in the cluster are used to run applications in an SFCFSHA environment.

  • Scenario 2: SFCFSHA is installed on all nodes in a cluster. Some of the nodes in the cluster are reconfigured to run SF Oracle RAC.

Both configuration scenarios provide the following advantages:

  • Easy repurposing of nodes in a cluster. A node running SFCFSHA can easily be reconfigured to run SF Oracle RAC.

  • Easy management of application dependencies.

  • Backups of the Oracle RAC database from the SFCFSHA nodes in the cluster.

  • Enhanced storage availability with CVM I/O shipping.

Note:

In both scenarios, the steps for adding and managing database instances assume administrator-managed database environments.

Configuration overview - Scenario 1

SF Oracle RAC is installed on all nodes in a cluster. Some of the nodes in the cluster are used to run applications in an SFCFSHA environment.

In this configuration, fencing and ODM function in enabled mode on all nodes in the cluster.

The following figure illustrates the scenario.

Configuration overview - Scenario 2

SFCFSHA is installed on all nodes in a cluster. Some of the nodes in the cluster are reconfigured to run SF Oracle RAC.

In this configuration:

  • Fencing functions in enabled mode on all nodes in the cluster.

  • ODM functions in enabled mode only on SF Oracle RAC nodes in the cluster.

This configuration is constrained by the following limitations:

  • The SF Oracle RAC nodes must start before the SFCFSHA nodes in the cluster for ODM to function in enabled mode on SF Oracle RAC nodes.

  • Patching operations must be independently performed for SF Oracle RAC and SFCFSHA nodes.

  • ODM does not function on the SFCFSHA nodes. Because ODM is already active in enabled (cluster) mode on the SF Oracle RAC nodes, it cannot run in exclusive mode on the SFCFSHA nodes.

The following figure illustrates the scenario.
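
To confirm the modes described in these overviews, you can query fencing and ODM directly on a node; the exact output wording varies by release and platform.

Check the I/O fencing mode and membership:

  # vxfenadm -d

Check the ODM mode (enabled, exclusive, or pending):

  # cat /dev/odm/cluster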

Supported configuration

  • SF Oracle RAC: 6.0

  • Oracle RAC: Oracle RAC 10g Release 2 and later

  • Platforms: AIX, HP-UX, Linux, and Solaris

Solution

Reference documents

Keep the following documents handy for reference:

  • Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide

  • Veritas Storage Foundation Cluster File System High Availability Installation Guide

The Symantec documents can be accessed from the product media or from the following web site: http://sort.symantec.com/documents

Sample service group configuration

A sample configuration separates the SF Oracle RAC resources and the SFCFSHA application resources into different service groups, each restricted to the appropriate nodes through its SystemList attribute.
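
For illustration only, the following sketch shows one way such a split could be expressed from the command line. The group and node names are hypothetical: sfrac_clus contains the SF Oracle RAC resources and runs on sys1 and sys2; app_grp contains the SFCFSHA application resources and runs on sys3 and sys4.

  # haconf -makerw
  # hagrp -modify sfrac_clus SystemList sys1 0 sys2 1
  # hagrp -modify sfrac_clus AutoStartList sys1 sys2
  # hagrp -add app_grp
  # hagrp -modify app_grp Parallel 1
  # hagrp -modify app_grp SystemList sys3 0 sys4 1
  # hagrp -modify app_grp AutoStartList sys3 sys4
  # haconf -dump -makero

Application resources are then added to app_grp with hares -add, or the equivalent definitions can be written directly in main.cf.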

To set up systems for scenario 1

  1. Install and configure SF Oracle RAC on all nodes in the cluster.

    Note:

    Set the cluster-level attribute PreferredFencingPolicy to give preference to the SF Oracle RAC nodes during split-brain conditions (one way to do this is sketched after this procedure).

    For instructions, see the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide.

  2. Stop LMX on each of the nodes that you plan to use for SFCFSHA.

    AIX

    # /etc/init.d/lmx.rc stop

    HP-UX

    # lmxconfig -U

    Solaris

    # svcadm disable lmx
  3. Stop VCSMM on each of the nodes that you plan to use for SFCFSHA.

    AIX

    # /etc/init.d/vcsmm.rc stop

    HP-UX

    # vcsmmconfig -U

    Linux

    # /etc/init.d/vcsmm stop

    Solaris

    # svcadm disable vcsmm
  4. Install Oracle Grid Infrastructure and database software on the nodes on which you plan to run SF Oracle RAC.

    For instructions, see the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide.

  5. Edit the configuration file /etc/VRTSvcs/conf/config/main.cf to include two separate service groups, for example, one group for the SF Oracle RAC nodes and another group for the SFCFSHA nodes (see "Sample service group configuration" above).

  6. On the remaining nodes on which you plan to run SFCFSHA, install required applications, as necessary.
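
The note in step 1 refers to preferred fencing. A minimal sketch of one way to configure a system-based preference for the SF Oracle RAC nodes, assuming hypothetical node names sys1 and sys2, is:

  # haconf -makerw
  # haclus -modify PreferredFencingPolicy System
  # hasys -modify sys1 FencingWeight 50
  # hasys -modify sys2 FencingWeight 50
  # haconf -dump -makero

See the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide for the policy values and weights appropriate to your environment.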

To reprovision an SFCFSHA node to run SF Oracle RAC in scenario 1

  1. Ensure that the VCSMM library from /opt/ORCLcluster/lib is linked into the Oracle Grid Infrastructure home on the new node (which was part of the SFCFSHA cluster).

    AIX

    $ cp /opt/ORCLcluster/lib/libskgxn2.so \
      $GRID_HOME/lib/libskgxn2.so

    HP-UX

    $ /opt/VRTSvcs/rac/bin/linkrac 11gR2

    Linux

    $ cp /opt/ORCLcluster/lib/libskgxn2.so \
      $GRID_HOME/lib/libskgxn2.so

    Solaris

    $ cp /opt/ORCLcluster/lib/libskgxn2.so \
      $GRID_HOME/lib/libskgxn2.so
  2. Enable VCSMM.

    AIX

    # /etc/init.d/vcsmm.rc start

    HP-UX

    # /sbin/init.d/vcsmm start

    Linux

    # /etc/init.d/vcsmm start

    Solaris

    # svcadm enable vcsmm
  3. Create the file cssd-pretend-offline on the new node and make the cssd resource non-critical. Otherwise, the cssd resource goes into an UNKNOWN state until Oracle Clusterware/Grid Infrastructure is installed on the new node, which prevents the cvm group from coming online.

    • On one of the nodes in the existing cluster, configure the cssd resource as a non-critical resource:

      # haconf -makerw
      # hares -modify cssd Critical 0
      # haconf -dump -makero
    • Create the file cssd-pretend-offline on the new node:

      # touch /var/VRTSvcs/lock/cssd-pretend-offline
  4. Modify the VCS configuration (/etc/VRTSvcs/conf/config/main.cf) to add the new node to the OCR, voting disk, CSSD, and database-related mount resources.

    The following commands add the new node to the sfrac_clus group and configure the PrivNIC resource (ora_priv) for it.

    # haconf -makerw
    # hagrp -modify sfrac_clus SystemList -add new_node 2
    # hagrp -modify sfrac_clus AutoStartList -add new_node
    # hares -modify ora_priv Device -add nic1 0 -sys new_node
    # hares -modify ora_priv Device -add nic2 1 -sys new_node
    # hares -modify ora_priv Address 192.168.12.5 -sys new_node
    # haconf -dump -makero

    The following commands bring the CVMVolDG and CFSMount resources for the OCR and voting disk online on the new node, which has been added to the service group containing those resources.

    # hares -online ocrvote_voldg -sys new_node
    # hares -online ocrvote_mnt -sys new_node
  5. Add Oracle Clusterware/Grid Infrastructure to the new node. For instructions, see the Oracle RAC documentation.

  6. After CSSD is installed and running on the new node, perform the following steps:

    • Delete the file /var/VRTSvcs/lock/cssd-pretend-offline on the new node.

      # rm -rf /var/VRTSvcs/lock/cssd-pretend-offline
    • Clear the fault and probe the cssd resource on the new node to bring the cssd resource online.

      # hares -probe cssd -sys new_node
  7. Add the Oracle RAC database home directory on the new node with the same permissions as on the existing nodes:

    $ mkdir -p $ORACLE_HOME
  8. Add the Oracle RAC database binaries to the new node. For instructions, use Oracle documentation.

  9. Manually mount the Oracle database volumes and mount points on the new node (one way to do this is sketched after this procedure).

  10. Add a database instance on the new node. For instructions, use Oracle documentation.

  11. From any one of the nodes, modify the VCS configuration (/etc/VRTSvcs/conf/config/main.cf) to add the new node to the Oracle RAC service group.

    # haconf -makerw
    # hagrp -modify racdb_grp SystemList -add new_node 2
    # hagrp -modify racdb_grp AutoStartList -add new_node
    # haconf -dump -makero
  12. Configure the new database instance under VCS.

    # haconf -makerw
    # hares -modify db_name Sid new_dbinstance -sys new_node
    # haconf -dump -makero
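
One way to perform step 9, assuming the database storage is already under VCS control as CVMVolDG and CFSMount resources (the resource names oradata_voldg and oradata_mnt are hypothetical), is:

  # hares -online oradata_voldg -sys new_node
  # hares -online oradata_mnt -sys new_node

Alternatively, mount the shared volumes and cluster file systems manually with the platform's mount command, using the cluster mount option for VxFS.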

To set up systems for scenario 2

  1. Install and configure SFCFSHA on all nodes in the cluster.

    Note:

    Set the cluster-level attribute PreferredFencingPolicy to give preference to the SF Oracle RAC nodes during split-brain conditions (see the sketch following the scenario 1 setup procedure).

    For instructions, see the Veritas Storage Foundation Cluster File System High Availability Installation Guide.

  2. Upgrade a subset of the nodes to SF Oracle RAC.

    For instructions, see the chapter "Migrating from single instance Storage Foundation for Oracle HA to SF Oracle RAC" in the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide.

  3. On the SF Oracle RAC nodes, enable VCSMM.

    AIX

    # /etc/init.d/vcsmm.rc start

    HP-UX

    # /sbin/init.d/vcsmm start

    Linux

    # /etc/init.d/vcsmm start

    Solaris

    # svcadm enable vcsmm
  4. On the SF Oracle RAC nodes, enable LMX.

    AIX

    # /etc/init.d/lmx.rc start

    HP-UX

    # /sbin/init.d/lmx start

    Solaris

    # svcadm enable lmx
  5. Stop ODM on all nodes.

    AIX

    # /etc/rc.d/rc2.d/S99odm stop

    HP-UX

    # /sbin/init.d/odm stop

    Linux

    # /etc/init.d/vxodm stop

    Solaris

    # svcadm disable odm
  6. Back up the ODM script on each SFCFSHA node.

    AIX

    # mv /etc/rc.d/rc2.d/S99odm \
    /etc/rc.d/rc2.d/bk_S99odm

    HP-UX

    # mv /sbin/init.d/odm \
    /sbin/init.d/bk_odm

    Linux

    # mv /etc/init.d/odm \
    /etc/init.d/bk_odm
  7. Start ODM on SF Oracle RAC nodes. This enables ODM in cluster mode.

    AIX

    # /etc/rc.d/rc2.d/S99odm start

    HP-UX

    # /sbin/init.d/odm start

    Linux

    # /etc/init.d/vxodm start

    Solaris

    # svcadm enable odm
  8. Stop VCS on all nodes:

    # hastop -all -force
  9. From one of the SF Oracle RAC nodes, edit the VCS configuration file /etc/VRTSvcs/conf/config/main.cf to include separate service groups for SF Oracle RAC and SFCFSHA. Add the following lines to include the PrivNIC.cf and MultiPrivNIC.cf type definition files in the VCS configuration:

    include PrivNIC.cf
    include MultiPrivNIC.cf
  10. From one of the SF Oracle RAC nodes, start VCS:

    # hastart
  11. On all remaining nodes, start VCS:

    # hastart
  12. Install Oracle Clusterware/Grid Infrastructure and Oracle RAC database software on the SF Oracle RAC nodes.

    Configure the CSSD agent on SF Oracle RAC nodes.

    For instructions, see the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide.

    Note:

    Perform the Oracle RAC pre-installation steps manually to ensure that the OCR, voting disk, PrivNIC, MultiPrivNIC, and CSSD resources are configured in a group other than the CVM group. This ensures that Oracle Clusterware/Grid Infrastructure starts only on the SF Oracle RAC nodes. If you configure these resources using the product installation program, they are placed under the CVM group and VCS attempts to start Oracle Clusterware/Grid Infrastructure on the SFCFSHA nodes as well. A quick check is sketched after this procedure.

  13. On the remaining nodes on which you plan to run SFCFSHA, install required applications, as necessary.
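
A quick check for the note in step 12 is to list the resources in each service group and confirm that the Oracle Clusterware-related resources are not under the CVM group. The group name cvm is the default created by the installer; sfrac_clus is a hypothetical name for the group containing the OCR, voting disk, PrivNIC, and CSSD resources.

  # hagrp -resources cvm
  # hagrp -resources sfrac_clus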

To reprovision an SFCFSHA node to run SF Oracle RAC in scenario 2

  1. Install SF Oracle RAC on the SFCFSHA nodes that you plan to migrate to SF Oracle RAC.

    Note:

    Do not configure SF Oracle RAC on the nodes at this stage.

  2. Copy the /etc/vcsmmtab file from one of the existing SF Oracle RAC nodes to the new nodes (an example is sketched after this procedure).

  3. On the new SF Oracle RAC node, start VCSMM.

    AIX

    # /etc/init.d/vcsmm.rc start

    HP-UX

    # /sbin/init.d/vcsmm start

    Linux

    # /etc/init.d/vcsmm start

    Solaris

    # svcadm enable vcsmm
  4. On the new SF Oracle RAC node, start LMX.

    AIX

    # /etc/init.d/lmx.rc start

    HP-UX

    # /sbin/init.d/lmx start

    Solaris

    # svcadm enable lmx
  5. Create the file cssd-pretend-offline on the new node and make the cssd resource non-critical. Otherwise, the cssd resource goes into an UNKNOWN state until Oracle Clusterware/Grid Infrastructure is installed on the new node, which prevents the cvm group from coming online.

    • On one of the nodes in the existing cluster, configure the cssd resource as a non-critical resource:

      # haconf -makerw
      # hares -modify cssd Critical 0
      # haconf -dump -makero
    • Create the file cssd-pretend-offline on the new node:

      # touch /var/VRTSvcs/lock/cssd-pretend-offline
  6. Modify the VCS configuration (/etc/VRTSvcs/conf/config/main.cf) to add the new node to the OCR, voting disk, CSSD, and database-related mount resources.

    The following commands add the new node to the sfrac_clus group and configure the PrivNIC resource (ora_priv) for it.

    # haconf -makerw
    # hagrp -modify sfrac_clus SystemList -add new_node 2
    # hagrp -modify sfrac_clus AutoStartList -add new_node
    # hares -modify ora_priv Device -add nic1 0 -sys new_node
    # hares -modify ora_priv Device -add nic2 1 -sys new_node
    # hares -modify ora_priv Address 192.168.12.5 -sys new_node
    # haconf -dump -makero

    The following commands bring the CVMVolDG and CFSMount resources for the OCR and voting disk online on the new node, which has been added to the service group containing those resources.

    # hares -online ocrvote_voldg -sys new_node
    # hares -online ocrvote_mnt -sys new_node
  7. Add Oracle Clusterware/Grid Infrastructure to the new node. For instructions, see the Oracle RAC documentation.

  8. After CSSD is installed and running on the new node, perform the following steps:

    • Delete the file /var/VRTSvcs/lock/cssd-pretend-offline on the new node.

      # rm -rf /var/VRTSvcs/lock/cssd-pretend-offline
    • Clear the fault and probe the cssd resource on the new node to bring the cssd resource online.

      # hares -probe cssd -sys new_node
  9. Add the Oracle RAC database home directory on the new node with the same permissions as on the existing nodes:

    $ mkdir -p $ORACLE_HOME
  10. Add the Oracle RAC database binaries to the new node. For instructions, use Oracle documentation.

  11. Manually mount the Oracle database volumes and mount points on the new node (one way to do this is sketched after the scenario 1 reprovisioning procedure).

  12. Restore the ODM startup script to its original location.

    AIX

    # /etc/rc.d/rc2.d/S99odm stop

    Restore the backed-up ODM startup script in /etc/rc.d/rc2.d/:

    # mv /etc/rc.d/rc2.d/bk_S99odm /etc/rc.d/rc2.d/S99odm

    HP-UX

    # /sbin/init.d/odm stop

    Restore the backed-up ODM startup script in /sbin/init.d/:

    # mv /sbin/init.d/bk_odm /sbin/init.d/odm

    Linux

    # /etc/init.d/vxodm stop

    Restore the backed-up ODM startup script in /etc/init.d/:

    # mv /etc/init.d/bk_odm /etc/init.d/odm

    Solaris

    # svcadm disable odm
  13. Enable ODM on the new nodes.

    AIX

    # /etc/rc.d/rc2.d/S99odm start

    HP-UX

    # /sbin/init.d/odm start

    Linux

    # /etc/init.d/vxodm start

    Solaris

    # svcadm enable odm
  14. Add a database instance on the new node. For instructions, use Oracle documentation.

  15. From one of the nodes, modify the VCS configuration (/etc/VRTSvcs/conf/config/main.cf) to add the new node to the Oracle RAC service group.

    # haconf -makerw
    # hagrp -modify racdb_grp SystemList -add new_node 2
    # hagrp -modify racdb_grp AutoStartList -add new_node
    # haconf -dump -makero
  16. Configure the new database instance under VCS.

    # haconf -makerw
    # hares -modify db_name Sid new_dbinstance -sys new_node
    # haconf -dump -makero
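
One way to perform step 2 is with a remote copy from the new node; sys1 is a hypothetical name for an existing SF Oracle RAC node:

  # scp sys1:/etc/vcsmmtab /etc/vcsmmtab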

