Late Breaking News (LBN) - Updates to the Release Notes for Veritas Storage Foundation (tm) and High Availability Solutions 5.0 to 5.0 Maintenance Pack 4 (MP4) on Linux, 5.0 Release Update 1 (RU1) for SLES 11, 5.0 Release Update 3 (RU3) for IBM Power, and 5.0 Release Update 4 (RU4), with cross-references to product documentation

Article:TECH46445  |  Created: 2010-01-22  |  Updated: 2015-01-29  |  Article URL http://www.symantec.com/docs/TECH46445
Article Type
Technical Solution

Product(s)

Environment

Issue



Late Breaking News (LBN) - Updates to the Release Notes for Veritas Storage Foundation (tm) and High Availability Solutions 5.0 to 5.0 Maintenance Pack 4 (MP4) on Linux, 5.0 Release Update 1 (RU1) for SLES 11, 5.0 Release Update 3 (RU3) for IBM Power, and 5.0 Release Update 4 (RU4), with cross-references to product documentation


Solution





To locate the most current product patch releases, including Maintenance Packs, Rolling Patches, and Hot Fixes, visit https://vos.symantec.com/patch/matrix


           Veritas Operations Services

 

      VOS (Veritas Operations Services) Portal:   https://vos.symantec.com


      VOS Portal Contains:
          - Risk Assessments
          - VOS Searchability (Error Code Lookup, Patches, Documentation, Systems, Reports)
          - Detailed Reports (Product and License Usage)
          - Notification Signup (VOS Notification Widget)
          - Installation Assessment Services (Installation and Upgrade preparation)


      Documentation

      SF Rel Notes 5.0 MP4:  http://seer.entsupport.symantec.com/docs/vascont/495.html
      VCS Rel Notes 5.0 MP4:  http://seer.entsupport.symantec.com/docs/vascont/494.html
      Getting Started Guide 5.0 MP4:  http://seer.entsupport.symantec.com/docs/vascont/496.html
      5.0 MP3 Documentation List:  http://entsupport.symantec.com/docs/307509
      5.0 MP1 Documentation List:  http://entsupport.symantec.com/docs/307438
      5.0 Documentation List:  http://entsupport.symantec.com/docs/307436

      Product documentation, man pages and error messages are available at http://sort.symantec.com/documents/

      5.0 RU3 for Linux http://sfdoccentral.symantec.com/Storage_Foundation_HA_50RU3_LoP.html 

      See below for the links to the 5.0 Release Update 1 (RU1) for SLES 11 product documentation.


      Downloads

      5.0 Maintenance Pack 4 is available at  https://fileconnect.symantec.com/LangSelection.jsp

      5.0 Maintenance Pack 3 is available at https://fileconnect.symantec.com   

      Patches are available on Patch Central at https://vias.symantec.com/labs/patch 


      Tools

      VIAS (Veritas Installation Assessment Service)  https://vias.symantec.com  
      Health Check https://vias.symantec.com/labs/vhcs 
      Error Code Lookup https://vias.symantec.com/labs/vels 
      VIMS (Veritas Inventory Management Service) https://vias.symantec.com/labs/vims 
      Veritas Operations Services (VOS) Labs https://vias.symantec.com/labs 
      VRTSexplorer http://support.veritas.com/docs/243150 
      Storage Foundation Simple Admin http://entsupport.symantec.com/docs/303625 
      Storage Foundation Manager https://www-secure.symantec.com/connect/blogs/storage-foundation-manager-20-new-release 


      Forums

      Storage Management Forums http://www.symantec.com/connect/storage-management/forums 
      Clustering and Replication Forums http://www.symantec.com/connect/clustering-and-replication/forums         

      5.0 RU4 is currently released:

      It is available for download from https://fileconnect.symantec.com 

      The script to work around 5.0 RU4 incidents 1870300, 1594277 and 1753740 (DMP cannot detect re-enabled OS devices if the device names have changed on SLES 11) is available at http://support.veritas.com/docs/347065 

      Refreshing CVM configuration after adding or removing a node 

      In the Veritas Storage Foundation Cluster File System Installation Guide (5.0), the chapter "Adding and removing a node" is missing the following steps to refresh the CVM configuration after adding or removing a node.

      Perform the following steps to refresh the CVM configuration on one of the existing nodes after adding or removing a node:

      # /etc/vx/bin/vxclustadm -m vcs reinit
      # /etc/vx/bin/vxclustadm nidmap


      Machine fails to boot after Rootdisk encapsulation on servers with UEFI firmware

      Certain newer servers (for example, IBM x3650 M2 and Dell PowerEdge T610) ship with UEFI firmware. UEFI supports booting from legacy MBR-type disks with certain restrictions on the disk partitions; one restriction is that partitions must not overlap. During rootdisk encapsulation, VxVM creates an overlapping partition that spans the public region of the rootdisk. If the check for overlapping partitions is not disabled in the UEFI firmware, the machine fails to come up following the reboot initiated after running the commands to encapsulate the rootdisk.

      Workaround:

      For the IBM x3650 series servers, set the UEFI firmware to boot with the "Legacy Only" option. For the Dell PowerEdge T610 system, set "Boot Mode" to "BIOS" from the "Boot Settings" menu. This workaround has been tested and is recommended for the full SFHA product family (SF, SFHA, and SFCFS).




      5.0 RU1 for SLES 11 and 5.0 RU3 for IBM Power

      Veritas Storage Foundation and High Availability Solutions 5.0 RU1 for SLES 11 Linux and 5.0 RU3 for IBM Power are now available for download at http://fileconnect.symantec.com 

      See the product documentation below for more information.

      5.0 RU3 for IBM Power
      http://sfdoccentral.symantec.com/Storage_Foundation_HA_50RU3_LoP.html

      5.0 RU1 for SLES 11
      Getting Started Guide http://seer.entsupport.symantec.com/docs/vascont/190.html 
      Storage Foundation Release Notes http://seer.entsupport.symantec.com/docs/vascont/191.html
      VCS Release Notes http://seer.entsupport.symantec.com/docs/vascont/192.html 

      More Product Documentation:
      VCS User's Guide http://seer.entsupport.symantec.com/docs/vascont/195.html 
      VCS Installation Guide http://seer.entsupport.symantec.com/docs/vascont/196.html 
      VCS Agent Developer's Guide http://seer.entsupport.symantec.com/docs/vascont/193.html 
      VCS Bundled Agents Reference Guide http://seer.entsupport.symantec.com/docs/vascont/194.html 

      All product documentation http://sfdoccentral.symantec.com/index.html 

      Support for SLES 10 SP4

      The SFHA Solutions 5.0 MP4 RP1 release added support for SUSE Linux Enterprise Server 10 Service Pack 4 on x86_64. See the Storage Foundation and High Availability Solutions Release Notes for 5.0 MP4 RP1 for more information. 

       

      Updates to the Release Notes for 5.0 RU3

      The following products are now supported:

      Veritas Storage Foundation for Sybase
      Veritas Storage Foundation for DB2
      Veritas Storage Foundation for Oracle  


      Updates to the Release Notes for 5.0 RU1

      Array Support Libraries for 5.0 RU1 and HCL

      Array Support Library and Array Policy Module for IBM DS4xxx and IBM DS5xxx Series Arrays in Active-Passive Explicit Failover (A/P-F) mode on Veritas Volume Manager 5.0 RU1 for SLES 11 http://support.veritas.com/docs/324049 
       
      Array Support Library for IBM XIV and XIV Nextra Arrays (Active/Active) on VERITAS Volume Manager 5.0 RU1 for Linux SLES11 http://support.veritas.com/docs/325473 

      Hardware Compatibility List (HCL) http://entsupport.symantec.com/docs/324247 



      Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 Rolling Patch 2 is now available

      Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 Rolling Patch 2 for Linux is now available on Patch Central at https://vias.symantec.com/labs/patch/php/main.php 

      More information is available at http://support.veritas.com/docs/318498   and http://support.veritas.com/docs/318496 


      Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 Rolling Patch 2 Known Issues

      Documentation Errata: Veritas Storage Foundation and High Availability Solutions 5.0 MP3 RP2 Read This First

      In the "Upgrading to 5.0 MP3 RP2 on a SFCFS for Oracle RAC cluster" section, step 24 contains the wrong command.

      It should read as follows:

      24. Enter the following command on each node in the second group to freeze HA
      service group operations for the failover service group:

          # hagrp -freeze failover_service_group -persistent


      Veritas Database Server (VxDBMS) is slow to boot (1433244)

      The boot time for the DBED repository database server (VxDBMS) is high.

      Workaround

      To reduce the boot time for VxDBMS:

      - Edit the /etc/init.d/vxdbms3 file and modify line 65:

         # Run start_server script to start the server
         $BASE_DIR/$VXDBMS/bin/vxdbms_start_server.pl

      To:

         # Run start_server script to start the server
         $BASE_DIR/$VXDBMS/bin/vxdbms_start_server.pl &


      Remove external APM and ASL packages before upgrading from 5.0MP3 to 5.0MP3RP2

      If you have any external Array Policy Module (APM) or Array Support Library (ASL) packages, you must remove the external APM or ASL packages before upgrading from 5.0MP3 to 5.0MP3RP2.  For example, Sun LSI arrays require an external ASL named  VRTSLSI-ALL-* and an external APM named VRTSLSIapm-*.

      This limitation applies to any Storage Foundation product which uses Volume Manager, including Storage Foundation, Storage Foundation HA, Storage Foundation Cluster File System, and Storage Foundation for Oracle RAC.  

      After completing the upgrade, obtain the required updated ASLs or APMs to ensure the array is claimed correctly.

      To remove external ASL or APM packages

      1. Before you remove any packages, ensure that nothing is running on the VxVM volumes and that no volumes are mounted.  After unmounting the file systems, disable the VxVM volumes using the following command:

      # vxvol -g <diskgroup> stopall

      These steps are necessary to prevent attempts to access the data in disks that were claimed by these ASLs or APMs after the packages are removed.  Attempting to access the data could lead to data corruption if the disks are not claimed correctly.

      2. Determine which external ASL packages are installed:

      # rpm -qf /etc/vx/lib/discovery.d | grep -v "^VRTSvxvm-common"

      This command lists the packages which installed any files in the ASL directory.

      The VRTSvxvm-common* package is the base VxVM package. Any other packages in this directory are external ASL packages.
      Example output:

      # rpm -qf /etc/vx/lib/discovery.d | grep -v "^VRTSvxvm-common"
      VRTSIBM-DS4xxx-2.0-1.0

      The sample output shows an external ASL named VRTSIBM-DS4xxx-2.0-1.0.

      3. Remove any external ASLs. For example:

      # rpm -e VRTSIBM-DS4xxx-2.0-1.0

      4. Determine which external APM packages are installed:

      # rpm -qf /etc/vx/apmkey.d | grep -v "^VRTSvxvm-platform"

      This command displays package names which installed any APM keys. Any package other than VRTSvxvm-platform* is an external APM package.

      5. Remove the external APMs that are installed on the machine. For example:

      # rpm -e VRTSIBM-DS4xxx-2.0-1.0



      Storage Foundation and High Availability Solutions 5.0 MP3 for Linux: system panic with devices in a non-ready state

      On the 5.0 MP3 release of Veritas Volume Manager for Linux, if a device open and inquiry succeed but I/O fails at an early stage of Volume Manager startup, the system panics.

      For example, EMC devices with BCVs in a non-ready state can generate this error condition: the system panics at an early stage of boot, or after running the vxdctl enable or vxconfigd -k command (e1401029)


      Workaround

      Install the Volume Manager 5.0 Maintenance Pack 3 Hot Fix 3 Patch for Linux on your system: http://entsupport.symantec.com/docs/307961   or install 5.0 Maintenance Pack 3 Rolling Patch 2  http://support.veritas.com/docs/318496

      5.0 Maintenance Pack 4 for Linux
      Documentation Errata: 5.0 MP4 Veritas Cluster Server Manual Pages
      The version number for the hagrp(1m) manual page in Veritas Cluster Server 5.0MP4 should read as VCS 5.0 MP4 instead of VCS 5.0.1.

      5.0 Maintenance Pack 3 for Linux

      5.0 Maintenance Pack 3 for Linux is available at https://fileconnect.symantec.com 

      The Veritas Volume Manager 5.0 Maintenance Pack 3 Hot Fix 3 Patch for Linux replaces Hot Fix 1 and Hot Fix 2; Hot Fix 3 is available at http://entsupport.symantec.com/docs/307961

      The Veritas File System 5.0 Maintenance Pack 3 Hot Fix 1 Patch for Linux and the Veritas Oracle Disk Manager 5.0 Maintenance Pack 3 Hot Fix 1 patch for Linux are available at http://entsupport.symantec.com/docs/306480 

      The VERITAS File System Documentation 5.0 Maintenance Pack 3 Hot Fix 1 Patch for Linux is available at http://entsupport.symantec.com/docs/307004   

      The Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 for Linux documentation is available at http://sfdoccentral.symantec.com and the  Documentation Disc is available at http://entsupport.symantec.com/docs/306992 

      The Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 Getting Started Guide for Linux is available at http://entsupport.symantec.com/docs/306913 

      The Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 3 Fixed Incidents List for Linux is available at http://entsupport.symantec.com/docs/308222 

      The links to all of the Storage Foundation and High Availability 5.0 MP3 Product Family Documentation for Linux are available at http://sfdoccentral.symantec.com and http://entsupport.symantec.com/docs/307509 

      The Veritas Cluster Server 5.0 Maintenance Pack 3 Hot Fix 1 for Linux is now available at http://entsupport.symantec.com/docs/311939 

      VIAS (Veritas Installation Assessment Service)  https://vias.symantec.com/main.php

      Patch Central https://vias.symantec.com/labs/patch/php/main.php 

      Health Check https://vias.symantec.com/labs/vhcs/main.php 

      Error Code Lookup https://vias.symantec.com/labs/vels 

      VIMS (Veritas Inventory Management Service) https://vias.symantec.com/labs/vims/main.php 

      Veritas Operations Services (VOS) Labs https://vias.symantec.com/labs/voslabs.php 



      5.0 Maintenance Pack 3 for Linux Issues

      Install the hot fix patches after installing 5.0 Maintenance Pack 3

      For hot fix patch installation instructions, see http://entsupport.symantec.com/docs/306890 


      Security-Enhanced Linux on Red Hat Enterprise Linux 5

      The complete Storage Foundation 5.0 MP3 stack can run with SELinux enabled on RHEL 5:

           1) In enforcing mode and permissive mode (both with the targeted policy)
           2) Using the Red Hat-supplied targeted security policy

      Exceptions:

           Oracle and DB2 recommend that SELinux be disabled:
           1) Disable SELinux when using SF for Oracle or SFCFS-RAC
           2) Disable SELinux when using SF for DB2

      If installing SFCFS while SELinux is enabled:

      - You must use ssh; rsh will not work for the SFCFS installation
      - Only the shipped, default policy files are supported

      Not supported with SF:

      - RHEL4 with SELinux enabled
      - SELinux extended file attributes (file labeling) in VxFS
      - VxFS mount security contexts
      - Extensions or modifications to the shipped, default policy files


      Security-Enhanced Linux is not supported by SF CFS for Oracle RAC, SF Oracle and SF for DB2

      Storage Foundation Cluster File System for Oracle RAC, Storage Foundation for Oracle and Storage Foundation for DB2 do not support Security-Enhanced Linux.

      To verify:

      # selinuxenabled
      # echo $?
      1

      If the echo command returns "1", SELinux is disabled; otherwise, disable it using the following steps:

      1. Set the kernel boot parameter selinux to 0 (selinux=0) and reboot the machine.
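      For example, on a system that boots with GRUB (an illustrative sketch only; the boot loader configuration file and kernel line vary by distribution and installation, and the kernel version and root device shown here are placeholders), append selinux=0 to the kernel line in /boot/grub/menu.lst or /boot/grub/grub.conf:

            kernel /boot/vmlinuz-<version> ro root=/dev/sda2 selinux=0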

      2. After rebooting the machine, verify that the setting has taken effect:

      # selinuxenabled
      # echo $?
      1

      The echo command should show "1".


      Security-Enhanced Linux on Red Hat Enterprise Linux 4

      On Red Hat Enterprise Linux 4 (RHEL 4), Security-Enhanced Linux (SE Linux) is provided for evaluation purposes only and should be disabled if you are running any part of the Storage Foundation product.



      E1256764 - Volumes/sites remain in the recover state after storage recovery in a campus cluster

      The site does not reattach automatically in the case where the site storage has disconnected and reconnected to a CVM slave node, but the master node never lost connection to the site storage.  


      Uninstalling VCS does not remove the VRTSjre package

      Due to potential dependencies among Symantec and non-Symantec products, the installer does not remove the VRTSjre (Java Runtime Environment resources) and the VRTSperl packages at product uninstallation.

      Workaround: You can uninstall these packages manually. On the system where these packages exist, enter the following commands to remove them:

      # rpm -e VRTSjre
      # rpm -e VRTSperl


      Support for CIO with Sybase ASE databases in 5.0MP3

      Veritas Concurrent I/O improves the performance of regular files on a VxFS file system without the need for extending namespaces and presenting the files as devices. This simplifies administrative tasks and allows databases, which do not have a sequential read/write requirement, to access files concurrently.

      CIO is supported for Sybase servers running ASE 12.5.4 and ASE 15.0.2 databases. For information on supported Linux operating systems, please refer to the Veritas Storage Foundation Release Notes for 5.0MP3.

      Enabling Concurrent I/O

      Because you do not need to extend name spaces and present the files as devices, you can enable Concurrent I/O on regular files.

      Before enabling Concurrent I/O, review the following:

      To use the Concurrent I/O feature, the file system must be a VxFS file system.

      Make sure the mount point on which you plan to mount the file system exists.
            
      To enable Concurrent I/O on a file system, use mount with the -o cio option:

      Mount the file system using the mount command as follows:

      # /usr/sbin/mount -t vxfs -o cio <special> /<mount_point>

      where:

      <special> is a block special device

      /<mount_point> is the directory where the file system will be mounted.
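      For example (an illustrative sketch only; the disk group, volume, and mount point names below are placeholders), to mount a VxFS file system holding Sybase data files with Concurrent I/O enabled:

      # /usr/sbin/mount -t vxfs -o cio /dev/vx/dsk/sybasedg/datavol /sybase/data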

      Disabling Concurrent I/O

      If you need to disable Concurrent I/O, unmount the VxFS file system and mount it again without the mount option.

      To disable Concurrent I/O on a file system using the mount command:

      1.      Shut down the Sybase instance.
      2.      Unmount the file system using the umount command.
      3.      Mount the file system again using the mount command without the -o cio option.


      VEA processes fail to function on SLES9 (SP3 or later) and SLES10 (SP1 and SP2)

      Workaround: Download and install the appropriate patch.

      SLES9 SP4 (x86, x86_64):  http://download.novell.com/Download?buildid=-1J4ehblUb4~

      SLES10 SP2 x86:  http://download.novell.com/Download?buildid=1FNq_WvawQE~

      SLES10 SP2 x86_64:  http://download.novell.com/Download?buildid=GXycfRMtaj8~

      SLES10 SP1 x86_64:  http://download.novell.com/Download?buildid=hWHDb0PAZeg~

      SLES10 SP1 x86:  http://download.novell.com/Download?buildid=PVqARWPV-ew~

      To install the rpm:

      # rpm -Uvh Patch_Name.rpm --force

      Note: Only the libgcc rpm is required.


      Storage Foundation for Sybase

      The Linux 5.0 and 5.0 MP3 Veritas Storage Foundation and High Availability Solutions Getting Started Guide mentions Storage Foundation for Sybase. Storage Foundation for Sybase is not available on the Linux platform.


      If the umask is 0077, the installation or upgrade can fail

      Check the umask setting:
      # umask
      0077

      Change umask to 0022:
      # umask 0022
      # umask
      0022


      Missing /etc/vx/vxvm.exclude File

      Problem: "vxdmpadm listexclude" command dumps core when there is no /etc/vx/vxvm.exclude file.

      This problem can occur when the command is run via VRTSexplorer.

      Workaround: Cut and paste the following lines into the file /etc/vx/vxvm.exclude:

      exclude_all 0
      paths
      #
      controllers
      #
      product
      #
      pathgroups
      #


      VCS Issue with Shared Disk Group on iSCSI during shutdown on SLES10

      The shutdown command stops the system services in parallel. Because the dependency between Veritas Cluster Server (VCS) and Volume Manager (VxVM) is not set properly, VxVM goes down before VCS. Because the iSCSI service is dependent on VxVM, the iSCSI service goes down immediately after VxVM as well.

      This causes I/O failures on the plex, and the plex gets detached on this node. Because the detach policy is global by default, this triggers the other nodes in the cluster to detach the plex as well.

      Therefore, for a shared disk group configured on iSCSI devices, change the disk detach policy to 'local'. Run the following command on the master node to change the disk detach policy:

      # vxdg -g dg1 set diskdetpolicy=local
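      To confirm that the setting has taken effect, display the disk group details and check the detach-policy field (a sketch; the exact layout of the output varies by VxVM version):

      # vxdg list dg1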



      Documentation Errata: 5.0 and 5.0 MP3 Veritas Cluster Server Agent Developer's Guide

      The following content replaces the description of the LogFileSize attribute in the Veritas Cluster Server Agent Developer's Guide.

      LogFileSize

      Sets the size of an agent log file. Value must be specified in bytes. Minimum is 65536 bytes (64KB). Maximum is 134217728 bytes (128MB). Default is 33554432 bytes (32MB).

      For example,

      hatype -modify FileOnOff LogFileSize 2097152

      Values specified less than the minimum acceptable value are changed to 65536 bytes. Values specified greater than the maximum acceptable value are changed to 134217728 bytes. Therefore, out-of-range values displayed for the command:

      hatype -display restype -attribute LogFileSize

      are those entered with the -modify option, not the actual values. The LogFileSize attribute value cannot be overridden.



      Documentation Errata: 5.0 MP3 Veritas Storage Foundation Release Notes

      On pages 51-54, table 1-8 "Veritas File System fixed issues" has the following errata:
      * Incident 823590 is a Windows operating system issue and should be ignored.

      On page 53, table 1-9 "Veritas File System fixed issues" has the following errata:
      * Incident 770964 is an HP-UX operating system issue and should be ignored.

      On page 58, table 1-14 "Veritas Volume Replicator Web GUI fixed issues" has the following errata:
      * Incident 516812 is an HP-UX operating system issue and should be ignored.

      The local detach policy support documented in the Storage Foundation Release Notes is not correct:
      The 5.0 MP3 Storage Foundation Release Notes included a section titled "Local detach policy now supported with Veritas Cluster Server clusters and with Dynamic Multipathing Active/Passive arrays." This section is not correct and should be ignored. The restrictions on using the local detach policy still apply for the 5.0MP3 release.



      Documentation Errata: 5.0 MP3 Veritas Storage Foundation Installation Guide

      On page 160, step 6 of the "To update the configuration and confirm startup" procedure should instead read as follows:

        6  Confirm all upgraded nodes are in a running state:

           # /opt/VRTSvcs/bin/hasys -state | grep RUNNING


      VEA service takes a long time to start

      VEA service takes a long time to start if the configuration contains a large number of LUNs (1403191)

      In configurations with large numbers of LUNs that need to be discovered, the VEA service may take a long time to start. The long start-up time may cause the boot time to be longer than is allowed.

      Workaround:
      The solution is to start the VEA service in the background, so that the boot continues while the LUNs are discovered.

      To start the VEA service in the background:

      1. Edit the VEA start-up script.

      For Storage Foundation 5.x, edit the following shell script:
      /opt/VRTSobc/pal33/bin/vxpalctrl

      2. In the start_agent() function, add the following line:
            exit 0

         before the lines:
            max=10
            count=0
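      The relevant portion of the edited function would then look roughly like the following (an illustrative sketch only; the surrounding lines in the actual vxpalctrl script differ):

            start_agent()
            {
                # ... existing code that launches the agent process ...

                exit 0    # added: return immediately so that the boot sequence continues

                # existing lines that poll for agent start-up
                max=10
                count=0
                # ...
            }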


      Starting with 5.0MP3, Symantec has added the feature of one-way link detection in LLT (Etrack Incident# 1031514)

      Prior to this (that is, up to and including 5.0MP1), LLT used broadcast heartbeats by default. Beginning with 5.0MP3, LLT uses unicast heartbeats instead of broadcast heartbeats.

      LLT considers a link to be in trouble for a peer node when it finds that the link has not been up for that node for 2 seconds (peertrouble 200). On lo-pri links, LLT sends heartbeats once every second; on hi-pri links, twice every second. With the change described above, the trouble condition may be hit more often on lo-pri links, so the corresponding LLT messages are printed more often in the system log. While this message is only informational, its frequent appearance in the system log may cause unnecessary alarm.

      Therefore, it is recommended that the LLT peertrouble tunable be increased to 400, so that LLT ignores link inactivity of up to 4 seconds before printing the message in the system log.

      As noted before, the trouble message is only informational, and changing the trouble time from 2 seconds to 4 seconds is harmless.

      If the following messages are seen frequently for an LLT lo-pri link, then change the LLT tunable named peertrouble to 400. Its default value is 200.

      LLT INFO V-14-1-10205 link 2 (eri0) node 1 in trouble
      LLT INFO V-14-1-10024 link 2 (eri0) node 1 active

      The peertrouble tunable can be changed with the following command on all the nodes in the cluster:

            lltconfig -T peertrouble:400

      It is recommended that the following line be added to /etc/llttab on all the nodes in the cluster in order to ensure that this value is used across server reboots. On each node, the change will take effect only after restarting LLT.

            set-timer      peertrouble:400

      E.g.

      # lltconfig -T query

      Current LLT timer values (.01 sec units):
       . . .
       peertrouble = 200    <--------- before
       peerinact   = 1600
       . . .

      # lltconfig -T peertrouble:400 <--- cmd

      Use the following command to ensure that the values are indeed changed:

      # lltconfig -T query

      Current LLT timer values (.01 sec units):
       . . .
       peertrouble = 400    <--------- after
       peerinact   = 1600
       . . .


      Veritas Volume Manager 5.0 MP3 known issues

      Devices and some paths are not discovered properly with IBM DS4700 disk array after a reboot (1205369)

      On a system with IBM's DS4700 disk array, use fewer than 30 LUNs to ensure that the disk array discovers all of the devices and paths after a reboot.


      LUN states are not updated after disabling and then enabling the Fibre Channel port (994322)

      Disabling and then re-enabling the Fibre Channel (FC) port in a Storage Foundation for Oracle RAC configuration with the QLogic HBA driver may cause some of the LUNs to go offline.

      Workaround

      This known issue exists only with the QLogic HBA configuration when persistent port ID bindings are not set. Turn on persistent bindings using the QLogic SCLI command line interface.


      The vxfen process may fail to start after the CVM cluster reboots (1414314)

      In certain cases, the vxfen process may fail to start after the CVM cluster reboots with the persistence=yes tunable set.  In such a case, there is a mismatch between the device names, DMP node names, and path names.

      This known issue appears on the following arrays:
      EVA 4400
      CX 340
       
      Workaround

      To resolve this known issue, set persistence=no in the /etc/vx/dmppolicy.info file.


      Array Support Library and Array Policy Module for IBM's DS4000 series disk array in A/P-F mode is not released (1285914)

      Symantec has not released the Array Support Library (ASL) and Array Policy Module (APM) driver for IBM's DS4000 series disk array in A/P-F mode.


      Upgrade to 5.0MP3 fails if Storage Foundation Manager is installed (1423124)

      Upgrading the Storage Foundation products to 5.0MP3 may fail if Storage Foundation Manager is installed. The product installer exits with a message indicating that a patch is missing from the media.

      Workaround:
      Start the product installer with the following option:

      # ./installmp -mpok



      Permission issues while applying Oracle Bundled Patches

      While applying an Oracle bundled patch to Oracle Clusterware (CRS),
      at the step where you execute the prepatch.sh script as the oracle user,
      the following error is displayed:

      chmod: can't change <ORA_CRS_HOME>/css/admin/init.cssd: Not owner

      Workaround:
      The solution is to change the owner of the init.cssd file to the oracle user
      before applying Oracle bundled patches to Oracle Clusterware:

      $ chown <oracle user>:<oracle group> <ORA_CRS_HOME>/css/admin/init.cssd



      ODM support for Storage Foundation 5.0 MP3

      The Veritas extension for ODM is now supported for Storage Foundation Standard 5.0MP3 and Storage Foundation Enterprise 5.0MP3.  

      You may need to manually install the support packages. For installation details, see "Using ODM with Storage Foundation or Storage Foundation Cluster File System - Linux" at http://support.veritas.com/docs/316756


       
      Cluster Volume Manager (CVM) fail back behavior for non-Active/Active arrays

      This section describes the failback behavior for non-Active/Active arrays in a CVM cluster. This behavior applies to A/P, A/PF, APG, A/A-A, and ALUA arrays.

      When all of the Primary paths fail or are disabled in a non-Active/Active array in a CVM cluster, the cluster-wide failover is triggered. All hosts in the cluster start using the Secondary path to the array. When the Primary path is enabled, the hosts fail back to the Primary path.

      However, suppose that one of the hosts in the cluster is shut down or disabled while the Primary path is disabled. If the Primary path is then enabled, it does not trigger failback. The remaining hosts in the cluster continue to use the Secondary path. When the disabled host is rebooted and rejoins the cluster, all of the hosts in the cluster will continue using the Secondary path. This is expected behavior.

      If the disabled host is rebooted and rejoins the cluster before the Primary path is enabled, enabling the path does trigger the failback. In this case, all of the hosts in the cluster will fail back to the Primary path. [e1441769]



      Storage Foundation For Oracle RAC (SFRAC) Support

      Storage Foundation For Oracle RAC (SFRAC) is supported on the latest 4.1 Maintenance Pack (MP) and on the 5.0 and 5.0 MP1 for Linux releases. From 5.0 MP2 onwards, this product is not available on Linux and has been replaced by the Storage Foundation Cluster File System For Oracle RAC (SFCFS-RAC) product, which has been certified by Oracle.

      Customers who have SFRAC on the previously supported versions and who want to move to the latest 5.0 Maintenance Pack (5.0 MP3) will need to move to SFCFS-RAC. There is no direct upgrade path from SFRAC to SFCFS-RAC. Customers will need to uninstall SFRAC and then install the SFCFS-RAC product from the 5.0 MP2 or 5.0 MP3 release.



      Storage Foundation Cluster File System for Oracle RAC support for Oracle 11gR1 RAC

      Storage Foundation Cluster File System for Oracle RAC support for Oracle 11gR1 RAC (11.1.0.7) is now available with the 5.0 MP3 version of SFCFS RAC for RHEL5 and OEL5.

      Storage Foundation Cluster File System for Oracle RAC support for Oracle 11gR1 RAC (11.1.0.6) is now available with the 5.0 MP3 version of SFCFS RAC for RHEL5 and OEL5.

      The complete support matrix for Storage Foundation For Oracle RAC is available at http://entsupport.symantec.com/docs/280186 



      Volume Manager Disk Group Failure Policy: requestleave

      As of 5.0 MP3, Veritas Volume Manager (VxVM) supports 'requestleave' as a valid disk group failure policy. This new disk group failure policy is not currently documented in the 5.0MP3 Veritas Volume Manager Administrator's Guide. The Administrator's Guide will be updated with this information in the next major release.  

      When the disk group failure policy is set to 'requestleave', the master node gracefully leaves the cluster if it loses access to all log/config copies of the disk group. If the master node loses access to the log/config copies of a shared disk group, Cluster Volume Manager (CVM) signals the CVM cluster agent of Veritas Cluster Server. Veritas Cluster Server (VCS) attempts to take the CVM group offline on the master node. When the CVM group is taken offline, the dependent service groups are also taken offline. If the dependent applications managed by VCS cannot be taken offline for some reason, the master node may not be able to leave the cluster gracefully.

      Use the 'requestleave' disk group failure policy together with the 'local' detach policy. Use this combination of disk detach policy and disk group failure policy when the availability of the configuration change records is more important than the availability of nodes; in other words, when you prefer to let a node leave the cluster rather than risk having the disk group disabled cluster-wide because of a loss of access to all copies of the disk group configuration.

      Set the requestleave disk group failure policy as follows:

      # vxdg -g mydg set dgfailpolicy=requestleave
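      Because the guidance above pairs 'requestleave' with the 'local' detach policy, both attributes can be set on the same disk group (a sketch, reusing the hypothetical disk group name from the example above):

      # vxdg -g mydg set diskdetpolicy=local
      # vxdg -g mydg set dgfailpolicy=requestleave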

      Refer to the 5.0MP3 Veritas Volume Manager Administrator's Guide for more information about the disk group failure policy and the disk detach policy.



      Clients of the GAB service may not get cluster membership

      Symantec recommends that GAB be configured to provide membership only
      after a minimum quorum number of nodes has joined the cluster.

      If a client of GAB comes up before GAB port a has formed membership on that
      node, that client may not get cluster membership until it starts up
      on at least the configured quorum number of nodes, even if port a
      or any other GAB port receives cluster membership.

      Fix: Download and install Hot Fix 5.0MP3RP2HF1 from the following locations:

      Download: https://vias.symantec.com/labs/patch/php/patchinfo.php?download=yes&release_id=1747 
      Release Name: vcs-rhel4_x86_64-5.0MP3RP2HF1
      Product: Veritas Cluster Server 5.0 MP3 RP2

      Download: https://vias.symantec.com/labs/patch/php/patchinfo.php?download=yes&release_id=1748 
      Release Name: vcs-rhel5_x86_64-5.0MP3RP2HF1
      Product: Veritas Cluster Server 5.0 MP3 RP2  

      Download: https://vias.symantec.com/labs/patch/php/patchinfo.php?download=yes&release_id=1749 
      Release Name: vcs-sles9_x86_64-5.0MP3RP2HF1
      Product: Veritas Cluster Server 5.0 MP3 RP2

      Download: https://vias.symantec.com/labs/patch/php/patchinfo.php?download=yes&release_id=1750 
      Release Name: vcs-sles10_x86_64-5.0MP3RP2HF1
      Product: Veritas Cluster Server 5.0 MP3 RP2


      Recommendations on use of Space-Optimized (SO) snapshots in Storage Foundation for Oracle RAC 5.0

      If you use Volume Manager mirroring, Space-Optimized (SO) snapshots are recommended for Oracle data volumes.

      Keep the Fast Mirror Resync regionsize equal to the database block size to reduce the copy-on-write
      (COW) overhead.

      Reducing the regionsize increases the number of cache object allocations, leading to performance overhead.
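      As an illustration (a sketch only; the disk group and volume names are placeholders, and the 8k region size is shown simply to match a hypothetical 8 KB database block size), the FastResync region size can be specified when preparing a volume for instant snapshots:

      # vxsnap -g oradg prepare datavol regionsize=8k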

      Do not create Oracle redo log volumes on a space-optimized snapshot.

      Use "third-mirror break-off" snapshots for cloning the Oracle redo log volumes.


      5.0 MP3 Undocumented Change to VxFS Default Tunable Value Can Result in Greater Disk Space Usage

      VxFS 5.0 MP3 has an increased default value for the max_seqio_extent_size tunable for better performance in modern file systems.

      The max_seqio_extent_size tunable value is the maximum size of an individual extent.  Prior to the Veritas File System (VxFS) 5.0 MP3 release, the default value for this tunable was 2048 blocks.

      Database tests showed that this default value was outdated and resulted in slower than expected throughput on modern larger file systems.  To improve performance and reduce fragmentation, the default value of max_seqio_extent_size was changed to 1 gigabyte in VxFS 5.0 MP3.  VxFS allocates extents in a way that allows VxFS to use only the necessary percentage of the 1 gigabyte extent size, avoiding over allocation.

      The minimum value allowed for the max_seqio_extent_size tunable is 2048 blocks--the default value prior to the VxFS 5.0 MP3 release.
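      If you prefer the previous behavior, the tunable can be queried and set per file system with vxtunefs (a sketch; /mnt1 is a hypothetical VxFS mount point, and the value can also be made persistent in /etc/vx/tunefstab):

      # vxtunefs /mnt1 | grep max_seqio_extent_size
      # vxtunefs -o max_seqio_extent_size=2048 /mnt1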

      Known Issue

      If you have processes that rely on extents being allocated in smaller chunks, because unneeded extent space is given to other processes, the change to the default value of max_seqio_extent_size may require you to change how you determine the unneeded space.


      Storage Foundation Cluster Filesystem for Oracle RAC support

      Storage Foundation Cluster File System for Oracle RAC support for Oracle 11gR1 RAC (11.1.0.7) and Oracle 10gR2 RAC (10.2.0.4) is now available with the 5.0 MP3 version of SFCFS RAC for SLES10.


      VCS 5.0 MP3 Support of VMware ESX

      VCS 5.0 MP3 is supported on Linux guests hosted in VMware ESX Server version 3.5 and with any patches and updates for ESX Server 3.5. You must disable I/O fencing in this environment. Please refer to the VCS Install Guide and Release Notes for more information on the supported Linux operating systems and update levels.



      Storage Foundation Simple Admin

      The Storage Foundation Simple Admin 1.0 for Linux is available at http://entsupport.symantec.com/docs/303625 



      Cluster Server 5.0 Maintenance Pack 2 Rolling Patch 1

      Cluster Server 5.0 Maintenance Pack 2 Rolling Patch 1 for Linux - 5.0 MP2 RP1 for SLES 9 is now available at http://entsupport.symantec.com/docs/311862  and http://entsupport.symantec.com/docs/311859 



      Veritas Cluster Server (VCS) Support

      VCS requires that all nodes in the cluster use the same processor architecture and run the same operating system version.



      Storage Foundation Licensing

      Licensees of Storage Foundation/High Availability (SFHA) products may run different versions of SFHA products on different virtual machines in the same physical server/CPU, provided that the versions of the SFHA products are still supported by Symantec Corporation.

      Licensees of Storage Foundation/High Availability (SFHA) products on RISC-based platforms may also run SFHA products for the Linux platform in another virtual machine in the same server/CPU.

      Licensees of Storage Foundation Basic products may use a maximum of 4 volumes and 4 file systems per virtual machine.  Storage Foundation Basic can only be used on a server that has a maximum of 2 CPUs.


      Veritas Installation Assessment Service

      The Veritas Installation Assessment Service (IAS) utility assists you in getting ready for a Veritas Storage Foundation and High Availability Solutions installation or upgrade. The IAS utility allows a pre-installation evaluation of a configuration, to validate it before you start an installation or upgrade.

      https://vias.symantec.com


      Linux Kernel Support Statement

      All kernels and updates that apply to the supported distributions are supported unless they change kernel ABI (kABI) compatibility. This means Red Hat Updates and SUSE Linux Enterprise Server Support Packs that do not change the kernel ABI are automatically supported by Symantec for Storage Foundation and Veritas Cluster Server the moment they are released by Red Hat or Novell. When an update or support pack DOES change the kABI, we will note it in this document. Note: While Red Hat Updates almost never change the kABI, the majority of SUSE Linux Enterprise Server Support Packs DO change the kABI.


      Storage Foundation Cluster File System for Oracle RAC split-brain situations

      Storage Foundation Cluster File System for Oracle RAC does not support Symantec's implementation of SCSI-3 PGR based I/O fencing and Oracle Clusterware (CRS) is expected to handle any split-brain situations. More information is available at http://entsupport.symantec.com/docs/306411 


      VMware Support

      For information about the use of this release in a VMware Environment, refer to http://entsupport.symantec.com/docs/289033 


      Veritas Cluster Server Agent Support Matrix

      The Veritas Cluster Server Agents Support Matrix is available at http://www.symantec.com/business/products/agents_options.jsp?pcid=psc_disaster_recov&pvid=20_1 


      Oracle 11gR1 RAC support with 5.0 Maintenance Pack 2

      5.0 Maintenance Pack 2 (5.0MP2) Storage Foundation Cluster File System For Oracle RAC is now certified with Oracle 11gR1 (11.1.0.6) RAC for Red Hat Enterprise Linux AS/ES 4 (RHEL4) and Oracle Enterprise Linux 4 (OEL4).

      The detailed installation and upgrade instructions can be found at http://entsupport.symantec.com/docs/304055 


      VERITAS File System Provider and Client Extension 5.0 Maintenance Pack 1 HF1

      VERITAS File System Provider and Client Extension 5.0 Maintenance Pack 1 HF1 for Linux is now available at http://entsupport.symantec.com/docs/305346  and http://entsupport.symantec.com/docs/305342 


      5.0MP1CP1 - CUMULATIVE PATCH FOR VxFEN FOR VERITAS CLUSTER SERVER 5.0MP1

      The 5.0MP1CP1 - CUMULATIVE PATCH FOR VxFEN FOR VERITAS CLUSTER SERVER 5.0MP1 is now available at http://entsupport.symantec.com/docs/305324 

      The new 5.0MP1CP1 patch replaces the old 5.0MP2CP1 patch that was at http://support.veritas.com/docs/303530 


      Oracle Enterprise Linux 4 Update 6 known issues

      You may receive an "unpacking of archive failed: cpio: Bad magic" error message during the
      Storage Foundation installation, if you are running Oracle Enterprise Linux (OEL) 4 Update 6.

      Workaround:
      Replace the current RPMs and related utilities with the equivalent set from the OEL 4 Update 5 release.


      SLES9 SP4 Issues

      Novell's recently released Service Pack 4 for SUSE Linux Enterprise Server 9 (SLES9) changes the kABI and is therefore not eligible for automatic support by Symantec with Storage Foundation and Veritas Cluster Server products. Symantec has provided a patch to enable supported use of SLES9 SP4 with SF HA products: 5.0 Maintenance Pack 2 Rolling Patch 1.


      5.0 Maintenance Pack 2 Rolling Patch 1

      Veritas Storage Foundation High Availability Solutions 5.0 Maintenance Pack 2 Rolling Patch 1 for Linux is now available at http://support.veritas.com/docs/305629 


      Veritas Enterprise Administrator (VEA) 5.0 Maintenance Pack 1 Rolling Patch 1 is Available
      Veritas Enterprise Administrator (VEA) 5.0 Maintenance Pack 1 Rolling Patch 1 (5.0 MP1 RP1) is now available at http://support.veritas.com/docs/297464 


      5.0 Maintenance Pack 2 is Available
      Veritas Storage Foundation (tm) and High Availability Solutions 5.0 Maintenance Pack 2 is available. The Veritas Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 2 (MP2) release adds support for Oracle Enterprise Linux and for a new product called Veritas Storage Foundation Cluster File System for Oracle RAC. There are no software issues fixed in the 5.0 Maintenance Pack 2 release relative to the 5.0 Maintenance Pack 1 release.

      The release is available at http://www.symantec.com/downloads/fileconnect/index.jsp 

      More information is available at http://entsupport.symantec.com/docs/289439 


      5.0 Maintenance Pack 2 Known Issues

      The 5.0 MP2 Storage Foundation Cluster File System (SFCFS) Release Notes has two incorrect graphics.

      The "SFCFS for Oracle RAC architecture" graphic on page 10 is incorrect. Refer to the correct graphic below:
      http://support.veritas.com/docs/302464 

      The "Communication requirements in SFCFS for Oracle RAC" graphic on page 12 is incorrect. Refer to the correct graphic below:
      http://support.veritas.com/docs/302465


      The SF CFS for Oracle RAC mount points get disabled on losing private network connectivity on all the private networks.

      With Veritas Storage Foundation Cluster File System for Oracle RAC (SF CFS RAC), if all the private links are lost, the SF CFS mount points may sometimes get disabled on the surviving nodes. SFCFS for Oracle RAC does not support fencing configured in enabled mode, and Oracle Clusterware (CRS) is expected to handle any split-brain situations. With the current design of SF CFS, if fencing is configured in disabled mode and GAB reports jeopardy membership followed by a cluster reconfiguration (by node loss) to its clients, CFS assumes this is a potential split-brain case and disables the shared mount points on the surviving nodes.

      One such scenario may occur if the user pulls the private network cables one after the other with a time interval of more than the GAB stable timeout (by default 5 seconds). In such a case, GAB first delivers jeopardy membership to its clients (when only one link remains for cluster heartbeat) and then, after both links have been pulled out, delivers a cluster reconfiguration with node loss.

      To prevent SF CFS from disabling mount points in such cases, GAB should be configured with the -s flag. To do this, enter the following line in /etc/gabtab:

      /sbin/gabconfig -c -s -nX

      Where X is the total number of nodes required for GAB to seed.
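      For example, in a hypothetical four-node cluster the /etc/gabtab entry would be:

      /sbin/gabconfig -c -s -n4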

      The -s flag for gabconfig disables network partition arbitration, and GAB does not report jeopardy membership to any of its clients even if there is only one link available for cluster heartbeat. However, GAB still reports cluster reconfigurations if no link is available for cluster heartbeat (node loss or split-brain). Therefore, with the -s flag, SF CFS will never get jeopardy membership (even if only one link remains for cluster heartbeat) and will only get a cluster reconfiguration (on a node loss or if all the links for cluster heartbeat are down), in which case it will not disable the shared mount points and will take its normal course of action to recover the shared mount points on the surviving nodes.

      Note: There may be a side effect of using -s to configure GAB for those clients that depend on jeopardy membership. Apart from SF CFS, VCS is another client of GAB that can be affected by jeopardy membership not being reported even if there is only one link for cluster heartbeat. The behavior of Veritas Cluster Server (VCS) depends on whether the node state changed from visible to faulted or from jeopardy to faulted. With the -s option, when the node is actually going from jeopardy to faulted, traditional VCS would have auto-disabled the service groups, but in this case both sides of the partition will try to online the service groups. However, since we rely on CRS to reboot one of the sides, CRS must take care of this.

      Moreover, this issue can be observed if both the private links are pulled at the same time. In this case also, CRS should make sure that only one sub-cluster survives before VCS tries to online the service groups.

      Note: The current behavior of SF CFS will change in the SFCFS for Oracle RAC 5.0 MP3 release. Starting with 5.0MP3, SF CFS will not disable the shared mount points if jeopardy membership followed by a cluster reconfiguration is reported by GAB, and therefore the -s flag is not needed for configuring GAB in 5.0MP3.


      5.0 Maintenance Pack 1 Rolling Patches are Available
      VERITAS Volume Manager 5.0 Maintenance Pack 1 Rolling Patch 1 for Linux http://support.veritas.com/docs/290337 
      VERITAS File System 5.0 Maintenance Pack 1 Rolling Patch 1 for Linux - SLES9 http://support.veritas.com/docs/290307 
      VERITAS File System 5.0 Maintenance Pack 1 Rolling Patch 1 for Linux - RHEL4 http://support.veritas.com/docs/290305 


      VRTSvsvc is Obsolete and can be Removed
      Incidents 1099152 and 1087985 - The VRTSvsvc component can cause the vxpal.StorageAgent to core dump. This prevents the Storage Foundation Java GUI and Web GUI from working for the specific managed host.

      Use this command to remove the VRTSvsvc package:
         
      # rpm -e VRTSvsvc


      Updates to the Release Notes for Veritas Storage Foundation Cluster File System (SFCFS)
      Documentation Errata - Installation Guide

      In the SFCFS Installation Guide, the following section is missing from the upgrade chapter.

      Disk layout Version 6 is not supported on SFCFS 3.5 or 4.0; you cannot upgrade the file system to disk layout Version 6 or 7 with SFCFS 3.5 or 4.0. Perform the following procedure to upgrade SFCFS to 5.0.

      Upgrading from SFCFS 3.5 or 4.0 to SFCFS 5.0

      1) Upgrade SFCFS to 5.0 using the installsfcfs script.  

      2) Mount each CFS file system as a local VxFS non-cluster file system
        on one of the nodes.

           # mount -t vxfs <device> <mount_point>

      3) Perform an online upgrade of the local mounted disk layout Version 4 or 5
         file systems to disk layout Version 7.  

           # vxupgrade -n 7 <mount_point>

      4) After the disk layout version is successfully upgraded, unmount the
         file system.

          # umount <mount_point>

      5) Cluster mount the file systems using either the '-o cluster' mount option
         or using the cfsmntadm and cfsmount commands.

         See the mount_vxfs(1M), cfsmntadm(1M) and cfsmount(1M) manual pages.
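      As an illustration of steps 2 through 5 (a sketch only; the volume path and mount point names are placeholders):

           # mount -t vxfs /dev/vx/dsk/cfsdg/vol01 /mnt1
           # vxupgrade -n 7 /mnt1
           # umount /mnt1
           # mount -t vxfs -o cluster /dev/vx/dsk/cfsdg/vol01 /mnt1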


      Daylight Savings Time Change
      For information about Daylight Savings Time issues, refer to this TechNote: http://support.veritas.com/docs/286461 

      The VRTSjre Patch for Daylight Savings Time is available at http://support.veritas.com/docs/290734 


      Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 1
      Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 1 is Now Available
      See the references to 287432 and 287178 in the Related Documents section for more information.

      Storage Foundation and High Availability Solutions 5.0 Maintenance Pack 1 Issues
      The Veritas Cluster Server Enterprise Agent 5.0 Maintenance Pack 1 README document was not included with the software distribution. See the reference to 287292 in the Related Documents section for more information.
      Incident 900480: The information about the FSType attribute for the Mount agent in the Veritas Cluster Server Bundled Agents Reference Guide is incorrect. FSType is a required attribute and has no default value associated with it.


      The following procedure can be used to import a cloned disk (BCV device) from an EMC Symmetrix array.
      Steps 1 to 3 were not included in the Volume Manager documentation.

      To import an EMC BCV device

      1. Verify that the cloned disk, EMC0_27, is in the "error
      udid_mismatch" state:

            # vxdisk -o alldgs list
            DEVICE          TYPE         DISK    GROUP  STATUS
            EMC0_1          auto:cdsdisk EMC0_1  mydg   online
            EMC0_27         auto         -       -      error udid_mismatch

        In this example, the device EMC0_27 is a clone of EMC0_1.

      2. Split the BCV device that corresponds to EMC0_27 from the disk group
      mydg:

            # /usr/symcli/bin/symmir -g mydg split DEV001

        In this example, the corresponding BCV device to EMC0_27 is DEV001.

      3. Update the information that VxVM holds about the device:

            # vxdisk scandisks

      4. Check that the cloned disk is now in the "online udid_mismatch"
      state:

            # vxdisk -o alldgs list
            DEVICE         TYPE         DISK    GROUP  STATUS
            EMC0_1         auto:cdsdisk EMC0_1  mydg   online
            EMC0_27        auto:cdsdisk -       -      online udid_mismatch

      5. Import the cloned disk into the new disk group newdg, and update the
      disk's UDID:

            # vxdg -n newdg -o useclonedev=on -o updateid import mydg

      6. Check that the state of the cloned disk is now shown as "online
      clone_disk":

            # vxdisk -o alldgs list
            DEVICE         TYPE         DISK    GROUP  STATUS
            EMC0_1         auto:cdsdisk EMC0_1  mydg   online
            EMC0_27        auto:cdsdisk EMC0_1  newdg  online clone_disk


      Storage Foundation and High Availability Solutions 5.0

      Veritas 5.0 Storage Foundation Cluster File System
      The following known issue is missing from the Veritas 5.0 Storage Foundation Cluster File System Release Notes:
      If a large message of the day or other notification is present, the installation scripts might be unable to complete the OS detection and therefore fail.
      Workaround: Minimize or remove the large message of the day or other notification that is present on your system and try the installation script again.

      In the Veritas 5.0 Storage Foundation Cluster File System Administrator's Guide, step 7 in the "Setting up the disk group for coordinator disks" section on page 45 is incorrect.

      It should read as follows:

      7) Add the other two disks to the disk group.

          # vxdg -g vxfencoorddg set coordinator=off
          # vxdg -g vxfencoorddg adddisk sdaa
          # vxdg -g vxfencoorddg adddisk sdab
          # vxdg -g vxfencoorddg set coordinator=on

        See the Veritas Volume Manager Administrator's Guide.

      Veritas Storage Foundation 5.0 Release Notes Errata - e914736 EFI Disk Support
      In the Veritas Storage Foundation 5.0 Release Notes, it is stated that EFI disks are not supported. In fact, EFI disks are supported except for use as CDS disks or as encapsulated root disks.

      Veritas Volume Manager (tm) 5.0 Rolling Patch 1 for Linux is now available:
      For more information, see the reference to 285321 below.


      See the Related Documents section below for cross-references to product documentation.

      Mixed platform remote and cluster installation and uninstallation are not supported
      Remote and cluster installation and uninstallation require all systems to run on the same processor type and use the same operating system version and all cluster nodes must be at the same patch level.

      Software disc cannot be ejected during installation:

      During installation, if any of the products that contain Veritas Volume Manager (tm) were configured and started, the software disc cannot be ejected. This will prevent further use of the disc drive. This would impact installation of packages from the language pack disc. This problem is not an issue if a product was installed or upgraded that required a system reboot to complete the installation. To avoid this problem at installation:

      1. Specify the -installonly option to the installation script, in addition to any other options. This installs the packages only.
      2. Eject the software disc.
      3. Run the appropriate installation script, which is in /opt/VRTS/install, with the -configure option specified, as sketched below.
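      The sequence, sketched with illustrative script names (use the installation script that corresponds to your product):

      # ./installer -installonly
        (eject the software disc)
      # /opt/VRTS/install/installsf -configure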

      If a software disc cannot be ejected:

      1. Stop the event source daemon:
       # /usr/sbin/vxddladm stop eventsource
      2. Eject the software disc
      3. Restart the event source daemon:
       # /usr/sbin/vxddladm start eventsource


      Veritas Cluster Server (VCS) issues:

      Configuring security on a single-node cluster fails (622706)
      The "installvcs -security" command fails with the following message: "Veritas Cluster Server is not configured on <system_name>"

      The instructions to add and remove nodes in the VCS Installation Guide are incorrect.
      Workaround: Do not follow the instructions for adding and removing nodes in the VCS Installation Guide for Linux. Use this document instead:
      http://support.veritas.com/docs/283982

      CPI displays errors if you install from a Red Hat system to a SuSE system (589334)
      Workaround: When installing VCS on SuSE systems, run the installer from a SuSE system.

      Cluster node does not panic after split-brain (618961)
      On SuSE nodes, when the fencing driver calls the kernel panic routine, the routine can get stuck in the sync_sys() call. This may cause the panic to hang, allowing both sides of the split-brain to remain alive.
      The SuSE Bugzilla entry for this issue is 182348 (https://bugzilla.novell.com/show_bug.cgi?id=182348).
      Workaround: A point patch http://support.veritas.com/docs/284264 (VRTSvxfen-5.0.00.01-PP_SLES9.x86_64_284264.rpm) is available. You can also download it from the Related Documents section below.
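
      Once the point patch has been downloaded, it is an RPM package, so applying it would normally use the standard rpm command, as in the minimal sketch below. Consult the instructions that accompany the patch before applying it; additional steps, such as stopping the fencing module first, may be required (an assumption here, not a documented requirement):

            # rpm -Uvh VRTSvxfen-5.0.00.01-PP_SLES9.x86_64_284264.rpm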


      Symantec recommends that you configure only one NFS resource on a system. If you need multiple NFS resources on a system, create multiple Proxy resources that point to the NFS resource you already created.
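
      As a minimal main.cf-style sketch (the resource names nfs_res and nfs_proxy_grp2 are illustrative assumptions, not values from the product documentation), the NFS resource is defined once, and any additional service group references it through a Proxy resource rather than defining a second NFS resource:

            NFS nfs_res (
                )

            Proxy nfs_proxy_grp2 (
                TargetResName = nfs_res
                )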


      Cluster Management Console (CMC) issues:

      CMC_SERVICES domain does not get removed on uninstallation.
      Uninstalling the Cluster Management Console management server does not remove the CMC_SERVICES domain.

      Workaround: Remove the domain manually:
      1. Verify that the domain exists:
            # vssat showpd --pdrtype ab --domain CMC_SERVICES
      2. Remove all principals in the domain:
            # vssat deleteprpl --pdrtype ab --domain CMC_SERVICES --prplname [principal name]
      3. Delete the domain:
            # vssat deletepd --pdrtype ab --domain CMC_SERVICES@[hostname, as shown by showpd]

      Cluster Management Console does not display localized logs (620529)
      If you install language packs on the management server and on VCS 5.0 cluster nodes, the Cluster Management Console does not initially show localized logs.
      Workaround:
      1. On each node of the cluster, create a symbolic link from /opt/VRTS/messages/ja to /opt/VRTSvcs/messages/ja (see the sketch after these steps).
      2. If the cluster is connected to the management server, disconnect and then reconnect the cluster.
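
      As a minimal sketch, assuming the intent is a link at /opt/VRTS/messages/ja that points to /opt/VRTSvcs/messages/ja, and that the parent directory may not yet exist:

            # mkdir -p /opt/VRTS/messages
            # ln -s /opt/VRTSvcs/messages/ja /opt/VRTS/messages/ja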


      Veritas Volume Manager issues:


      With all primary paths inaccessible, the deport operation on a shared disk group fails to clear the PGR keys. The deport itself succeeds, but the keys are not cleared because the DMP database has not been updated to reflect the inaccessibility of the failed primary paths.

      Workaround: Run 'vxdisk scandisks' before the disk group deport operation. This triggers a DMP reconfiguration that updates the DMP database so that the disks are accessed through active paths.
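
      As a minimal sketch, where mysharedg stands in for the name of your shared disk group (an illustrative name, not one from the documentation):

            # vxdisk scandisks
            # vxdg deport mysharedg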


      Unnecessary file with name "?" appearing on the disc:

      On the disc that contains the SFRAC software for RHEL4, an unnecessary file named "?" appears in the rhel4_x86_64/storage_foundation_for_oracle_rac directory. The file contains invalid text and can be safely ignored.


      The "installsfrac" installer  program does not prompt for the setup of the VCS global cluster option:

      Workaround: Refer to the Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide for instructions on setting up a global cluster ( page 267, "Configuring global clustering")


      EMC PowerPath limitations:


      PowerPath 4.5.x is supported in non-clustered and clustered environments on 32-bit and x86_64 platforms. However, there are currently limitations when using Veritas Cluster Server with a combination of I/O Fencing (which uses SCSI3-PGR) and PowerPath. For details, see the latest Hardware Compatibility List (HCL) or EMC Support Matrix (ESM).



      Updates to the Veritas Storage Foundation for Oracle RAC (SFRAC) - Installation and Configuration Guide:


      Omission on page 288:
      The "StartUpOpt" attribute is missing from the "Example Oracle database service group configured for replication".

      Oracle recommends starting the database with the srvctl command; use the StartUpOpt attribute of the Oracle agent. Pfile entries are not required if you use StartUpOpt=SRVCTLSTART.

      The current text in the Veritas Storage Foundation for Oracle/RAC (Linux) Installation and Configuration Guide 5.0 is:


      Oracle rac_db (
      Sid @galaxy = vrts1
      Sid @nebula = vrts2
      Owner = Oracle
      Home = "/oracle/orahome/dbs"
      Pfile @galaxy = "/oracle/orahome/dbs/initvrts1.ora"
      Pfile @nebula = "/oracle/orahome/dbs/initvrts2.ora"
      ShutDownOpt = SRVCTLSTOP
      MonScript = "./bin/Oracle/SqlTest.pl"
      )


      It should read as follows:

      Oracle rac_db (
      Sid @galaxy = vrts1
      Sid @nebula = vrts2
      Owner = Oracle
      Home = "/oracle/orahome/dbs"
      StartUpOpt = SRVCTLSTART
      ShutDownOpt = SRVCTLSTOP
      MonScript = "./bin/Oracle/SqlTest.pl"
      )


      For more information, refer to the sample main.cf for the primary/secondary cluster in the SFRAC Installation and Configuration Guide.
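
      For reference, with StartUpOpt=SRVCTLSTART and ShutDownOpt=SRVCTLSTOP the agent drives the database through srvctl. The equivalent manual commands, run as the Oracle user and using rac_db as an illustrative database name (an assumption, not a value from the guide), would look like:

            $ srvctl start database -d rac_db
            $ srvctl stop database -d rac_db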



      Updates to the Release Notes for Veritas Storage Foundation Cluster File System (SFCFS):


      In the Veritas Storage Foundation Cluster File System Release Notes, the "Support for SFCFS configurations larger than 16 nodes" issue (page 13) in the "Known issues" section should be ignored. It is not a known issue.

      In the Veritas Storage Foundation Cluster File System Release Notes, the "Support for 32 cluster nodes" feature in the "New features" section should instead have the following header and text:

      Support for SFCFS configurations larger than 16 nodes
      SFCFS 5.0 is capable of supporting cluster file systems with up to 32 nodes.  Symantec has tested and qualified SFCFS 5.0 cluster file system configurations of up to 16 nodes at product release time. For the latest information on SFCFS support issues, refer back to this TechNote on a regular basis.

      On page 10, the feature "Version 4 and Version 5 disk layouts" should have the following first sentence instead of the existing first sentence:

      VxFS disk layout Version 4 and 5 are not supported for cluster mounts in SFCFS 5.0.


      Upgrade issues:


      If you are running Storage Foundation 4.0, reinstall the operating system at a supported kernel level and then perform a fresh installation of Storage Foundation 5.0.
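
      To check which Veritas packages and versions are currently installed before deciding on the upgrade path, a minimal sketch is shown below (VRTSvxvm and VRTSvxfs are the usual Volume Manager and File System package names, but compare against your own package list):

            # rpm -qa | grep ^VRTS
            # rpm -q VRTSvxvm VRTSvxfs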


      Searching the Support Web Site for Product Messages:

      You can search the Symantec Support Web site for product messages using a link that includes the product universal message identifier (UMI). For example: www.support.veritas.com/umi/v-1-2-3456-789


      Cross References to Product Documentation:
      SF Basic Virtual Machine Entitlements:
      Linux Entitlement on RISC platforms:
      Multiple product versions per server:

 




Legacy ID



281993


Article URL http://www.symantec.com/docs/TECH46445


Terms of use for this information are found in Legal Notices