Late Breaking News (LBN) - Updates to the Release Notes for Veritas Storage Foundation and High Availability Solutions 5.0 to 5.0 MP3 products on AIX

Article:TECH46478  |  Created: 2010-01-30  |  Updated: 2011-07-18  |  Article URL http://www.symantec.com/docs/TECH46478
Upgrading VCS using the installvcs program
In the Veritas Cluster Server 5.0 AIX Installation Guide, in the procedure on page 169, “To remove VCS filesets from previous versions and add 5.0 filesets,” step 2 suggests that you select:  “2) Storage Foundation Enterprise HA filesets - 1114 MB required”.
For VCS-only upgrades, the appropriate option to select is: “1) All Veritas Cluster Server filesets - 633 MB required”

Issue regarding vxsvc

A VRTSob patch has been released that fixes a vxsvc core dump on startup caused by corruption of alertlog.db and tasklog.db.
A VRTSobc33 patch has been released that fixes a vxsvc core dump on systems with approximately 1,000 virtual IP addresses.
 
SFCFS support of vSCSI LUNs:

Storage Foundation Cluster File System is supported when running within LPARs or micro-partitions. Symantec always recommends I/O fencing to provide maximum protection against data corruption. If storage is presented to the partition using N_Port ID Virtualization (NPIV), I/O fencing is configured the same way as with dedicated Fibre Channel adapters.
If storage is provisioned to the micro-partition as vSCSI LUNs through the VIO Server, I/O fencing is not available because the VIO Server does not support SCSI-3 Persistent Reservations for vSCSI LUNs. In this case, Storage Foundation Cluster File System can be configured without I/O fencing. However, the application must provide protection against the data corruption that can be caused by writing to the same storage from more than one system in an uncoordinated manner.

SFHA support of Power 7 hardware:

IBM is releasing specific service pack (SP) levels of AIX 5.3 and 6.1 that will support the new POWER7 servers, operating in POWER6 compatibility mode.

Please check with IBM on the exact SP level of AIX that is supported on a specific POWER7 server.

The Storage Foundation stack and VCS are expected to support these SP levels of the AIX 5.3 and 6.1 releases that run on POWER7 hardware.

Therefore, as long as these AIX levels support the POWER7 hardware in POWER6 compatibility mode, the Storage Foundation stack and VCS will support newly released POWER7 hardware.


Behavior of Oracle RAC 11g Release 2 (11.2.0.1) on SF Oracle RAC on Unix and Linux platforms

Details:  
 http://www.symantec.com/business/support/index?page=content&id=TECH135605



 To locate the most current product patch releases including Maintenance Packs, Rolling Patches, and Hot Fixes visit https://vos.symantec.com/patch/matrix 


      Veritas Operations Services

      Veritas Operations Services (VOS) Portal:  
      https://vos.symantec.com                                       

      VOS Portal Contains:
          - Risk Assessments
          - VOS Searchability (Error Code Lookup, Patches, Documentation, Systems, Reports)
          - Detailed Reports (Product and License Usage)
          - Notification Signup (VOS Notification Widget)
          - Installation Assessment Services (Installation and Upgrade preparation)


Issues regarding vxdbdctrl (Storage Foundation HA 5.0 MP3 RP3)

Problem Statement:

    a)      If 'vxdbdctrl status' reports a persistent PING failure, check whether the file /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf has been corrupted (is unreadable) or has zero length. If it has, follow the workaround below.

    b)      In VCS 5.0 MP3 clusters, the file /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf sometimes becomes corrupted during reboot and is left zero bytes long on some nodes of the cluster. This affects secure clusters and DBED functionality.



Workaround:

Take a backup copy of the AT configuration file /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf before upgrading Storage Foundation for Oracle (HA), Storage Foundation for DB2 (HA), or Storage Foundation for Oracle RAC. In a cluster, take a backup copy of the VRTSatlocal.conf file on every node.

After you have upgraded Storage Foundation for Oracle (HA), Storage Foundation for DB2 (HA), or Storage Foundation for Oracle RAC and rebooted the system, verify that /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf is not corrupted by confirming that it is readable and does not have zero length. If it is corrupted, restore the backup copy of the VRTSatlocal.conf file; otherwise, the DBED vxdbd daemon will not be able to start with a corrupted VRTSatlocal.conf file.
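The post-upgrade check above can be sketched as a small shell helper. The function name and the backup file name are illustrative, not part of the product:

```shell
# check_at_conf CONF BACKUP: restore BACKUP over CONF if CONF is
# missing, zero-length, or unreadable (the corruption described above).
# Run as root on each node; both arguments are illustrative paths.
check_at_conf() {
    conf=$1
    backup=$2
    if [ ! -s "$conf" ] || [ ! -r "$conf" ]; then
        echo "restoring $conf from $backup"
        cp -p "$backup" "$conf"
    fi
}

# Example (paths from this article; the backup name is an assumption):
# check_at_conf /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf \
#               /var/VRTSat/.VRTSat/profile/VRTSatlocal.conf.bak
```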



 Issues regarding DS4K Arrays:

1.
When a DS4K series array is connected to AIX host(s) and all the paths to the storage are disconnected and then reconnected, the storage is not discovered automatically.
To discover the storage, run the cfgmgr OS command on all the affected hosts. Once cfgmgr has run, the DMP restore daemon brings the paths back online automatically in the next path restore cycle. The time of the next restore cycle depends on the restore daemon interval specified, in seconds, by the tunable dmp_restore_interval.


#vxdmpadm gettune dmp_restore_interval
           Tunable               Current Value  Default Value
------------------------------    -------------  -------------
dmp_restore_interval                    300              300
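If the paths need to come back online sooner after running cfgmgr, the interval can be lowered with vxdmpadm settune. The value 60 below is illustrative, not a recommendation:

```shell
# vxdmpadm settune dmp_restore_interval=60
```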


2.
On DS4K series arrays connected to AIX host(s), DMP is supported in conjunction with RDAC. DMP is not supported on DS4K series arrays connected to AIX hosts in an MPIO environment.


Issue regarding llt_peerinact value:

Based on our discussions with IBM engineers and our tests, we have found that the default llt_peerinact value of 16 seconds is too aggressive for AIX and can cause false declarations of link/node death. The AIX host can be stuck for up to 14 seconds (according to IBM engineers) and unable to send out LLT heartbeats. We therefore recommend changing llt_peerinact to 32 seconds.

Do the following on all nodes:
Edit /etc/llttab to include the following line:
set-timer      peerinact:3200

Then run the following command:
#lltconfig -T peerinact:3200

Verify using the following:
#lltconfig -T query | grep peerinact

peerinact   = 3200


Issue regarding installation of Oracle on AIX 6.1 TL1 SP1:

Installation of Oracle CRS 10gR2/11gR1 on AIX 6.1 TL3 SP1 fails to
start the VIP with the following errors:

CRS-1006: No more members to consider

CRS-0215: Could not start resource 'ora.system1.vip'.

Solution:

Follow Oracle Metalink Doc ID 805536.1: "VIP CANNOT START ON AIX
6.1 BECAUSE NETSTAT HAS A NEW COLUMN".


Incorrect Oracle RAC patch list in the Veritas Storage Foundation for Oracle RAC
documentation (5.0 Maintenance Pack 3) - AIX

Some of the Oracle RAC patches listed in the following Veritas Storage Foundation for Oracle RAC documents are incorrect:

Document:       Veritas Storage Foundation for Oracle RAC Installation and Configuration Guide
Section:        Oracle software patches (Page 39)

Document:       Veritas Storage Foundation for Oracle RAC Release Notes
Section:      Oracle software patches (Pages 17 - 18)

Ignore the patch list in the documents above.

The correct Oracle RAC patch list for SF Oracle RAC is as follows:

Oracle RAC Release                               Required Patches
Oracle RAC 10g Release 2 Patchset 1 (10.2.0.1)   6613550, 4435949, 5082958
Oracle RAC 10g Release 2 Patchset 2 (10.2.0.2)   6613550 (AIX 6.1 only), 4637591
Oracle RAC 10g Release 2 Patchset 3 (10.2.0.3)   6613550 (AIX 6.1 only)
Oracle RAC 10g Release 2 Patchset 4 (10.2.0.4)   6849184
Oracle RAC 11g Release 1 Patchset 6 (11.1.0.6)   6849184, 6792086

 
Additionally, see the Oracle RAC documentation for other patches that may be required by Oracle RAC for each release.

Procedure to be adopted for Dynamic Reconfiguration of a Controller/HBA (Host Bus Adapter) in Multipath Configuration on AIX platform:
http://seer.entsupport.symantec.com/docs/327005.htm

What are the current Patch Releases for 5.0 MP1 on AIX:
1.  Veritas Cluster Server (VRTSvcsor) 5.0 MP1 HF1
2.  Veritas Volume Manager 5.0 MP1 RP1
3.  Veritas Volume Manager 5.0 MP1 RP2
4.  Veritas Volume Manager 5.0 MP1 RP5
5.  Veritas Volume Manager 5.0 MP1 RP1 HF1
6.  Veritas Volume Manager 5.0 MP1 RP1 HF2
7.  Veritas Volume Manager 5.0 MP1 RP1 HF3
8.  Veritas Storage Foundation HA 5.0 MP1
9.  Storage Foundation for Oracle RAC 5.0 MP1+hotfix_fubon_e1243787a
10. Storage Foundation for Oracle RAC 5.0 MP1+e1049285+e1044590a
11. Veritas File System 5.0 MP1 RP5
12. Veritas File System 5.0 MP1 RP3
13. Veritas Enterprise Administrator 5.0 MP1 RP1
14. Veritas Command Central 5.0 MP1 RP2
15. Veritas Command Central 5.0 MP1 RP3
16. Veritas Command Central 5.0 MP1 RP4
17. Oracle Disk Manager 5.0 MP1 RP2
18. Oracle Disk Manager 5.0 MP1 RP2 HF1
19. Oracle Disk Manager 5.0 MP1 HF4

What are the current Patch Releases for 5.0 MP3 on AIX:
1.  Veritas Volume Manager 5.0 MP3 HF1
2.  Veritas Storage Foundation HA 5.0 MP3
3.  Veritas Storage Foundation HA 5.0 MP3 RP1
4.  Veritas Cluster Server 5.0 MP3 HF1
5.  Veritas Cluster Server 5.0 MP1 Update1+e1233409
6.  Sig Licensing 5.0 MP3 HF1
7.  Veritas Cluster Server (GAB & LLT) 5.0 MP3 RP1 HF1
8.  Veritas Volume Manager 5.0 MP3 RP2
9.  Veritas File System 5.0 MP3 RP2
10. Veritas Storage Foundation HA 5.0 MP3 RP2
Patches are available on Patch Central: https://vias.symantec.com/labs/patch 

Issue regarding Thin Provisioning:

Thin provisioning: storage is not reclaimed on a mirrored disk after data is removed from the volume (1741147).

A reclaim operation on a file system mounted on a mirrored volume fails to reclaim space from the mirror.

Workaround: None.

CSSD agent mandatory for SF Oracle RAC installations

You must configure the CSSD agent after installing Oracle Clusterware. The CSSD agent starts, stops, and monitors Oracle Clusterware. It ensures that the OCR, the voting disk, and the private IP address resources required by Oracle Clusterware are online before Oracle Clusterware starts. Using the CSSD agent with SF Oracle RAC installations thus ensures adequate handling of inter-dependencies and prevents premature startup of Oracle Clusterware.

During system startup, the Oracle Clusterware init scripts invoke the clsinfo script provided by Veritas software. The clsinfo script ensures that the OCR, the voting disk, and the private IP address resources are online before the cssd resource comes online. After the underlying resources come online, the CSSD agent starts Oracle Clusterware.

During system shutdown, the agent stops Oracle Clusterware before the OCR and voting disk resources are taken offline. This ensures that Oracle Clusterware does not panic the nodes in the cluster due to unavailability of the required resources.

Updates/Issues missing from the Veritas File System 5.0 MP3 Release Notes regarding Tunable:

Veritas File System (VxFS) 5.0 MP3 has an increased default value for the max_seqio_extent_size tunable for better performance in modern file systems.

The max_seqio_extent_size tunable value is the maximum size of an individual extent.  Prior to the VxFS 5.0 MP3 release, the default value for this tunable was 2048 blocks. Database tests showed that this default value was outdated and resulted in slower than expected throughput on modern larger file systems.  To improve performance and reduce fragmentation, the default value of max_seqio_extent_size was changed to 1 gigabyte in VxFS 5.0 MP3. VxFS allocates extents in a way that allows VxFS to use only the necessary percentage of the 1 gigabyte extent size, avoiding over allocation.

The minimum value allowed for the max_seqio_extent_size tunable is 2048 blocks, which was the default value prior to the VxFS 5.0 MP3 release.

Known Issue:
Workloads that relied on extents being allocated in smaller chunks may now be allocated unneeded extent space, which can lead to file systems becoming full.

Workaround:
Change the max_seqio_extent_size tunable back to the pre-5.0 MP3 value of 2048.
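The tunable can be changed with vxtunefs. A sketch, assuming /mount_point is the affected file system; add the setting to /etc/vx/tunefstab to make it persist across mounts:

```shell
# vxtunefs -o max_seqio_extent_size=2048 /mount_point
```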



Documentation

5.0 MP3 Documentation List http://entsupport.symantec.com/docs/307509 
5.0 MP1 Documentation List http://entsupport.symantec.com/docs/307438 
5.0 Documentation List http://entsupport.symantec.com/docs/307436   

Product documentation, man pages and error messages for the 5.0 and 5.0 MP3 releases are available at http://sfdoccentral.symantec.com/index.html 


Downloads

5.0 Maintenance Pack 3 is available at https://fileconnect.symantec.com 

5.0 Maintenance Pack 3 Rolling Patch 1 is available at https://vias.symantec.com/labs/patch/php/patchinfo.php?download=yes&release_id=1737 

More patches are available on Patch Central https://vias.symantec.com/labs/patch 


Tools

VIAS (Veritas Installation Assessment Service)  https://vias.symantec.com  
Health Check https://vias.symantec.com/labs/vhcs 
Error Code Lookup https://vias.symantec.com/labs/vels 
VIMS (Veritas Inventory Management Service) https://vias.symantec.com/labs/vims 
Veritas Operations Services (VOS) Labs https://vias.symantec.com/labs 
 

This TechNote provides late breaking information regarding all Veritas Storage Foundation and High Availability Solutions 5.0 products on AIX.  Symantec Corporation updates this TechNote as new customer related information pertaining to 5.0 products becomes available. Symantec recommends that you frequently check this TechNote for updates.


Some VCS 5.0, 5.0 MP1 and 5.0 MP1 Update 1 LLT locks are not interrupt safe

Intermittent failures and possible system crashes are known to occur when using the 5.0, 5.0 MP1, and 5.0 MP1 Update 1 releases of Storage Foundation for UNIX (SF/HA), Veritas Cluster Server (VCS), Storage Foundation for Oracle RAC (SF Oracle RAC), and Storage Foundation Cluster File System (SFCFS) with certain AIX platforms: http://support.veritas.com/docs/300765 


Recommendations on use of Space-Optimized (SO) snapshots in Storage Foundation for Oracle RAC 5.0

If you use Volume Manager mirroring, Space-Optimized (SO) snapshots are recommended for Oracle data volumes.

Keep the Fast Mirror Resync regionsize equal to the database block size to reduce the copy-on-write
(COW) overhead.

Reducing the regionsize increases the number of Cache Object allocations, leading to performance overhead.

Do not create Oracle redo log volumes on a space-optimized snapshot.

Use "third-mirror break-off" snapshots for cloning the Oracle redo log volumes.


Etrack incident 1395863:
On EVA8000 arrays, cfsmount resources go offline during a long-duration array-side port failure and do not come back online when the paths are restored.
Incident 1395863 is fixed in 5.0 MP3 RP1, and AMS and HDS9500 arrays are now supported.


Etrack incident 1523052:
High CPU usage in vol_kmsg_receiver causes GAB to panic.

On EVA arrays in an SFCFS or SF Oracle RAC environment, high CPU usage in vol_kmsg_receiver may cause GAB to panic. This bug is not applicable to SF HA environments.


Veritas Storage Foundation and High Availability Solution 5.0 MP3 RP1 for AIX:

The Veritas Storage Foundation and High Availability Solution 5.0 MP3 RP1 for AIX (SFHA 5.0 MP3 RP1 for AIX) can now be downloaded from the Patch Central Website:  Look under the Related Documents section below for the link to the SFHA 5.0 MP3 RP1 for AIX.

The incidents fixed in the SFHA 5.0 MP3 RP1 for AIX are included in the Read This First document under the Related Documents section below. The incidents that are fixed but not included in the Read This First document are documented in the Supplemental Material section with SFHA 5.0 MP3 RP1 for AIX download.

Update to the Read This First Document: SFHA 5.0 MP3 RP1 for AIX

Currently, dynamic expansion of vSCSI LUNs is not supported by VxVM. For vSCSI devices, the 'vxdisk resize' command returns an error indicating that the operation is not supported. This limitation is planned to be removed in the next release, scheduled for calendar Q4 2009.

FYI regarding VCS and the operating system:

VCS requires that all nodes in the cluster use the same processor architecture and
run the same operating system.

All nodes in the cluster must run the same VCS version. Each node in the cluster
may run a different version of the operating system, as long as the operating
system is supported by the VCS version in the cluster.

Oracle 11gR1 RAC Support

Oracle 11gR1 RAC (11.1.0.7) is now supported with the 5.0 MP3 version of Storage Foundation For Oracle RAC for AIX 5.3 and 6.1.

 
Volume Manager Disk Group Failure Policy: requestleave

As of 5.0 MP3, Veritas Volume Manager (VxVM) supports 'requestleave' as a valid disk group failure policy. This new disk group failure policy is not currently documented in the 5.0MP3 Veritas Volume Manager Administrator's Guide. The Administrator's Guide will be updated with this information in the next major release.  

When the disk group failure policy is set to 'requestleave', the master node gracefully leaves the cluster if it loses access to all log/config copies of the disk group. If the master node loses access to the log/config copies of a shared disk group, Cluster Volume Manager (CVM) signals the CVMCluster agent, and Veritas Cluster Server (VCS) attempts to take the CVM group offline on the master node. When the CVM group is taken offline, the dependent service groups are also taken offline. If the dependent applications managed by VCS cannot be taken offline for some reason, the master node may not be able to leave the cluster gracefully.

Use the 'requestleave' disk group failure policy together with the 'local' detach policy. Use this combination of disk detach policy and disk group failure policy when the availability of the configuration change records is more important than the availability of nodes; in other words, when you prefer to let a node leave the cluster rather than risk having the disk group disabled cluster-wide because of a loss of access to all copies of the disk configuration.

Set the requestleave disk group failure policy as follows:

# vxdg -g mydg set dgfailpolicy=requestleave

Refer to the 5.0MP3 Veritas Volume Manager Administrator's Guide for more information about the disk group failure policy and the disk detach policy.

 
Permission issues while applying Oracle Bundled Patches

While applying an Oracle bundled patch to Oracle Clusterware (CRS),
at the step where you execute the prepatch.sh script as the oracle user,
the following error is displayed:

chmod: can't change <ORA_CRS_HOME>/css/admin/init.cssd: Not owner

Workaround:
Change the owner of the init.cssd file to the oracle user
before applying Oracle bundled patches to Oracle Clusterware:

$ chown <oracle user>:<oracle group> <ORA_CRS_HOME>/css/admin/init.cssd


Additions to Veritas Storage Foundation 5.0 MP3 Release Notes:

ODM support for Storage Foundation 5.0 MP3

The Veritas extension for ODM is now supported for Storage Foundation Standard 5.0MP3 and Storage Foundation Enterprise 5.0MP3.  

In order to use this functionality, you must install a hot fix patch for the Veritas licensing package at http://support.veritas.com/docs/316720 

You may also need to manually install the support packages. See Installing ODM for details.

Using ODM with Storage Foundation or Storage Foundation Cluster File System - AIX at http://support.veritas.com/docs/316754 


If the umask is 0077, the installation or upgrade can fail

Check the umask setting:
# umask
0077

Change umask to 0022:
# umask 0022
# umask
0022


Fire Drill with VVR not supported in SF Oracle RAC environments

SF Oracle RAC now supports Veritas Cluster Server (VCS) Fire Drill. Fire Drill enables organizations to validate the ability of business-critical applications to resume operations at hot standby data centers following critical outages and disasters. Fire Drill automates the creation of point-in-time snapshots and the testing of applications that use the replicated data in the event of a site-to-site application failover, often referred to as a High Availability Disaster Recovery (HA/DR) failover.
 
Note: All operations are managed within the VCS HA/DR framework through hardware replication technologies that use VCS agents. Replication using VVR is not supported for Fire Drill in an SF Oracle RAC environment. This note corrects a related documentation errata in the Veritas Storage Foundation for Oracle RAC Release Notes, which implies support for VVR.
 
The Fire Drill setup wizard allows automated configuration of a Fire Drill. The resultant Fire Drill configuration is also fully customizable. The Fire Drill wizard is invoked from the disaster recovery site using hardware replication by executing the script shipped with the hardware replication agents.


The Local detach policy support documented in the Storage Foundation Release Notes is not correct

The 5.0 MP3 Storage Foundation Release Notes included a section titled "Local detach policy now supported with Veritas Cluster Server clusters and with Dynamic Multipathing Active/Passive arrays." This section is not correct and should be ignored. The restrictions on using the local detach policy still apply for the 5.0 MP3 release.



5.0 Maintenance Pack 3 for AIX

5.0 Maintenance Pack 3 for AIX is available at https://fileconnect.symantec.com 

The Veritas Volume Manager 5.0 MP3 HF1 for AIX ( VxVM 5.0 MP3 HF1 ) SmartMove Hot Fix for AIX is available at http://entsupport.symantec.com/docs/311626 

The links to all of the Storage Foundation and High Availability 5.0 MP3 Product Family Documentation for AIX are available at http://entsupport.symantec.com/docs/307509 

The VIAS Veritas Installation Assessment Services (Formerly SF Prep Utility) installation and upgrade tool is available at https://vias.symantec.com/main.php 


Cluster Volume Manager (CVM) fail back behavior for non-Active/Active arrays

This describes the fail back behavior for non-Active/Active arrays in a CVM cluster. This behavior applies to A/P, A/PF, APG, A/A-A, and ALUA arrays.

When all of the Primary paths fail or are disabled in a non-Active/Active array in a CVM cluster, the cluster-wide failover is triggered. All hosts in the cluster start using the Secondary path to the array. When the Primary path is enabled, the hosts fail back to the Primary path.

However, suppose that one of the hosts in the cluster is shut down or disabled while the Primary path is disabled. If the Primary path is then enabled, it does not trigger failback. The remaining hosts in the cluster continue to use the Secondary path. When the disabled host is rebooted and rejoins the cluster, all of the hosts in the cluster will continue using the Secondary path. This is expected behavior.

If the disabled host is rebooted and rejoins the cluster before the Primary path is enabled, enabling the path does trigger the failback. In this case, all of the hosts in the cluster will fail back to the Primary path. [e1441769]


VEA service takes a long time to start

The VEA service takes a long time to start if the configuration contains a large number of LUNs (1403191).

In configurations with large numbers of LUNs to be discovered, the VEA service may take a long time to start. The long start-up time may cause the boot time to be longer than is allowed.

Workaround:
The solution is to start the VEA service in the background, so that the boot continues while the LUNs are discovered.

To start the VEA service in the background:

1. Edit the VEA start-up script.

For Storage Foundation 5.x, the script to edit is:
/opt/VRTSobc/pal33/bin/vxpalctrl

2. In the start_agent() function, add the following line:
     exit 0

   before the lines:
     max=10
     count=0
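For illustration, the edited portion of start_agent() might look like the following. This is a sketch only: the lines around the added 'exit 0' are assumptions based on the description above, and the actual contents of vxpalctrl may differ.

```shell
start_agent()
{
    # ... existing code that launches the agent in the background ...
    exit 0      # added line: return immediately so boot can continue
    max=10      # existing lines: the polling loop that waited for the
    count=0     # agent, now skipped because of the exit above
    # ...
}
```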


Etrack incident 1407255:
 
Issues regarding CX series arrays: 5.0MP3
CVM cluster resource does not start on CX, CX-3 and CX-4 series arrays (1407255)
On Storage Foundation for Oracle RAC configurations connected to CX, CX-3 and CX-4 series arrays, the cvm_clus resource does not start when the CVM Master node is rebooted. The rebooted node does not join the cluster.
Workaround:
1. Before rebooting the CVM Master node, disable the rc scripts for vxpal StorageAgent and agent_watchdog on both nodes to disable StorageAgent.
# mv /etc/rc.d/rc2.d/S75vxpal.StorageAgent /etc/rc.d/rc2.d/S75vxpal.StorageAgent.bak
# /opt/VRTSmh/bin/agent_watchdog_ctrl.sh stop
2. Reboot the node.
3. Start the StorageAgent after the rebooted node joins the cluster, using the following commands:
# /opt/VRTSobc/pal33/bin/vxpalctrl -a StorageAgent -c start
#  /opt/VRTSmh/bin/agent_watchdog_ctrl.sh start
The VEA GUI has restricted functionality until the StorageAgent is started. The GUI will not display any VxVM objects and will not allow any VxVM related operations.


HBA API

Performance issue with parallel HBA API calls when the dmp_monitor_fabric tunable is on.

Etrack Incident: 1314337

For Storage Foundation 5.0 MP3 on AIX, the dmp_monitor_fabric tunable is set to off by default. Symantec has observed performance degradation of SCSI commands when HBA API calls are made in parallel.
When the dmp_monitor_fabric tunable is off, Veritas Volume Manager (VxVM) prevents the vxesd daemon from making HBA API calls. We recommend leaving the dmp_monitor_fabric tunable set to off to avoid this performance degradation.
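To confirm the current setting on a host (the output format matches the vxdmpadm gettune example shown earlier for dmp_restore_interval):

```shell
# vxdmpadm gettune dmp_monitor_fabric
```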


Configuring DMP on AIX for SAN Booting:

http://support.veritas.com/docs/308527

Prep Utility
The Prep Utility is a web-based configuration tool for Storage Foundation designed to increase the efficiency of the installation and upgrade process for all versions of Veritas Storage Foundation 5.0 products (including SF, SFHA, SF Cluster File System, and the database versions). The tool automates the process of comparing the Storage Foundation system and hardware requirements against the customer's environment. More information is available at https://sfprep.symantec.com/main.php 

Documentation

Product documentation, man pages and error messages for this release are now available at http://sfdoccentral.symantec.com/index.html 


Veritas Cluster Server Agent Support Matrix

The Veritas Cluster Server Agents Support Matrix is available at http://www.symantec.com/business/products/agents_options.jsp?pcid=psc_disaster_recov&pvid=20_1 


Veritas Enterprise Administrator (VEA) 5.0 Maintenance Pack 1 Rolling Patch 1

Veritas Enterprise Administrator (VEA) 5.0 Maintenance Pack 1 Rolling Patch 1 (5.0 MP1 RP1) is now available at http://support.veritas.com/docs/297464 


Volume Manager 5.0 Maintenance Pack 1 Rolling Patch 5

Veritas Volume Manager 5.0 Maintenance Pack 1 Rolling Patch 5 for AIX is now available at http://support.veritas.com/docs/304122 


5.0 Related Issues:

The minimum system requirements for this release are:

      AIX 5.2 ML6 (legacy) or later
      AIX 5.3 TL4 with SP 4

SP 4 was not available at the time of this release. Veritas 5.0 products also operate on AIX 5.3 with Service Pack 3, but you must install an AIX interim fix. Download efixes iy80272 and iy83913 using these links:

ftp://service.software.ibm.com/aix/efixes/iy80272/

ftp://service.software.ibm.com/aix/efixes/iy83913/ 


Documentation Errata: 5.0 Veritas Cluster Server Agent Developers Guide

The following content replaces the description of the LogFileSize attribute in the Veritas Cluster Server Agent Developers Guide.

LogFileSize

Sets the size of an agent log file. Value must be specified in bytes. Minimum is 65536 bytes (64KB). Maximum is 134217728 bytes (128MB). Default is 33554432 bytes (32MB).

For example,

hatype -modify FileOnOff LogFileSize 2097152

Values specified below the minimum acceptable value are changed to 65536 bytes, and values specified above the maximum acceptable value are changed to 134217728 bytes. Therefore, out-of-range values displayed by the command:

hatype -display restype -attribute LogFileSize

are those entered with the -modify option, not the actual values. The LogFileSize attribute value cannot be overridden.
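The clamping behavior described above can be illustrated with a small shell function. This is an illustration only, not VCS code; the function name is hypothetical:

```shell
# clamp_logfilesize VALUE: print VALUE forced into the documented
# LogFileSize range (min 65536 bytes, max 134217728 bytes).
clamp_logfilesize() {
    v=$1
    min=65536        # 64 KB
    max=134217728    # 128 MB
    if [ "$v" -lt "$min" ]; then v=$min; fi
    if [ "$v" -gt "$max" ]; then v=$max; fi
    echo "$v"
}
```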


Veritas (tm) Cluster Server in conjunction with Fully Qualified Domain Names:
Incident: 625310

LLT does not start when Fully Qualified Domain Names (FQDNs) are used as system names during installation and configuration with the CPI installer. This is because the system names in the /etc/llttab and /etc/llthosts files are not in a consistent format.

Mitigation:

The workaround is to manually edit the /etc/llttab file and strip the domain name from the FQDN of the system.

This issue applies to all platforms.


Veritas Cluster Server and Media Speed
Incident 615926

When configuring LLT private links for Veritas (tm) Cluster Server (VCS), the CPI installer checks the media speed settings of the NIC cards to make sure that all the NICs have the same speed. If the NICs do not have the same media speed, the installer prints a warning and prompts the user to continue with this message:

The Private NICs do not have the same Media Speed. It is recommended
that the Media Speed should be same for all the private NICs. Without
this, LLT may not function properly. Consult your Operating System
manual for information on how to set the Media Speed. Do you want to
continue with the installation? [y,n,q,b] (n)

At this point, if the user enters "b" to go back to the previous screen, the installer incorrectly terminates.

Mitigation:

Do not enter a "b" at this point in the installation.


NFS client drops I/O during hagrp -switch
Incident: 525924

When a service group with an NFS server resource switches from one system to another, client I/O operations also switch. During this process, I/O can sometimes be dropped.
Workaround: Mount the NFS directory again and restart I/O.


Configuring of security on single node cluster (VCS) fails:
Incident: 622706

The installvcs -security command fails with the following message:
"Veritas Cluster Server is not configured on <system_name>"


CMC Product Issue:
CMC_SERVICES domain does not get removed on uninstallation.
Uninstalling the Cluster Management Console management server does not remove the CMC_SERVICES domain.
Workaround: Remove the domain manually:
1. Verify the domain exists:
vssat showpd --pdrtype ab --domain CMC_SERVICES
2. Remove all principals in the domain:
vssat deleteprpl --pdrtype ab --domain CMC_SERVICES --prplname [principal name]
3. Delete the domain:
vssat deletepd --pdrtype ab --domain CMC_SERVICES@[hostname, as shown by showpd]


Cluster Management Console (VCS) does not display localized logs:
Incident: 620529
If you install language packs on the management server and on VCS 5.0 cluster nodes, the Cluster Management Console does not initially show localized logs.
To resolve this issue:  
1. On each node of the cluster, create the following symbolic link:
From /opt/VRTS/messages/ja to /opt/VRTSvcs/messages/ja
2. If the cluster is connected to the management server, disconnect and then reconnect the cluster.
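The symbolic link in step 1 can be created as follows (a sketch based on the paths above; verify the link direction for your installation before running it):

```shell
# ln -s /opt/VRTSvcs/messages/ja /opt/VRTS/messages/ja
```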


Storage Foundation Cluster File System (SFCFS) 5.0 Release Notes Corrections:

1. In the Veritas Storage Foundation Cluster File System Release Notes, the "Support for 32 cluster nodes" feature in the    "New features" section should instead have the following header and text:

Support for SFCFS configurations larger than 16 nodes

SFCFS 5.0 is capable of supporting cluster file systems with up to 32 nodes.  Symantec has tested and qualified SFCFS 5.0 cluster file system configurations of up to 16 nodes at product release time.


2. In the Veritas Storage Foundation Cluster File System Release Notes, the "Support for SFCFS configurations larger than 16 nodes" issue in the "Known issues" section should be ignored. This is not a known issue.

3. In the Veritas Storage Foundation Cluster File System Release Notes, the "Version 4 disk layout" feature in the "No longer supported and future support issues" should have the following first sentence instead of the existing first sentence:

VxFS disk layout Version 4 is not supported for cluster mounts in SFCFS 5.0.


Upgrading to Veritas Cluster Server 5.0: Changes to Veritas Cluster Server Agents

See the following link for more information:

http://support.veritas.com/docs/284340


Coordinator disk issue in Veritas Storage Foundation and High Availability Solutions 5.0:

On a two-node cluster, the following sequence of events causes Node A to panic:

- The primary path of the coordinator disk on Node A fails
- Node B is rebooted

Workaround:

Run vxdisk scandisks after the primary path to the coordinator disk fails.


Coordinator disk issue with DS4K series arrays in Veritas Storage Foundation and High Availability Solutions 5.0:

On a two-node cluster connected to a DS4K series array (used in conjunction with the RDAC driver), the following sequence of events causes Node A to panic:

- The primary path of the coordinator disk on Node A fails
- Node B is rebooted

Workaround:

None.  


Regarding the Cluster File System 5.0 Release Notes:

The following Known Issue is missing from the Release Notes:

If a large message of the day or other notification is present, the installation scripts might be unable to complete OS detection and therefore fail.

Workaround: Minimize or remove the large message of the day or other notification that is present on your system and try the installation script again.
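Before rerunning the installer, it may help to check whether the message of the day is unusually large. This is a hypothetical helper; the 1 KB threshold is an assumption, not a documented limit.

```shell
# Succeed when the given message-of-the-day file exists and is larger
# than 1024 bytes (assumed threshold), i.e. worth trimming before install.
motd_too_large() {
    f="$1"
    [ -f "$f" ] && [ "$(wc -c < "$f")" -gt 1024 ]
}

# usage: motd_too_large /etc/motd && echo "consider trimming /etc/motd"
```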

Regarding the Cluster File System 5.0 System Administrator's Guide:

Step 9 of "Setting up the disk group for coordinator disks" on page 46 is incorrect.


It should read as follows:

9) Add the other two disks to the disk group:

      # vxdg -g vxfencoorddg set coordinator=off
      # vxdg -g vxfencoorddg adddisk EMCO_16
      # vxdg -g vxfencoorddg adddisk EMCO_17
      # vxdg -g vxfencoorddg set coordinator=on


Regarding the 5.0 Veritas Storage Foundation for Oracle Administrator's Guide:

Step 3 of the procedure "To verify that Oracle Disk Manager is running" on page 128 is incorrect.

It should read as follows:

3)  Verify that the Oracle Disk Manager is loaded:

   You can use the genkld or the genkex commands:

      # genkld |grep odm
      or
      # genkex |grep odm
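The grep in step 3 can be wrapped as a small filter so the check can be reused in scripts. This is a sketch; matching on the substring "odm" is an assumption about the kernel-extension name as shown in the commands above.

```shell
# Succeed when a kernel-extension listing (piped in from genkld or
# genkex on AIX) mentions an ODM module.
odm_in_listing() {
    grep -q 'odm'
}

# usage: genkld | odm_in_listing && echo "ODM is loaded"
```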



Daylight Saving Time Issues:

For information about Daylight Saving Time issues, refer to the following technote: http://support.veritas.com/docs/286461

vxdisk scandisks hangs. Plex gets disabled
Incident: 924680

A problem has been identified in 5.0 MP1 related to the DS4000 disk array. When there is a high I/O load to the array, a device inquiry may fail, which causes the dmpnode to be disabled. When the dmpnode is disabled, all I/O to it hangs.



Importing EMC BCV devices:

The following procedure can be used to import a cloned disk (BCV device) from an EMC Symmetrix array.

To import an EMC BCV device

1. Verify that the cloned disk, EMC0_27, is in the "error udid_mismatch" state:

      # vxdisk -o alldgs list
      DEVICE          TYPE         DISK    GROUP  STATUS
      EMC0_1          auto:cdsdisk EMC0_1  mydg   online
      EMC0_27         auto         -       -      error udid_mismatch

  In this example, the device EMC0_27 is a clone of EMC0_1.

2. Split the BCV device that corresponds to EMC0_27 from the disk group mydg:

      # /usr/symcli/bin/symmir -g mydg split DEV001

  In this example, the BCV device corresponding to EMC0_27 is DEV001.

3. Update the information that VxVM holds about the device:

      # vxdisk scandisks

4. Check that the cloned disk is now in the "online udid_mismatch" state:

      # vxdisk -o alldgs list
      DEVICE         TYPE         DISK    GROUP  STATUS
      EMC0_1         auto:cdsdisk EMC0_1  mydg   online
      EMC0_27        auto:cdsdisk -       -      online udid_mismatch

5. Import the cloned disk into the new disk group newdg, and update the disk's UDID:

      # vxdg -n newdg -o useclonedev=on -o updateid import mydg

6. Check that the state of the cloned disk is now shown as "online clone_disk":

      # vxdisk -o alldgs list
      DEVICE         TYPE         DISK    GROUP  STATUS
      EMC0_1         auto:cdsdisk EMC0_1  mydg   online
      EMC0_27        auto:cdsdisk EMC0_1  newdg  online clone_disk
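When several clones are presented at once, step 1 can be automated by filtering the listing for mismatched devices. This is a sketch that parses the column layout shown in the listings above; it does not invoke vxdisk itself.

```shell
# Print the DEVICE column for every line of "vxdisk -o alldgs list"
# output whose STATUS contains "udid_mismatch" (header row is skipped).
list_udid_mismatch() {
    awk 'NR > 1 && /udid_mismatch/ {print $1}'
}

# usage: vxdisk -o alldgs list | list_udid_mismatch
```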



VRTSvsvc package is Obsolete and can be Removed from 5.0 or 5.0MP1
Incident: 1099152 and 1087985

The VRTSvsvc component can cause the vxpal.StorageAgent to core dump. This will prevent the Storage Foundation Java GUI and Web GUI from working for the specific managed host.

Use this command to remove the VRTSvsvc package:
 
# installp -u VRTSvsvc



5.0MP1 Related Issue regarding patch level (AIX 5.3):

The Perl script (CPI patch) below is required if the technology level is at or above TL06. The CPI patch can be downloaded from the related documents section below. (290667)

NOTE (AIX 5.2): The CPI patch level issue is also found on AIX 5.2 systems running a technology level below TL8. Therefore, the CPI patch in the related documents section below is also needed with AIX 5.2. (290667)

Installation Procedure for the CPI patch:

1. Download the CPI patch.
2. Open it with WinZip.
3. Copy the attached file to /tmp/5.0MP1-fix.pl.
4. Run ./installmp -require /tmp/5.0MP1-fix.pl


VxVM support for SAS devices (POWER6-based system)

The techfile (290004) below provides support for SAS devices on the AIX platform. POWER6-based systems employ internal SAS drives, which require a patch to VxVM; if you are using a POWER6-based system, this patch is required. The patch is available for VxVM 5.0MP1 through the Veritas Storage Foundation 5.0MP1RP1HF1 patch and can be downloaded from the related documents section below.



NOTE: Rolling patches (RP2) over 5.0MP1 are available for Volume Manager and File System. These patches can be downloaded from the related documents section below. The fixed lists are included with each patch.

1. AIX_Storage_Foundation_5.0MP1RP2-VM.tar_290958.gz

2. AIX_Storage_Foundation_5.0MP1RP2-FS.tar_290978.gz (patch no longer available for download)

NOTE: A hotfix (292921) fixes a vxfenconfig issue in the VCS 5.0MP1 release. The hotfix can be downloaded from the related documents section below. The README is included with the hotfix.

Issue regarding "InstallShield MultiPlatform" (ISMP), an installer used by IBM WebSphere:

Problem Description:

This is a bug in ISMP (InstallShield MultiPlatform), an installer used by WebSphere. ISMP did not recognize VxVM/VxFS when calculating the size of the volume. ISMP fixed this in its 5.0.2 release; however, IBM did not pick up that release until WebSphere 6.0. As a result, WebSphere Application Server (WAS) 6.0 installs on VxFS, while prior versions continue to fail to install on VxFS. A hotfix from ISMP makes prior versions work as well.


The workaround is:

(1) Copy the attached file (under the Related Documents section) to a temporary directory:
    294985  ISMPfixforWAS5_294985.tar
(2) export AIX_LIB_LOC=/<absolute path to the file>
(3) Run the install again.


Storage Foundation Licensing:

Multiple product versions per server: Licensees of Storage Foundation/High Availability (SFHA) products may run different versions of SFHA products on different virtual machines in the same physical server/CPU, provided that the versions of the SFHA products are still supported by Symantec Corporation.


SF Basic Virtual Machine Entitlements: Licensees of Storage Foundation Basic products may use a maximum of 4 volumes and 4 file systems per virtual machine. Storage Foundation Basic can only be used on a server that has a maximum of 2 CPUs.
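The SF Basic limits above can be expressed as a trivial check. This is a toy sketch; how you actually count volumes and file systems on a given virtual machine is left to the administrator.

```shell
# Succeed when the given per-VM counts stay within the SF Basic
# entitlements quoted above: at most 4 volumes and 4 file systems.
sf_basic_within_limits() {
    vols="$1"; fss="$2"
    [ "$vols" -le 4 ] && [ "$fss" -le 4 ]
}

# usage: sf_basic_within_limits 3 2 && echo "within SF Basic entitlements"
```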

Issue regarding the Release Notes for Veritas SF & HA 5.0MP1 products on AIX:

The minimum requirement for DMP support of virtual SCSI devices is AIX 5.3 TL6, not AIX 5.3 TL5 with SP1 which is currently documented in the 5.0MP1 product release notes.


Regarding the release of 5.0MP1 RU 1 for AIX 6.1 support:

Please review the following technote for detailed information before upgrading to the 5.0MP1 RU 1 release:

http://support.veritas.com/docs/300577

Issue regarding 5.0MP1 RU 1 for AIX:

CPI ERROR V-9-0-0 Cannot install Veritas Storage Foundation on system <sys>. Please upgrade to 6.1 SP2.

Symantec is providing a new Perl module as a workaround to this installation problem:

1. Download the patch from http://support.veritas.com/docs/304626.
2. Open it with WinZip.





Supplemental Materials

Etrack 615926: Installer terminates when "b" option is chosen while configuring SFCFS

Etrack 625310: AxRT installsfcfs llt does not like to be given long names for install

Etrack 525924: NFS client drops I/O during hagrp -switch

Etrack 622706: Configuring of security on single node cluster fails

Etrack 620529: [DOC] Need to document the workaround to show localized logs in CMC when connected to a 5.0 cluster

Etrack 865150: CVM reboot resources fault

Etrack 924680: vxdisk scandisks hangs; plex gets disabled

Etrack 1099152: VRTSvsvc needs to be removed through CPI install

Etrack 1087985: vxpal/vxsmf dumps core on reboot

Etrack 1111656: CVM fails to start on one node when both cluster nodes rebooted simultaneously

Etrack 1314337: (NTAP) AIX hosts take 45 minutes to reboot - AxRT SFOR 5.0mp3 AIX 5.3, NetApp FAS3020c array, SSI mode

Etrack 1407255: cvm_clus resource does not start after CVM master node is rebooted

Etrack 1523052: High CPU usage in vol_kmsg_receiver causes GAB to panic

Etrack 1395863: EVA8000 cfsmount resources offline in long duration array side port failure with restored



Legacy ID: 282024

