Late Breaking News (LBN) - Latest additions to the Release Notes for the Veritas 4.1 Solaris product line

Article:TECH35261  |  Created: 2010-01-07  |  Updated: 2010-08-09  |  Article URL http://www.symantec.com/docs/TECH35261
Issue

Late Breaking News (LBN) - Latest additions to the Release Notes for the Veritas 4.1 Solaris product line


Solution




To locate the most current product patch releases, including Maintenance Packs, Rolling Patches, and Hot Fixes, visit https://vos.symantec.com/patch/matrix

VOS (Veritas Operations Services) Portal:  https://vos.symantec.com

VOS Portal Contains:

      - Risk Assessments
      - Installation Assessment Services (Installation and Upgrade preparation)
      - VOS Searchability (Error Code Lookup, Patches, Documentation, Systems, Reports)
      - Detailed Reports (Product and License Usage)
      - Notification Signup (VOS Notification Widget)


Documentation

Storage Foundation and High Availability 4.1 MP2 Product Family Documentation: http://entsupport.symantec.com/docs/307512 

Storage Foundation and High Availability 4.1 MP1 Product Family Documentation: http://entsupport.symantec.com/docs/307511 

Storage Foundation and High Availability 4.1 Product Family Documentation: http://entsupport.symantec.com/docs/307510 

Product documentation, man pages, and error messages for this release are available at vos.symantec.com/documents


Downloads

4.1 Maintenance Pack 2 is available at http://support.veritas.com/docs/287690   

More patches are available on Patch Central https://vias.symantec.com/labs/patch 


Tools

VIAS (Veritas Installation Assessment Service)  https://vias.symantec.com  
Health Check https://vias.symantec.com/labs/vhcs 
Error Code Lookup https://vias.symantec.com/labs/vels 
VIMS (Veritas Inventory Management Service) https://vias.symantec.com/labs/vims 
Veritas Operations Services (VOS) Labs https://vias.symantec.com/labs 
 

Daylight Saving Time Issues

For information about Daylight Saving Time issues, refer to this
technote: http://support.veritas.com/docs/286461 


CSSD agent mandatory for SF Oracle RAC installations

You must configure the CSSD agent after installing Oracle Clusterware. The CSSD agent starts, stops, and monitors Oracle Clusterware. It ensures that the OCR, the voting disk, and the private IP address resources required by Oracle Clusterware are online before Oracle Clusterware starts. Using the CSSD agent with SF Oracle RAC installations thus ensures adequate handling of inter-dependencies and prevents premature startup of Oracle Clusterware.

During system startup, the Oracle Clusterware init scripts invoke the clsinfo script provided by Veritas software. The clsinfo script ensures that the OCR, the voting disk, and the private IP address resources are online before the cssd resource comes online. After the underlying resources come online, the CSSD agent starts Oracle Clusterware.

During system shutdown, the agent stops Oracle Clusterware before the OCR and voting disk resources are taken offline. This ensures that Oracle Clusterware does not panic the nodes in the cluster due to unavailability of the required resources.
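
As a quick check (a hedged illustration, assuming the resource is named cssd as described above), you can verify the resource state and its dependencies from the VCS command line:

   # hares -state cssd
   # hares -dep cssd

The first command shows whether the cssd resource is online on each node; the second lists the OCR, voting disk, and private IP resources that it requires.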

For all products that use Veritas Volume Manager, the vxdiskadm utility fails to replace a failed or removed non-root disk (1434779)

 The vxdiskadm utility fails to replace a failed or removed disk using the options:  

  4      Remove a disk for replacement
  5      Replace a failed or removed disk

This issue is specific to the replacement of a non-root disk.
 
An error message similar to the following is displayed:

  VxVM  ERROR V-5-2-281
  Replacement of disk rootdg02 in group rootdg with device c1t1d0
   VxVM vxdg ERROR V-5-1-559 Disk rootdg02: Name is already used

   Replace a different disk? [y,n,q,?] (default: n)

Workaround:

  Replace the disk using the following command:

   # vxdg -g $repldgname -k adddisk $repldmname=$repldaname

   For example:

   # vxdg -g rootdg -k adddisk rootdg02=c1t1d0
 

This issue is seen in the following releases:

   4.1 MP2 RP4                     (plus 4.1MP2RP3HF18 and above)
 
 
The issue will be fixed in the following releases:

  4.1 MP2 RP5                    
   

Updates to Veritas Storage Foundation 4.1 for Oracle RAC Release Notes

      The Oracle patch requirements information on page 6 of the Veritas Storage Foundation 4.1 for Oracle RAC Release Notes is incorrect.

      The correct Oracle patch requirements are as follows.

        Oracle RAC 10g Release 2 (10.2.0.1): Patch 4435949, Patch 4637591, Patch 5082958
        Oracle RAC 10g Release 2 (10.2.0.2): Patch 4637591



Cluster Volume Manager (CVM) fail back behavior for non-Active/Active arrays

This section describes the failback behavior for non-Active/Active arrays in a CVM cluster. This behavior applies to A/P, A/PF, A/PG, A/A-A, and ALUA arrays.

When all of the Primary paths fail or are disabled in a non-Active/Active array in a CVM cluster, cluster-wide failover is triggered. All hosts in the cluster start using the Secondary path to the array. When the Primary path is enabled, the hosts fail back to the Primary path.

However, suppose that one of the hosts in the cluster is shut down or disabled while the Primary path is disabled. If the Primary path is then enabled, it does not trigger failback. The remaining hosts in the cluster continue to use the Secondary path. When the disabled host is rebooted and rejoins the cluster, all of the hosts in the cluster will continue using the Secondary path. This is expected behavior.

If the disabled host is rebooted and rejoins the cluster before the Primary path is enabled, enabling the path does trigger the failback. In this case, all of the hosts in the cluster will fail back to the Primary path. [e1441769]
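
To observe the path states from any node (a sketch; the controller name c2 is illustrative), use the DMP administrative commands:

   # vxdmpadm listctlr all
   # vxdmpadm getsubpaths ctlr=c2

The output shows whether each path is enabled or disabled, and whether it is a Primary or Secondary path.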


Storage Foundation VEA 4.1 MP2 RP4 for Solaris SPARC patches are now available

Storage Foundation VEA 4.1 MP2 RP4 Patch for Solaris SPARC - vea-sol_sparc-4.1MP2RP4 http://support.veritas.com/docs/315459 

Storage Foundation VEA 4.1 MP2 RP4 Patch for Solaris SPARC - habanero-sol_sparc-4.1MP2RP4 http://support.veritas.com/docs/315465 



Below are the latest additions to the following Release Notes, starting with the latest version of the 4.1 product line at the top:

4.1 Maintenance Pack 2 (MP2) Related Issues:

For Veritas Storage Foundation for Oracle RAC 4.1 MP2 Release Notes

Determining the right version of "libskgxp"

The versions of the libskgxp libraries may change if Oracle re-releases them.  To help determine the libskgxp library version for a particular Oracle release, use the ipc_version_chk utility included in the dbac patch.

Run the ipc_version_chk utility on the Oracle libskgxp library as follows:

For Oracle 9iR2

      64 bit:
      $ /opt/VRTSvcs/rac/bin/ipc_version_chk_shared_64 \
          $ORACLE_HOME/lib/libskgxp9.so

      32 bit:
      $ /opt/VRTSvcs/rac/bin/ipc_version_chk_shared_32 \
            $ORACLE_HOME/lib/libskgxp9.so

For Oracle 10g

      64 bit:
      $ /opt/VRTSvcs/rac/bin/ipc_version_chk_shared_64 \
            $ORACLE_HOME/lib/libskgxp10.so

      32 bit:
      $ /opt/VRTSvcs/rac/bin/ipc_version_chk_shared_32 \
            $ORACLE_HOME/lib/libskgxp10.so

The utility will display an IPC version number, such as 22, 24, or 25, which corresponds to a libskgxp library version. Refer to the table below for the corresponding library version.

IPC version             libskgxp
     22                      /opt/VRTSvcs/rac/lib/libskgxp9_32.so and /opt/VRTSvcs/rac/lib/libskgxp9_64.so
     23                      /opt/VRTSvcs/rac/lib/libskgxp10_32.so and /opt/VRTSvcs/rac/lib/libskgxp10_64.so
     24                      /opt/VRTSvcs/rac/lib/libskgxp9207_32.so and /opt/VRTSvcs/rac/lib/libskgxp9207_64.so
     25                      /opt/VRTSvcs/rac/lib/libskgxp102_32.so and /opt/VRTSvcs/rac/lib/libskgxp102_64.so
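
For example, a hedged illustration of checking a 64-bit Oracle 10g installation (the exact output format may differ):

      $ /opt/VRTSvcs/rac/bin/ipc_version_chk_shared_64 \
            $ORACLE_HOME/lib/libskgxp10.so
      25

An IPC version of 25 maps to /opt/VRTSvcs/rac/lib/libskgxp102_32.so and /opt/VRTSvcs/rac/lib/libskgxp102_64.so in the table above.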
_________________


There was an error in the "Copying the IPC and VCSMM Libraries" section, in step 4 on page 15: "libsngxp9_64.so" should be "libskgxp9_64.so".

Here is the corrected step:

4. Copy the IPC libraries into place:

    If your version is 64-bit:
       For Oracle versions 9.2.0.1 through 9.2.0.6:

           $ cp /opt/VRTSvcs/rac/lib/libskgxp9_64.so \
            $ORACLE_HOME/lib/libskgxp9.so


Added support for Oracle 9.2.0.8:

 
4.1 MP2 supports Oracle 9.2.0.8. The Veritas IPC library needed for 9.2.0.8 is the same as the one for Oracle 9.2.0.7. Below is the procedural update to step 4 on page 15 of the Release Notes.

4. Copy the IPC libraries into place:

 If your version is 64-bit:
 For Oracle versions 9.2.0.1 through 9.2.0.6:
   $ cp /opt/VRTSvcs/rac/lib/libskgxp9_64.so \
   $ORACLE_HOME/lib/libskgxp9.so

 For Oracle versions 9.2.0.7 through 9.2.0.8:
   $ cp /opt/VRTSvcs/rac/lib/libskgxp9207_64.so \
   $ORACLE_HOME/lib/libskgxp9.so

 If your version is 32-bit:
 For Oracle versions 9.2.0.1 through 9.2.0.6:
   $ cp /opt/VRTSvcs/rac/lib/libskgxp9_32.so \
   $ORACLE_HOME/lib/libskgxp9.so

 For Oracle versions 9.2.0.7 through 9.2.0.8:
   $ cp /opt/VRTSvcs/rac/lib/libskgxp9207_32.so \
   $ORACLE_HOME/lib/libskgxp9.so


For Veritas Storage Foundation 4.1 MP2 Release Notes

Addition to the Open Issues section (VERITAS Storage Foundation Open Issues):


124354-02 is a 4.1MP1 Rolling Patch required to support the Sun Cluster 3.2 release. VM 4.1MP2 patch 117080-07 includes all the fixes delivered in patch 124354-02. In a Sun Cluster configuration, once the systems have been upgraded to 4.1MP2, do not install Patch 124354-02 on top of Patch 117080-07. Reinstalling Patch 124354-02 will overwrite some 4.1MP2 binaries.
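
To verify which of these patches is installed before upgrading, a minimal check using the standard Solaris patch listing:

   # showrev -p | grep 117080
   # showrev -p | grep 124354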

-----------------------------------------------------------------------

Problem description:
vxconfigd fails to start due to an unresolved symbol reference:
vxconfigd: fatal: relocation error: file /sbin/vxconfigd: symbol ddl_get_full_disk_dev_for_devno

This issue occurs in configurations that contain EMC arrays. It is caused when a patch installation on Solaris 10 fails to copy the VM shared library "libvxscsi.so" to the /etc/vx/slib directory. During boot processing in this configuration with the root disk encapsulated, vxconfigd needs access to the library, which may not be available early in the boot process.


Workaround:

After patchadd completes on a Solaris 10 configuration, the following shared libraries must be copied to the /etc/vx/slib directory to ensure they are available during early boot processing:

# cp /usr/lib/libvxscsi.so /etc/vx/slib/
# cp /usr/lib/libvxdiscovery.so /etc/vx/slib/


-----------------------------------------------------------------------

1. In a Cluster Volume Manager (CVM) environment with the DS4800 array (A/P type), it has been observed that with heavy I/O loads and all primary paths disabled, failover does not always occur, resulting in an I/O failure condition. During path failover to a secondary path, SCSI3 PGR errors were seen, which prevent the registration keys from being set up successfully. Due to this error, the secondary path cannot be configured for I/O operations. If you are manually disabling primary paths, always leave one primary path online to prevent an I/O failure condition. (Etrack #913470)

2. With all CX array models, when multiple primary paths are disabled using the vxdmpadm command, failover succeeds. However, when the primary paths are enabled using the vxdmpadm command, the failback may not succeed, or an I/O hang condition may be observed.

The following configuration and steps can prevent failback and cause I/O to hang:

- The system has multiple primary and secondary paths

- All the primary paths are disabled one-by-one using the vxdmpadm disable ctlr=<> command

- At this time, failover occurs properly and I/O continues on the secondary paths

- Enable the primary paths one-by-one using the vxdmpadm enable ctlr=<> command

In some cases, depending on which primary path is enabled first, the I/O will hang and failback will not occur.

Workaround:

When re-enabling multiple primary paths, enable all of the paths in quick succession, one after the other, without any delays.

Note: If the additional paths are not enabled within 1-2 minutes, failback may not succeed, and a reboot will be required if an I/O hang occurs. (Etrack #937511)
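
For example, on a system whose primary paths run through controllers c2 and c3 (illustrative controller names), re-enable them back to back:

   # vxdmpadm enable ctlr=c2
   # vxdmpadm enable ctlr=c3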

-------------------------------------------------------------------------------------------------------------------------------
4.1 Maintenance Pack 1 (MP1) Related Issues:


For all Veritas 4.1 MP1 products

Solaris 10 OS patch required
Multiple patching and packaging issues require a Solaris 10 operating system patch to install and remove Veritas Storage Foundation and High Availability products in the 4.1 Maintenance Pack 1 release. Before installing any Veritas 4.1 Maintenance Pack 1 product, you must install Solaris patch 119254-09. This patch repairs all of the installation problems; however, some Veritas patches cannot be completely removed using the patchrm command.
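
To confirm that the patch is installed before you begin, a minimal check:

   # showrev -p | grep 119254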

Sun is aware of the patchrm problem and is working on a resolution.

See Sun Bug ID 6337009 for the latest information.
http://sunsolve.sun.com

For Veritas 4.1 MP1 Release Notes

The following issue should have been documented in 4.1 MP1 Release Notes:
Abstract: A cluster-wide hang is possible during node leave reconfiguration in a Cluster Volume Manager (CVM) and Cluster File System (CFS) setup, due to a CVM defect (incident 333917).
Problem description:
A leaving node triggers recovery operations in the cluster. As a result, CFS, through CVM, sends a PING message to all the nodes in the cluster and expects an acknowledgement from the member nodes. A cluster hang can occur if CVM skips updating a bit in one of its message queues for the node that has left the cluster, causing CVM to wait indefinitely for a reply from the leaver node that will never arrive. The hang is not expected in every node leave reconfiguration; it depends on the presence of a message in a particular message queue, which can happen if there are any failures sending a CVM message during the leaver node processing.
Workaround: No known workaround. A reboot of all the cluster nodes is required to restore normalcy.
Resolution: Install Veritas Storage Foundation 4.1 MP1. Check whether Veritas Volume Manager Patch 4.1MP1 (Patch ID 117080-04) or later, which contains the fix for this issue, has been applied:
# showrev -p | grep 117080

For Veritas Storage Foundation 4.1 MP1 for Oracle RAC Release Notes


This maintenance release introduces support for Storage Foundation for Oracle RAC (SFRAC) to operate on Oracle 10gR2 databases. For more information, refer to this TechPDF:
http://support.veritas.com/docs/280186

Addition to the Software Fixes and Enhancements section:

Etrack incident: 319114

Previously, Cluster Ready Service (CRS) sometimes did not work on a Japanese OS. This issue has been fixed. In the VERITAS Storage Foundation for Oracle RAC 4.1 MP1 Release Notes on page 65, a known issue is included that states that CRS does not work on a Japanese OS. This statement is incorrect. CRS works on a Japanese OS in 4.1 MP1 and later releases.

Etrack incident: 764900
Oracle Clusterware can be manually started and stopped in Oracle 10.1.0.4 and above and in Oracle 10gR2. This change has been incorporated in the CSSD agent for Veritas Storage Foundation 4.1 MP1 for Oracle RAC.

Known Issue: I/O Fencing for Veritas Storage Foundation for Oracle RAC and Veritas Cluster Server

Prior to version 5.0, I/O fencing for Veritas Cluster Server (VCS) and SFRAC did not support DMP devices. Do not use DMP device names when using vxfentsthdw. Only raw device names are supported in 4.x releases.
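
For example, when vxfentsthdw prompts for disks, supply raw OS device paths rather than DMP metanode paths (the device names below are illustrative):

   Use:        /dev/rdsk/c1t1d0s2
   Do not use: /dev/vx/rdmp/c1t1d0s2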


Correction to the placement of the "Installing Language Packages (Optional)" section:

Currently, the "Installing Language Packages (Optional)" procedure on pages 18, 27, and 35 follows the "Shutting Down and Restarting Nodes" procedure on pages 17, 26, and 34. If you install language packages, you must perform this procedure before shutting down and restarting nodes.


Correction to the "Copying IPC and VCSMM Libraries" section:

There is a misprint with the source and destination of VCSMM and IPC libraries on the following pages in the Release Notes. Currently, step 2 in the section "Copying IPC and VCSMM Libraries" on page 19 is incorrect. It should read as:

2. Copy files into place:

    To determine if a node is 32- or 64-bit, enter:
      # isainfo -kv

 

    If your version is 64-bit:
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_64.so /opt/ORCLcluster/lib/libskgxn2.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_64.so $ORACLE_HOME/lib/libskgxn9.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxp92_64.so $ORACLE_HOME/lib/libskgxp9.so

 

    If your version is 32-bit:
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_32.so /opt/ORCLcluster/lib/libskgxn2.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_32.so $ORACLE_HOME/lib/libskgxn9.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxp92_32.so $ORACLE_HOME/lib/libskgxp9.so



Step 3 in the "Post-Upgrade Procedures" section on page 28 is incorrect. It should read as:

3. Copy the IPC and VCSMM libraries.

    To determine if a node is 32- or 64-bit, type:
      # isainfo -kv

 

    If your version is 64-bit, type:
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_64.so /opt/ORCLcluster/lib/libskgxn2.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxp10_64.so $ORACLE_HOME/lib/libskgxp10.so

 

    If your version is 32-bit, type:
      $ cp /opt/VRTSvcs/rac/lib/libskgxn2_32.so /opt/ORCLcluster/lib/libskgxn2.so
      $ cp /opt/VRTSvcs/rac/lib/libskgxp10_32.so $ORACLE_HOME/lib/libskgxp10.so


For Veritas Storage Foundation 4.1 MP1 Release Notes
Addition to the VERITAS File System Fixed Issues section for 4.1MP1 and beyond:

Direct mount a VxFS file system in a non-global zone

To direct mount a VxFS file system in a non-global zone, the directory to mount must be in the non-global zone.

For example:
To mount the directory "mnt1" in "zone1", the mount path might be "/zonedir/zone1/root/mnt1".

The mount must occur from the global zone.
     # mount -F vxfs /dev/vx/dsk/dg/vol1 /zonedir/zone1/root/mnt1

VERITAS Volume Manager Software Issues section:

Two documents contain incorrect information in the section "CVM Tunables for Sun Cluster": the Veritas Volume Manager README.117080-04 (Section 3) and the Storage Foundation Release Notes for Solaris 4.1 MP1 (page 37). This is the corrected version of the section:

CVM Tunables for Sun Cluster

The vol_kmsg_send_period and vol_kmsg_resend_period tunables are measured in seconds. To accommodate better reconfiguration times for vxclust step 4, finer resolution of the send and resend periods, in microseconds, is possible using two new tunables: vol_kmsg_send_period_usec and vol_kmsg_resend_period_usec.

Their default settings are:

       vol_kmsg_send_period_usec   (1000000 microseconds)
       vol_kmsg_resend_period_usec  (6000000 microseconds)

If only one set of the granularity tunables is specified in the vxio configuration file /kernel/drv/vxio.conf, that set is used. If both are specified, a warning is reported and the microsecond tunables are used. If tuning is needed, set both the send and resend period tunables at the same granularity, either in seconds or in microseconds.
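
For example, a sketch of /kernel/drv/vxio.conf entries that set both periods at microsecond granularity to the default values listed above:

    vol_kmsg_send_period_usec=1000000;
    vol_kmsg_resend_period_usec=6000000;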

To modify a tunable:

1. Log in as root
2. Go to the /kernel/drv directory:

    # cd /kernel/drv

3. Add the following line to the vxio.conf file:

    vol_kmsg_resend_period_usec=1000000;

4. Reboot the system
Note: It is important to include the semicolon (;) in step 3.
[419371, 419372]


For Veritas Cluster Server 4.1 MP1 Release Notes

The chart under Supported Enterprise Agents on page 5 of the Cluster Server 4.1 MP1 Release Notes shows support for Oracle 10g. Explicitly stated, the Cluster Server Oracle agent version 4.1 supports Oracle 10g R1 and 10g R2.


-------------------------------------------------------------------------------------------------------------------------------

4.1 Related Issues:


Veritas Cluster Server Installation Guide for 4.1 Documentation correction

LLT supports Cluster ID numbers between 0 and 65535

Cluster ID is a unique number that the LLT module uses to identify a cluster.
The LLT module for VCS 4.1 and later supports an integer value between 0 and 65535 for the cluster ID. The LLT configuration file /etc/llttab contains this value.

The Veritas Cluster Server Installation Guide for 4.1 mentions that the cluster ID is an integer between 0 and 255, but LLT actually supports an integer value up to 65535.

Workaround:
Though the installvcs program does not support a cluster ID value greater than 255, you can manually change the cluster ID value in the /etc/llttab file.
For example, if your /etc/llttab file content is as follows, you can change the line "set-cluster 2" to "set-cluster 50000":

      set-node north
      set-cluster 2
      link eth1 eth1 - ether - -
      link eth2 eth2 - ether - -

After you edit the /etc/llttab file, you must restart LLT for this change to take effect.
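
A hedged sketch of one way to restart LLT on a node after editing /etc/llttab, assuming VCS and GAB have already been stopped on that node (consult the VCS documentation for the full stop/start ordering):

      # lltconfig -U        (unconfigure LLT)
      # lltconfig -c        (configure LLT again, rereading /etc/llttab)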



For all Veritas Storage Foundation 4.1 Release Notes

The Storage Foundation feature table shown on page 18 of the Solaris 4.1 Getting Started Guide incorrectly indicates that the Oracle Disk Manager (ODM) and Storage Rollback features are available with Veritas Storage Foundation (tm) Cluster File System. They should be listed under Storage Foundation for Oracle RAC and not under Storage Foundation Cluster File System.

-------------------------------------------------------------------------------------------------------------------------------
Caution:   VxFS file systems must be cleanly unmounted before upgrading to the Veritas File System 4.0 or 4.1 release from any previous release. For more information, refer to TechNote 265504:  http://support.veritas.com/docs/265504
-------------------------------------------------------------------------------------------------------------------------------


Four user guides in the Veritas 4.1 release for Solaris show inactive part numbers.

The following three guides, available as PDFs in the VRTSvmdoc package, have an incorrect part number shown on the title page:

    "VERITAS Enterprise Administrator VEA 500 Series - Getting Started" (N16515C)
    "Storage Foundation Cross-Platform Data Sharing Administrator's Guide" (N16512C)
    "Flashsnap Point-In-Time Copy Solutions      Administrator's Guide" (N16514C)


The Getting Started Guide PDF, available at the top level of the product discs, also has an incorrect part number:

    "Storage Foundation and High Availability Solutions Getting Started Guide" (N15375F)


When ordering hard copy versions of these documents from the Veritas Web Store, use the following part numbers instead:

    "VERITAS Enterprise Administrator VEA 500 Series - Getting Started" (N13115F)
    "Storage Foundation Cross-Platform Data Sharing Administrator's Guide" (N13116C)
    "Flashsnap Point-In-Time Copy Solutions      Administrator's Guide" (N13117C)
    "Storage Foundation and High Availability Solutions Getting Started Guide" (N15286F)


-----------------------------------------------------------------------

Addition to the Known Issue sections:

Issue: Installation of Veritas products on Solaris 10 fails in non-global zones that inherit /opt.
Solution: Before you install Storage Foundation and High Availability products on Solaris 10 systems with non-global zones, ensure that /opt is not inherited by any non-global zone.
Run the following command to verify:

   # zonecfg -z <zone_name> info

Output similar to the following appears:

   zonepath: /export/home/zone1
   autoboot: false
   pool: yourpool
   inherit-pkg-dir:
             dir: /lib
   inherit-pkg-dir:
             dir: /platform
   inherit-pkg-dir:
             dir: /sbin
   inherit-pkg-dir:
             dir: /usr

You should not see any occurrences of the /opt directory being inherited:

   inherit-pkg-dir:
             dir: /opt

If you do see that the /opt directory is inherited, you will need to reinstall the zone.

-------------------------------------------------------------------------------------------------------------------------------



For Veritas Storage Foundation 4.1 Release Notes


In the Veritas File System Administrator's Guide, the following "Direct mounts" section is missing:

Direct mounts

To direct mount a VxFS file system in a non-global zone, the directory to mount must be in the non-global zone and the mount must take place from the global zone. Using direct mounts limits the visibility of and access to the file system to only that non-global zone.

The following procedure describes mounting the directory dirmnt in the non-global zone zone1 with a mount path of /zonedir/zone1/root/dirmnt.

Note: VxFS entries in the global zone /etc/vfstab file for non-global zone direct mounts are not supported, as the non-global zone may not yet be booted at the time of /etc/vfstab execution.

To direct mount a VxFS file system in a non-global zone

1. Log in to the zone and make the mount point:
  global# zlogin zone1
  zone# mkdir dirmnt
  zone# exit

2. Mount the VxFS file system:
  global# mount -F vxfs /dev/vx/dsk/dg/vol1 /zonedir/zone1/root/dirmnt

3. Verify that the file system is not mounted or visible in the global zone:
  global# df | grep dirmnt

4. Log in to the non-global zone and ensure that the file system is mounted:
  global# zlogin zone1
  zone# df | grep dirmnt
  /dirmnt (/dirmnt):142911566 blocks 17863944 files

-------------------------------------------------------------------------------------------------------------------------------


Additions to the Software Issues section:

Incident 303238 - When installing the VRTSdbed package using JumpStart, you may see the following warning:

installing </a/opt/VRTS/man/man1m/qio_convertdbfiles.1m> with default mode of 644

It is safe to ignore this warning. The permission for file /opt/VRTS/man/man1m/qio_convertdbfiles.1m is correctly set as 644.
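
If you want to confirm the permissions after installation, a quick check:

    # ls -l /opt/VRTS/man/man1m/qio_convertdbfiles.1m

The mode should display as -rw-r--r-- (644).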

-----------------------------------------------------------------------

Incident 314349 - When creating or validating a snapplan with dbed_vmchecksnap, dbed_vmchecksnap fails to detect snapshot plexes created by the vxassist command.

In Veritas Storage Foundation (tm) 4.1 for Oracle, snapshot plexes created by the vxassist command are not supported. A combination of snapshot plexes created by vxassist and vxsnap is also not supported. When performing Database FlashSnap operations, dbed_vmchecksnap may not detect that the snapshot plexes were created by the vxassist command, and the validation could pass incorrectly. To verify that your existing volumes are supported, use the following command:

      $ vxprint -g dg_name -F"instant=%instant" volume
      instant=on

The output should display "instant=on". If it displays "instant=off", you need to upgrade the snapshot plexes by following the procedures described in "Upgrading Existing Volumes to Use Veritas Volume Manager 4.1" in the "Using Database FlashSnap for Backup and Off-Host Processing" chapter of the VERITAS Storage Foundation for Oracle Administrator's Guide.

-----------------------------------------------------------------------

Incident 271180 - Veritas Storage Foundation 4.1 requires SymCli 5.3 or later. However, EMC supports only SymCli 6.0 on Solaris 10. As of this release of VERITAS Storage Foundation 4.1, testing of SymCli 6.0 on Solaris 10 has not been completed.

-----------------------------------------------------------------------

Incident 313885 - Uninstalling the VRTSodm package using the uninstallsf script fails and displays the following error message:

Checking odm driver ................................. odm module loaded
UX:vxfs mount: INFO: V-3-20147: Usage:
      mount [-F vxfs] [generic_options] [-o suboptions] {special|mount_point}
            suboptions are: [rw|ro][,crw]
            [remount]
            [quota|usrquota|grpquota]
            [suid|nosuid]
            [log|delaylog|tmplog]
            [datainlog|nodatainlog]
            [snapof=primary_special[,snapsize=blocks]]
            [convosync={direct|dsync|closesync|delay|unbuffered}]
            [mincache={direct|dsync|closesync|tmpcache|unbuffered}]
            [blkclear]
            [qio|noqio]
            [vxldlog=special]
            [largefiles|nolargefiles]
            [cluster]
            [seconly]
            [noatime]
            [nomtime]
            [qlog=special]
            [ckpt=ckpt_name]
            [ioerror={mwdisable|wdisable|nodisable|disable}]
            [logiosize=size]
            [cds={adaptive|mandatory}]

To work around this issue, do one of the following:

* In your PATH environment variable, include /usr/sbin:/usr/bin:/bin before /opt/VRTS/bin

OR

* Manually uninstall VRTSodm using the pkgrm command. Remove the package as follows:

    # pkgrm VRTSodm


-----------------------------------------------------------------------

Incident 312064 - Oracle datafiles that are part of the Oracle Managed Files (OMF) feature are not supported with dbed_clonedb using Instant Storage Checkpoints. Using Instant Storage Checkpoints for database cloning with OMF datafiles will delete the OMF datafiles in the primary database.

To avoid this issue, enable Oracle Disk Manager (ODM). Instant Storage Checkpoints and dbed_clonedb work properly when ODM is enabled.
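
A hedged way to confirm that the ODM kernel module is loaded on Solaris before relying on this behavior:

    # modinfo | grep -i odm

If the module is not listed, enable ODM as described in the Veritas Storage Foundation for Oracle documentation before using Instant Storage Checkpoints with dbed_clonedb.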

-----------------------------------------------------------------------

Etrack #277495: I18N/L10N: In the GUI, some fields are too small to enter data.

When the Array Configuration window is open and the Provider Status tab is selected, the data fields for "Default Polling Interval" and "CLI Path" are too small and you cannot enter any data. To enter any data in these fields, you must resize (stretch) the window. The fields will then expand and you can enter data. This will be fixed in the next release.

-----------------------------------------------------------------------


For the latest database support matrices for this release, see TechNote http://support.veritas.com/docs/274784 


-----------------------------------------------------------------------


For Veritas Volume Manager/EMC 4.1:

In previous releases of Volume Manager, a combination of DMP subpaths and the controllers of DMP subpaths were usually suppressed to prevent interference between DMP and the EMC PowerPath multipathing driver. Suppression has the effect of hiding these subpaths and their controllers from DMP, and as a result the disks on these subpaths and controllers cannot be seen by Veritas Volume Manager.

Volume Manager 4.1 can discover EMCpower disks and configure them as autodiscovered disks that DMP recognizes as being under the control of a separate multipathing driver. This has the benefit of allowing such disks to be configured in cluster-shareable disk groups. Before upgrading to Volume Manager 4.1, you must remove the suppression of the subpaths and controllers so that DMP can determine the association between EMCpower metadevices and c#t#d# disk devices.

There are three scenarios in which you need to unsuppress DMP subpaths and controllers. They are documented in the VERITAS Storage Foundation 4.1 Installation Guide for Solaris:
- "Converting a Foreign Disk to auto:simple" on page 70
- "Converting a Defined Disk to auto:simple" on page 72
- "Converting a Powervxvm Disk to auto:simple" on page 75

There is a fourth scenario, which is not documented in the Installation Guide and is included here for completeness.
Note: The following procedure does not apply to a normal Powervxvm disk, where the whole disk is used as a Volume Manager disk. For a normal Powervxvm disk, refer to the procedure documented on page 75 of the Installation Guide. The fourth scenario is as follows:

Converting a disk to auto:simple for an EMCpower disk defined as a persistent disk access record

1. Run the vxdisk list command to display the EMCpower disks:

    # vxdisk list
    DEVICE          TYPE            DISK    GROUP   STATUS
    c6t0d12s2       auto:sliced       -       -     online
    emcdisk1        simple          fdisk    fdg    online


2. Stop all the volumes in the disk group, then deport it:

    # vxvol -g fdg stopall
    # vxdg deport fdg

   
3. Run the vxdisk rm command to remove the persistent record definitions:

    # vxdisk rm emcdisk1

     
   If you run the vxdisk list command again, the EMCpower disk is no longer displayed:

    # vxdisk list

 

    DEVICE          TYPE            DISK    GROUP   STATUS
    c6t0d12s2       auto:sliced       -       -     online


4. Run the vxprtvtoc command to retrieve the partition table entry for the device:

    # /etc/vx/bin/vxprtvtoc -f /tmp/vtoc /dev/rdsk/c6t0d11s2


5. Run the vxedvtoc command to modify the partition tag and update the VTOC:

    # /etc/vx/bin/vxedvtoc -f /tmp/vtoc /dev/rdsk/c6t0d11s2

 

    # THE ORIGINAL PARTITIONING IS AS FOLLOWS:
    #SLICE     TAG   FLAGS       START         SIZE

   0         0x0   0x200         0             0
   1         0x0   0x200         0             0
   2         0x5   0x201         0         16776192
   3         0x0   0x200         0             0
   4         0x0   0x200         0             0
   5         0x0   0x200       75776       16693760
   6         0x0   0x200         0             0
   7         0x0   0x200         0             0
 
  # THE NEW PARTITIONING WILL BE AS FOLLOWS :
  #SLICE     TAG  FLAGS        START        SIZE
   0         0xf    0x200      75776       16693760
   1         0x0   0x200          0           0
   2         0x5   0x201          0        16776192
   3         0x0   0x200          0           0
   4         0x0   0x200          0           0
   5         0x0   0x200       75776       16693760
   6         0x0   0x200          0           0
   7         0x0   0x200          0           0
   DO YOU WANT TO WRITE THIS TO THE DISK / [Y/N] : Y
    WRITING THE NEW VTOC TO THE DISK

6. Upgrade to Volume Manager 4.1 using the appropriate upgrade procedure

7. After upgrading Volume Manager, run the vxdisk list command to validate the conversion to auto:simple format:

    # vxdisk list

  DEVICE             TYPE         DISK     GROUP    STATUS
  c6t0d12s2       auto:sliced       -        -      online
  emcpower11s2    auto:simple       -        -      online

8. Import the disk group and start the volumes:

    # vxdg import fdg
    # vxvol -g fdg startall

 

    # vxdisk list

  DEVICE          TYPE          DISK       GROUP    STATUS
  c6t0d12s2     auto:sliced       -          -      online
  emcpower11s2  auto:simple     fdisk       fdg     online


For Veritas Storage Foundation Cluster File System 4.1 Release Notes

On page 1 of the VERITAS Storage Foundation Cluster File System 4.1 Release Notes (Solaris) it states:

"SFCFS and SFCFS HA run on Solaris 8 (32-bit or 64-bit) or Solaris 9 (32-bit or 64-bit) operating systems."  The Solaris 10 (64-bit) information is missing.

Here is the corrected sentence:

SFCFS and SFCFS HA run on Solaris 8 (32-bit or 64-bit), Solaris 9 (32-bit or 64-bit), or Solaris 10 (64-bit) operating systems.

-----------------------------------------------------------------------
In the Veritas Storage Foundation Cluster File System 4.1 Installation and Administration Guide (Solaris), step 2 in the "Adding a Node" section on page 50 is incorrect.

It should read as follows:

  2. If there are any dependencies, take them offline on all the nodes:

       # hagrp -offline cvm -sys star33
       # hagrp -offline cvm -sys star34



-----------------------------------------------------------------------


Supplemental Materials

Value: 6337009
Description: Some VERITAS patches cannot be completely removed with patchrm

Value: 632589
Description: Incorrect steps for copying IPC and VCSMM libraries shown in the SFRAC 4.1 MP1 Release Notes

Value: 319114
Description: Incorrect statement for CRS on a Japanese operating system in SFRAC 4.1 MP1 Release Notes



Legacy ID: 272714

