Up to release 4.1, the Veritas SF product by default kept 5 backup copies of each diskgroup configuration under the /etc/vx/cbr/bk directory.
Starting with 5.0, this was reduced to a single copy. In most cases the diskgroup configuration is modified and the issue occurs on the same day, so we are left without any last-known-good configuration copy.
It would be great if the default were changed back to 5 copies, as in previous versions.
I agree that since 5.0 can support very large ...
Due to the many attached disks in our cluster environment, zpool import can consume a lot of time. Since we created a separate directory for each zone and linked the relevant block devices from /dev/vx/dmp into this directory, we were able to import a zpool with "zpool import -d /dev/. That decreases the import time enormously, as shown below.
14:57:04:root:gf01sxdb102t:/opt/VRTSvcs/bin/Zpool > time zpool import bais2t_msc
real    4m10.956s
user    0m0.311s
sys     0m12.495s
...
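The workaround above can be sketched as follows. All paths and device names here are illustrative (/dev/null stands in for the real /dev/vx/dmp/<disk> targets, which exist only on a Storage Foundation host); the point is that `zpool import -d` scans only the given directory instead of every device node.

```shell
# Sketch of the workaround: one dedicated directory per zone holding
# symlinks to only that zone's DMP block devices.
ZONEDIR=/tmp/zpool_devs_demo          # in production: one directory per zone
mkdir -p "$ZONEDIR"

# Link each DMP device that belongs to the zone's pool. /dev/null is a
# placeholder target so this sketch runs anywhere.
ln -sf /dev/null "$ZONEDIR/disk_0"
ln -sf /dev/null "$ZONEDIR/disk_1"
ls -l "$ZONEDIR"

# Real import, scanning only the small directory (needs zpool and the pool):
# zpool import -d "$ZONEDIR" bais2t_msc
```

The speedup comes purely from limiting the device scan: import time then scales with the handful of linked devices rather than with every disk visible to the host.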
In a common cluster environment there are failure situations where there is no need to switch the service groups from one node to another node in the cluster. One of these situations is a complete network failure. If the network fails simultaneously on all nodes, the cluster framework attempts to start the service groups on another node in the cluster. This behaviour drives the cluster into unnecessary actions, and the service group with the application becomes faulted on all ...
In VOM there is a standard report that shows the failovers initiated by the cluster: "the failover report".
However, that report is built from the keyword "failover" in the log files and only works for service groups that are configured as "critical".
We want a report of ALL the service groups that have stopped and started, including the hostname where they were stopped and the hostname where they were started.
Also for the service groups who are not configured as ...
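Pending such a report, the raw events are already in the VCS engine log, since it records every group state change regardless of the Critical attribute. A hedged sketch (log path is the usual VCS location, but the sample lines and message IDs here are illustrative, not taken from a real system):

```shell
# Extract all service-group stop/start events from a copy of the VCS
# engine log. A demo log is written first so the sketch runs anywhere.
LOG=/tmp/engine_A_demo.log            # normally /var/VRTSvcs/log/engine_A.log
cat > "$LOG" <<'EOF'
2013/01/10 10:00:01 VCS NOTICE V-16-1-10446 Group appSG is offline on system nodeA
2013/01/10 10:00:42 VCS NOTICE V-16-1-10447 Group appSG is online on system nodeB
EOF

# Every online/offline transition, with the hostname it happened on:
grep -E 'Group .+ is (online|offline) on system' "$LOG"
```

Pairing consecutive offline/online lines per group would then give the "stopped on host X, started on host Y" view the idea asks for.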
We see cases where a DG goes into the dgdisabled state while the filesystems are still mounted and the application is running fine. As you know, the only way to get the DG out of the dgdisabled state is to deport it and import it again. We have to plan downtime, shut down production, and do it during an approved maintenance window. When everything is running fine, why do we have to shut down the whole production environment just to clear one state flag on the DG?
So if possible, could you please provide a command or any ...
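For context, the recovery sequence the idea wants to avoid looks roughly like this (diskgroup and mount point names are made up; the commands are written to a file rather than executed, so the sketch is safe to run anywhere):

```shell
# The full-outage recovery for a dgdisabled diskgroup, sketched as a plan
# file. vxdg deport/import and vxvol startall are the standard VxVM steps.
PLAN=/tmp/dgdisabled_recovery_demo.txt
cat > "$PLAN" <<'EOF'
umount /app
vxdg deport appdg
vxdg import appdg
vxvol -g appdg startall
mount /app
EOF
cat "$PLAN"
```

Every step after the umount implies an application outage, which is exactly why an online "clear the dgdisabled flag" command would be valuable.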
In larger environments the VOM GUI easily gets crammed with information users don't need. The GUI should be customizable based on role-based access controls, meaning GUI components should be visible or invisible depending on the role of the user.
-Unix Admins don't need to see Windows machines
-Oracle Admins should only see their Oracle machines
Adding an install script for the Rolling Patches, like the one the MPs have for Solaris, would be very helpful.
Lately Rolling Patches have been patching all the products, so it takes time to figure out all the patches that need to be installed.
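A minimal sketch of what such an install script could do on Solaris: iterate over the unpacked patch directories of a Rolling Patch and apply each one. The patch IDs are invented placeholders, and patchadd is only echoed here, since really applying patches needs root on a Solaris host.

```shell
# Hypothetical MP-style install loop for a Rolling Patch on Solaris.
RPDIR=/tmp/rp_demo                    # where the RP archive was unpacked
mkdir -p "$RPDIR/123456-01" "$RPDIR/123457-01"   # placeholder patch dirs

for p in "$RPDIR"/*; do
  echo "patchadd $p"                  # a real script would run this as root
done
```

Having the vendor ship this loop (plus ordering and product detection) would remove the manual work of identifying which patches apply.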
Windows: When offlining the Windows Lanman agent, it does not remove the Windows DNS records (A and PTR). However, when failing over to the remote cluster, the Lanman agent does update DNS correctly. Unfortunately, during this process it leaves behind the original PTR record. After failing over, we end up with two PTR records: one pointing to the original IP and another pointing to the new IP.
Unix: When offlining the Unix DNS agent, it removes the appropriate A and PTR ...
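Until the Lanman agent cleans up after itself, the stale PTR record can be removed with a dynamic DNS update. A hedged sketch using nsupdate (the server name, IP, and reverse-zone entry are made up; the update file is only printed, since actually sending it requires DNS update rights):

```shell
# Hypothetical cleanup of the leftover PTR record via nsupdate.
NSFILE=/tmp/del_stale_ptr_demo.txt
cat > "$NSFILE" <<'EOF'
server dns01.corp.int
update delete 10.0.0.10.in-addr.arpa. PTR
send
EOF
cat "$NSFILE"

# nsupdate "$NSFILE"   # commented out on purpose; needs update permission
```

The fix requested here is for the agent to issue exactly this kind of delete for the old reverse entry during failover, instead of leaving two PTR records behind.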
We want to install packages (e.g. the VERITAS Cluster GUI, VRTSexplorer, or our own packages) automatically via VOM on every VERITAS-based server.
Currently, it is not possible to install packages via VOM.
The workaround of writing a dedicated script and using the Distribution Manager add-on is not a valid workaround.
Currently you can install VRTSsfmh (a standard package) via VOM, so why not other packages?
Suggested: 30 Aug 2012 by bsobek | 3 comments - last comment 22 Jan 2013 by CMilani
We send SNMP traps from VOM to Nagios. The Nagios system receives the following trap:
Unknown Trap:
  enterprises.1302.3.14.10.1.1 (): 150:15:19:21.44
  enterprises.1302.3.14.10.1.2 (): all_vxvm+vxfs
  enterprises.1302.3.14.10.1.3 (): event.alert.vom.vm.volume.stopped
  enterprises.1302.3.14.10.1.4 (): 1
  enterprises.1302.3.14.10.1.5 (): <hostname>.corp.int
  enterprises.1302.3.14.10.1.6 (): Volume <volumename> is in stopped state
  enterprises.1302.3.14.10.1.7 ...
Suggested: 31 Mar 2011 by bsobek | 2 comments - last comment 20 Jun 2011 by Kimberley