Hi, do I need to run vxcvmconfig to enable VxVM functionality under VCS, or do I only need to install the license for VxVM cluster functionality? Thanks,
As I understand it, VCS understands VxVM and VxFS just fine without running any special commands. I am running NetBackup 4.5 on VCS with two nodes, and had to do nothing special for VxVM/VxFS on those systems beyond making the disks visible on both nodes; they run just fine.

Now, if you're talking about running Cluster Volume Manager (or Cluster File System), that's a different beast as I understand it, and hopefully others will correct me if I'm wrong. CVM allows multiple nodes to import the same disk group at the same time, and has special locking to keep the nodes from trampling on files the others have open. If you are talking about CVM, there probably are things you would need to do to enable and configure it on the cluster nodes.

That's as far as my experience and knowledge go; hope you find it helpful.
Thanks for the response, this is helpful. But what I want to know is whether I need to configure CVM or CFS if I will use VCS with two nodes running an Oracle database under Veritas cluster. I need to create a VxVM shared diskgroup so that I can fail it over to the other node. Has anyone set up VCS with two nodes and an Oracle database on shared disk? Do I need to configure CVM for this setup? Thanks.
Hi Robert, thanks for the help. By the way, how did you set up your shared diskgroup or shared volume on your VCS? Can you give some info about your VCS setup? Did you enable the VxVM cluster functionality to be able to share the diskgroup? Thanks a lot, and I appreciate your response. Orly
Orly,

A "shared" disk group has nothing at all to do with VCS itself. A disk group is considered shared if more than one host has access to the disks that comprise it.

In our environment, the disks for the NBU master server are visible on both systems via our SAN setup (fibre-based; the disks that comprise the disk group are mapped to be visible to both hosts). Once you have the disks visible on both servers, you can import the disk group on one server at a time. This is what VCS does in the cluster framework: as your service group fails over from one node to the other, VCS deports the disk group, then imports it on the gaining node. As far as your Oracle disk group goes, its disks just need to be visible (format shows the disks that contain the disk group) on both nodes for VCS to be able to move it from one node to the other in a failover situation.

I've no practical experience with Oracle RAC, so hopefully someone who does can correct me if I'm wrong, but the only reason you would need to enable CFS/CVM functionality would be if you were trying to run Oracle as a parallel service group using Oracle RAC. I'm not sure what the setup would be in that case, but there is an Oracle agent add-on for VCS that might be helpful for you in either setup.

Here's a glimpse of what the resource hierarchy looks like in our NBU cluster:

app_NBU (NBU add-on agent resource type)
mount_NBU (stock VCS Mount resource; ufs and vxfs are available options here)
volume_NBU (stock VCS Volume resource)
diskgrp_NBU (stock VCS DiskGroup resource)

The ordering here is: import the disk group, start the volume(s), mount the volumes on their mountpoints, then start the app. There are many other things that need to be considered for the setup, but I hope that gives you a starting place for what you were looking for.

Robert
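To make that hierarchy concrete, here is a rough sketch of how a failover service group like that might be expressed in a VCS main.cf. This is an illustrative example only, not our actual config: the group name, node names, and the nbudg/nbuvol disk group and volume names are made-up placeholders, and the NBU application resource is omitted since it comes from the add-on agent.

```
group NBU_SG (
    SystemList = { nodeA = 0, nodeB = 1 }
    AutoStartList = { nodeA }
    )

    // Stock DiskGroup resource: imports/deports the disk group on failover
    DiskGroup diskgrp_NBU (
        DiskGroup = nbudg
        )

    // Stock Volume resource: starts the volume once the dg is imported
    Volume volume_NBU (
        DiskGroup = nbudg
        Volume = nbuvol
        )

    // Stock Mount resource: mounts the vxfs file system on its mountpoint
    Mount mount_NBU (
        MountPoint = "/opt/VRTSnbu"
        BlockDevice = "/dev/vx/dsk/nbudg/nbuvol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    // Dependencies give the ordering: dg import -> volume start -> mount
    mount_NBU requires volume_NBU
    volume_NBU requires diskgrp_NBU
```

The "requires" lines are what enforce the bring-up order described above (and the reverse order on offline), with the application resource sitting on top of the Mount.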
Thanks a lot, Robert. It looks like this is what I'm looking for with my VCS setup. I only want to create a failover service group for my Oracle, so it seems it won't be necessary to import the diskgroup under VxVM as shared.