
Two node VCS Linux Cluster with HP CA replication

Created: 14 Sep 2012 • Updated: 21 Sep 2012 | 22 comments
uvahpux:
This issue has been solved. See solution.

Hi All,

I would like to have a solution for the below issue, which I am facing on my two-node Linux cluster with HP CA replication.

I have configured a two-node cluster; each node resides in a separate site with a dedicated HP P6300 storage array. We configured the storage, fabric, etc. according to the recommendations, and I installed the VCS agent for HP CA storage.

We assigned a vdisk from the primary array to the primary node, created a DR group for the vdisk to replicate it across to the DR-site array, and presented the replicated vdisk to the DR node.

We have configured LVMVolumeGroup, LVMLogicalVolume and Mount resources in the VCS service group. However, whenever I try to bring the Mount resource online, the service group goes into a faulted state for some reason.

I am also unable to integrate the EVACA resource into the VCS cluster for some reason.

Any advice is highly appreciated.

Thanks.

 

Comments (22)

Gaurav Sangamnerkar:

Hi,

Can you provide more details, please:

OS version?

VCS version?

A snippet from the engine log showing the error when the resource faults.
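On the cluster nodes, the engine log Gaurav is asking for is normally /var/log/VRTSvcs/log/engine_A.log. A small sketch to pull the relevant lines (the helper name is mine, not a VCS tool):

```shell
#!/bin/sh
# Pull recent ERROR/WARNING entries from a VCS engine log so they can be
# pasted into the thread. The function name is illustrative only.
show_engine_errors() {
    # $1: path to the engine log (normally /var/log/VRTSvcs/log/engine_A.log)
    grep -E 'ERROR|WARNING' "$1" | tail -n 20
}

# Example on a node:
# show_engine_errors /var/log/VRTSvcs/log/engine_A.log
```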

 

G

PS: If you are happy with the answer provided, please mark the post as the solution. You can do so by clicking the link "Mark as Solution" below the answer.
 

uvahpux:

Hi Gaurav,

Please find the following:

OS Version - RHEL 5.8

VCS Version - 6.0

I am sending the logs shortly.

Thanks.

 

uvahpux:

Hi Gaurav,

Please note my operating system is RHEL 5.8 x86_64.

Please find the following output from the logs. It mounts the FS but then throws the error "lvol0 is already mounted".

2012/09/15 15:55:39 VCS ERROR V-16-10031-14001 (nmsomu01) LVMVolumeGroup:nmsomuVG:online:Activation of volume group failed.
2012/09/15 15:55:40 VCS INFO V-16-1-10298 Resource nmsomuVG (Owner: Unspecified, Group: nmsomuSG) is online on nmsomu01 (VCS initiated)
2012/09/15 15:56:34 VCS INFO V-16-1-50135 User root fired command: hares -clear nmsomuMNT  from localhost
2012/09/15 15:56:34 VCS INFO V-16-1-10307 Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) is offline on nmsomu01 (Not initiated by VCS)
2012/09/15 15:56:34 VCS INFO V-16-1-10307 Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) is offline on nmsomu02 (Not initiated by VCS)
2012/09/15 15:56:34 VCS INFO V-16-1-50135 (nmsomu01) User root@vcsvom.qtel.com.qa fired command:/opt/VRTSvcs/bin/hares -clear nmsomuMNT from VOM-CS
2012/09/15 15:57:18 VCS INFO V-16-1-50135 User root fired command: hares -online nmsomuMNT  nmsomu01  from localhost
2012/09/15 15:57:18 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group nmsomuSG on all nodes
2012/09/15 15:57:18 VCS NOTICE V-16-1-10301 Initiating Online of Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) on System nmsomu01
2012/09/15 15:57:18 VCS INFO V-16-1-50135 (nmsomu01) User root@vcsvom.qtel.com.qa fired command:/opt/VRTSvcs/bin/hares -online   nmsomuMNT  -sys nmsomu01 from VOM-CS
2012/09/15 15:57:18 VCS NOTICE V-16-10031-5511 (nmsomu01) Mount:nmsomuMNT:online:Trying force mount...
2012/09/15 15:57:18 VCS NOTICE V-16-10031-5516 (nmsomu01) Mount:nmsomuMNT:online:Running fsck...
2012/09/15 15:57:18 VCS WARNING V-16-10031-5521 (nmsomu01) Mount:nmsomuMNT:online:Could not mount the block device /dev/vg-omu/lvol0.
2012/09/15 15:57:19 VCS INFO V-16-2-13716 (nmsomu01) Resource(nmsomuMNT): Output of the completed operation (online)
==============================================
mount: /dev/vg-omu/lvol0 already mounted or /backup busy
mount: according to mtab, /dev/mapper/vg--omu-lvol0 is already mounted on /backup
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
Warning!  /dev/vg-omu/lvol0 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/vg-omu/lvol0: clean, 11/6553600 files, 251733/13107200 blocks
mount: /dev/vg-omu/lvol0 already mounted or /backup busy
mount: according to mtab, /dev/mapper/vg--omu-lvol0 is already mounted on /backup
==============================================

2012/09/15 15:59:20 VCS ERROR V-16-2-13066 (nmsomu01) Agent is calling clean for resource(nmsomuMNT) because the resource is not up even after online completed.
2012/09/15 15:59:21 VCS INFO V-16-2-13068 (nmsomu01) Resource(nmsomuMNT) - clean completed successfully.
2012/09/15 15:59:21 VCS INFO V-16-2-13071 (nmsomu01) Resource(nmsomuMNT): reached OnlineRetryLimit(0).
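The "already mounted or /backup busy" output in the log above suggests /backup was mounted outside VCS control before the online was attempted. A quick way to check for (and then clear) a stale mount before retrying; the helper below is only a sketch, with the mtab path parameterised purely so it can be exercised on a test file:

```shell
#!/bin/sh
# Check whether a mount point already appears in mtab. Field 2 of each
# mtab line is the mount point.
is_mounted() {
    # $1: mount point; $2: mtab file (defaults to /etc/mtab)
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' "${2:-/etc/mtab}"
}

# On the node, before onlining the Mount resource:
# if is_mounted /backup; then umount /backup; fi
```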

 

 

 

mikebounds:

The LVM and mount resources are not working because you have not integrated your EVACA resource yet. With normal EVA CA replication, one side is read-write (RW) and the other read-only (RO), so the EVACA resource is required to make the storage RW when you fail over. Without it the volume group cannot be activated and the mount resource cannot be mounted RW, hence the messages in the log:

 LVMVolumeGroup:nmsomuVG:online:Activation of volume group failed

So you need to get the EVACA resource working.

Mike

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has helped you, please vote or mark as solution

uvahpux:

Hi Mike,

Thanks for your comment,

I have tried to integrate VCS with HP CA according to the document vcs_hp_eva_ca_install.

I have configured the HP CA agents on the cluster nodes and configured the HP SSSU utility to supply the EVA management and array information to VCS.

But my issue is that I need to specify the array information for each site in the main.cf file.

I tried specifying it in main.cf, but it did not work. How can I achieve this?

Any advice  highly appreciated.

Thanks.

 

mikebounds:

What error did you get when you tried to configure the array name?

Mike


uvahpux:

Hi Mike,

I have integrated the EVACA resource in VCS and am able to bring it online. However, I am still facing the same issue when bringing the Mount resource online; it says "/dev/vg-omu/lvol0 already mounted or /backup busy".

 

Please find the logs for your reference:

2012/09/16 10:36:56 VCS INFO V-16-1-50135 User root fired command: hagrp -online nmsomuSG  nmsomu01  from localhost
2012/09/16 10:36:56 VCS NOTICE V-16-1-10166 Initiating manual online of group nmsomuSG on system nmsomu01
2012/09/16 10:36:56 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group nmsomuSG on all nodes
2012/09/16 10:36:56 VCS NOTICE V-16-1-10301 Initiating Online of Resource nmomuEVACA (Owner: Unspecified, Group: nmsomuSG) on System nmsomu01
2012/09/16 10:36:56 VCS INFO V-16-1-50135 (nmsomu01) User root@vcsvom.qtel.com.qa(unixpwd) fired command:/opt/VRTSvcs/bin/hagrp -online   nmsomuSG   -sys nmsomu01 from VOM-CS
2012/09/16 10:36:59 VCS INFO V-16-20073-26 (nmsomu01) EVACA:nmomuEVACA:online:The replication role of DR group DRG_VCS_OMU is source; no action is required.
2012/09/16 10:37:00 VCS INFO V-16-1-10298 Resource nmomuEVACA (Owner: Unspecified, Group: nmsomuSG) is online on nmsomu01 (VCS initiated)
2012/09/16 10:37:00 VCS NOTICE V-16-1-10301 Initiating Online of Resource nmsomuVG (Owner: Unspecified, Group: nmsomuSG) on System nmsomu01
2012/09/16 10:37:01 VCS ERROR V-16-10031-14001 (nmsomu01) LVMVolumeGroup:nmsomuVG:online:Activation of volume group failed.
2012/09/16 10:37:02 VCS INFO V-16-1-10298 Resource nmsomuVG (Owner: Unspecified, Group: nmsomuSG) is online on nmsomu01 (VCS initiated)
2012/09/16 10:37:02 VCS NOTICE V-16-1-10447 Group nmsomuSG is online on system nmsomu01
2012/09/16 10:38:33 VCS INFO V-16-1-50135 User root fired command: hares -add nmsomuLV  LVMLogicalVolume  nmsomuSG  from localhost
2012/09/16 10:38:33 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/LVMLogicalVolume/LVMLogicalVolumeAgent for resource type LVMLogicalVolume successfully started at Sun Sep 16 10:38:33 2012

2012/09/16 10:38:33 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuLV  LogicalVolume  lvol0  from localhost
2012/09/16 10:38:33 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuLV  VolumeGroup  vg-omu  from localhost
2012/09/16 10:38:46 VCS INFO V-16-1-50135 User root fired command: hares -add nmsomuMNT  Mount  nmsomuSG  from localhost
2012/09/16 10:38:46 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/Mount/MountAgent for resource type Mount successfully started at Sun Sep 16 10:38:46 2012

2012/09/16 10:38:46 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuMNT  BlockDevice  /dev/vg-omu/lvol0  from localhost
2012/09/16 10:38:46 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuMNT  MountPoint  /backup  from localhost
2012/09/16 10:38:46 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuMNT  FSType  ext3  from localhost
2012/09/16 10:38:48 VCS INFO V-16-1-50135 User root fired command: hares -modify nmsomuMNT  FsckOpt  -n  from localhost
2012/09/16 10:39:21 VCS INFO V-16-1-50135 User root fired command: hares -link nmsomuLV  nmsomuVG  0  0  from localhost
2012/09/16 10:39:22 VCS INFO V-16-1-50135 User root fired command: hares -link nmsomuMNT  nmsomuLV  0  0  from localhost
2012/09/16 10:39:57 VCS INFO V-16-1-50135 User root fired command: hagrp -enableresources nmsomuSG  from localhost
2012/09/16 10:39:57 VCS INFO V-16-1-10297 Resource nmsomuLV (Owner: Unspecified, Group: nmsomuSG) is online on nmsomu01 (First probe)
2012/09/16 10:39:57 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group nmsomuSG on all nodes
2012/09/16 10:39:57 VCS INFO V-16-1-10304 Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) is offline on nmsomu01 (First probe)
2012/09/16 10:39:58 VCS INFO V-16-1-10304 Resource nmsomuLV (Owner: Unspecified, Group: nmsomuSG) is offline on nmsomu02 (First probe)
2012/09/16 10:39:58 VCS INFO V-16-1-10304 Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) is offline on nmsomu02 (First probe)
2012/09/16 10:40:20 VCS INFO V-16-1-50135 User root fired command: haconf -dump -makero from localhost
2012/09/16 10:41:00 VCS INFO V-16-1-50135 User root fired command: hares -online nmsomuMNT  nmsomu01  from localhost
2012/09/16 10:41:00 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group nmsomuSG on all nodes
2012/09/16 10:41:00 VCS NOTICE V-16-1-10301 Initiating Online of Resource nmsomuMNT (Owner: Unspecified, Group: nmsomuSG) on System nmsomu01
2012/09/16 10:41:00 VCS INFO V-16-1-50135 (nmsomu01) User root@vcsvom.qtel.com.qa(unixpwd) fired command:/opt/VRTSvcs/bin/hares -online   nmsomuMNT  -sys nmsomu01 from VOM-CS
2012/09/16 10:41:01 VCS NOTICE V-16-10031-5511 (nmsomu01) Mount:nmsomuMNT:online:Trying force mount...
2012/09/16 10:41:01 VCS NOTICE V-16-10031-5516 (nmsomu01) Mount:nmsomuMNT:online:Running fsck...
2012/09/16 10:41:02 VCS WARNING V-16-10031-5521 (nmsomu01) Mount:nmsomuMNT:online:Could not mount the block device /dev/vg-omu/lvol0.
2012/09/16 10:41:03 VCS INFO V-16-2-13716 (nmsomu01) Resource(nmsomuMNT): Output of the completed operation (online)
==============================================
mount: /dev/vg-omu/lvol0 already mounted or /backup busy
mount: according to mtab, /dev/mapper/vg--omu-lvol0 is already mounted on /backup
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
Warning!  /dev/vg-omu/lvol0 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/vg-omu/lvol0 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (12855467, counted=12855466).
Fix? no

Free inodes count wrong (6553589, counted=6553587).
Fix? no

/dev/vg-omu/lvol0: ********** WARNING: Filesystem still has errors **********

/dev/vg-omu/lvol0: 11/6553600 files (9.1% non-contiguous), 251733/13107200 blocks
mount: /dev/vg-omu/lvol0 already mounted or /backup busy
mount: according to mtab, /dev/mapper/vg--omu-lvol0 is already mounted on /backup
==============================================

2012/09/16 10:42:02 VCS INFO V-16-1-50135 User root fired command: hares -modify ... -add nmomuEVACA  ResourceInfo  ReplicationStatus 
DR Group Role: Source
Operational State: good
Write mode: Synchronous
Failsafe: disable  from localhost
2012/09/16 10:43:03 VCS ERROR V-16-2-13066 (nmsomu01) Agent is calling clean for resource(nmsomuMNT) because the resource is not up even after online completed.
2012/09/16 10:43:04 VCS INFO V-16-2-13068 (nmsomu01) Resource(nmsomuMNT) - clean completed successfully.
2012/09/16 10:43:04 VCS INFO V-16-2-13071 (nmsomu01) Resource(nmsomuMNT): reached OnlineRetryLimit(0).

 

mikebounds:

You don't appear to have a DiskReservation resource - see this extract from the Linux Bundled Agents guide:

 

LVMVolumeGroup resources depend on DiskReservation resources. If an LVMVolumeGroup does not have a corresponding DiskReservation resource on which it depends, the LVMVolumeGroup does not function
 
So you need to create a DiskReservation resource, and it should come online first (make the LVMVolumeGroup resource dependent on the DiskReservation resource).
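For illustration only, such a resource and its dependency might look like the following in main.cf. The resource name nmsomuDR and the disk path /dev/sdb are hypothetical, and the Disks attribute syntax should be checked against the Bundled Agents guide for your version:

```
DiskReservation nmsomuDR (
    Disks = { "/dev/sdb" }
    )

nmsomuVG requires nmsomuDR
```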
 
Mike
 
 


uvahpux:

Hi Mike,

Thanks for your reply.

Please note that we are using the Linux Device Mapper for multipathing.

Please correct me if I am wrong: according to the Bundled Agents document, disk reservation is not required for multipathed disks.

So can I configure a disk reservation for the Device Mapper managed disk /dev/mpath/mpath1 as well?

I have only one LUN (/dev/mpath/mpath1) in the vg-omu volume group.

PV Name: /dev/mpath/mpath1

VG Name: vg-omu

LV Name: lvol0

Thanks.

 

 

 

 

mikebounds:

The extract I gave earlier was from the 5.1 Bundled Agents guide, which also says:

You cannot use the DiskReservation agent to reserve disks that have multiple
paths. The LVMVolumeGroup and the LVMLogicalVolume agents can only be
used with the DiskReservation agent, Symantec does not support the
configuration of logical volumes on disks that have multiple paths. To ensure
data protection on such a configuration, Symantec recommends the use of
Veritas Volume Manager (VxVM) disk groups. 
However, sorry, I note you are using 6.0, and this is slightly different: it seems disk reservation is not mandatory but recommended, as the 6.0 Bundled Agents guide (http://www.symantec.com/business/support/resources/sites/BUSINESS/content/live/DOCUMENTATION/5000/DOC5279/en_US/vcs_bundled_agents_60_lin.pdf) says:
No fixed dependencies exist for LVMVolumeGroup Agent. When you create a
volume group on disks with single path, Symantec recommends that you use the
DiskReservation agent.
and it also says:
You cannot use the DiskReservation agent to reserve disks that have multiple
paths. In case of Veritas Dynamic Multi-Pathing, the LVMVolumeGroup and the
LVMLogicalVolume agents can be used without the DiskReservation agent
This says you can use Veritas Dynamic Multi-Pathing with LVM (note that Veritas Dynamic Multi-Pathing recently, from about 5.1SP1 I think, became available as a separate product which does not require SF and can be used on non-VxVM disks). It doesn't explicitly say you can't use other multipathing software, and the LVMVolumeGroup agent section gives examples using multipathing, but it does not say whether this is Veritas Dynamic Multi-Pathing or third-party multipathing (such as Linux Device Mapper). It does, however, mention using tagging with Veritas Dynamic Multi-Pathing:
Enabling volume group activation protection for Veritas Dynamic Multi-Pathing
 
On each node in the cluster, perform the following procedure to enable activation
protection for volume groups on Red Hat and SUSE systems.
To enable volume group activation protection
1 On each node in the cluster, edit /etc/lvm/lvm.conf , and add the following line:
tags { hosttags = 1 }
2 On each node in the cluster, create the file lvm_`uname –n`.conf in the /etc/lvm/ directory.
3 Add the following line to the file you created in step 2:
activation { volume_list="@node" }
where node is the value of the uname -n command.
But the examples later say that tagging is optional when using multipathing.
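Steps 2 and 3 of the quoted procedure can be scripted. This is just a sketch; the target directory is parameterised (use /etc/lvm on the node) purely so it can be dry-run elsewhere, and step 1 (the tags line in lvm.conf) still has to be added by hand:

```shell
#!/bin/sh
# Create the per-node LVM activation file described in steps 2-3 above.
write_lvm_host_conf() {
    # $1: target directory (/etc/lvm on a real node)
    node=$(uname -n)
    printf 'activation { volume_list="@%s" }\n' "$node" > "$1/lvm_$node.conf"
}

# On each node: write_lvm_host_conf /etc/lvm
```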
 
Note also that the HCL has specific notes about using LVM with third-party disk multipathing for AIX, but no mention for Linux:
The VCS LVM agent supports the EMC PowerPath third-party driver on EMC's Symmetrix 8000 and DMX series arrays.
The VCS LVM agent supports the HITACHI HDLM third-party driver on Hitachi USP/NSC/USPV/USPVM, 9900V series arrays.
 
I will see if I can get clarification from Symantec on whether third-party multipathing is supported with the LVM VCS agents.
 
Mike

 


mikebounds:

I posted a query about support for third-party multipathing with the VCS LVM agents at https://www-secure.symantec.com/connect/forums/using-third-party-multipathing-vcs-lvm-agents, and it seems multipathed disks are not supported with the 6.0 VCS LVM agent, although it should technically work.

Have you tried manually activating the volume group? As there are no errors about the import, I guess the vgimport was successful in the agent and it is the "vgchange -a" that fails, so see if it works from the command line.
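The manual test Mike describes boils down to two commands. The wrapper below is only a sketch, with a DRYRUN switch (my addition) so the sequence can be sanity-checked without touching LVM:

```shell
#!/bin/sh
# Reproduce by hand what the LVMVolumeGroup online entry point does:
# import the VG, then activate it. DRYRUN=1 only prints the commands.
activate_vg() {
    # $1: volume group name
    if [ "${DRYRUN:-0}" = "1" ]; then run=echo; else run=; fi
    $run vgimport "$1" &&
    $run vgchange -a y "$1"
}

# On the node:  activate_vg vg-omu
# Off the node: DRYRUN=1 activate_vg vg-omu
```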

Mike


uvahpux:

Hi Mike,

Yes, you are correct. I asked Symantec support as well, and they said I have to have DMP.

The native Device Mapper may be supported in an upcoming version.

I tried your suggestion to vgimport the vgomu VG, but it reports a write failure as below:

[root@ ~]# vgimport vgomu
  /dev/mpath/36001438009b04d840000400001030000: write failed after 0 of 4096 at 24576: Operation not permitted.

I suspect that this LUN is not writable because my replication from the PR storage to the DR storage is read-only (the available options are none and read-only).

So the HP EVA CA agent should take care of making the DR storage writable when we fail over.

Thanks.

 

uvahpux:

Hi Mike,

 

I would like to inform you that node 1 is running now; I am able to bring the service group online.

However, I am unable to fail the service group over to node 2.

 

Thanks.

mikebounds:

If you can't run vgimport, then the issue is with the hpevaca agent, which is not making the LUNs writable.

Have you localised the attributes for the hpevaca resource so that you are specifying a different array for each node? If not, this is why it doesn't work on node 2. If you don't know how to localise attributes, then please give an extract from main.cf (in /etc/VRTSvcs/conf/config) of your hpevaca resource and say whether you want to use the GUI or the command line.

Mike


uvahpux:

Hi Mike,

I have specified the PR array information. Here is the main.cf output:

 EVACA nmomuEVACA (
  ManagementServer = ""
  LocalEVAName = Q_NMS1_EVA
  DRGroupName = DRG_VCS_OMU
  SSSUPath = "/opt/Hewlett-Packard/sssu_linux_x64"
  )

I would like to have the commands.

mikebounds:

You need to localise at least LocalEVAName, so if the local array for node 2 is Q_NMS2_EVA, then you need to run:

haconf -makerw
hares -local nmomuEVACA LocalEVAName
hares -modify nmomuEVACA LocalEVAName Q_NMS2_EVA -sys nmsomu02
haconf -dump -makero

If any other attributes are local to that node, you need to change these too; for instance, if you have a different ManagementServer for each node, then you need to localise that attribute as well.

Mike

 


uvahpux:

Hi Mike,

So that means I would have one EVACA resource containing info about the PR storage for node 1 only, and I would create another EVACA resource, say EVACA2, with the secondary storage info and restrict it to node 2 only. Correct?

 

mikebounds:

No, you have one resource with localised attributes. Once you have run the commands in my last post, your resource in main.cf will look like:

 EVACA nmomuEVACA (
  ManagementServer = "172.16.236.200"
  LocalEVAName@nmsomu01 = Q_NMS1_EVA
  LocalEVAName@nmsomu02 = Q_NMS2_EVA
  DRGroupName = DRG_VCS_OMU
  SSSUPath = "/opt/Hewlett-Packard/sssu_linux_x64"
  )

Mike


uvahpux:

Hi Mike,

It is working, but I tested it on another setup where I also have DMP installed!

I will configure it with Device Mapper tomorrow and confirm.

It was great; I really appreciate your efforts in solving this issue.

Thank you very much!

mikebounds:

I think it should work with Device Mapper - support from Symantec just means they test it, and it is very unlikely they would have to change any code for it to work, as it is very unlikely the vgimport/vgchange commands would change just because disk multipathing is involved. So I don't see why Symantec care whether disk multipathing is used from 6.0, where the DiskReservation agent is no longer required.

Mike


SOLUTION
uvahpux:

Hi Mike,

I tried with Device Mapper, but it was not successful as it was unable to activate the VG; manual activation did not work either. After installing DMP it worked fine.

One more question! In the VG I have four PVs configured. All of them are replicated with their respective DR groups to the DR array, and I have presented them to the hosts.

Now I need to specify all four DR group names in the configuration.

Should it be like below?

EVACA nmsispiEVACA (
  LocalEVAName @S1_EVA
  LocalEVAName @S2_EVA
  DRGroupName = DRG1

DRGroupName = DRG2

DRGroupName = DRG3

DRGroupName = DRG4
  ManagementServer = ""
  SSSUPath = "/opt/Hewlett-Packard/sssu_linux_x64"

 

Or do I need to have a single DR group?

Thanks.

mikebounds:

The syntax you have shown is invalid - an attribute in a resource can only appear once, unless it is localised like LocalEVAName. Some attributes let you specify lists, like "= { value1, value2 }", but DRGroupName is not a list; it is a single-valued attribute. So you need to put the PVs into one DR group, as all the PVs are related (they are in the same VG). Assuming a DR group is the same concept as the replication groups in other replication products like VVR and EMC, you should always put related LUNs in the same group, regardless of whether you are using VCS; separate replication groups should only be used for independent LUNs, for which you would create separate resources in VCS.
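To make that concrete, a corrected resource combining localisation with a single DR group might look like the sketch below. The node names node1/node2 and the group name DRG_ALL are placeholders, as the real values for this second setup are not given in the thread:

```
EVACA nmsispiEVACA (
    ManagementServer = ""
    LocalEVAName@node1 = S1_EVA
    LocalEVAName@node2 = S2_EVA
    DRGroupName = DRG_ALL
    SSSUPath = "/opt/Hewlett-Packard/sssu_linux_x64"
    )
```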

Mike
