
CFS configuration over GCO on SFCFS 6.0.1 (Linux)

Created: 14 Nov 2012 | 3 comments

Hi Guys,

I am configuring an 8-node GCO (4 nodes per site) for a CFS configuration on SFCFS 6.0.1. My replication is over Hitachi TrueCopy. I have done the base configuration up to the HTC agent and tested GCO failover up to the HORCM switch, and it is working OK. Now I need to configure CFS in the global SG.

1. Once I configure the CFS mounts on the active site (4 nodes), does the configuration for all the CFS mount points need to be manually populated into the cluster configuration on the remote cluster? Or is there some way to auto-populate the configuration to the remote cluster within GCO, both for the initial setup and for manageability in day-to-day operations for the support teams?

2. The instructions in the CFS admin guide are fine for creating CFS on the local site, but they are not clear about updating the global cluster configuration, especially when using OEM storage replication technologies.

High-level steps for CFS:

i) Create a shared DG from the master node on the CFS cluster nodes

ii) Add that DG to the cluster configuration using "cfsdgadm"

iii) Create volumes & filesystems

iv) Add the mounts to the cluster configuration using "cfsmntadm"

v) Verify the configuration & mount the CFS mounts using "cfsmount"
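On a live cluster, the steps above might look roughly like the following sketch. All disk, disk group, volume and mount names here are made up for illustration, and these commands only work on a running SFCFS cluster, so treat this as an outline rather than a recipe:

```shell
# i) Create a shared disk group from the CVM master node (hypothetical
#    disk names; check "vxdctl -c mode" to find the master first).
vxdg -s init appdg disk01 disk02

# ii) Add the disk group to the cluster configuration, activated
#     shared-write on all nodes.
cfsdgadm add appdg all=sw

# iii) Create a volume and a VxFS filesystem on it.
vxassist -g appdg make appvol 10g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol

# iv) Add the mount point to the cluster configuration.
cfsmntadm add appdg appvol /app all=rw

# v) Mount the CFS mount on all nodes and verify.
cfsmount /app
cfsmntadm display
```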

Thanks for your help in advance.


3 Comments

mikebounds:

In 5.1 you had to manually populate the cluster configuration on the remote cluster; GCO would not do this for you, and I see nothing in the 6.0 or 6.0.1 release notes to suggest this feature has been introduced.

One important step which is often missed in CFS configurations with GCO and hardware replication is to configure the import, deport and vxdctlenable actions on the CVMVolDg agent - see this extract from the HORCM VCS agent install guide:

To configure the agent in a Storage Foundation for Oracle RAC or SFCFS environment:
1 Configure the SupportedActions attribute for the CVMVolDg resource.
2 Add the following keys to the list: import, deport, vxdctlenable.
3 Use the following commands to add the action tokens to the CVMVolDg resource type:
   haconf -makerw
   hatype -modify CVMVolDg SupportedActions import deport vxdctlenable
   haconf -dump -makero
Note that SupportedActions is a resource type attribute and defines a list of action
tokens for the resource.
You can use cfsmntadm to add resources to VCS, but personally I just add resources to VCS manually (with the hares command, by editing main.cf, or using the VCS Java GUI). If you do use cfsmntadm to add resources to VCS on the primary site, then I would just copy the section it creates into the remote site's configuration (and restart VCS on the remote cluster) rather than rerun all your cfsmntadm commands, especially if you have a lot of mounts.
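For the manual hares route, the commands could look something like this. The service group, resource names, disk group and paths are all hypothetical; substitute your own:

```shell
haconf -makerw

# Hypothetical CVMVolDg resource for the shared disk group.
hares -add app_voldg CVMVolDg app_sg
hares -modify app_voldg CVMDiskGroup appdg
hares -modify app_voldg CVMActivation sw

# Hypothetical CFSMount resource, linked on top of the disk group resource.
hares -add app_mnt CFSMount app_sg
hares -modify app_mnt MountPoint "/app"
hares -modify app_mnt BlockDevice "/dev/vx/dsk/appdg/appvol"
hares -link app_mnt app_voldg

haconf -dump -makero
```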



UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

mkumar36:

Thanks Mike,

It will be a bit of a tedious job for the support team to always be updating the remote cluster configuration with manual steps :-( as they add additional CFS mounts. I was expecting some nice features in SFCFSHA 6.0.1 - something better for GCO manageability.

Any further help will be appreciated.



mikebounds:

If you use cfsmntadm to add CFS mounts then you can just rerun the same commands on the remote cluster. You can also cut and paste resources in the GUI, or you can use main.cmd. To use main.cmd, after adding resources on the prod site:

On the prod site, change directory to the config directory and make sure the config is saved - this will fail if the config is already closed and saved, so just ignore the error in that case:

cd /etc/VRTSvcs/conf/config
haconf -dump -makero 

After the last command, allow a few seconds for it to save and create a main.cmd file, then grep out the resource names; depending on your naming convention, you may be able to grep out several mounts in one go:

grep "resource_name_pattern" main.cmd > <output file>

Have a look at the file created; it should contain "hares -add" lines followed by "hares -modify" lines, followed by "hares -link" lines for the resource(s) matched by your grep. You can then copy this file to the remote site and execute it to create the resources and links on your remote site.
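As an illustration, here is roughly what that extraction looks like; the main.cmd fragment and all resource names below are fabricated for the example:

```shell
# Fabricated fragment of a main.cmd, as "haconf -dump" would generate it.
cat > main.cmd <<'EOF'
hares -add app_mnt1 CFSMount app_sg
hares -modify app_mnt1 MountPoint "/app/data1"
hares -link app_mnt1 app_voldg
hares -add web_ip IP web_sg
EOF

# Extract only the mount resources matching the naming pattern into a
# file that can be copied to and executed on the remote site.
grep "app_mnt" main.cmd > remote_mounts.cmd
cat remote_mounts.cmd
```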

You can also edit the output file to create new resources on prod by doing global substitutions, if your new mounts are named similarly to existing mounts (instead of using cfsmntadm); or, if you are good with scripting, you can create your own scripts which create SF mounts and add them to VCS.
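The global-substitution approach could look like this: clone an extracted resource file with sed, swapping the old names for new ones. The resource, mount and volume names are invented for the example, and the generated file should be reviewed before running it through hares:

```shell
# Fabricated extract of one existing CFSMount resource.
cat > app_mnt1.cmd <<'EOF'
hares -add app_mnt1 CFSMount app_sg
hares -modify app_mnt1 MountPoint "/app/data1"
hares -modify app_mnt1 BlockDevice "/dev/vx/dsk/appdg/appvol1"
EOF

# Globally substitute the names to generate the commands for a new,
# similarly named mount, then review the file before executing it.
sed 's/mnt1/mnt2/g; s/data1/data2/g; s/appvol1/appvol2/g' \
    app_mnt1.cmd > app_mnt2.cmd
cat app_mnt2.cmd
```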

