
Can we make a cluster file system available on both nodes at the same time?

Created: 16 May 2013 • Updated: 19 Jun 2013 | 4 comments
This issue has been solved. See solution.

Hello,

We have a Veritas cluster file system on 2 nodes. Now the application team has requested that the cluster file system be mounted on both nodes at the same time.

The file system is currently Active/Passive, and the customer has requested that it be made Active/Active.

Is it possible to make it Active/Active? If yes, please let me know the process.

 

Regards,

Chaitanya Bezawada. 


4 Comments

stinsong:

Hi Chaitanya,

Sure, this can be done with CFS (Cluster File System). If you have already installed SFCFS or SFRAC on the servers and created a shared disk group, you can simply configure the CFS mounts in VCS as CFSMount resources in an application service group, together with a CVM service group.

There is a lot to cover on how to make this happen, so please read through the SFCFS admin guide at the following link:

https://sort.symantec.com/public/documents/sfha/5....

The section "Adding CFS file systems to VCS configuration" generally covers how to configure CFS in VCS. It requires CVM and the disk group to already be set up in VCS, for which you can reference other parts of the document:

https://sort.symantec.com/public/documents/sfha/5....
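As a quick illustration of what the above describes: assuming the shared disk group and volume already exist ("datadg" and "data1" here are placeholder names), the cfsdgadm/cfsmntadm helpers shipped with SFCFS can generate the CVM and CFSMount resources for you. This is a hedged sketch, not a full procedure; check the SFCFS admin guide and man pages for the exact options on your version:

  # add the shared diskgroup to the cluster configuration (activation mode sw = shared-write)
  cfsdgadm add datadg all=sw
  # add a cluster mount of volume data1 at /data1 on all nodes
  cfsmntadm add datadg data1 /data1 all=rw
  # mount it cluster-wide
  cfsmount /data1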

 

 

SOLUTION
Marianne:

In addition to the excellent post above, have a look at this article:

https://www-secure.symantec.com/connect/articles/cfs-local-vxfs-and-back-again

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

ChaitanyaBezawada:

Thanks for your replies, @stinsong and @Marianne van den Berg.

I have a Veritas cluster file system "/apps" configured in a 2-node cluster in Active/Passive mode. Now I want the /apps file system mounted on both node 1 and node 2.

Please give me the procedure to make the file system available on both nodes.

mikebounds:

Procedure off the top of my head (so may miss some steps):

  1. If you haven't already configured fencing, zone 3 small LUNs to your cluster, ready for configuring the fencing disk group later (see the VCS install guide for fencing requirements)
     
  2. If you only have SFHA installed, then upgrade to SFCFS
     
  3. Stop the cluster (if it isn't already stopped from upgrading to SFCFS)
     
  4. Configure fencing (if not configured already) by running "installvcs -fencing" (see the VCS install guide; alternatively you can use "installsfcfsha -fencing" - I believe both do the same)
     
  5. Create cvm servicegroup:
    Backup main.cf
    Run "installsfcfsha -configure"
    I have only ever run "installsfcfsha -configure" on a fresh install, which is why I say to back up main.cf. I would expect it to add the "cvm" service group to your main.cf; if it instead overwrites main.cf, simply copy the cvm service group into your backed-up main.cf and then put your new main.cf back in place
     
  6. Edit main.cf and move your DiskGroup and Mount resources to a new parallel group, changing them to CVMVolDg and CFSMount resources.  For the CVMVolume attribute of the CVMVolDg resource, just pick any single volume in your diskgroup.  Also make the group dependent on cvm by adding the line
    "requires group cvm online local firm" after the resources and before the resource dependencies - Example:
    group cfs_grp (
      SystemList = { sys1 = 0, sys2 = 1 }
      AutoFailOver = 0
      Parallel = 1
      AutoStartList = { sys1, sys2 }
      )
    
    CFSMount data1_mnt (
      MountPoint = "/data1"
      BlockDevice = "/dev/vx/dsk/datadg/data1"
    )
    
    CFSMount data2_mnt (
      MountPoint = "/data2"
      BlockDevice = "/dev/vx/dsk/datadg/data2"
    )
    
    CVMVolDg data_voldg (
      CVMDiskGroup = datadg
      CVMVolume = { data1 }
      CVMActivation = sw
    )
    
    requires group cvm online local firm
    
    data1_mnt requires data_voldg
    data2_mnt requires data_voldg
    
    
    
  7. Convert your diskgroup(s) to shared (the diskgroup should be deported, as VCS is stopped):
    vxdg -s import dg_name
     
  8. Your App and virtual IP will now remain in a failover group, but if your App should no longer fail over and should instead run on both nodes at the same time, it should be moved to the parallel group; the IP configuration will depend on your App.
     
     
  9. Start VCS first on the node where you changed main.cf, then start VCS on the other node
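The key commands from the steps above, as a sketch ("datadg" is a placeholder for your diskgroup name; the install/configure steps 2, 4 and 5 are omitted, and exact behaviour may vary by version):

  # step 3: stop the cluster on all nodes
  hastop -all
  # step 7: deport the diskgroup if it is still imported, then re-import it as shared
  vxdg deport datadg
  vxdg -s import datadg
  # step 9: start VCS, first on the node where main.cf was edited, then on the other node
  hastart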

Mike

 

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below