
Unable to Snapback in Windows 2008r2 VCS

Created: 01 Feb 2013 • Updated: 01 Feb 2013 | 4 comments
This issue has been solved.

I am running SFW HA 5.1 with VCS. I have a diskgroup that is monitored by VCS. In the diskgroup I have my main drive plus 3 snapshot drives, and a scheduled task runs Snapback and then takes a new snapshot daily. This process was running fine until I added the snap drives as resources in VCS. I want the snap drives to fail over to the second node and reattach their drive letters, which is basically what VCS does, but now that I have done this I am getting error V-76-58627-1162.

How do I configure VCS to allow Snapback?


Comments (4)

mikebounds:
…which say you can't snapback if the volume is in VCS, so you must remove it from VCS first.

But your volume should still fail over, because the snapshots are in the same diskgroup as the main volume. To expand a bit more on this:

Suppose your main drive is D: and your 3 snapshots are S:, T: and U:.

If these are all mounted on node1 and you switch the service group to node2, then VCS will unassign drive D: on node1 and will NOT unassign any of the snapshot drives. But as the diskgroup is deported, the drives disappear, though the snapshot drive letters remain assigned in the registry. The diskgroup is then imported on node2 and the D: drive is assigned there; if the snap volumes have never been on node2 before, they will not be assigned drive letters. However, if you now assign S:, T: and U: to your snapshots on node2, this effectively creates a registry entry, so if you fail back to node1 the snap drives should mount on node1, and if you switch back to node2 again the snap drives should be assigned on node2.
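If you want to script that one-time drive letter assignment on node2, a rough sketch follows. The volume GUIDs are placeholders (list the real ones by running mountvol with no arguments on the node), the S:/T:/U: letters are from the example above, and the commands are only printed for review rather than executed:

```shell
# Hedged sketch: print the mountvol commands that would assign drive
# letters to the snapshot volumes on node2. mountvol is a standard
# Windows command; the GUIDs below are placeholders, not real values.
print_assignments() {
  for vol in "S: snap1-guid" "T: snap2-guid" "U: snap3-guid"; do
    set -- $vol
    echo "mountvol $1 \\\\?\\Volume{$2}\\"
  done
}

print_assignments
```

Once the letters have been assigned once on each node, the registry entries exist and VCS failover should bring them back on its own.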


UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

mhab11:
Yep, I understood all of that; that is how I was doing it. What I noticed (and I am not sure if this was actually the cause) was that when I failed over, the D: volume would offline but the DG would hang for about 120-150 secs. I can only guess that it hung because the snap volumes have drive letters that are not managed by VCS, and it was hanging trying to offline those volumes. If I am wrong in thinking that, you can stop me there.

Once I added the snap volumes to VCS, the failover almost doubled in speed: about 60-90 secs now.

mikebounds:
The alternative is for your scheduled task to remove the VCS resources before snapping back and then add them back again.  If you're not familiar with the command line for deleting and creating VCS resources, then:

hares -list Type=MountV   (this lists your main and snap drive resource names)
cd program files\VERITAS\Cluster Server\conf\config
hacf -verify . (this produces main.cmd with commands to create all resources)
findstr snap-drive1-res-name main.cmd (this gives commands to create and link snap drive resource)
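To see what those steps produce, here is a small mock-up. The main.cmd content and the resource names (snap1_mountv, data_mountv, app_grp, app_dg) are made up, and grep stands in for Windows findstr just to illustrate the filtering:

```shell
# Hedged mock-up: main.cmd below is a fabricated miniature of what
# "hacf -verify ." generates; filtering on one snap resource name
# pulls out just the commands needed to recreate and link it.
cat > main.cmd <<'EOF'
hares -add data_mountv MountV app_grp
hares -modify data_mountv MountPath "D:"
hares -add snap1_mountv MountV app_grp
hares -modify snap1_mountv MountPath "S:"
hares -link snap1_mountv app_dg
EOF

# equivalent of: findstr snap1_mountv main.cmd
grep snap1_mountv main.cmd > recreate_snap1.cmd
cat recreate_snap1.cmd
```

Saving the filtered lines to a file gives you a ready-made script for recreating that resource later.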

Repeat the findstr for your other snap drive resources and then you have all the commands to create your resources, which will be something like:

hares -add snap-drive1-res-name MountV group_name
hares -modify snap-drive1-res-name attribute_name1 attribute_value1
hares -modify snap-drive1-res-name attribute_name2 ...
hares -link ...

so then you can run:

haconf -makerw  (open config)
hares -delete snap-drive1-res-name (delete resource)
hares -delete snap-drive2-res-name
hares -delete snap-drive3-res-name
haconf -dump -makero (close config)

Then do your snapshot stuff and then run

haconf -makerw  (open config)
commands to create resources
haconf -dump -makero (close config)
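Putting the whole cycle into one scheduled-task script might look roughly like this. The resource names and attributes are placeholders, the snapback/snapshot step is left as a comment, and run() only echoes each command so the sequence can be checked before it is pointed at a real cluster:

```shell
# Hedged sketch of the full cycle: delete the snap resources, do the
# snapback/snapshot work, then replay the saved creation commands.
# run() only echoes; change it to execute for real on the cluster node.
run() { echo "$@"; }

SNAP_RESOURCES="snap1_mountv snap2_mountv snap3_mountv"

run haconf -makerw
for r in $SNAP_RESOURCES; do
  run hares -delete "$r"
done
run haconf -dump -makero

# ... your snapback and new-snapshot commands go here ...

run haconf -makerw
# replay the creation/link commands captured earlier from main.cmd,
# shown here for the first snap resource only (attributes are placeholders)
run hares -add snap1_mountv MountV app_grp
run hares -modify snap1_mountv MountPath "S:"
run hares -link snap1_mountv app_dg
run haconf -dump -makero
```

Keeping the config open (haconf -makerw) for as short a time as possible, as above, avoids leaving the cluster configuration writable if the snapshot step fails.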



mhab11:
Thanks, I am going to try this and see if it works for me.