
VCS for SQL Server 2008 R2

Created: 05 Jul 2011 • Updated: 07 Jul 2011 | 8 comments
Ashon1:
This issue has been solved. See solution.


I need some help and I hope someone here can help me.  I am trying to create a disaster recovery cluster for our SQL databases.  There are two cluster servers at each location.  I have created a cluster at location A with the notification and geocluster options enabled.  I created a second cluster at location B, also with the notification and geocluster options enabled.  I have not linked the clusters.  I need to install three instances of SQL.  I have a few questions, though.

  1. To install three instances of SQL, do I link the clusters first?
  2. Is the installation process (and storage migration) A1 -> A2 and separately B1 -> B2, with geocluster configured later? Or is it geocluster first, then install A1 -> A2 -> B1 -> B2?
  3. Do I repeat this process for EACH instance?
  4. One of the instances needs to use SQL Reporting Services.  Can I tie IIS to the SSRS service group?  The goal is that IIS would run on the server hosting SSRS but not on any of the other servers.  How would this be done?

All help is appreciated.

8 Comments

Wally_Heim:

Hi Ashon1,

The quick answer to your question is: it depends on how you want to set it up.  There is some flexibility in the installation and configuration order of SQL and GCO in this type of configuration.

You did not mention if you are using VVR so I'm going to assume that you are.

With two basic clusters set up as you describe, I would continue as follows:

1. Create disk groups and volumes on the cluster at site A on, say, node A1.

2. Install SQL instance 1 on A1, making sure to put the binaries on the local drive and the databases/logs on the shared drive(s) created in step 1.

3. Install any SQL service packs as needed.

4. Stop SQL services for SQL instance 1 and set them to manual.

5. Deport the disk group for SQL instance 1.

6. Move to node A2 and import the disk group from step 1 and mount the volumes as they were on node A1.

7. Repeat steps 2-5 on node A2.  (You will need to rename the database/log folders on the shared drive before installing SQL on node A2.)

8. Run the SQL service group configuration wizard to set up the SQL service group at this site (site A).

9. Set up replication, both primary and secondary.  Remember to use a virtual IP address at both sites for replication.  (This can be done just after step 1 if you want.)

10. Run the VVR service group configuration wizard to create the VVR service group as needed at site A.

11. Once replication is up to date and the SQL service group is offline at site A, migrate replication to site B.

12. Repeat steps 2-10 on the nodes at site B.  Ensure that the service group name at site B exactly matches the service group name at site A.

13.  Link the two clusters at sites A and B together.

14. Right-click the SQL service group in Cluster Manager and select Make Global.

Repeat these steps for the remaining SQL instances.  You will not need to link the two clusters again for the remaining instances because they are already linked.
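If you prefer the command line to the VEA GUI for the disk group hand-off in steps 5-6, it can be sketched roughly as below. The disk group name is a placeholder I made up, and the exact vxdg syntax can vary by SFW version, so confirm it against your SFW documentation:

```shell
rem On node A1, after stopping the SQL services (step 5):
rem deport the dynamic disk group ("SQL1_DG" is a hypothetical name)
vxdg -gSQL1_DG deport

rem On node A2 (step 6): import the same disk group
vxdg -gSQL1_DG import

rem Confirm the disk group is now visible on A2
vxdg list
```

Drive letters/mount points for the volumes still need to match what they were on A1, which can be done through the VEA console.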

You can manually add the IIS resource to the service group that needs it.  The IIS resource controls the IIS sites, so the IIS service will be running on all nodes, but VCS will start and stop the IIS website as needed on the node where the group is online.
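If you want to do that from the CLI instead of the GUI, it could look roughly like this. The resource, group, and attribute names below are assumptions for illustration; check the VCS Bundled Agents Reference for the IIS agent's actual attribute names on your version:

```shell
rem Open the cluster configuration for writing
haconf -makerw

rem Add an IIS resource to a hypothetical SSRS service group "SQL_SSRS_SG"
hares -add SSRS_IIS IIS SQL_SSRS_SG

rem Attribute names such as SiteName are assumptions - verify in the agent reference
hares -modify SSRS_IIS SiteName "Default Web Site"
hares -modify SSRS_IIS Enabled 1

rem Make the SSRS application resource depend on IIS so the site comes up first
rem ("SQL_SSRS_App" is a placeholder for your existing SSRS resource)
hares -link SQL_SSRS_App SSRS_IIS

rem Save and close the configuration
haconf -dump -makero
```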



Ashon1:

Thank you for your reply! 

You are right.  I didn't mention VVR, and I didn't mention storage.  There are two NetApps, one at each location, so we are going to use SnapMirror to replicate the data.  I believe I would just adjust your steps, replacing the VVR setup with configuring SnapMirror.  Correct?

I do have two follow-up questions.  1) In step 7 you are installing SQL on additional nodes.  My SQL binaries would be on the local C: drive, and I usually place the databases on shared storage.  The manual says to be sure not to install in the same place... so where do I put them?  Does it matter?

2) In step 9 you mention a virtual IP at both sites.  Is there any other way to do this?  No is an acceptable answer; I am just trying to avoid that configuration if I can.

Ashon1:

Hi Wally,

I am on step 8 and I am getting an error saying that no shared drives were discovered, and that possibly the volumes are already part of another service group in the current configuration (they are not).  Error V-16-13-179.  Any ideas?  I have tried with the drives disconnected (as your steps say) and with the drives connected.  Did something get left out?


Wally_Heim:

Hi Ashon1,

Regarding follow-up question #1: you do need to install the master database to the same path on all nodes.  In step 7 I mentioned that you need to rename the folder created by the prior SQL installation before you install SQL on the next node.  This lets you select the same installation path without overwriting the databases that the first install placed on the shared disk.

Regarding follow-up question #2: if you are using NetApp SnapMirror, then you do not need to worry about the virtual IPs needed for VVR replication.  In a VVR environment you can get around this during setup by pointing replication at the local IPs and switching them to the virtual IPs later, but that is typically more work than setting it up with the virtual IPs from the start.

Another thing to keep in mind: if you are using NetApp SnapMirror, then you do not want to use the SFW-HA product.  You want to install the VCS for Windows product, which does not include SFW (volume management).



Ashon1:

OK, I think I understand #1.  But for #2 I was more worried about having a virtual IP on two subnets.  I'm not trying to be dense (but I feel like I am).  From what you are saying, either way you always end up pointing to a virtual IP, and that IP has to be reachable from both subnets (site locations).  Correct?

I'm asking again to be sure because I'll have to buy some networking equipment to make that happen.

Wally_Heim:

Hi Ashon1,

OK, let's forget about VVR's use of virtual IPs.  Your SQL service groups will have a virtual server name and a virtual IP address in them.  With GCO, each site typically has a different IP subnet, so the IP resources in the linked SQL service groups will have different IP addresses, one per site.  SQL will use the IP address of the site where the service group is online.

The Lanman resource controls the virtual server name and can be configured to update DNS with the IP address that SQL is using.  Configuring Lanman to update DNS is the recommended configuration when using GCO.

There is also a PurgeDuplicates option in Lanman, which is likewise recommended in a GCO configuration.  PurgeDuplicates tells Lanman to remove DNS records for the virtual server name that do not match the IP address where it is currently online.  This option keeps DNS from telling clients that SQL is at site A when it is actually at site B.
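As a rough CLI sketch, enabling DNS updating and PurgeDuplicates on an existing Lanman resource might look like the following. The resource name is a placeholder, and DNSUpdateRequired is my recollection of the attribute name, so verify both against the Lanman agent documentation for your release:

```shell
rem Open the configuration for writing
haconf -makerw

rem "SQL1_Lanman" is a hypothetical Lanman resource name
rem DNSUpdateRequired is an assumed attribute name - confirm in the agent reference
hares -modify SQL1_Lanman DNSUpdateRequired 1
hares -modify SQL1_Lanman PurgeDuplicates 1

rem Save and close the configuration
haconf -dump -makero
```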

I hope this helps clear up your questions about virtual IP usage in your configuration.  If not, please feel free to let me know.



Ashon1:

When I try to bring the SQL service group online, I am getting an error that the NetApp resources are faulting on both servers.  None of the SnapDrive resources are coming online, so the group isn't coming online.  Any ideas on how to clear the fault or make sure that the NetApp and SAN resources come online?

Wally_Heim:

Hi Ashon1,

In Cluster Manager, you can right-click the service group and select Clear Fault > Auto.

As for determining what is going on, check the agent logs located in %VCS_HOME%\log.  The agent logs are named <agent name>_A.txt.

If the agent log does not help, then you might need to increase logging for the agent by adding debug levels to the LogDbg attribute of the agent type.
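For example, to look at the SnapDrive agent's log and turn up its debug logging (I am assuming the agent type name is NetAppSnapDrive, which is what VCS for Windows uses for SnapDrive resources; confirm the exact name with `hatype -list` on your system):

```shell
rem View the agent log for the NetApp SnapDrive agent
type "%VCS_HOME%\log\NetAppSnapDrive_A.txt"

rem Add debug levels to the agent type (requires a writable configuration)
haconf -makerw
hatype -modify NetAppSnapDrive LogDbg -add DBG_1 DBG_2
haconf -dump -makero
```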