
How do I set up my bunker and secondary together?

Created: 01 Nov 2012 • Updated: 02 Nov 2012 | 6 comments
This issue has been solved. See solution.

I have just begun to use VR. I have played with setting up a bunker and a secondary, but not together.

Do I have my secondary replicate from my bunker?

Do they work completely separately?

Once testing is done and I field this, my bunker will be in the same data center as my primary, and my secondary will be in another state.

6 Comments

mikebounds

A bunker is a secondary that has no data, just the SRL.  A typical scenario would be:

  • Replicate to a bunker that is 10 km away synchronously
  • Replicate to a DR site that is 1000 km away asynchronously

In normal operation the primary sends writes to both the bunker and the DR site (the bunker does NOT send writes to the secondary). If the primary site fails, the bunker is converted to a primary and sends any outstanding writes to the DR site; when this is complete, the DR site becomes the primary and your application runs using the data at the DR site.
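
To make this concrete, the commands would be something like the following (all names made up: diskgroup hrdg, bunker diskgroup hrdg_bunker, RVG hr_rvg, primary seattle, bunker portland, DR london) - check the VVR admin guide for your version, as the exact options vary:

  # On the primary: add the DR secondary (asynchronous by default)
  vradmin -g hrdg addsec hr_rvg seattle london

  # Add the bunker; -bdg names the bunker diskgroup on the bunker host
  vradmin -g hrdg -bdg hrdg_bunker addbunker hr_rvg seattle portland

  # Make replication to the bunker synchronous
  vradmin -g hrdg set hr_rvg portland synchronous=override

  # Start replication to both sites (-a autosyncs the data volumes;
  # the bunker has none, only the SRL)
  vradmin -g hrdg -a startrep hr_rvg london
  vradmin -g hrdg -a startrep hr_rvg portland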

If the primary site and the bunker are close together, then a disaster may affect both primary and bunker, in which case you would have to accept that the DR site may be behind.

I don't think there is much point putting your bunker in the same data centre as the primary - if you did, you would need to make your bunker as independent as possible: a server in a separate rack (physically far from the primary if possible), a separate power supply, and the bunker SRL on a separate array or on local storage.

You need to consider what you are protecting yourself against, as this does not give you zero data loss if you lose your primary data centre. So:

  1. Protection against loss of servers:
    A 3-node cluster gives better protection than a 2-node cluster + bunker server, as in a 3-node cluster the 3rd node can actually run the application, whereas the bunker server cannot. The advantage of the bunker server is that it does not need to be of high spec, as it does not have to run the application, so it is a bit cheaper.
     
  2. Protection against loss of array:
    Mirroring JUST the SRL to another array (not using bunker replication) gives similar protection to putting a bunker SRL on a separate array: if you were to lose your primary array but still have one half of your SRL mirror, then in theory VVR should be able to continue and replicate outstanding writes to the DR site (see the sketch below). So the only advantage I can see of using a bunker is that if you only have 1 array, a bunker can use local storage (you can't mirror the SRL to local storage). Better protection against array loss is to mirror the whole storage to another array, but this is more expensive as it requires double the storage.
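
For the SRL-mirror option, a one-line sketch with made-up names (Unix-style vxassist shown; on SFW for Windows you would do the same through VEA or the Windows vxassist):

  # Add a mirror of just the SRL volume hr_srl in diskgroup hrdg
  # onto a disk that lives in the second array (disk name hypothetical)
  vxassist -g hrdg mirror hr_srl array2_disk01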
     

So why were you thinking of using bunker replication?

Mike

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

mhab11

Here is what I have: 2 servers running SFWHA with VCS, with 2 disk arrays attached to these servers. I also have a Domain Controller that doubles as a backup box. I am going to be adding 1 server in another state that I was going to use VR to keep up to date. As I already had that backup box, I was going to add a bunker to it, so that if there was a problem it would replicate the most current data to the other state. I must have misunderstood what the purpose of the bunker was.

 

My plan: if the 2 main servers went down, the secondary would look to the bunker for any missing data. I am looking at this as a backup and not a true disaster - such as someone killing the power to my rack by accident; as dumb as that sounds, it has happened in the last year. Anyway, in the event of a true disaster where my whole site is out, services would move to the secondary in another state.

 

I have been reading the VR admin guide as I set up and test, so I am sure I will have a bunch more questions that I get stuck on. If you have any early advice for me, I would love the input.

 

Thanks for the help.

mikebounds

You should put your 2 VCS nodes in separate racks, so that if someone kills the power to one rack, you can fail over to the 2nd VCS node in the other rack.

I have a few questions:

  1. Is the 3rd node, which you are planning to use as a bunker, SAN-attached, so it could potentially import the diskgroup that is shared between the 2 cluster nodes?
  2. Is the storage for the 2 cluster nodes mirrored across the 2 arrays using vxvm mirroring?
  3. What storage are you planning to use for the bunker SRL - mirrored across 2 arrays, or some other storage?
  4. How are you planning to replicate to the bunker SRL - over IP or over SAN?
  5. How much CPU and memory does the bunker node have compared to the 2 cluster nodes?

Mike

 

 

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

mhab11

1. No, the 3rd node is not attached to the disk arrays. If it would give better redundancy, I could add a card and make this happen.

2. Yes, the 2 arrays are mirrored through VEA.

3. Currently the 3rd node only has 2 HDDs for the OS; I was going to add more drives that I could replicate to.

4. Replication over IP. The 3rd node (local) will have a 1 Gb connection; the 4th node (other state) will have around a 1 Mb connection - I am still testing that.

5. All servers have 16 GB RAM and the same CPU.

mikebounds

I would add an FC card to your 3rd node, zone the diskgroup used in your cluster to this 3rd node, and not use bunker replication. Below is a comparison against using the 3rd node as a bunker if you lose both your cluster nodes.

  1. To replicate any outstanding writes (see the command sketch after this list), with:
    3rd node as a bunker you will need to:
      Import and recover the bunker diskgroup
      Activate the bunker
      Start replication to DR to replicate any outstanding writes

    3rd node with access to the replicated diskgroup:
      Import and recover the replicated diskgroup
      Replication will automatically resume to replicate any outstanding writes

  2. After any outstanding writes are replicated, with:
    3rd node as a bunker, when you online the application service group at DR:
      The RVGPrimary agent will do a takeover, as the primary diskgroup is not available
      This means writes at DR will be tracked in the DCM and cannot be replicated back to the primary site
    3rd node with access to the replicated diskgroup:
      The RVGPrimary agent will do a migration, as the primary diskgroup is available on the 3rd node
      This means writes at DR will be replicated back to the primary diskgroup on the 3rd node

  3. When at least one of your cluster nodes comes back, to recover with:
    3rd node as a bunker you will need to:
      Initiate a DCM replay, which means you will have to wait while data is transferred
      During the DCM replay your primary diskgroup is inconsistent, as DCMs are bitmaps and do not record the order of the writes like the SRL does
      If you lose your DR node during the DCM replay, you cannot fail back to the primary site as the diskgroup is corrupt, and your only option would be to revert to a backup, or to a snapshot if you took one before the DCM replay
      After the DCM replay, you will need to reactivate the bunker

    3rd node with access to the replicated diskgroup:
      Replication is already in place, so you just need to switch the diskgroup back to the cluster nodes to resume
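
Roughly, the two paths would be something like the following (same made-up names as before: hrdg, hrdg_bunker, hr_rvg, DR host london) - check the exact sequence in the VVR admin guide for your version:

  # Path 1: 3rd node as a bunker, run on the bunker node
  vxdg import hrdg_bunker                        # if not already imported there
  vradmin -g hrdg_bunker activatebunker hr_rvg   # convert bunker to acting primary
  vradmin -g hrdg_bunker -b startrep hr_rvg london   # replay outstanding writes
  # ...wait for the SRL replay to complete, then:
  vradmin -g hrdg_bunker stoprep hr_rvg london
  vradmin -g hrdg_bunker deactivatebunker hr_rvg
  # On the DR node (takeover: DR writes are then tracked in the DCM):
  vradmin -g hrdg takeover hr_rvg

  # Path 2: 3rd node zoned to the replicated diskgroup, run on the 3rd node
  vxdg import hrdg        # import and recover the replicated diskgroup
  vxrecover -g hrdg -s    # start the volumes; replication resumes automatically
  # A graceful role swap to DR is then a migrate rather than a takeover:
  vradmin -g hrdg migrate hr_rvg london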

The only advantage I can see of using the 3rd node as a bunker (other than that the bunker node does not require an FC card) is if you lost both arrays. But you would have to have a pretty severe event to lose two independent arrays, and any event that causes you to lose both arrays would probably mean you lose your bunker node too, in which case you wouldn't be any better off. If you want to protect against losing 2 arrays and you have a 3rd array, you could mirror JUST the SRL to the 3rd array; then you could force import the diskgroup (which would be missing all the data and 2 copies of the SRL) containing the copy of the SRL from the 3rd array, and in THEORY it should replicate outstanding writes to the DR site. Or, if you want to protect against losing 2 arrays, you could maybe use the 3rd node as a bunker node, with the 3rd node also capable of importing the replicated diskgroup, so that you have 2 options depending on what hardware fails.
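
The force import would be something like the lines below (made-up names again, and as I say this is theory, so test it before relying on it):

  # On a surviving host: force-import the diskgroup even though the data
  # plexes (and the SRL copies on the failed arrays) are missing
  vxdg -f import hrdg
  vxrecover -g hrdg -s    # start what can be started; VVR should then replay the SRL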
 
Also, if your 3rd node has the same CPU and memory as the cluster nodes, then I would add it as a 3rd node to the cluster, as this just requires a VCS licence and possibly a licence for your application. Having the opportunity to run your application on your 3rd node is better than an extended outage: failover to DR will probably take 10 - 60 minutes, as it is manually initiated, so you need to wait for someone to take the decision to fail over to DR (and with a bunker you will have to run additional tasks on the bunker node), and the DR site may need time to catch up if your bandwidth is less than the peak I/O write throughput. Failover in VCS can be sub one minute, so your application will be available much quicker with 3 nodes in the cluster if you lose 2 nodes.
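
Once VCS is installed on the 3rd node and it has joined the cluster, letting the service group run there is just a couple of commands (service group name app_sg and node name node3 are made up):

  haconf -makerw
  hagrp -modify app_sg SystemList -add node3 2    # node3 at priority 2
  hagrp -modify app_sg AutoStartList -add node3
  haconf -dump -makero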
 
Mike

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

SOLUTION
mhab11

Thanks for all the advice. I am going to try each idea and see what works best. Getting another HBA for the 3rd node is no problem, so I will start there.