
Application awareness of virtual storage units with clustered SAN media servers using the Enterprise Client license in NBU 7.5

Created: 08 Jun 2012 • Updated: 09 Jun 2012 | 11 comments
Posted by GHS-NBU-ASC
This issue has been solved. See solution.

Dear Experts,

The setup is an NBU 7.5.0.1 clustered master server, a media server, and four clustered SQL servers with multiple instances online on each node of the cluster. I have created a virtual storage unit and configured each node as a SAN media server, and an Enterprise Client license and the database agent are installed on each node. The policies for SQL will be created with the virtual names of the SQL instances.

My concern is that I want to make sure the backup goes locally, directly to the SAN. In other words, the media server used for the backup SHOULD be the node that is active for that particular instance.

Regards,

 

Comments (11)

pedrogarciar wrote:

Hello,

 

Create a virtual storage unit for each database instance. In the "Media server" setting of each storage unit, use the DNS name of the cluster service for that application.

That way, if the clustered application fails over to another node, because the virtual storage unit references the cluster service name, the media server ends up being the node the service has failed over to, and the backup will always be local.

 

Regards.

 

RLeon wrote:

Since you have already created a "virtual storage unit" I assume you understand the difference between a "virtual" and a "normal" storage unit.

But as pedrogarciar has pointed out, you have to create one virtual storage unit for each of your database instances. One is not enough because you said you have multiple instances, and each of them could move between the cluster nodes independently, and without regard to the node-location of other instances.

The only thing I would add, which has not been mentioned, is that all the SAN media servers (a.k.a. your cluster nodes) will have to have shared access to the same storage(s) for this to work. In other words, all your virtual storage units will have to share the same target storage.

If you are using a fibre-connected tape library, you need the NetBackup SSO license, and you must zone this library to all the SAN media servers so they can all back up to it.
If you also zone it to the master server, you can let the master server be the robot control host.

If you are using disk-based storage instead of a tape library, you will need the NetBackup AdvancedDisk license (included in either the Enterprise Disk license or the Data Protection Optimization license).
The reason you need AdvancedDisk is that it is the only disk storage method in NetBackup that allows shared access from multiple media servers.
Neither BasicDisk nor a Deduplication Disk Pool allows shared, concurrent access to the exact same network UNC/SMB path or file system mount point.
And as we all know, when an application changes nodes in a cluster, a "virtual storage unit" would continue to work ONLY IF the underlying backup storage is shared between the nodes.
If you are on the Windows platform, your shared AdvancedDisk storage must be CIFS-based as far as I know. If you are on a non-Windows platform, you can use a "real" clustered file system such as VxFS for the shared AdvancedDisk storage. In most cases, a clustered file system's block-level shared access performs better than CIFS or NFS shared access.

So basically: for tape, SSO.
For disk, AdvancedDisk with shared access either via network shares or via clustered file systems.
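
As a rough sanity check once that shared storage is in place (both are standard NetBackup commands; treat this as a suggestion, not a required step):

tpconfig -d     (run on each cluster node; every node should report the same SSO-shared robot and drives)
nbdevquery -listdp -stype AdvancedDisk -U     (run on the master, disk route only, to confirm the shared disk pool exists)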

You can find more information in the NetBackup 7.5 HA admin guide, in the section titled "About installing NetBackup media software in a cluster".
Note: The things I said about using AdvancedDisk for your kind of setup aren't made very clear in the guide. If you just follow the guide, you could be "tricked" into simply creating independent BasicDisk storage for each of your virtual storage units, in which case there could be problems during restores where backup images cannot be located because they are on another node.

RLeon

RLeon wrote:

OK, I just checked the Deduplication guide, and it only says that iSCSI PDDO cannot be clustered.
Is it implying that MSDP can be clustered, as in using a clustered file system with concurrent block-level access from multiple MSDP storage (media) servers, similar to an AdvancedDisk setup?
I doubt it, but perhaps someone could confirm.

RLeon

GHS-NBU-ASC wrote:

Hi pedrogarciar and RLeon,

Thanks so much for shedding light on this. The only thing I did not understand is creating a storage unit for each instance. Also, how can I use the instance name as the media server name?

Marianne wrote:

I agree with pedro:

Each Instance should be associated with a Virtual hostname using a Virtual IP.

What I am missing from all the advice given so far is the creation of an Application Cluster in NBU.

So, first of all, ensure that the master can resolve each Virtual hostname. Add each Virtual hostname as a SERVER or MEDIA_SERVER entry in the Master's server list.
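For example, a quick way to confirm the resolution part from the master is the standard bpclntcmd client command (VHostA below is just a placeholder for one of your virtual hostnames):

bpclntcmd -hn VHostA

It should return the virtual IP assigned to that instance; if it does not, fix DNS or hosts-file entries before going any further.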

Next, use nbemmcmd commands on the master to create Application Cluster definitions in the NetBackup EMM database for each virtual name the cluster uses.

A detailed description starts on page 20 of the NetBackup in Highly Available Environments Administrator's Guide (http://www.symantec.com/docs/DOC5183), with steps and commands on p. 27.


GHS-NBU-ASC wrote:

Dear Marianne,

Thanks a lot for your usual support. What hosts should I associate with the virtual storage unit?

I believe we need to run nbemmcmd first to create the app_cluster entry and then add hosts to it.

Regards,

Gulzar

Marianne wrote:

You need to add the physical node names of all the machines that form the cluster.
The manual contains all the commands with good explanations and examples.


RLeon wrote:

You were using the term "virtual storage unit", so I assumed that you had already read through and understood everything about using nbemmcmd to create "application cluster media servers" and how to add nodes to them.
But it seems there is some confusion, so I will attempt to explain some more.

I said:

But as pedrogarciar has pointed out, you have to create one virtual storage unit for each of your database instances. One is not enough because you said you have multiple instances, and each of them could move between the cluster nodes independently, and without regard to the node-location of other instances.

That's what normal MS SQL fail-over cluster configs do.
For example, say you have HostA and HostB, and 3 virtual SQL applications running on those hosts, named V-SQL-A, V-SQL-B, and V-SQL-C.
If you have set up your Windows failover cluster properly, those 3 virtual applications could each independently "travel between" HostA and HostB.
So you could have a situation where HostA "serves" V-SQL-A and V-SQL-B, while HostB "serves" V-SQL-C.
If HostA fails, V-SQL-A and V-SQL-B fail over to HostB, so HostB "serves" all 3 virtual applications.

What you have to understand is that from NetBackup's perspective, it is dealing with 5 different hosts.
Not 2, not 3, five:
HostA, HostB, V-SQL-A, V-SQL-B, and V-SQL-C.

Next, understand the following problem.
I said:

And as we all know, when an application changes nodes in a cluster, a "virtual storage unit" would continue to work ONLY IF the underlying backup storage is shared between the nodes.

If you create a "normal" storage unit for HostA (meaning that NetBackup records the storage as being attached to only HostA), you will only be able to back up a virtual application if it is currently "on" HostA.
Say HostA currently serves V-SQL-A and V-SQL-B, while HostB serves V-SQL-C; that means only V-SQL-A and V-SQL-B can be backed up directly to HostA's storage unit.
If you attempt to back up V-SQL-C to HostA's storage unit, the data may travel through the LAN in the following manner:
V-SQL-C --> HostB --> LAN --> HostA --> HostA's storage unit
...which is NOT what you would want.

What if V-SQL-A fails over to HostB? Unless you update the policy so that V-SQL-A backs up to HostB's own storage unit, the above problem would happen to V-SQL-A, where data travels through the LAN back to HostA's storage unit.

Now that you understand the problems with virtual applications moving around inside a failover cluster, you can start to understand what a "virtual storage unit" is. (I know you kept using the term...)
I said:

You can find more information in the NetBackup 7.5 HA admin guide, in the section titled "About installing NetBackup media software in a cluster".

If you read through pages 24 to 29, which is the above section, you will see that a so-called "virtual storage unit" is basically just a storage unit created against a virtual application's name (e.g., V-SQL-A) as if it were a media server. But of course, since it is not really a normal media server, you have to follow the guide and use the nbemmcmd command to add each virtual application's name as a cluster app media server.
So using my above example, you have to create a cluster app media server for each of V-SQL-A, V-SQL-B, and V-SQL-C. And you will have to use the nbemmcmd command again to associate HostA and HostB with each of the 3 cluster app media servers. That tells NetBackup that HostA and HostB are the actual physical nodes that "serve" V-SQL-A, V-SQL-B, and V-SQL-C.

Next, now that the 3 virtual applications are some kind of media servers from NetBackup's perspective, you can scan for storage devices and create storage units out of them the usual way (Device Config Wizard or commands).
The resulting 3 storage units will be associated with the 3 virtual app names (V-SQL-x), and not with the 2 physical node names (HostA and HostB). This type of storage unit is referred to as a "virtual storage unit" because it is associated with a virtual application name, not with the underlying physical node hostnames.
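
(Side note: once such storage units exist, bpstulist -U on the master is a quick way to confirm the association; it is a standard command and this check is only a suggestion.)

bpstulist -U

Each storage unit's host/media server field should show the virtual app name (V-SQL-A and so on), not HostA or HostB.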

If you understand the above, then you will understand why I said:

...that all the SAN media servers (a.k.a. your cluster nodes) will have to have shared access to the same storage(s) for this to work. In other words, all your virtual storage units will have to share the same target storage.

Only if the underlying storage (tape or disk) is shared between HostA and HostB would it be possible for a virtual application to continue to back up to it LOCALLY after a failover from one cluster node to another. Why? Because it is the same storage regardless of which node the virtual app is running on.
Also, since the storage is shared, when you run the Device Config Wizard for each of the virtual applications, it does not matter which physical node the virtual application is on when you run the wizard, because it is the same storage regardless of which node you run it for.

Hope that clears everything.

RLeon

GHS-NBU-ASC wrote:

Hi RLeon,

First of all, thanks a lot for spending the time to clear things up. I still have doubts about creating the storage unit. Let me explain:

We will take the same example: NodeA, NodeB, and cluster instances VHostA, VHostB and VHostC.

The HA guide gives the details for creating the virtual storage unit using nbemmcmd -addhost.

So we can use the following command to create the app_clusters:

nbemmcmd -addhost -machinename <VHostA> -machinetype app_cluster

After that, when we associate hosts with this app_cluster, what exactly should I associate it with? If we associate it with the physical nodes, then there is a possibility that backups will NOT run LOCALLY.

Can you please explain, in terms of the storage unit and the app_cluster, what the configuration should be? Also, my understanding is that when we create the storage unit, we must select the app_cluster as the "media server".

Regards,

Gulzar

RLeon wrote:

Step 1: Make sure you have storage shared between NodeA and NodeB, tape or disk. Read my first post about it.

Step 2: Add NodeA, NodeB, VHostA, VHostB and VHostC to each of the following (a bp.conf sketch follows the list):
 - Master's host properties' server list
 - NodeA's host properties' server list
 - NodeB's host properties' server list
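
On a UNIX/Linux system those server lists are just bp.conf entries; as a rough sketch (using the example names from this thread), the master's bp.conf would end up with entries like:

SERVER = NbuMaster
MEDIA_SERVER = NodeA
MEDIA_SERVER = NodeB
MEDIA_SERVER = VHostA
MEDIA_SERVER = VHostB
MEDIA_SERVER = VHostC

On the Windows nodes, the equivalent list is under Host Properties > Servers (or the matching registry entries).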

Step 3:

nbemmcmd -addhost -machinename VHostA -machinetype app_cluster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeA -machinetype media -clustername VHostA -netbackupversion 7.5 -masterserver NbuMaster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeB -machinetype media -clustername VHostA -netbackupversion 7.5 -masterserver NbuMaster

nbemmcmd -addhost -machinename VHostB -machinetype app_cluster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeA -machinetype media -clustername VHostB -netbackupversion 7.5 -masterserver NbuMaster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeB -machinetype media -clustername VHostB -netbackupversion 7.5 -masterserver NbuMaster

nbemmcmd -addhost -machinename VHostC -machinetype app_cluster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeA -machinetype media -clustername VHostC -netbackupversion 7.5 -masterserver NbuMaster
nbemmcmd -updatehost -add_server_to_app_cluster -machinename NodeB -machinetype media -clustername VHostC -netbackupversion 7.5 -masterserver NbuMaster

After Step 3, your master server will think you have added 3 new media servers: VHostA, VHostB and VHostC. (We know they are not "normal" media servers; they are "app_cluster" media servers.)
Then just configure storage for each, just like you would for normal media servers.
You don't have to worry about which Node's storage the Device Config Wizard is actually scanning, because it is the same storage regardless of which Node you scan it from.
In other words, you won't have to worry about which VHost is on which Node.
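
If you want to double-check the EMM entries at this point, run this on the master (nbemmcmd -listhosts is a standard command):

nbemmcmd -listhosts

The three VHost names should be listed with the app_cluster machine type, alongside NodeA and NodeB as media servers.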

Step 4:
Create a storage unit for each of the 3 "media servers".
Set the storage units up such that all 3 storage units point to the same storage. (E.g., they all point to the exact same tape library that has been SSO shared.)
We call them virtual storage units because they are not associated with physical hosts such as NodeA; they are associated with virtual application names such as VHostB, and these virtual application names can "travel" between the underlying Nodes during a failover. (A command-line example follows the list below.)
So for example, you now have 3 virtual storage units with the following names:
1. VHostA-TLD0
2. VHostB-TLD0
3. VHostC-TLD0
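
If you prefer the command line over the wizard, each of those storage units could be created with something like the following (bpstuadd is the standard command; the density, robot type and robot number are only example values for an SSO-shared TLD robot, so substitute your own):

bpstuadd -label VHostA-TLD0 -host VHostA -density hcart -rt TLD -rn 0

Repeat for VHostB-TLD0 and VHostC-TLD0, changing only -label and -host.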

Step 5:
Create a policy for each VHost you want to back up. For example (a command-line sketch follows the list):
 - Policy1: Storage unit is VHostA-TLD0, client list is VHostA, selection list is the DBs you want to back up.
 - Policy2: Storage unit is VHostB-TLD0, client list is VHostB, selection list is the DBs you want to back up.
 - Policy3: Storage unit is VHostC-TLD0, client list is VHostC, selection list is the DBs you want to back up.
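
If you would rather script the policies than click through the GUI, a rough outline for Policy1 could look like this. The commands (bppolicynew, bpplinfo, bpplclients, bpplinclude) are standard, but the hardware/OS strings and the batch file path are only placeholders, and you still need to add schedules (bpplsched or the GUI):

bppolicynew Policy1
bpplinfo Policy1 -set -pt MS-SQL-Server -residence VHostA-TLD0
bpplclients Policy1 -add VHostA Windows-x64 Windows2008
bpplinclude Policy1 -add C:\backup_scripts\VHostA_full.bch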
 

That's it really. There is absolutely NO WAY that the backups would NOT run LOCALLY, unless you messed up the SSO tape/AdvancedDisk shared storage part.

To explain with an example:
VHostA is currently "on" NodeA, Policy1 runs, VHostA's DBs are to be backed up to the virtual storage unit VHostA-TLD0, as configured in the policy's attributes.
Where does VHostA-TLD0 point to?
To whichever Node VHostA is currently "on" when the backup starts.
In this case, it is pointing to NodeA's LOCALLY CONNECTED TAPE LIBRARY.
What that means is, VHostA backs up LOCALLY to NodeA's tape library.

"What if VHostA fails over from NodeA to NodeB?" I heard you asked.
The following happens:
VHostA is currently "on" NodeB, Policy1 runs, VHostA's DBs are to be backed up to the virtual storage unit VHostA-TLD0, as configured in the policy's attributes.
Where does VHostA-TLD0 point to?
To whichever Node VHostA is currently "on" when the backup starts.
In this case, it is pointing to NodeB's LOCALLY CONNECTED TAPE LIBRARY.
What that means is, VHostA backs up LOCALLY to NodeB's tape library.

As you can see, all data travels locally from any Node to the same storage. VHost failovers would not break this. No data will travel through the LAN from one Node to another.

RLeon

SOLUTION
GHS-NBU-ASC wrote:

A million thanks to you!!! I will try these steps and will let you know how it goes.