Using NetBackup AdvancedDisk with Storage Foundation Cluster File System

Article: TECH68582  |  Created: 2009-01-09  |  Updated: 2009-01-11  |  Article URL: http://www.symantec.com/docs/TECH68582
Article Type: Technical Solution

Solution



Introduction
AdvancedDisk is one of the disk types that form the NetBackup Disk Foundation feature set introduced in NetBackup 6.5. The disk types covered by the NetBackup Disk Foundation can make use of many advanced features, including storage lifecycle policies, advanced staging, protection service levels, automatic capacity management, and media server load balancing. AdvancedDisk makes use of disk pools to allow multiple volumes and file systems to be aggregated together and associated with multiple disk storage units.
With the introduction of NetBackup 6.5.2, the capabilities of AdvancedDisk have been extended to allow disk pools to be written to and read by multiple media servers. Using this capability requires the disks to be presented to the media servers simultaneously, using either a network file system such as NFS or a cluster file system such as Veritas Storage Foundation Cluster File System (SFCFS).
SFCFS and Storage Foundation Cluster File System High Availability (SFCFS HA, a combination of SFCFS and Veritas Cluster Server) provide clustering functionality for the Veritas File System (VxFS) and are independently licensed features of the Storage Foundation product set.
This document describes how to configure AdvancedDisk with SFCFS to create a resilient disk-based storage platform for short-retention backups and for 'staging' backups prior to transferring them to long-term storage media such as deduplicating disk storage or tape.
Considerations when using AdvancedDisk with 'shared' disk pools
The individual volumes in a disk pool are in either an 'up' or a 'down' state. If a volume is 'down' it is not accessible for writing or reading. A volume is marked down automatically if a media server is unable to access it. In an environment where shared disk pools are used, if a volume is inaccessible to one media server it is marked down and becomes inaccessible to all media servers until the problem is rectified and the volume is marked up again.
This is an important consideration when adding new volumes to an existing disk pool: the new volumes must be accessible to all media servers that can access the pool at the time they are added.
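Before adding volumes to a shared pool, a simple pre-flight check on each media server can confirm that every mount point is present and writable. The helper below is a sketch, not a NetBackup tool, and the mount point names passed to it are placeholders:

```shell
# Hypothetical pre-flight check (not part of NetBackup): verify that
# every disk pool mount point is present and writable on this media
# server, since one inaccessible volume is marked down for all servers.
check_mounts() {
  for mp in "$@"; do
    if [ ! -d "$mp" ] || ! touch "$mp/.nbu_probe" 2>/dev/null; then
      echo "NOT ACCESSIBLE: $mp"
      return 1
    fi
    rm -f "$mp/.nbu_probe"
    echo "OK: $mp"
  done
}

# Example: run on every media server before adding volumes to the pool
# check_mounts /adv-vol01 /adv-vol02
```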
Considerations when using SFCFS
SFCFS requires a dedicated network for GAB (Group Membership Services/Atomic Broadcast) and LLT (Low Latency Transport). While a single additional network connection can provide the connectivity required for GAB and LLT, it is recommended that two dedicated links be provided between the servers to improve resilience, as they would be for a Veritas Cluster Server implementation.
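For illustration only, a minimal Linux /etc/llttab for a two-node cluster with two dedicated links might resemble the fragment below; the node name, cluster ID, and interface names are assumptions for this example, and the SFCFS Installation Guide remains the authoritative reference for the format.

```
set-node sfcfs1
set-cluster 101
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
```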
Configuring AdvancedDisk with SFCFS
The following sections provide examples of how to configure AdvancedDisk with SFCFS using these host names:
nbumaster: The NetBackup master/EMM server.
sfcfs1: SFCFS node 1, which is also a NetBackup media server.
sfcfs2: SFCFS node 2, which is also a NetBackup media server.
If the nodes in the configuration are used exclusively as NetBackup media servers, only SFCFS is required for the solution. If the nodes also support highly available applications (e.g. SAN media servers in an active/active cluster environment), SFCFS HA is required to configure application failover. Examples of both SFCFS and SFCFS HA configurations are provided here.
Section A – Installing required software components
  1. Install and configure a NetBackup master/EMM server. Refer to the NetBackup Installation Guide for details.
  2. Install and configure SFCFS on the media servers that have access to the SAN storage. Refer to the SFCFS Installation Guide for details.
  3. Install and configure a NetBackup media server on all SFCFS nodes. Refer to the NetBackup Installation Guide for details.
  4. If you are using SFCFS HA (i.e. SFCFS with VCS), skip to Section C. Otherwise, proceed to Section B.
Section B – SFCFS Configuration
  1. Identify the Cluster Volume Manager (CVM) master node. This can be identified using the following command.
sfcfs1:/>vxdctl -c mode
mode: enabled: cluster active - MASTER
master: sfcfs1
If the node is not the CVM master, the output will be:
sfcfs2:/>vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: sfcfs1
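Scripts that must run only on the CVM master can extract the master name by parsing this output; a minimal sketch:

```shell
# Print the CVM master node name, given `vxdctl -c mode` output on stdin.
cvm_master() {
  awk '/^master:/ {print $2}'
}

# Typical use on an SFCFS node (requires VxVM):
# vxdctl -c mode | cvm_master
```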
2. Create the CVM disk group(s) from the CVM master node. The example below uses a single disk group.
sfcfs1:/>vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdd          auto:cdsdisk    -            -            online
sde          auto:cdsdisk    -            -            online
In the above example, the disks sdd and sde are on SAN storage visible from all nodes in the SFCFS cluster.
Initialize a new disk group, advdg, as a shared disk group containing the disks sdd and sde (run from the CVM master node), then verify the configuration:
sfcfs1:/>vxdg -s init advdg advdg01=sdd advdg02=sde
sfcfs1:/>vxdg list advdg
Group:     advdg
dgid:      1235495025.29.sfcfs1
import-id: 33792.28
flags:     shared cds
version:   140
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: sfcfs1=sw sfcfs2=sw
ssb: on
autotagging: on
detach-policy: global
dg-fail-policy: dgdisable
copies:    nconfig=default nlog=default
config:    seqno=0.1028 permlen=51360 free=51356 templen=2 loglen=4096
config disk sdd copy 1 len=51360 state=clean online
config disk sde copy 1 len=51360 state=clean online
log disk sdd copy 1 len=4096
log disk sde copy 1 len=4096
sfcfs1:/>vxdisk list | grep advdg
sdd          auto:cdsdisk    advdg01      advdg        online shared
sde          auto:cdsdisk    advdg02      advdg        online shared
At this time, the disk group advdg will be visible on all nodes. You may verify this using 'vxdg list advdg' on all SFCFS nodes.
3. Create volumes on the available disks. First the maximum available space on each disk is identified, and that value is used as the volume size.
sfcfs1:/>vxassist -g advdg maxsize alloc=advdg01
Maximum volume size: 8304640 (4055Mb)
sfcfs1:/>vxassist -g advdg make adv-vol01 8304640
The above command creates a volume adv-vol01 that uses all the available space from the disk advdg01.
Repeat for other available disks.
sfcfs1:/>vxassist -g advdg maxsize alloc=advdg02
Maximum volume size: 8304640 (4055Mb)
sfcfs1:/>vxassist -g advdg make adv-vol02 8304640
TIP: Use the vxprint command to verify that the volumes have been created.
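The maxsize/make pair can be scripted for any number of disks. The sketch below parses the sector count from the vxassist output; the disk and volume names follow this example, and the loop is guarded so it only executes where VxVM is installed:

```shell
# Extract the sector count (4th field) from `vxassist maxsize` output,
# e.g. "Maximum volume size: 8304640 (4055Mb)" -> 8304640
maxsize_sectors() {
  awk '/Maximum volume size:/ {print $4}'
}

# Sketch: create one max-size volume per disk in the advdg disk group.
if command -v vxassist >/dev/null 2>&1; then
  for i in 01 02; do
    size=$(vxassist -g advdg maxsize alloc=advdg$i | maxsize_sectors)
    vxassist -g advdg make "adv-vol$i" "$size"
  done
fi
```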
4. Create file systems on the volumes just created. In the example below the '-t' switch is used because the media servers are all Linux; use the appropriate switch for the platform in use (e.g. '-F' on Solaris, '-V' on AIX).
sfcfs1:/>mkfs -t vxfs /dev/vx/rdsk/advdg/adv-vol01
   version 7 layout
   8304640 sectors, 4152320 blocks of size 1024, log size 16384 blocks
   largefiles supported
sfcfs1:/>mkfs -t vxfs /dev/vx/rdsk/advdg/adv-vol02
   version 7 layout
   8304640 sectors, 4152320 blocks of size 1024, log size 16384 blocks
   largefiles supported
5. Create the mount points, then mount the file systems using the 'cluster' option.
sfcfs1:/>mkdir /adv-vol01
sfcfs1:/>mount -t vxfs -o cluster /dev/vx/dsk/advdg/adv-vol01 /adv-vol01
sfcfs1:/>mkdir /adv-vol02
sfcfs1:/>mount -t vxfs -o cluster /dev/vx/dsk/advdg/adv-vol02 /adv-vol02
Update /etc/fstab (on Linux), or its equivalent for the platform involved, so that the file systems are mounted automatically on reboot. For example, on Linux add entries as follows.
/dev/vx/dsk/advdg/adv-vol01 /adv-vol01 vxfs cluster 0 0
/dev/vx/dsk/advdg/adv-vol02 /adv-vol02 vxfs cluster 0 0
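Because the entries follow a fixed pattern, they can be generated rather than typed; a small sketch, assuming the advdg disk group and the adv-volNN volume/mount-point naming convention used in this example:

```shell
# Emit a Linux /etc/fstab line for each cluster volume in advdg,
# assuming each volume adv-volNN is mounted at /adv-volNN.
fstab_entries() {
  for vol in "$@"; do
    printf '/dev/vx/dsk/advdg/%s /%s vxfs cluster 0 0\n' "$vol" "$vol"
  done
}

# Review the output, then append it to /etc/fstab on each node:
# fstab_entries adv-vol01 adv-vol02
```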
6. Repeat step 5 on all SFCFS nodes and then proceed to Section D – NetBackup AdvancedDisk Configuration.
Section C – SFCFS HA Configuration
If VCS is available, you can make use of its monitoring and management capabilities for SFCFS.
  1. Identify the Cluster Volume Manager (CVM) master node. This can be identified using the following command.
sfcfs1:/>vxdctl -c mode
mode: enabled: cluster active - MASTER
master: sfcfs1
If the node is not the CVM master, the output will be:
sfcfs2:/>vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: sfcfs1
2. Create the CVM disk group(s) from the CVM master node. The example below uses a single disk group.
sfcfs1:/>vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdd          auto:cdsdisk    -            -            online
sde          auto:cdsdisk    -            -            online
In the above example, the disks sdd and sde are on SAN storage visible from all nodes in the SFCFS cluster.
Initialize a new disk group, advdg, as a shared disk group containing the disks sdd and sde (run from the CVM master node), then verify the configuration:
sfcfs1:/>vxdg -s init advdg advdg01=sdd advdg02=sde
sfcfs1:/>vxdg list advdg
Group:     advdg
dgid:      1235495025.29.sfcfs1
import-id: 33792.28
flags:     shared cds
version:   140
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: sfcfs1=sw sfcfs2=sw
ssb: on
autotagging: on
detach-policy: global
dg-fail-policy: dgdisable
copies:    nconfig=default nlog=default
config:    seqno=0.1028 permlen=51360 free=51356 templen=2 loglen=4096
config disk sdd copy 1 len=51360 state=clean online
config disk sde copy 1 len=51360 state=clean online
log disk sdd copy 1 len=4096
log disk sde copy 1 len=4096
sfcfs1:/>vxdisk list | grep advdg
sdd          auto:cdsdisk    advdg01      advdg        online shared
sde          auto:cdsdisk    advdg02      advdg        online shared
At this time, the disk group advdg will be visible on all nodes. You may verify this using 'vxdg list advdg' on all SFCFS nodes.
3. Create volumes on the available disks. First the maximum available space on each disk is identified, and that value is used as the volume size.
sfcfs1:/>vxassist -g advdg maxsize alloc=advdg01
Maximum volume size: 8304640 (4055Mb)
sfcfs1:/>vxassist -g advdg make adv-vol01 8304640
The above command creates a volume adv-vol01 that uses all the available space from the disk advdg01.
Repeat for other available disks.
sfcfs1:/>vxassist -g advdg maxsize alloc=advdg02
Maximum volume size: 8304640 (4055Mb)
sfcfs1:/>vxassist -g advdg make adv-vol02 8304640
TIP: Use the vxprint command to verify that the volumes have been created.
4. Create file systems on the volumes just created. In the example below the '-t' switch is used because the media servers are all Linux; use the appropriate switch for the platform in use (e.g. '-F' on Solaris, '-V' on AIX).
sfcfs1:/>mkfs -t vxfs /dev/vx/rdsk/advdg/adv-vol01
   version 7 layout
   8304640 sectors, 4152320 blocks of size 1024, log size 16384 blocks
   largefiles supported
sfcfs1:/>mkfs -t vxfs /dev/vx/rdsk/advdg/adv-vol02
   version 7 layout
   8304640 sectors, 4152320 blocks of size 1024, log size 16384 blocks
   largefiles supported
5. Add the shared disk group to the VCS configuration so that all the nodes in the SFCFS cluster have shared write (sw) access.
sfcfs1:/>cfsdgadm add advdg all=sw
 Disk Group is being added to cluster configuration...
6. Now add the mount points to all nodes of the cluster with read-write access.
sfcfs1:>cfsmntadm add advdg adv-vol01 /adv-vol01 all=rw
 Mount Point is being added...
 /adv-vol01 added to the cluster-configuration
sfcfs1:>cfsmntadm add advdg adv-vol02  /adv-vol02 all=rw
 Mount Point is being added...
 /adv-vol02 added to the cluster-configuration
7. Mount all CFS file systems.
sfcfs1:~>cfsmount /adv-vol01
 Mounting...
 [/dev/vx/dsk/advdg/adv-vol01] mounted successfully at /adv-vol01 on sfcfs2
 [/dev/vx/dsk/advdg/adv-vol01] mounted successfully at /adv-vol01 on sfcfs1
sfcfs1:~>cfsmount /adv-vol02
 Mounting...
 [/dev/vx/dsk/advdg/adv-vol02] mounted successfully at /adv-vol02 on sfcfs2
 [/dev/vx/dsk/advdg/adv-vol02] mounted successfully at /adv-vol02 on sfcfs1
Section D - NetBackup AdvancedDisk Configuration
  1. Log in to the NetBackup master server. The examples here assume a UNIX/Linux master server.
  2. Create an AdvancedDisk storage server for each SFCFS node.
nbumaster:/>nbdevconfig -creatests -storage_server sfcfs1 -stype AdvancedDisk -st 5
Storage server sfcfs1 has been successfully created
In the above command, sfcfs1 is one of the SFCFS nodes, now configured as an AdvancedDisk storage server. Repeat the same procedure for all other SFCFS nodes.
nbumaster:/>nbdevconfig -creatests -storage_server sfcfs2 -stype AdvancedDisk -st 5
Storage server sfcfs2 has been successfully created
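Registering several nodes is easily looped from the master server. In this sketch the node list is an assumption for the example, and the loop is guarded so it only executes where the NetBackup CLI is present:

```shell
# Register every SFCFS node as an AdvancedDisk storage server
# (run on the NetBackup master server; adjust NODES to your cluster).
NODES="sfcfs1 sfcfs2"
if command -v nbdevconfig >/dev/null 2>&1; then
  for node in $NODES; do
    nbdevconfig -creatests -storage_server "$node" \
        -stype AdvancedDisk -st 5
  done
fi
```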
3. Start the NetBackup Administration Console, log in as a NetBackup Administrator and select the "Configure Disk Pool" wizard.
4. Read the instructions on the Welcome screen and click "Next".
5. Choose AdvancedDisk as the disk pool type and click "Next".
6. You are now presented with a list of storage servers. Select all the SFCFS nodes where the volumes are mounted and click "Next".
7. You are now presented with the set of mount points common to all the storage servers selected in step 6. Select only those mount points where cluster file systems are mounted. You may select all the cluster file systems at once to create a single disk pool, or select subsets of them to create multiple disk pools. Click "Next".
8. You are now presented with a screen where you must choose a name for the disk pool. Type in the desired name and adjust the low/high water marks if necessary. Click "Next".
9. Review the Summary screen. If it is acceptable, click "Next"; otherwise click "Back" to make the desired changes.
10. The disk pool is now created with the specified attributes. Click "Next".
11. You are asked whether you would like to create a Storage Unit for the disk pool just created. Make sure the check box is enabled and click "Next"
12. Provide a name for the storage unit. Click the radio button corresponding to "Use only the following media servers" and make sure that all SFCFS nodes are selected. Adjust value for "Maximum Concurrent Jobs" if desired. Click "Next"
13. Click "Finish" to exit the wizard.
It is possible to create multiple storage units for the same disk pool. Thus you have the flexibility to use specific media server(s) for specific backup policies or for specific data classification types in a Storage Life Cycle policy.
NetBackup AdvancedDisk on SFCFS provides a way for multiple storage servers to access the same disk pool. It is possible to dedicate specific storage servers (i.e. SFCFS nodes) to just duplications or restores by setting or clearing storage server attributes.
For example, if sfcfs2 should be the only host used for duplication of images from the disk pool to a directly attached tape drive, set the ReqDuplicate flag for it from the master server.
nbumaster:/>nbdevconfig -changests -storage_server sfcfs2 -stype AdvancedDisk -setattribute ReqDuplicate
Storage server sfcfs2 has been successfully changed
More details on the options available through the nbdevconfig -setattribute command can be found in the NetBackup 6.5.2 documentation updates.




Legacy ID: 320797



Terms of use for this information are found in Legal Notices