NetBackup Support for Oracle Solaris Virtualization

Article:TECH162994  |  Created: 2011-06-22  |  Updated: 2014-04-09  |  Article URL http://www.symantec.com/docs/TECH162994
Article Type
Technical Solution


Issue



NetBackup Support for Oracle Solaris Virtualization


Solution



Solaris 11 Zones Support:
 
Zones (containers) are isolated application execution environments that an administrator can create on a single Solaris 11 instance. Each Solaris update may add features and fixes, so it is best to run the latest update. 
 
Global Zone server support:
All NetBackup components supported with Solaris 11 physical media servers are supported in a Solaris 11 global zone. Backups can be performed in individual local zones, or by backing up the entire system from the global zone. 
 
Non-Global Zone server support:
NetBackup servers are not currently supported in either whole or sparse Solaris 11 non-global zones. 
 
Global Zone client backup:
A NetBackup client in the global zone can be used to back up the entire system. A full restore of the global zone restores everything in the global zone and the local zones as well, as long as the respective root file systems of the local zones were included in the backup. Make sure that the databases in local zones are offline before starting the backup from the global zone. 
 
Whole Non-Global Zone client backup:

Clients should be configured to skip lofs file systems to avoid backing them up twice. The client and database agent are required in the local zone for hot database backups.
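The lofs mounts visible to a client can be listed from the Solaris mount table when building the exclude list; a minimal sketch:

```shell
# /etc/mnttab fields on Solaris: special  mount-point  fstype  options  time
# Print every lofs mount point; add these paths to the client exclude list
# so the loopback copies are not backed up a second time.
if [ -r /etc/mnttab ]; then
    awk '$3 == "lofs" { print $2 }' /etc/mnttab
fi
```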

Systems configured with local zones can be backed up with the NetBackup Solaris 11 standard client, provided the standard client is loaded on each local zone in which a backup is desired.  On a local zone, a workaround must be performed to successfully load the NetBackup standard client. Prior to installation, the /usr/openv directory must be created and made writable for a successful NetBackup client installation. To do this, you must use the zlogin process to become "Zone root" on the local zone. Then, you will be able to create a link from /usr/openv to a writable location on the local zone.
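The workaround can be sketched as follows; the zone name zone1 and the target directory /zonedata/openv are assumptions for illustration:

```shell
# Hypothetical zone name and writable target directory.
# Run from the global zone: create a writable location inside the zone,
# then link /usr/openv to it before installing the client.
zlogin zone1 mkdir -p /zonedata/openv
zlogin zone1 ln -s /zonedata/openv /usr/openv
```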
 

Sparse Non-Global Zone client support:
NetBackup clients are not currently supported on Solaris 11 in sparse non-global zones. 
 
Caution:
  • Processes in the local zone are visible in the global zone.  This means a bp.kill_all in the global zone will shut down all NetBackup processes across all zones.
 
Solaris 11 Logical Domains (LDoms) Support

Solaris 11 Logical Domains (LDoms) are only available on Sun servers utilizing CMT technology. Solaris 11 LDoms provide virtual machines that run an independent operating system instance, and contain virtualized CPU, memory, storage, console, and cryptographic devices.
 
Each update to LDoms may add additional features and fixes, so it is recommended that the latest updates always be applied.

All NetBackup components supported with Solaris 11 SPARC physical servers are supported in a Solaris 11 LDoms Control Domain and I/O Domain with the exception of Bare Metal Restore (server or client). Guest domain support is limited to standard client, database agents, master server and disk media server.

Be sure to back up the LDoms environment as part of the Control Domain backup:
 
  • Back up the database files in /var/opt/SUNWldm.
  • Protect the primary domain configuration with the following command:
# ldm list-constraints -x primary > primary.xml
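The two items above can be combined into one small script; the /backup destination and file names are assumptions for illustration:

```shell
# Sketch: capture the LDoms configuration in one pass (run in the Control Domain).
# /backup is a hypothetical destination; adjust to your environment.
ldm list-constraints -x primary > /backup/primary.xml   # primary domain constraints
tar cf /backup/SUNWldm.tar /var/opt/SUNWldm             # Logical Domains database files
```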
 
 
 
Solaris 10 Zones Support:

Zones (containers) are isolated application execution environments that an administrator can create on a single Solaris 10 instance. Each Solaris update may add features and fixes, so it is best to run the latest update.

 
Global Zone server support:
All NetBackup components supported with Solaris 10 physical media servers are supported in a Solaris 10 global zone.
Backups can be performed in individual local zones, or by backing up the entire system from the global zone.
 
Local Zone server support:
All NetBackup components supported with Solaris 10 physical media servers are supported in a Solaris 10 whole root local zone.
Local zone support is limited to standard client, database agents, master server, media server and Bare Metal Restore client (as described in the Bare Metal Restore Administrators Guide).

Global Zone client backup:
A NetBackup client in the global zone can be used to back up the entire system. A full restore of the global zone restores everything in the global zone and the local zones as well, as long as the respective root file systems of the local zones were included in the backup. Make sure that the databases in local zones are offline before starting the backup from the global zone.

Local Zone client backup:
Clients should be configured to skip lofs file systems to avoid backing them up twice. The client and database agent are required in the local zone for hot database backups.

Systems configured with local zones can be backed up with the NetBackup Solaris 10 standard client, provided the standard client is loaded on each local zone in which a backup is desired.  On a local zone, a workaround must be performed to successfully load the NetBackup standard client. Prior to installation, the /usr/openv directory must be created and made writable for a successful NetBackup client installation. To do this, you must use the zlogin process to become "Zone root" on the local zone. Then, you will be able to create a link from /usr/openv to a writable location on the local zone.

 
Instructions for installing NetBackup standard client software in local zones:
  • On a whole root local zone, a client push install or local install is done normally.
 
  • On a sparse root local zone that does not have a master/media server on the global zone, the install procedure is: 
 
In the global zone, create /usr/openv as a symbolic link to the location where the software will be installed in the local zone.  This must be done even if the target directory does not exist in the global zone.

For example:
# ln -s /nonglobdir/openv /usr/openv
# ls /nonglobdir/openv
/nonglobdir/openv: No such file or directory
# ls -al /usr/openv
lrwxrwxrwx   1 root     root          10 Aug 23 15:13 /usr/openv -> /nonglobdir/openv

In the local zone, make sure that /usr/openv exists as a link.
# ls -al /usr/openv
lrwxrwxrwx   1 root     root          10 Aug 23 15:13 /usr/openv -> /nonglobdir/openv

In the local zone, make sure that the linked-to directory exists and is writable.
# ls -al /nonglobdir/openv
total 32
drwxr-xr-x   9 root     bin          512 Aug 18 15:23 ./
drwxr-xr-x  18 root     bin          512 Aug 18 15:30 ../

A client push install or local install can now be done normally following the procedures in NetBackup documentation.
 
 
  • On a sparse root local zone that has a master/media server on the global zone, the install procedure is:  

In the global zone, /usr/openv in a default installation is a link to /opt/openv.  Alternatively, the master/media server can be installed in a BASEDIR other than the default /opt, with /usr/openv linked to /BASEDIR/openv.  In either case, verify the directory that /usr/openv links to.

For example:
# ls -al /usr/openv
lrwxrwxrwx   1 root     other         10 Aug 18 11:39 /usr/openv -> /opt/openv/

In the local zone, create a writable directory where the linked /usr/openv points:
# mkdir /opt/openv

A client push install or local install can now be done normally following the procedures in NetBackup documentation.
 

Instructions for installing NetBackup 7.x
Install NetBackup and NetBackup updates into the local zones like regular media server installs.

*Note: The sg driver install will fail in a local zone; continue with the rest of the install.
 
 
Installation changes after local zone install:
The sg driver and related utilities/libraries need to be copied from the local zone to the global zone in order to do the initial device configuration.  The following steps must be performed in the global zone (skip steps 1-6 if a NetBackup media server is already installed in the global zone):
 
  1. mkdir -p /usr/openv/volmgr/bin
  2. cp -r <local zone path>/root/<path>/openv/volmgr/bin/driver /usr/openv/volmgr/bin
  3. cp <local zone path>/root/<path>/openv/volmgr/bin/sgscan /usr/openv/volmgr/bin
  4. cp <local zone path>/root/<path>/openv/volmgr/bin/sg.build /usr/openv/volmgr/bin
  5. cp <local zone path>/root/<path>/openv/volmgr/bin/scsi_command /usr/openv/volmgr/bin
  6. cp -r <local zone path>/root/<path>/openv/lib /usr/openv
  7. cd /usr/openv/volmgr/bin/driver, then run ../sg.build with the normal options to create the sg configuration files
  8. Run sg.install
  9. Run /usr/openv/volmgr/bin/sgscan to view the devices that are visible in the global zone.
  10. Allocate devices to the local zones using the following procedure.  Remember that each local zone must have its own SCSI/FC controller.  These steps must be repeated for each drive/robot that you want to allocate to a local zone.  The attached script (replicate_to_local) can be used to automate this process:

a) Determine the raw and block devices that need to be exported: only the raw (sg) device for libraries; both the block (st) and raw (sg) devices for tape drives.

# ls -la /dev/rmt/0cbn
lrwxrwxrwx   1 root     root          78 Aug 17 15:05 /dev/rmt/0cbn -> ../../devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/st@w5005076300400020,0:cbn
 
# ls -la /dev/sg/c0tw5005076300400020l0
lrwxrwxrwx   1 root     root          78 Oct  4 16:37 /dev/sg/c0tw5005076300400020l0 -> ../../devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/sg@w5005076300400020,0:raw
 
b) Create the devices directory
# mkdir -p <zone_path>/root/devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/
 
c) Capture major and minor number
# ls -la /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/st@w5005076300400020,0:cbn
crw-rw-rw-   1 root     root       33, 223 Nov 20 15:18 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/st@w5005076300400020,0:cbn
major number is 33 in this case, minor number is 223
 
# ls -la /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/sg@w5005076300400020,0:raw
crw-------   1 root     root     264, 14 Nov 20 16:03 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/sg@w5005076300400020,0:raw
major number is 264 in this case, minor number is 14
 
d) Use /usr/sbin/mknod to create the raw and block device in the devices directory you just created.
# cd <zone_path>/root/devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/
 
# mknod st@w5005076300400020,0:cbn c 33 223
 
# mknod sg@w5005076300400020,0:raw c 264 14
 
e) Create symbolic links from the /dev directory to the /devices directory for both sg and rmt
        # mkdir -p <zone_path>/dev/sg
 
        # mkdir -p <zone_path>/dev/rmt
 
        # cd <zone_path>/dev/sg
 
        # ln -s ../../devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/sg@w5005076300400020,0:raw c0tw5005076300400020l0
 
        # cd <zone_path>/dev/rmt
 
        # ln -s ../../devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0/st@w5005076300400020,0:cbn 0cbn
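Step (c) above, capturing the major and minor numbers, can be scripted: they are the fifth and sixth fields of the long listing. A hedged sketch (the helper name devnums is hypothetical):

```shell
# Print "major minor" for a device node, parsed from ls -lL output.
# Assumed field layout: mode links owner group major, minor month day time path
devnums() {
  ls -lL "$1" | awk '{ gsub(",", "", $5); print $5, $6 }'
}
# e.g. devnums /devices/.../st@w5005076300400020,0:cbn
#      prints "33 223" for the example shown in step (c)
```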
 

  11. The media server is now ready for normal NetBackup device configuration.

 
Recommendations:
  • For native fibre drives that support SCSI persistent reservation (SPR), use SPR in place of SCSI reserve (SR) when configured in SSO.
  • For drives using NetBackup Shared Storage Option (SSO), it is recommended that each local zone has its own dedicated SCSI FC port/SCSI channel to the drives configured for sharing.
  • All zones on the same media server using tape need to be in the same NetBackup domain.
 
Caution:
  • Processes in the local zone are visible in the global zone.  This means a bp.kill_all in the global zone will shut down all NetBackup processes across all zones.
 

Solaris 10 Logical Domains (LDoms) Support

Solaris 10 Logical Domains (LDoms) are only available on Sun servers utilizing CMT technology. Solaris 10 LDoms provide virtual machines that run an independent operating system instance, and contain virtualized CPU, memory, storage, console, and cryptographic devices.

 
  • The LDom hypervisor is a firmware layer on the flash PROM of the server motherboard
  • Control domain - Executes Logical Domains Manager software to govern logical domain creation and assignment of physical resources
  • Service domain - Interfaces with the hypervisor on behalf of a guest domain to manage access to hardware resources, such as CPU, memory, network, disk, console, and cryptographic units
  • I/O domain - Controls direct, physical access to input/output devices, such as PCI Express cards, storage units, and network devices.
  • Guest domain - Utilizes virtual devices offered by service and I/O domains and operates under the management of the control domain

Each update to LDoms may add additional features and fixes, so it is recommended that the latest updates always be applied.

All NetBackup components supported with Solaris 10 SPARC physical servers are supported in a Solaris 10 LDoms Control Domain and I/O Domain with the exception of Bare Metal Restore (server or client). Guest domain support is limited to standard client, database agents, master server and disk media server.

Be sure to back up the LDoms environment as part of the Control Domain backup:
 
  • Back up the database files in /var/opt/SUNWldm.
  • Protect the primary domain configuration with the following command:
# ldm list-constraints -x primary > primary.xml
 

 

 

 

 


Attachments

replicate_to_local (5 kBytes)





Terms of use for this information are found in Legal Notices