Netting Out NetBackup
Randy Serafini | 09 Jan 2012 | 3 comments

How many of you have made a New Year’s resolution to lose weight?  Well OK, maybe not weight literally, but you’ve been tasked to find ways in 2012 to reduce the cost of your backup infrastructure and, with a bit of smart maneuvering, actually improve backup and recovery performance.  In the upcoming release of NetBackup, Symantec can help reduce cost and accelerate recovery by integrating backup and snapshot replication management with the new NetBackup Replication Director.

In most enterprise backup environments, backup software and array-based snapshot technology co-exist to provide tiers of protection and recovery.  The problem is that in many cases the backup infrastructure is managed by the Backup Team, while snapshots and replicas are managed by the Storage Team.  While this may ‘appear’ to be efficient when things are good, when things turn bad and a fast recovery is required from either a major or minor disaster, it...

Peter_E | 04 Jan 2012 | 5 comments

Could you obliterate your backup window problems with 100x faster backups?  What if your car company called you up and told you that with a software upgrade you could make your car accelerate 100x faster.  What if the county or province where you live told you that your daily trip to work or the grocery store would be 100x faster in the coming months.  A new feature in the next release of NetBackup is expected to deliver just this type of massive leap in performance. 

Symantec first gave a hint about this feature, which will be called NetBackup Accelerator, back at our US Vision conference in 2011 (read the press release here), where we announced our intention to break the backup window and provide customers with a plan to modernize data protection....

CRZ | 19 Dec 2011 | 5 comments

I'm very pleased to announce that a new Maintenance Release for NetBackup 7.1 is now available!

NetBackup 7.1.0.3 is the third maintenance release on NetBackup 7.1.  This release adds support for several new platforms and applications, as listed below:

  • Support for vSphere 5
  • Support for SharePoint 2010 SP1 and Exchange 2010 SP2
  • Client support for Mac OS X 10.7
  • Master and media server support for AIX 7.1
  • NBSL changes to gather hardware information from appliance media servers attached to NBU7.1.x master servers

Along with the support additions mentioned above, several customer issues and internal engineering defects were fixed, covering:

  • Resolution of deduplication issues around data inconsistency, the stream handler, GRT, and high memory consumption during backups
  • Resolution of performance issues experienced by customers in BMR pre-restore environments since 7.0.x
  • Restore-related issues in BMR on Windows and HP ‘G...
Mayur Dewaikar | 07 Dec 2011 | 0 comments

If you are evaluating dedupe solutions, the dedupe ratios claimed by dedupe vendors are bound to intrigue you.  I have seen claims of dedupe rates as high as 50:1, and I am sure there are claims of even higher dedupe than that. Are such dedupe rates realistic? Truthfully, yes, but one must understand the assumptions and the math behind such high dedupe rates.  These dedupe rates generally rest on the following assumptions:

  1. Logical Capacity: Logical capacity is the amount of data one “would have” stored with no dedupe or compression. For example, if you are protecting 20 TB of data for 30 days and you are running daily backups, your total protected data (in theory) is 20 × 30 = 600 TB. In practice, for an environment with an average change rate, back-end dedupe capacity is roughly equal to the front-end capacity for a 30-day retention period. So assuming 20 TB of dedupe storage is needed, your dedupe ratio is 600/20 = 30:1. While this makes...
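The logical-capacity math above can be worked through in a few lines. This is just the post's own example restated; the figures (20 TB front end, daily backups, 30-day retention, back-end capacity equal to front-end) are the assumptions stated in the text, not measurements.

```python
# Worked example of the "logical capacity" dedupe math described above.
front_end_tb = 20        # data being protected
retention_days = 30      # daily full backups kept for 30 days

# Logical capacity: what you *would have* stored with no dedupe.
logical_tb = front_end_tb * retention_days   # 20 x 30 = 600 TB

# Back-end dedupe storage assumed roughly equal to front-end capacity.
dedupe_store_tb = front_end_tb               # 20 TB

ratio = logical_tb / dedupe_store_tb         # 600 / 20 = 30
print(f"Dedupe ratio: {ratio:.0f}:1")        # Dedupe ratio: 30:1
```

The point of spelling it out: the headline ratio is driven as much by retention length and backup frequency as by the dedupe engine itself.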
AbdulRasheed | 30 Nov 2011 | 0 comments

When Lisa Graff, VP/GM of Intel’s Platform Engineering Group, took the stage at a special event during the SC ’11 Super Computing Conference in Seattle, I was not the only one who wondered why the new launch was named EPSD 3.0 when there was no 2.0.  Within 10 minutes of her announcement speech, she articulated why it wasn’t just a 2.0!

Okay, what is EPSD? It stands for Intel Enterprise Platform and Services Division. This group designs and builds server boards and related products for channel and alliance partners. When Symantec, the world leader in security and storage solutions, sought a partner to help deliver its award-winning backup software in an appliance form factor, it selected Intel EPSD for an enterprise-class server board. The result can be found in the NetBackup 5220, a single-vendor, purpose-built, enterprise backup appliance from Symantec that...

SeanRegan | 29 Nov 2011 | 1 comment

If there is one key challenge for the virtualization team, it is backup.  All of that newfound agility that makes the virtual machine (VM) teams ninja-like in their ability to deliver IT as a service comes with a backend challenge.  As more and more mission critical applications and systems go virtual, how can these teams make sure they can deliver the same or better SLAs for backup?  Virtualized systems and data are not second class workloads anymore, they are prime time.  And lest you think virtualization is only a big company phenomenon – think again.  Small and mid-sized companies are adopting server virtualization technology at a faster pace than their bigger counterparts.  So the issue of protecting important data in virtualized environments is touching your neighborhood firms as much as big name businesses.

 

Vendor Landscape – Proceed with Caution

It’s no secret that the...

Phil Wandrei | 18 Nov 2011 | 1 comment

In the data protection world, one number we frequently see and hear is the deduplication rate. We hear of dedupe rates ranging from 50:1 and 20:1 to 10:1. Recently, I heard someone say that 50:1 is 5 times better than 10:1.  Their fuzzy math made me cringe, and I knew it was time to address this.

To clarify deduplication rates, we need to examine: 1) the factors that influence deduplication rates and 2) the math. 

Deduplication Factors

Deduplication rates are like automobile miles per gallon (mpg):  Your Results Will Vary. The factors that affect deduplication results are:

  • Type of data (unstructured versus structured)
  • Change rate of data (what percent of the data changes)
  • Frequency and type of backup (how often are you backing up the data: daily, weekly, fulls, or incrementals?)
  • Retention (how long are you keeping the deduped data?)
...
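To see why "50:1 is 5 times better than 10:1" is fuzzy math, compare the fraction of storage each ratio actually saves. A ratio of R:1 means you store 1/R of the logical data, so the savings is 1 − 1/R. The ratios below are the illustrative ones from the text.

```python
# Illustrative dedupe-ratio math: a ratio of R:1 stores 1/R of the
# logical data, i.e. saves a fraction of 1 - 1/R.

def savings_percent(ratio: float) -> float:
    """Percent of storage saved at a given dedupe ratio (R:1)."""
    return (1 - 1 / ratio) * 100

for r in (10, 20, 50):
    print(f"{r}:1 -> {savings_percent(r):.1f}% saved")
# 10:1 -> 90.0% saved
# 20:1 -> 95.0% saved
# 50:1 -> 98.0% saved
```

A 10:1 ratio already saves 90% of the space; 50:1 saves 98%. Moving from 10:1 to 50:1 reclaims only 8 more percentage points of storage, nowhere near "5 times better."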
SeanRegan | 02 Nov 2011 | 1 comment

IDC approached Symantec, unsolicited, for a briefing on V-Ray. They were looking to create an Insight Report for their customers as they had so many inquiries around V-Ray. You’ll find this to be a very comprehensive and technical write up that shows NetBackup’s leadership in virtual machine protection with VMware and Microsoft Hyper-V.

http://idcdocserv.com/230790

AbdulRasheed | 27 Oct 2011 | 8 comments

Looking for the details on NetBackup for VMware?  Would you like to know about the nuts and bolts inside? We recently published, and intend to publish more, technical details on the award-winning NetBackup for VMware protection. As there are many blogs in this series, I am publishing this blog as a container for the series.

The series so far:

Discovery job in VMware Intelligent Policy

Understanding V-Ray vision through backup process flow

Transport methods and TCP ports

...

AbdulRasheed | 27 Oct 2011 | 20 comments

Recently, one of our customers asked me if NetBackup for VMware supports the use of a dedicated data store for snapshots. That triggered this blog.

Snapshots are great. Among their many uses, NetBackup employs them to create a consistent point-in-time image of the virtual machine for the purpose of backup. While a snapshot is active, writes to the VMDK files are directed to redo logs. At the end of the backup, the snapshot is released and the redo log is played back into the VMDK.

The world is less than ideal. What happens if a backup ends prematurely and the snapshot is left behind? The redo log keeps growing. What if such situations arise frequently? Now you have multiple redo logs growing in the data store. There are two major issues here.

  1. The storage space on the data store gets used up quickly; if the data store fills up, all the VMs using that data store are affected
  2. The more snapshots you have for the same virtual...
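The first issue can be made concrete with a back-of-the-envelope model. This is not VMware or NetBackup code, just a toy calculation; the free-space and change-rate figures are hypothetical assumptions for illustration.

```python
# Toy model (not VMware/NetBackup code): estimate how quickly leftover
# snapshot redo logs could exhaust a data store's free space. Redo logs
# grow roughly with the write/change rate of each VM while the
# snapshot remains active.

def days_until_full(free_gb: float, stale_snapshot_vms: int,
                    daily_change_gb_per_vm: float) -> float:
    """Days before redo-log growth consumes the data store's free space."""
    daily_growth_gb = stale_snapshot_vms * daily_change_gb_per_vm
    return free_gb / daily_growth_gb

# Hypothetical: 500 GB free, 10 VMs each writing ~20 GB/day into
# orphaned redo logs.
print(days_until_full(500, 10, 20))  # 2.5 (days)
```

Even modest change rates across a handful of VMs with leftover snapshots can fill a shared data store in days, which is why monitoring for orphaned snapshots matters.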