Netting Out NetBackup
CRZ | 19 Dec 2011 | 5 comments

I'm very pleased to announce that a new Maintenance Release for NetBackup 7.1 is now available!

NetBackup 7.1.0.3 is the third maintenance release for NetBackup 7.1.  This release adds several new areas of platform support, as listed below:

  • Support for vSphere 5
  • Support for SharePoint 2010 SP1 and Exchange 2010 SP2
  • Client support for Mac 10.7
  • Master and media support for AIX 7.1
  • NBSL changes to gather hardware information from appliance media servers attached to NBU7.1.x master servers

Along with the platform support additions mentioned above, several customer issues and internal engineering defects were fixed, covering:

  • Resolution of Deduplication issues around data inconsistency, stream handler, GRT and high memory consumption during backups
  • Resolution of performance issues experienced by customers in the BMR pre-restore environment since 7.0.x
  • Restore-related issues in BMR on Windows and HP ‘G...
Mayur Dewaikar | 07 Dec 2011 | 0 comments

If you are evaluating dedupe solutions, the dedupe ratios claimed by vendors are bound to intrigue you.  I have seen claims of dedupe rates as high as 50:1, and I am sure there are claims even higher than that. Are such dedupe rates realistic? Truthfully, yes, but one must understand the assumptions and the math behind them.  These dedupe rates generally rest on the following assumptions:

  1. Logical Capacity: Logical capacity is the amount of data one “would have” stored with no dedupe or compression. For example, if you are protecting 20 TB of data for 30 days and running daily backups, your total protected data (in theory) is 20 x 30 = 600 TB. In practice, for an environment with an average change rate, the back-end dedupe capacity is roughly equal to the front-end capacity for a 30-day retention period. So, assuming 20 TB of dedupe storage is needed, your dedupe ratio is 600/20 = 30:1. While this makes...
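The logical-capacity arithmetic above can be sketched in a few lines. This is only an illustration of the math as described in the post (the figures, 20 TB front-end with 30-day retention and daily fulls, come straight from the example); the function name is my own:

```python
# Sketch of the "logical capacity" dedupe-ratio math described above.
def dedupe_ratio(front_end_tb: float, retention_days: int, stored_tb: float) -> float:
    """Ratio of data one 'would have' stored (no dedupe) to data actually stored."""
    logical_tb = front_end_tb * retention_days  # daily fulls kept for the retention period
    return logical_tb / stored_tb

# 20 TB protected daily for 30 days = 600 TB logical, held on 20 TB of dedupe storage.
ratio = dedupe_ratio(20, 30, 20)
print(f"{ratio:.0f}:1")  # prints "30:1"
```

Note that the 600 TB numerator is a theoretical figure; the ratio looks dramatic precisely because every daily full is counted at its undeduplicated size.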
AbdulRasheed | 30 Nov 2011 | 0 comments

When Lisa Graff, VP/GM of Intel’s Platform Engineering Group, took the stage at a special event during the SC ’11 Supercomputing Conference in Seattle, I was not the only one wondering why the new launch was named EPSD 3.0 when there had been no 2.0.  Within 10 minutes of her announcement speech, she articulated why it wasn’t just a 2.0!

Okay, what is EPSD? It stands for Intel Enterprise Platform and Services Division. This group designs and builds server boards and related products for channel partners and alliances. When Symantec, a world leader in security and storage solutions, sought a partner to help deliver its award-winning backup software in an appliance form factor, it selected Intel EPSD for an enterprise-class server board. The result can be found in the NetBackup 5220, a single-vendor, purpose-built, enterprise backup appliance from Symantec that...

SeanRegan | 29 Nov 2011 | 1 comment

If there is one key challenge for the virtualization team, it is backup.  All of that newfound agility that makes virtual machine (VM) teams ninja-like in their ability to deliver IT as a service comes with a back-end challenge.  As more and more mission-critical applications and systems go virtual, how can these teams make sure they deliver the same or better SLAs for backup?  Virtualized systems and data are not second-class workloads anymore; they are prime time.  And lest you think virtualization is only a big-company phenomenon, think again.  Small and mid-sized companies are adopting server virtualization technology at a faster pace than their bigger counterparts.  So the issue of protecting important data in virtualized environments touches your neighborhood firms as much as big-name businesses.

 

Vendor Landscape – Proceed with Caution

It’s no secret that the...

Phil Wandrei | 18 Nov 2011 | 1 comment

In the data protection world, deduplication rates are numbers we frequently see and hear. We hear of dedupe rates ranging from 50:1 to 20:1 to 10:1. Recently, I heard someone say that 50:1 is 5 times better than 10:1.  Their fuzzy math made me cringe, and I knew it was time to address this.

To clarify deduplication rates, we need to examine: 1) the factors that influence deduplication rates and 2) the math. 

Deduplication Factors

Deduplication rates are like automobile miles per gallon (mpg):  Your Results Will Vary. The factors that affect deduplication results are:

  • Type of data (unstructured versus structured)
  • Change rate of data (what percent of the data changes)
  • Frequency and type of backup (how often you back up the data, and whether backups are full or incremental)
  • Retention (how long you keep the deduplicated data)
...
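The "fuzzy math" called out above can be made concrete. A sketch, using my own function name: a dedupe ratio of N:1 means you store 1/N of the original data, so the space savings is 1 - 1/N. By that measure, 50:1 (98% saved) is only modestly better than 10:1 (90% saved), nowhere near "5 times better":

```python
def space_savings(ratio: float) -> float:
    """Fraction of storage saved at a given dedupe ratio (e.g. 50 means 50:1)."""
    return 1.0 - 1.0 / ratio

for r in (10, 20, 50):
    print(f"{r}:1 -> {space_savings(r):.0%} of storage saved")
```

Doubling the ratio never doubles the savings: going from 10:1 to 50:1 only moves you from 90% saved to 98% saved, a gain of 8 percentage points.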
SeanRegan | 02 Nov 2011 | 1 comment

IDC approached Symantec, unsolicited, for a briefing on V-Ray. They were looking to create an Insight Report for their customers as they had so many inquiries around V-Ray. You’ll find this to be a very comprehensive and technical write up that shows NetBackup’s leadership in virtual machine protection with VMware and Microsoft Hyper-V.

http://idcdocserv.com/230790

AbdulRasheed | 27 Oct 2011 | 8 comments

Looking for the details on NetBackup for VMware?  Would you like to know about the nuts and bolts inside? We have recently published several posts on the technical details of the award-winning NetBackup for VMware protection, and we intend to publish more. As there are many blogs in this series, I am publishing this post as a container for the series.

The series so far:

Discovery job in VMware Intelligent Policy

Understanding V-Ray vision through backup process flow

Transport methods and TCP ports

...

AbdulRasheed | 27 Oct 2011 | 20 comments

Recently, one of our customers asked me if NetBackup for VMware supports the use of a dedicated data store for snapshots. That triggered this blog.

  Snapshots are great. Among their many uses, NetBackup employs them to create a consistent point-in-time image of a virtual machine for the purpose of backup. While a snapshot is active, writes to the VMDK files are directed to redo logs. At the end of the backup, the snapshot is released and the redo logs are played back into the VMDK.

  The world is less than ideal, though. What happens if a backup ends prematurely and the snapshot is left behind? The redo log keeps growing. What if such situations arise frequently? Now you have multiple redo logs growing in the data store. There are two major issues here.

  1. The storage space on the data store gets used up quickly; if the data store fills up, all the VMs using that data store are affected
  2. The more snapshots you have for the same virtual...
AbdulRasheed | 25 Oct 2011 | 6 comments

Note: Please click here for a recent and updated version of this webcast

As more and more of your business-critical applications get virtualized, your data protection solution needs to step up to the plate. You are likely to have a hybrid environment, and your data protection solution should have visibility into what is inside both virtual and physical systems. Processor-intensive, blind deduplication of VMDK files will not scale as your environment grows. Above all, the solution also needs to protect the vCenter server, the backbone of vMotion, DRS, HA and more.

We provided a technical deep dive on NetBackup for VMware and how this award-winning data protection solution can be deployed to protect everything in your data center in a matter of minutes. Powered by V-Ray and Intelligent...

Jed Gresham | 19 Oct 2011 | 0 comments

 

Welcome to the NetBackup 7.5 Pre-GA group.  This space will be a valuable area for sharing information about the new features of NetBackup 7.5.  Over the coming months, Beta and First Availability information will be posted here to assist with testing and deployment of the signature features of NetBackup 7.5, as well as various other changes and improvements to the product and its components.

 

Stay tuned for more!