
Netting Out NetBackup

CRZ | 08 Feb 2012 | 0 comments

PostgreSQL (commonly referred to as Postgres) is a powerful, open source object-relational database system. Its prominent users include the International Space Station, Skype, Reddit, and Yahoo!

In a blog post last year, Backing up the Dolphins, we announced support for backing up MySQL via NetBackup. I'm happy to now announce the availability of a Postgres Agent for NetBackup, certified and available from Zmanda, our STEP partner and a specialist in backing up open source databases.

Zmanda's NetBackup Postgres Agent was developed using the XBSA API provided by NetBackup. The NetBackup XBSA interface allows Zmanda to create, query, retrieve, and delete data objects using NetBackup for data storage. The operations...
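To make that object model concrete, here is a minimal, self-contained Python sketch of the four operations the post mentions (create, query, retrieve, delete). Everything in it – the FakeXbsaSession class, its method names, and the sample object name – is a hypothetical stand-in for illustration, not Zmanda's agent code and not the actual NetBackup XBSA API.

```python
# Minimal, self-contained sketch of the object model an XBSA-style interface
# exposes (create / query / retrieve / delete data objects). The class and
# method names are hypothetical stand-ins for illustration only -- they are
# NOT the real NetBackup XBSA C API and NOT Zmanda's agent code.

class FakeXbsaSession:
    """Stands in for a backup-server session; stores objects in memory."""
    def __init__(self):
        self._objects = {}                    # object name -> bytes

    def create_object(self, name, data):      # analogous to "create" in the post
        self._objects[name] = data

    def query_object(self, name):             # analogous to "query"
        return name in self._objects

    def get_object(self, name):               # analogous to "retrieve"
        return self._objects[name]

    def delete_object(self, name):            # analogous to "delete"
        del self._objects[name]

# A Postgres agent built on such an interface would stream a database dump in
# as one or more named objects, then query and retrieve them at restore time.
session = FakeXbsaSession()
session.create_object("pgdump-2012-02-08", b"-- pg_dump output would go here --")
assert session.query_object("pgdump-2012-02-08")
restored = session.get_object("pgdump-2012-02-08")
print(f"Restored {len(restored)} bytes")
session.delete_object("pgdump-2012-02-08")    # expire the backup image
```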

Simon Jelley | 05 Feb 2012 | 4 comments

It seems like businesses are looking at modernizing backup like it’s a trip to the dentist: too time-consuming and painful, and always costly! So they end up taking shortcuts, hoping that the cavity of data loss will never occur. But just as advances in dental techniques have taken much of the pain out of these visits, backup is finally experiencing a revolution of its own, thanks to new developments from Symantec.

Everyone knows what the problem is – it’s just too complicated. You shouldn’t have to feel like you’re performing root canal surgery in order to make sure your organization’s data is protected and, most importantly, can be recovered. But that’s just how it feels when the amount of data that needs to be backed up is growing so rapidly and data centers grow ever more complex. The reality is that there are now so many physical and virtual systems crowding infrastructure that the average company has seven separate backup...

SeanRegan | 26 Jan 2012 | 0 comments

Backups have become a big and burdensome operation for many backup admins. SLAs are getting tighter while information grows and new platforms like virtualization create higher-density environments. With these forces in play, the current approach to backup modernization is not effective. Today, “backup modernization” is championed by vendors promoting solutions that address only one or two aspects of backup – such as deduplication, snapshots, or tools for backing up just VMware and Hyper-V environments. These are quick fixes, not modern data protection. Throwing more solutions at a problem as a quick fix is the cause of backup complexity and cost.

Backup and recovery is a crucial step in protecting an organization’s information and its ability to stay in business if something goes wrong.   A new approach is needed. To determine current trends, Symantec commissioned a global survey of enterprises...

Kristine Mitchell | 20 Jan 2012 | 1 comment

Forget the itchy socks, the sweater I will never wear, and the crock pot (nice try…but still not getting me to cook). You can imagine my excitement when I opened this Christmas gift – a vintage typewriter keyboard for my iPad. If you are like me, you love your iPad but still struggle with the keyboard. You also love cool and unusual gifts. But this got me thinking…what is it about human nature that slides back to old practices? You know what I’m talking about – two steps forward, one step back. This was so obvious to me when I spoke to a customer the other day. They have a manual, “old school” process in place to protect their virtual machines (VMs). In fact, they have one full-time person who does nothing but map VM datastores to backup policies. Now here’s a cool technology like virtualization and...

Danny Milrad | 16 Jan 2012 | 0 comments

Never underestimate the bandwidth of a station wagon loaded with backup tapes. This was thrown on the table during a recent customer meeting in the context of getting data offsite and onto disaster recovery sites. What a great visual, I thought to myself. The customer continued: FedEx is an amazing network; it has high bandwidth but also high latency. They move millions of packages every day…phenomenal bandwidth. But in the always-on economy, 24 hours to ship a backup tape is the epitome of high latency.
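To put numbers on the "high bandwidth, high latency" point, here is a back-of-envelope calculation under assumptions of my own (100 LTO-5 tapes at 1.5 TB native each, a 24-hour shipment, compared against a 1 Gbps replication link); none of these figures come from the customer conversation above.

```python
# Back-of-envelope comparison: shipping tapes vs. a WAN link.
# All figures below are illustrative assumptions, not data from the post.
TAPES = 100                  # tapes in the station wagon (assumed)
TB_PER_TAPE = 1.5            # native LTO-5 capacity (assumed)
SHIP_HOURS = 24              # overnight delivery latency (assumed)

payload_bits = TAPES * TB_PER_TAPE * 1e12 * 8      # total bits moved per shipment
ship_seconds = SHIP_HOURS * 3600
truck_gbps = payload_bits / ship_seconds / 1e9     # effective throughput of the "truck"

wan_gbps = 1.0               # assumed dedicated 1 Gbps replication link

print(f"Effective 'truck' throughput: {truck_gbps:,.0f} Gbps")   # ~14 Gbps under these assumptions
print(f"WAN link throughput:          {wan_gbps:,.0f} Gbps")
print(f"But the first byte arrives after {SHIP_HOURS} hours by truck,")
print("versus milliseconds over the wire: high bandwidth, high latency.")
```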

I talk to customers regularly about their backup rituals and disaster recovery plans. Like clockwork, FedEx (or another overnight carrier) comes up as the preferred network transport to ship tapes to the salt mines. But I had to ask myself: is putting your company’s most valuable data on trucks really the best option? If it’s getting backed up and sent offsite, it has to have at least some value, right? While I...

Alex Sakaguchi | 16 Jan 2012 | 0 comments

All bark and no bite. Heard the saying? It essentially describes someone who talks tough but then shies away when asked to step up.

In 2010, Symantec conducted a survey of more than 1,600 senior IT and legal executives in 26 countries to determine the best – and worst – practices in the area of information management and published the results in the 2010 Information Management Health Check Survey.

87% of these folks said that enterprises should have a proper information retention strategy that allows them to delete unnecessary information.

Why, then, do fewer than 46% actually have one?

What’s the deal?  Why is it so hard to delete information that isn’t needed? 

Well, that’s the key isn’t it? ...

Randy Serafini | 09 Jan 2012 | 3 comments

How many of you have made a New Year’s resolution to lose weight? Well OK, maybe not weight literally, but you’ve been tasked to find ways in 2012 to reduce cost in your backup infrastructure and, with a bit of smart maneuvering, actually improve backup and recovery performance. In the upcoming release of NetBackup, Symantec can help reduce cost and accelerate recovery by integrating backup and snapshot replication management with the new NetBackup Replication Director.

In most Enterprise backup environments, both backup software and array based snapshot technology co-exist to provide tiers of protection and recovery.  The problem is that in many cases, the backup infrastructure is managed by the Backup Team, and snapshots and replicas are managed by the Storage Team.  While this may ‘appear’ to be efficient when things are good, when things turn bad and a fast recovery is required from either a major or minor disaster, it...

Peter_E | 04 Jan 2012 | 5 comments

Could you obliterate your backup window problems with 100x faster backups? What if your car company called you up and told you that with a software upgrade you could make your car accelerate 100x faster? What if the county or province where you live told you that your daily trip to work or the grocery store would be 100x faster in the coming months? A new feature in the next release of NetBackup is expected to deliver just this type of massive leap in performance.

Symantec first gave a hint about this feature, which will be called NetBackup Accelerator, back at our US Vision conference in 2011 (read the press release here), where we announced our intention to break the backup window and provide customers with a plan to modernize data protection....

CRZ | 19 Dec 2011 | 5 comments

I'm very pleased to announce that a new Maintenance Release for NetBackup 7.1 is now available!

NetBackup 7.1.0.3 is the third maintenance release for NetBackup 7.1. This release contains several new platform-support additions (proliferations), as listed below:

  • Support for vSphere 5
  • Support for SharePoint 2010 SP1 and Exchange 2010 SP2
  • Client support for Mac OS X 10.7
  • Master and media server support for AIX 7.1
  • NBSL changes to gather hardware information from appliance media servers attached to NBU 7.1.x master servers

Along with the above-mentioned proliferations, several customer issues and internal engineering defects were fixed, covering:

  • Resolution of deduplication issues around data inconsistency, the stream handler, GRT, and high memory consumption during backups
  • Resolution of performance issues experienced by customers in the BMR pre-restore environment since 7.0.x
  • Restore-related issues in BMR on Windows and HP ‘G...

Mayur Dewaikar | 07 Dec 2011 | 0 comments

If you are evaluating dedupe solutions, the dedupe ratios claimed by vendors are bound to intrigue you. I have seen claims of dedupe ratios as high as 50:1, and I am sure there are claims even higher than that. Are such dedupe ratios realistic? Speaking truthfully, yes, but one must understand the assumptions and the math behind such high numbers. These dedupe ratios generally rest on the following assumptions:

  1. Logical Capacity: Logical capacity is the amount of data one “would have” stored with no dedupe or compression. For example, if you are protecting 20 TB of data for 30 days and you are running daily backups, your total protected data (in theory) is 20 x 30 = 600 TB. In practice, for an environment with an average change rate, back-end dedupe capacity is roughly equal to the front-end capacity for a 30-day retention period. So assuming 20 TB of dedupe storage is needed, your dedupe ratio is 600/20 = 30:1. While this makes...
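As a quick sanity check on the arithmetic above, the short snippet below reproduces the 30:1 figure using the same numbers from the example (20 TB of front-end data, 30 daily backups retained, 20 TB of deduplicated storage consumed); the variable names are my own.

```python
# Reproduce the dedupe-ratio arithmetic from the example above.
front_end_tb = 20        # data being protected
retention_days = 30      # one full backup per day, kept for 30 days

logical_tb = front_end_tb * retention_days   # 20 x 30 = 600 TB "logical" capacity
stored_tb = 20                               # deduplicated storage actually consumed
                                             # (post assumes back end ~= front end)

dedupe_ratio = logical_tb / stored_tb        # 600 / 20 = 30, i.e. 30:1
print(f"Logical: {logical_tb} TB, stored: {stored_tb} TB, ratio {dedupe_ratio:.0f}:1")
```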