
What is the best way to use NetBackup Appliance to backup large NAS system

Created: 22 Jan 2014 • Updated: 17 Jul 2014 | 7 comments
This issue has been solved.

We are in the middle of a project to replace our current Backup Exec 2012 + Data Domain 565 backup architecture with one of three possible solutions:

1. NetBackup 5230 Appliances (1x5230 with 76TB at primary DC and 2x5230 with 76TB each at secondary DC)

2. EMC Avamar/DD2500 (with NDMP Accelerator node and 1x90TB at primary DC and 1x135TB at secondary DC)

3. Commvault Simpana (don't have as much detail on this option; mainly looked at a backup service from a local provider that would use this system)

Here are some details about our current storage architecture:

Primary SAN = EMC VNX 5300 with some block level and file level storage

Secondary SAN = EMC VNX 5300 with some block level and file level storage

Use RecoverPoint to replicate block level storage (LUNs) between SANs

Use VNX Replicator to replicate file level storage (file systems) between SANs

We are almost 100% virtualized with VMware 5.0 host servers (3 hosts at primary and 3 hosts at secondary sites)

100Mbps metro ethernet between data centers

Ok, so here is where our backup problems come up:

5 x NAS file systems stored on file level side of VNX 5300

fs1 = 7.2TB file system that takes too long to back up

fs2 = about 8TB of small files that is our image archive system (130 million files)

fs3 = large file archive, about 11TB, with data only added each quarter (about 400-500GB per quarter); these files are manually moved from production file systems into this archive area, but users need access to this read-only data 24x7 as needed

This is probably enough detail for now...

We like the Symantec approach and would like to go with it, but are not 100% sure how it will handle these large file systems.  We have been told so far that we can use NetBackup Accelerator to back up the large file systems, but I am not convinced.  I do like the way the NetBackup appliance is directly attached to the storage, and I think that will help make the backup jobs more efficient with fewer network parts in the puzzle.  I also like the NBU interface best so far.

I like the Avamar/Data Domain approach since we are familiar with Data Domain already and we know what kind of deduplication we should get.  Plus, since we have had the EMC VNX SANs, I have had good experience with EMC support so far, and this would be more of the same in that sense.  Symantec support for Backup Exec... sucks... I am not a big fan of NDMP, and the Avamar solution would leverage that.  I have seen many posts that

Anyway, I guess what I am hoping to hear is that someone has gone through something similar and has used a NBU appliance with NBU Accelerator to SUCCESSFULLY back up their large NAS file systems ON A VNX series SAN.   OR whether Avamar with NDMP backups is our best bet.


Riaan.Badenhorst:


Last year I implemented the same type of solution, NBU 5220 + VNX NAS.

We tested both options, Accelerator and the NDMP backup type. Much to my surprise the NDMP method actually outperformed or matched the Accelerator method. Large restores from the Accelerator file backups were slow for some reason, though. However, this might have been environmental, as they were using Hyper-V guests to mount the NAS.

To use the Accelerator you have to get a standard client to mount the NAS, and then you use Accelerator to back up the file system as if it belonged to that client. So data travels from the NAS to the client and then to the appliance. I've heard a suggestion to just do the mount of the NAS on the appliance itself, but never tried that. At least that would remove one hop from the chain.

In the end the client decided to go with NDMP: a full on the weekend and incrementals during the week. They're also still running the Accelerator backup once a month so they have a backup that can be restored to something other than a NAS (or in case they want to switch vendors, as NDMP restore is only supported on like hardware).


Riaan Badenhorst

It's easy :)

HoneyBadger1972:

So, I assume you are referring to NDMP vs Accelerator, both on the NetBackup appliance?

Can I ask what they had previously?  Did they have another deduplication system prior to this and how did deduplication change?

That is the other area of concern for us.  We are pretty confident about what we will get with Data Domain; we just don't know how well the NetBackup appliance will dedupe compared to DD.

Also, how different was the performance between NDMP and Accelerator?  How bad were the restores in numbers?  I take it restores from the NDMP side were fine?

Mark_Solutions:

Lots of factors actually come into play when using de-dupe, especially during restores etc.

This can be affected by having a very large fragment size for the storage unit in particular

Accelerator tuning is also important for performance - simply by increasing the number of worker threads you can have an appliance cope with Accelerator backups far better

The "official" test figures I have seen show the initial Accelerator backup will take longer than the NDMP backup, but after that it should be much quicker - and if set up and tuned properly the restores should be good too.

Assuming that this is a new installation, I assume you will be going with NetBackup 7.6, in which case it is unlikely that you will get many people giving you a comparison in view of how new it is - but the official appliance figures show a huge improvement from 7.5 to 7.6. Plus, being appliance-based, you can take advantage of having a 72TB de-dupe pool, which you cannot do on anything else.

So the figures look a bit like this:

Single stream backup with no de-dupe: 7.5 was 150 MB/s, 7.6 is 500 MB/s

16 stream backup with 98% de-dupe: 7.5 was 366 MB/s, 7.6 is 1029 MB/s

Longevity 8-stream restore (whatever that means!): 7.5 was 90 MB/s, 7.6 is 362 MB/s
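To put those throughput figures against the 7.2TB file system from the original question, here is a quick back-of-the-envelope calculation. The rates are the 7.5/7.6 appliance numbers quoted above (marketing figures, so treat the results as rough orders of magnitude only); the file system size comes from the question.

```python
# Rough backup-window estimates for the 7.2 TB file system (fs1),
# using the appliance throughput figures quoted in this comment.

FS_SIZE_TB = 7.2
TB_TO_MB = 1024 * 1024  # binary TB -> MB

rates_mb_s = {
    "7.5 single stream, no de-dupe": 150,
    "7.6 single stream, no de-dupe": 500,
    "7.5 16-stream, 98% de-dupe": 366,
    "7.6 16-stream, 98% de-dupe": 1029,
}

for label, rate in rates_mb_s.items():
    hours = FS_SIZE_TB * TB_TO_MB / rate / 3600
    print(f"{label}: about {hours:.1f} hours")
```

At the 7.5 single-stream rate, fs1 alone is roughly a 14-hour job, which lines up with the original "takes too long to back up" complaint; the 7.6 multi-stream figure would bring it down to around 2 hours if the quoted numbers hold.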

Lots of DD comparison figures are out now too, which claim up to 53x faster

I know it is all sales and marketing, but it does look like 7.6 is a major step forward when used on an appliance. As it is pretty new, though, you may not find anyone that has fully put it to the test yet.

Hope this all helps

Authorised Symantec Consultant

Don't forget to "Mark as Solution" if someone's advice has solved your issue - and please bring back the Thumbs Up!!

Mark_Solutions:

The "official" 7.5 test of Accelerator vs NDMP was as follows:

20 Million Files (1.31 TB)
10 volumes, 2 million 64 KB files per volume (small files highlight the inefficiency of NFS)
NFS mounted on Linux client

NDMP backup - 215 minutes

MSDP - no Accelerator - 100% de-dupe - 612 minutes

MSDP - Accelerator - 0% de-dupe - 610 minutes

MSDP - Accelerator - 89% de-dupe - 77 minutes

MSDP - Accelerator - 94% de-dupe - 40 minutes

MSDP - Accelerator - 97% de-dupe - 27 minutes

MSDP - Accelerator - 100% de-dupe - 19 minutes

You would expect 7.6 to be better than this

This shows that it is all down to the de-dupe rates you get in your environment
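That relationship can be sketched with a simple two-parameter model: fixed overhead plus time proportional to the fraction of unique data, fitted only to the 0% and 100% endpoints of the figures above. The model and its constants are my own illustration, not anything from the official test.

```python
# Toy model of Accelerator backup time vs de-dupe rate, fitted to the
# two endpoints of the quoted 7.5 figures (1.31 TB, 20 million files):
#   time(d) = fixed_overhead + (1 - d) * full_transfer_time

FIXED_MIN = 19             # 100% de-dupe run: change tracking only
FULL_EXTRA_MIN = 610 - 19  # extra minutes when every block is unique

def predicted_minutes(dedupe_rate: float) -> float:
    """Estimated backup time in minutes for a de-dupe rate in [0.0, 1.0]."""
    return FIXED_MIN + (1.0 - dedupe_rate) * FULL_EXTRA_MIN

observed = {0.0: 610, 0.89: 77, 0.94: 40, 0.97: 27, 1.0: 19}
for d, minutes in observed.items():
    print(f"{d:.0%} de-dupe: observed {minutes} min, model {predicted_minutes(d):.0f} min")
```

The linear model over-predicts the mid-range runs somewhat (e.g. 84 vs 77 minutes at 89%), but the takeaway matches the point above: the time is dominated by the fraction of data that is actually unique in your environment.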

Hope this also helps


HoneyBadger1972:

That is great information, what I like to see...

Would you say that having 7.2TB (130 million files, mostly image files that don't change once posted) would not work well with NFS mounts?  This is a file location that people upload pictures to via a web front end; pics are uploaded and cut into many smaller files so users can browse these picture collections through a web page, kind of like Google Maps, using tiles, from what I have been told.  ANYWAY... we do not back up this entire file system more than once a quarter.  We do run weekly fulls and daily incrementals on the most recent quarter of files.  This quarterly folder grows over time until the next quarter begins, then we point to a new quarterly folder and change our backup source...  maybe more detail than you want, but sometimes it is the details that bite you in the @$$.

We have another large file system that is your more traditional, unstructured file storage.  Office files, PDFs, Acad files, Revit files, some media files, etc.

Also, your list of details above shows Accelerator backups over time and makes sense.  For the NDMP detail you have listed, I assume the first time was a level 0 dump; how fast are the level 1 dumps after that?  What would dedupe rates look like with NDMP over time?

Also, as I understand it, NBU does some kind of dynamic fixed-block-level dedupe.  DD/Avamar claim to be the "only" true variable-block-length dedupe.  What say you?  Is there some definitive documentation about how NBU handles dedupe?  I know I have read that there are performance trade-offs with variable vs fixed length, and the question to me is what works well enough for us given the cost trade-offs of the systems.

HoneyBadger1972:

Also, how did you make the NFS mounts?  Did you connect directly from the appliance or did you have to have a windows machine act as a proxy?

Mark_Solutions:

The NFS mounts can be wherever you like - on the appliance itself, but usually on a Windows / Unix client

De-dupe for image files is not always great - so this will apply whether using NDMP or Accelerator - the only way to really find out is to try it

The de-dupe does go deeper than the file level, but it really helps if you have multiple similar files: even if it's not true de-dupe all of the time, single-instancing does come into play

The issue, I guess, is what happens every quarter when you start a new folder

What Accelerator will give you is that even with a full backup only the "new" images will get backed up, as the others are already in the de-dupe database - so it will track the changes nicely and should still give good rates.

For your traditional file systems it will work really well using Accelerator

I don't have any other figures - or the NDMP vs Accelerator figures for 7.6 yet

Hope all of this helps - it is always specific to your environment, so the proof will come in the testing, to be honest, and predictions really cannot be made

Technically, as far as NBU de-dupe is concerned (at least up to 7.5):

It occurs at the file and file-segment level

The fixed-length "intelligent" de-dupe is used to optimise performance (CPU and memory)

The block size could actually be changed if required for a particular installation, but that would probably need to be put past support if you wanted to do it.
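To make the fixed- vs variable-block distinction from the question above concrete, here is a toy fixed-block dedupe sketch. This is purely illustrative - it is not NetBackup's actual MSDP implementation, and the block size and data are made up - but it shows the basic mechanics: split the stream into equal-sized blocks, fingerprint each one, and store each unique block only once.

```python
import hashlib

def fixed_block_dedupe(data: bytes, block_size: int = 128 * 1024):
    """Toy fixed-block dedupe: split into equal blocks, keep unique ones.

    Illustrative only - not NetBackup's actual MSDP implementation.
    Returns a store of unique blocks plus the ordered fingerprint
    "recipe" needed to reconstruct the original stream.
    """
    store = {}   # fingerprint -> block contents (unique blocks only)
    recipe = []  # ordered fingerprints to rebuild the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        recipe.append(fp)
    return store, recipe

# Two "files" sharing most of their content: far fewer blocks are
# stored than referenced, because identical blocks dedupe to one copy.
a = b"A" * 512 * 1024
b = b"A" * 384 * 1024 + b"B" * 128 * 1024
store, recipe = fixed_block_dedupe(a + b)
print(f"{len(recipe)} blocks referenced, {len(store)} unique blocks stored")
```

The weakness of the fixed-block approach is also visible here: insert a single byte at the front of the stream and every block boundary shifts, so nothing dedupes against the previous copy. Variable-block schemes (the DD/Avamar claim) pick boundaries from the content itself so they survive such shifts, at extra CPU cost - which is exactly the performance trade-off mentioned in the question.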
