
Flash Backup to backup file server

Created: 16 Sep 2012 • Updated: 15 May 2014 | 3 comments
This issue has been solved.


We have a clustered file server (Windows Server 2008) with over 400,000 (4 lakh) images stored under one directory. Whenever the backup process runs on this server, users accessing the images experience slowness.

Moreover, the backup runs for almost a week, as the total size of the data is more than 4 TB.

Will FlashBackup help me complete the backups faster? Or is there any other way I can speed this backup up? Can someone please share your thoughts on best practice for backing up this server?

3 Comments

Nicolai:

It sounds like you have some very basic I/O issues, especially when you say users are impacted by the backup.

See the bpbkar section on:

If reading directly off the disk to "nowhere" takes the same time as a normal backup, you may have a disk issue.
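A read-only test like the one described above can be run with the NetBackup client binary itself, discarding the output so only disk-read speed is measured (no network transfer, no writing to storage). This is a sketch: the install path and `D:\Images` directory are placeholders for your environment; on Windows clients the binary is bpbkar32.

```shell
REM Read the image directory with the NetBackup backup agent and throw the
REM data away, so the elapsed time reflects raw disk-read performance only.
REM Substitute your actual NetBackup install path and image directory.
"C:\Program Files\Veritas\NetBackup\bin\bpbkar32" -nocont D:\Images > NUL 2> C:\Temp\bpbkar_test.log
```

If this runs roughly as long as a normal backup of the same data, the bottleneck is on the disk/file-system side rather than in the backup infrastructure.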

Assumption is the mother of all mess ups.

If this post answered your question, please mark it as a solution.

Marianne:

Millions of files, many directory levels, and fragmentation are known to cause this.

We have been able to cut backup times for customers by breaking the volume up into multiple streams. Something like this:


Select 'Allow multiple data streams' in the policy attributes, and enable multiplexing in the schedules (at least as high as the number of NEW_STREAM directives in the backup selection). 4-8 streams have given good results at various customers. Ensure 'Maximum jobs per client' is set to at least the number of streams specified. The storage unit's multiplexing level should also match.

You obviously need to know what the directory structure looks like and ensure that nothing is missed.
The first time folders are grouped like this, you will see some streams completing faster than others.
Make a note and adjust the groupings before the next full backup is due.
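As an illustration of the approach above, the backup selection list might look something like this. The directory names are hypothetical; group your real subdirectories so each stream covers a roughly equal amount of data:

```
NEW_STREAM
D:\Images\0000-0999
NEW_STREAM
D:\Images\1000-1999
NEW_STREAM
D:\Images\2000-2999
NEW_STREAM
D:\Images\3000-3999
```

Each NEW_STREAM directive starts a separate backup job, so with 'Allow multiple data streams' enabled these four groups back up in parallel.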

This way full backups should be able to complete over a weekend, and you can schedule differential or cumulative incrementals for weekdays.
These changes should only be made just before a full backup is due, to prevent the incrementals from running as fulls.

Try this before testing FlashBackup.

FlashBackup is good, but bear in mind that the entire volume is backed up at block level (even unused blocks). Users will still experience a performance hit.

Best if FlashBackup can be combined with off-host backups.
This requires the entire volume to be mirrored at the hardware level or with Storage Foundation.
Array-level snapshots or SF snapshots can then be used to import/map the snapshot mirror on the media server and perform the FlashBackup there.
This offloads all backup activity onto the media server.
It can be completely automated with the NBU Snapshot Client (part of the Enterprise Client license, which includes FlashBackup). Details are in the Snapshot Client Admin Guide.


Mouse:

Another option is to use Symantec deduplication (an appliance or MSDP) and leverage the Accelerator feature, which is designed for exactly this use case. Even if you won't get good compression (since images are already compressed), you should see a decent dedupe ratio on the second and subsequent backups, because the majority of the data will be screened off.

FlashBackup is okay, but don't forget about restore performance; it could be an issue.