
Slow Speed of Exchange 2007 mailbox backup - Netbackup

Created: 14 Jan 2013 • Updated: 15 Jan 2013 | 9 comments
UnionAW:
This issue has been solved. See solution.

Master Server - Windows 2008
NB Version -
Policy config - Exchange 2007 mailbox on Windows 2008
Currently no snapshot or GRT enabled
Master server is on same site as client, 100Mb link.
Data is backed up to a Puredisk storage pool locally attached to the master server

The backup is running at 1-1.5 MB/s, causing a full (130GB) to take more than a day to complete. I'm assuming this isn't normal?
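A quick sanity check on those figures (using the sizes and rates quoted above) shows why the full runs past a day:

```python
# Back-of-envelope check: how long a 130 GB full takes at the
# observed 1-1.5 MB/s throughput.
def full_backup_hours(size_gb, mb_per_sec):
    """Wall-clock hours to move size_gb gigabytes at mb_per_sec megabytes/second."""
    return size_gb * 1024 / mb_per_sec / 3600

print(round(full_backup_hours(130, 1.0), 1))  # ~37.0 hours at 1 MB/s
print(round(full_backup_hours(130, 1.5), 1))  # ~24.7 hours at 1.5 MB/s
```

So at the observed rates the job mathematically cannot finish inside a day; the question is why the rate is so low in the first place.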

I'm trying to investigate what is causing the slow speed.

I'm aware auto config will have an impact, but surely not this much?

I'm new to the environment and trying to work out why certain things have been done. I'm assuming GRT would only make the backup take longer?



Comments (9)

Marianne:

Mailbox backups have always been known for slow performance. The problem is the way in which mailboxes are backed up - the admin account used for mailbox backups literally needs to log into each mailbox one-by-one to back it up.
For this reason we have in the past broken the backup up into multiple streams combined with multiplexing and multistreaming.
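For reference, that splitting was typically done with NEW_STREAM directives in the policy's backup selections. A rough sketch (the alphabetical ranges are made up for illustration; check the NBU for Exchange Admin Guide for the exact mailbox-path syntax in your version):

```
NEW_STREAM
Microsoft Exchange Mailboxes:\[a-h]*
NEW_STREAM
Microsoft Exchange Mailboxes:\[i-p]*
NEW_STREAM
Microsoft Exchange Mailboxes:\[q-z]*
```

Each NEW_STREAM block becomes its own job, so the per-mailbox logon overhead is at least paid in parallel.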

No longer needed with GRT.
Back up the Information Store with the ability to restore single items.
Best to configure GRT.

Use Handy NBU Links in my signature for links to Exchange HOWTO videos.

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

RLeon:

In addition to the excellent post above, please be aware that to back up the Exchange Information Store with GRT enabled, you will also have to use snapshots or the backup will fail.

This wasn't written in the guides, compatibility lists or release notes, but it is a requirement:

Part of your low backup performance could be just as they say in the thread (paraphrased):
If you use the streaming backup style to PureDisk - as you do in your case with mailbox backups - then you will likely see low backup performance.

Master server is on same site as client, 100Mb link.

If it is indeed 100Mb and not 1000Mb, then that may also explain part of the problem.
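Worth noting, though, that even a 100Mb link shouldn't bottleneck at 1-1.5 MB/s. A quick conversion (ideal ceilings, ignoring protocol overhead) shows the gap:

```python
# Theoretical link ceilings vs the ~1-1.5 MB/s observed in this thread.
def link_ceiling_mb_s(megabits_per_sec, efficiency=1.0):
    """Convert a link speed in Mb/s to MB/s, optionally derated for overhead."""
    return megabits_per_sec / 8 * efficiency

print(link_ceiling_mb_s(100))   # 12.5  MB/s - ideal 100Mb link
print(link_ceiling_mb_s(1000))  # 125.0 MB/s - ideal gigabit link
```

So the network alone explains perhaps an order of magnitude of headroom being unused; the per-mailbox logon overhead described above is the more likely culprit.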

UnionAW:

My thanks to both of you, I'll review the above and let you know.

Some of the network kit is gigabit, but they are part way through their upgrade programme, so for the moment we are limited to 100Mb.

To confirm, would you both recommend doing the Information Store backup rather than setting up a dedicated policy for the mailboxes?

Finally, I notice that the new and shiny Accelerator doesn't work on Exchange policies. My client is complaining, saying that other enterprise applications do give you the option of block-level full backups that run as incrementals time-wise. Has he got the wrong end of the stick or not?

Symantec came out with this massive fanfare about 100-times-faster backups, but I have yet to see anything like it. It's quite possible I'm not setting it up properly, though.

Marianne:

My recommendation is to use GRT, rather than separate MIS and mailbox backups.

There are many TNs available that explain why mailbox backups are slow, even on a Gb network. Many forum posts as well, like this one:

See the HOWTO videos and the NBU for Exchange Admin Guide for GRT configuration.


RLeon:

Since MS no longer supports mailbox-level, stream-based backup in newer versions of Exchange, backing up at the IS or DB level is the way to go in the long run. With NetBackup's GRT capability, you will still be able to restore at the mailbox and individual email level.

With Exchange 2010, MS requires you to use snapshot backups. With Exchange 2007, although you are supposed to be able to perform stream-based backups at the IS level, NetBackup requires that you use snapshots with that.

Until this requirement is published and publicly referenced, I shall refer to it as the Speechless Imperative, in honour of Chris Zimmerman, a.k.a. CRZ.

The Accelerator is for use at the file level. For the application level, you can use client-side deduplication, which technically still looks at and sends things at the block level.
I could dig deeper on this topic, but your client is not entirely correct. I would recommend reading up on snapshot-based backups in NetBackup; they are all block level, one way or another. (For example, if your Exchange/someApp is in a VM and you do a VMware vStorage backup of the entire VM, that is a block-level backup. Even better, if you then use block-level incremental backup on that VM, that is a block-level backup too. And so on.)

Accelerator could be more than 100 times faster if the data change rate at the source is low to none. It is all relative.

UnionAW:

Thanks for this. 

What I'm trying to understand is the idea that your fulls can be much smaller because they point back to an original base level: rather than doing a complete full each time, it compares against the base and only backs up the difference. Then you do a new base at regular intervals.
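The base-plus-differences idea described above is essentially block-level deduplication, and a toy sketch makes it concrete (this is an illustration of the concept, not NetBackup's actual dedup engine; block size and data are made up):

```python
import hashlib

# Toy block-level dedup: each "full" is recorded as a list of block hashes,
# and a block's bytes are stored only the first time that hash is seen.
BLOCK = 4
store = {}  # hash -> block bytes (the dedup pool)

def backup(data):
    """Record a full backup; return its recipe and the count of new blocks written."""
    recipe, new = [], 0
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        h = hashlib.sha256(block).hexdigest()
        if h not in store:   # only unseen blocks consume storage
            store[h] = block
            new += 1
        recipe.append(h)
    return recipe, new

def restore(recipe):
    """Reassemble the full image from stored blocks."""
    return b"".join(store[h] for h in recipe)

r1, new1 = backup(b"AAAABBBBCCCCDDDD")  # first full: every block is new
r2, new2 = backup(b"AAAAXXXXCCCCDDDD")  # second full: only one block changed
print(new1, new2)                       # 4 1
print(restore(r2) == b"AAAAXXXXCCCCDDDD")  # True
```

Both backups restore as complete fulls, but the second one only cost one block of storage - which is the "smaller fulls" effect you are describing.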

Or have I got myself totally confused ?

Marianne:

Apologies! Link to NBU for Exchange manual:

PS: You can find links to all NBU manuals in Handy NBU Links in my signature.


RLeon:

You are trying to describe how deduplication works. If you want the technical stuff, you can refer to the pdf from here:

It is not about Netbackup specifically, but it describes the exact same dedup engine, and the bits and blocks behind how it works, with diagrams.

In NetBackup, deduplication and the Accelerator work together, but conceptually they are two very unrelated things. You can find out more from the blog here, and from the discussions here and here.

Practically, Accelerator has to work with deduplication: if it only gives you one changed block of a file, you cannot get the whole file back from that; you only have a piece of it. But together with deduplication, which works at the block level, the whole file can be put back together from that one new block plus the previously collected unchanged blocks of that file. (They call it Optimized Synthetic.) And in the end, there is a chance that the block sent by Accelerator may not even need to be written to storage, because an existing identical block is identified by the dedup engine - whether client-side dedup or media-server dedup - and only pointer references are updated in the dedup DB. Hope that made sense.
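That interplay can be sketched in a few lines (a conceptual toy, not the real NetBackup implementation; the block sizes and data are invented): the client sends only changed blocks, the server synthesizes the new full's recipe from the previous full plus those changes, and a "changed" block that already exists in the pool costs nothing but a pointer update.

```python
import hashlib

pool = {}  # dedup pool: hash -> bytes

def sha(b):
    return hashlib.sha256(b).hexdigest()

def synthesize_full(prev_recipe, changes):
    """Optimized-synthetic sketch. changes maps block index -> new bytes.
    Returns the new full's recipe and how many bytes actually hit storage."""
    recipe, written = list(prev_recipe), 0
    for idx, block in changes.items():
        h = sha(block)
        if h not in pool:   # genuinely new data is stored once...
            pool[h] = block
            written += len(block)
        recipe[idx] = h     # ...otherwise it's just a pointer update
    return recipe, written

# First full: four 4-byte blocks, all stored.
full1 = []
for b in (b"AAAA", b"BBBB", b"CCCC", b"DDDD"):
    pool[sha(b)] = b
    full1.append(sha(b))

# Accelerator-style run: block 1 changed to b"BBB2",
# block 3 changed to b"AAAA" (which the pool already holds).
full2, written = synthesize_full(full1, {1: b"BBB2", 3: b"AAAA"})
print(written)                           # 4 - only BBB2 is new; AAAA dedups away
print(b"".join(pool[h] for h in full2))  # b'AAAABBB2CCCCAAAA'
```

The second "full" is complete and restorable, yet only one new block was written - the 100x-faster claim is just this effect taken to the limit when almost nothing changes.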