
How does Backup Exec 2012 dedup work, in depth?

Created: 02 Apr 2012 • Updated: 03 Apr 2012 | 5 comments
Superkikim:
This issue has been solved.


Is there any technical bulletin or white paper explaining the dedup mechanism in Backup Exec 2012?

I know that target dedup (e.g. Data Domain) won't reduce the backup window, as the work on the client side remains the same as with regular storage.

I have quite a few questions about dedup that I couldn't find easy answers to in Symantec resources:

  • Am I right in thinking that source dedup will reduce the backup window on servers?
  • What is the overhead of dedup?
  • What performs the dedup processing: the agent on the protected server, or the media server before writing data to disk?
  • Is the data rehydrated when written to tape?
  • What if we use dedup storage like Data Domain: can we have source dedup from Symantec and target dedup from Data Domain, and does that give any advantage?

A live webcast on dedup would be great at some other time than the usual slot (10 am PST), which is too late for Central Europe.

Comments (5)

Colin Weaver:

Source (client-side) dedup may or may not reduce the window, as the calculation of what to back up remains the same. What source dedup does is move the processing overhead to the source server and reduce the amount of data actually sent over the network.
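Colin's point can be sketched roughly like this: the client fingerprints each chunk of data and only transfers chunks the media server hasn't already seen. This is a toy Python model, not Backup Exec's actual protocol; the chunk size, SHA-256 fingerprinting, and the dict standing in for the media server's store are all illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # illustrative fixed chunk size, not BE's real chunking

def chunk_hashes(data: bytes):
    """Split data into fixed-size chunks and fingerprint each with SHA-256."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

def client_side_backup(data: bytes, server_store: dict) -> int:
    """Send only chunks whose fingerprints the 'media server' lacks; return bytes sent."""
    sent = 0
    for digest, chunk in chunk_hashes(data):
        if digest not in server_store:      # ask the server: do you already have this chunk?
            server_store[digest] = chunk    # only then transfer it over the wire
            sent += len(chunk)
    return sent

store = {}
payload = b"A" * CHUNK_SIZE * 3             # three identical chunks
first = client_side_backup(payload, store)  # only one unique chunk is transferred
second = client_side_backup(payload, store) # nothing new to send on the repeat run
print(first, second)                        # 65536 0
```

Note the full payload is still read and fingerprinted on every run, which is why the backup window may not shrink even though network traffic does.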

The overhead varies depending on what is being backed up, so it is difficult to quantify.

Where the processing occurs depends on whether you have client-side or media-server-side dedup configured.

Data is rehydrated when it is duplicated to tape.
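Rehydration can be pictured as follows: a backup is stored in the dedup pool as a "recipe" of chunk fingerprints, and duplicating to tape reassembles the full byte stream from that recipe. Again a toy Python sketch under assumed names; the real on-disk format is internal to Backup Exec.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # illustrative fixed chunk size

def dedup_store(data: bytes, chunks: dict) -> list:
    """Store data as a recipe of chunk fingerprints; 'chunks' is the shared dedup pool."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        c = data[i:i + CHUNK_SIZE]
        d = hashlib.sha256(c).hexdigest()
        chunks.setdefault(d, c)   # keep only one copy of each unique chunk
        recipe.append(d)
    return recipe

def rehydrate(recipe: list, chunks: dict) -> bytes:
    """Rebuild the full byte stream, e.g. before duplicating to tape."""
    return b"".join(chunks[d] for d in recipe)

pool = {}
data = b"X" * CHUNK_SIZE + b"Y" * CHUNK_SIZE + b"X" * CHUNK_SIZE
recipe = dedup_store(data, pool)
assert len(pool) == 2                     # only two unique chunks kept on disk
assert rehydrate(recipe, pool) == data    # the tape copy gets the full stream back
```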

I'm not sure what you are asking with the Data Domain question.

ZeRoC00L:

Some more info to read about the deduplication option:

If this response answers your concern, please mark it as a "solution"

Superkikim:

Thank you Colin and ZeRoC00L.  That was useful.

Regarding Data Domain, my question is: can you use software dedup in addition to a dedup appliance? But my guess is you can't... and it might just not make sense...

Regarding a deep dive into dedup, I wanted to understand how the dedup process works. Is it based on blocks, or on files? From the BE 2010 TECH129694 bulletin, it seems clear that an appliance does a better job, most probably because it is block-based.

Regarding the backup window, I was expecting the backup to be faster the second time, and the third, and so on, as what has already been dedup'd does not need to be processed again if nothing has changed. But again, this depends on how dedup works.

Also, if I get it right, client-side dedup will not be as efficient as media-server dedup at saving space, as it will only dedup that client's data, and not the data of multiple clients, which is most probably the case with media-server dedup. And also, with media-server dedup, is data dedup'd only across the clients in the same job, or across all clients, period?

Colin Weaver:

BE dedup is 'chunk'-based, which is somewhat similar to blocks but not directly related to the blocks on the disks.

With regard to client-side dedup, the client still gets information from the media server on whether a chunk of data already exists (even if it came from a different server).

Deduplication primarily reduces disk space; the effect on the backup window itself is secondary and may or may not be significant.

Superkikim:

Thanks for your message... that clarifies a lot :)

Have a nice day