
Overlapping dedupe jobs

Created: 24 Feb 2012 • Updated: 24 Feb 2012 | 8 comments
This issue has been solved. See solution.

Hi,

Running Backup Exec 2010 R3 with the dedupe option.

When scheduling backup jobs, is it better not to overlap them when backing up to a dedupe folder?

Will overlapping jobs mean I'm not getting proper deduplication of backups if the overlapping jobs have the same sort of content?

 

I'm after some recommendations on how I should be scheduling backups to a dedupe folder and then running optimized duplication to another media server. At the moment I automatically duplicate to another media server at the completion of each backup job; is this the best thing to do?

Thanks

Comments

pkh:

There is no harm in scheduling your duplicate jobs immediately after your backup jobs.  Only the changed data blocks would be sent to the other server.
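To illustrate what pkh means by only the changed blocks being sent, here is a minimal sketch of optimized duplication in Python. It is purely illustrative, not Backup Exec code: the block size and the dict standing in for the remote dedupe folder are assumptions.

    import hashlib

    BLOCK_SIZE = 128 * 1024  # illustrative block size, not BE's actual unit

    def optimized_duplicate(data, remote_blocks):
        """Send only the blocks the target server does not already hold.

        data          -- the backup set as bytes (toy stand-in)
        remote_blocks -- dict of hash -> block on the second media server
        Returns (blocks_sent, blocks_skipped).
        """
        sent = skipped = 0
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest in remote_blocks:
                skipped += 1                   # target already has this block
            else:
                remote_blocks[digest] = block  # only new blocks cross the wire
                sent += 1
        return sent, skipped

    remote = {}
    print(optimized_duplicate(b"a" * 500_000, remote))  # first run sends blocks
    print(optimized_duplicate(b"a" * 500_000, remote))  # unchanged data: all skipped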

LorisM:

What about overlapping backup to dedupe folder jobs?

pkh:

There is no harm in that either. Only one copy of a data block will be kept in the dedupe folder. If two files have the same data block, the data block from the first file to be backed up will be stored in the database. The other file will just reference it.

Note that when you run multiple dedupe jobs, there is a high demand on CPU and RAM.
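To picture the "first copy stored, later copies just reference it" behaviour pkh describes, here is a toy model of a dedupe folder in Python. Again, this illustrates the concept only; it is not how BE actually implements its dedupe database.

    import hashlib

    class DedupStore:
        """Toy dedupe folder: each unique block is stored exactly once."""

        def __init__(self, block_size=128 * 1024):
            self.block_size = block_size
            self.blocks = {}   # hash -> block data, stored once
            self.catalog = {}  # backup name -> list of block hashes

        def backup(self, name, data):
            refs = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                digest = hashlib.sha256(block).hexdigest()
                # The first job to back up this block stores it;
                # later jobs only add a reference to it.
                self.blocks.setdefault(digest, block)
                refs.append(digest)
            self.catalog[name] = refs

    store = DedupStore()
    store.backup("jobA", b"x" * 600_000)
    store.backup("jobB", b"x" * 600_000)  # same content, overlapping job
    print(len(store.blocks))              # unique blocks stored once, not twice

Overlapping jobs still land in the same block store, which is why dedupe works across them; the cost is the CPU and RAM spent hashing and looking up every block.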

teiva-boy:

Remember, the benefit of backup-to-disk with Backup Exec is that you can send multiple concurrent jobs to it. Leverage that!


pkh:

.... provided you have the horsepower to handle the simultaneous jobs.  People often forget that there is no such thing as a free lunch.

LorisM:

Well, yes, I'm seeing a bit of a slowdown in performance with multiple dedupe jobs compared to multiple backup-to-disk jobs.

The server is an IBM x3650 with 32 GB RAM and a 3 GHz Intel Xeon, running media server deduplication.

pkh:

Definitely. There is a lot more processing going on before the data is stored in the dedupe folder. BE has to hash the data and check each block against existing blocks to make sure that it is unique before storing it. For B2D folders, it is just a straight write.

When you run simultaneous jobs, you have to balance the load on your machine against the time gained. With a not-so-powerful machine, you might be better off running consecutive jobs.

SOLUTION
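pkh's point about the extra processing can be seen with a rough micro-benchmark, a plain Python sketch rather than anything BE-specific (real dedupe engines use variable block sizes and on-disk indexes, so the numbers are only indicative):

    import hashlib, os, time

    data = os.urandom(32 * 1024 * 1024)  # 32 MB of random test data
    BLOCK = 128 * 1024

    # Straight write, as to a B2D folder
    t0 = time.perf_counter()
    with open("b2d.bin", "wb") as f:
        f.write(data)
    plain = time.perf_counter() - t0

    # Dedupe-style write: hash every block and check it against the
    # index of blocks already seen before storing it
    seen = {}
    t0 = time.perf_counter()
    with open("dedup.bin", "wb") as f:
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            digest = hashlib.sha256(block).digest()
            if digest not in seen:        # unique block: store it
                seen[digest] = f.tell()
                f.write(block)
    dedup = time.perf_counter() - t0

    print(f"straight write: {plain:.2f}s, dedupe-style write: {dedup:.2f}s")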
LorisM:

Thanks, pkh.

Not sure if this is possible, but can a job automatically start when another finishes, rather than being based on a scheduled time, in order to get around this issue?

So really only one job, for example, has a start time and the rest just run consecutively after each one finishes?
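I don't know of a built-in BE 2010 option that chains jobs this way, but the idea can be scripted around whatever command-line interface is available. A minimal sketch, assuming a hypothetical blocking wrapper start_job.cmd that launches a named job and returns its exit code (the job names here are placeholders too):

    import subprocess

    def run_job(name):
        """Start a job and wait for it to finish.

        start_job.cmd is a hypothetical wrapper around your backup
        software's command line; it must block until the job completes.
        """
        return subprocess.run(["start_job.cmd", name]).returncode

    # Only the first job needs a scheduled start time; the rest run
    # consecutively, each one starting when the previous job ends.
    for job in ["Job-Exchange", "Job-SQL", "Job-FileServer"]:
        if run_job(job) != 0:
            break  # stop the chain if a job fails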