
Deduplication Folder Memory Warning - 1.5GB per TB of Storage

Created: 28 Jan 2012 | 3 comments
KeirL

Hi All

Can someone give me an insight into the underlying reasons for the warning that Backup Exec 2010 R3 gives when a deduplication backup to disk folder is created on a large volume?

I'm absolutely bought into the fact that deduplication is horrendously CPU- and memory-intensive, so I recognise this is a somewhat silly question, but I have a couple of peripheral questions:

If this warning is ignored, how do problems manifest themselves?  Do jobs fail, or just take ages to complete?

If a server has 12GB and a 20TB LUN for dedupe (which will cause this warning to appear), are errors/failures still possible if the size of the data being backed up is only about 5TB - i.e. significantly less than the threshold depicted in the warning message?

What if the client plans to increase the server memory in line with the data growth but wants the array to be initially sized at 20TB whilst data backups are low?

I guess a single Backup Exec server can only have a single dedupe folder?  So configuring the storage as 2 x 10TB LUNs with a dedupe folder on each wouldn't be an option - right?

Thanks

KL

3 Comments

Kiran Bandi

If a server has 12GB and a 20TB LUN for dedupe (which will cause this warning to appear), are errors/failures still possible if the size of the data being backed up is only about 5TB - i.e. significantly less than the threshold depicted in the warning message?

The amount of memory required to support a dedup folder increases as the data stored in it grows. While there is very little data in the dedup folder, the existing memory is good enough to maintain it; but once the folder fills up with data, it becomes a problem. So it is better to install sufficient memory from the start to eliminate any problems in the future. If that is not possible, install the minimum (8GB) to start with and upgrade the memory as the data grows.

For 5TB of data 8GB of RAM is sufficient.

To maintain a dedup folder of size 20TB, it is recommended to have 30GB of RAM.
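The figures above follow the 1.5GB-of-RAM-per-TB rule from the warning. A quick sketch of the arithmetic (illustrative only; the function name is my own):

```python
# Backup Exec 2010 R2/R3 guideline: 1.5 GB of RAM per TB of dedupe folder.
GB_PER_TB = 1.5

def dedupe_ram_gb(folder_tb):
    """Recommended RAM (GB) for a dedupe folder of folder_tb terabytes."""
    return folder_tb * GB_PER_TB

print(dedupe_ram_gb(20))  # 20 TB folder -> 30.0 GB, matching the figure above
print(dedupe_ram_gb(5))   # 5 TB of data -> 7.5 GB, i.e. ~8 GB is enough
```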

I guess a single Backup Exec server can only have a single dedupe folder?

Correct. 

teiva-boy

In BackupExec 2010 initial release, the dedupe option required 1GB of RAM for every 1TB of data you were managing within the dedupe folder.

In R2, it became 1.5GB of RAM for every TB within the dedupe folder.  

This is in ADDITION to your OS and other services.  So a server in general would have at least 8GB of RAM, and you would add even more memory for the dedupe functionality.

The dedupe folder is artificially limited to 16TB in size, though your LUN at 20TB is just fine.  Newer builds of PureDisk (which the dedupe engine is based on) go up to 32TB and soon 64TB.  Eventually, perhaps the Backup Exec product management team will allow multi-streaming and a larger dedupe pool.

Do not look at the 5TB of data you are backing up in your calculations.  Look at the size of your dedupe folder to factor in RAM requirements.  Sadly it's cart-before-the-horse here: how do you know what size it'll be?  You can't, and Symantec has terrible sizing tools for partners to calculate this.  So you just have to over-spec RAM in anticipation.  It's a cheap upgrade.
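Putting that together - size on the dedupe folder capacity, not the data backed up, and remember the RAM is on top of what the OS and other services need. A hypothetical sizing helper (the 8GB OS baseline and the function name are my own assumptions, not Symantec figures):

```python
DEDUPE_GB_PER_TB = 1.5   # R2/R3 guideline: 1.5 GB RAM per TB of dedupe folder
OS_BASE_GB = 8           # baseline for the OS and other services (assumption)
MAX_FOLDER_TB = 16       # dedupe folder size cap mentioned above

def server_ram_gb(dedupe_folder_tb):
    """Total server RAM to spec, sized on the folder capacity you plan for."""
    if dedupe_folder_tb > MAX_FOLDER_TB:
        raise ValueError("dedupe folder is capped at 16 TB in this release")
    return OS_BASE_GB + dedupe_folder_tb * DEDUPE_GB_PER_TB

print(server_ram_gb(16))  # 8 + 24 = 32.0 GB for a maxed-out 16 TB folder
print(server_ram_gb(10))  # 8 + 15 = 23.0 GB
```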

There is an online portal - save yourself the long hold times: create a ticket online, then call in with the ticket # in hand :-) http://mysupport.symantec.com

"We backup data to restore, we don't backup data just to back it up."