Storage Optimization - Part III - Dedupe
A common thread running through my series of blogs on Storage Optimization is the "data explosion"
conundrum, and how it places an extra burden on an already stretched IT storage team.
I've already covered how Symantec's Thin Provisioning and Compression technologies can help with the
ever-expanding storage paradigm, so now it's time to change tack again.
Today’s blog again targets space utilization, and more specifically Deduplication, which helps
most organizations get the best utilization from their storage at the best price point.
So what Symantec software are we talking about? As ever, Storage Foundation.
Let’s focus on Deduplication and how this can offer space savings and cost reductions.
What is deduplication?
Simply put, deduplication means identifying commonality in data and, where it occurs, keeping only a single
copy rather than multiple copies, with every reference pointing back to that single copy.
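To make the idea concrete, here is a minimal sketch of chunk-level deduplication (an illustration of the general technique, not Storage Foundation's actual implementation): data is split into fixed-size chunks, each chunk is hashed, and only one copy of each unique chunk is stored, with files holding references to the shared chunks.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size (4 KB)

def dedupe_store(files):
    """Store each unique chunk once; represent files as lists of chunk hashes."""
    store = {}       # chunk hash -> the single stored copy of that chunk
    file_refs = {}   # filename -> ordered list of chunk hashes
    for name, data in files.items():
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # keep only one copy of the chunk
            refs.append(digest)              # the file references the shared copy
        file_refs[name] = refs
    return store, file_refs

# Two files containing the same 4 KB block are stored as one chunk plus two references.
files = {"a.img": b"x" * 4096, "b.img": b"x" * 4096}
store, refs = dedupe_store(files)
```

Here two identical files consume the space of one chunk; each file is just a list of pointers into the shared chunk store.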
What data is a target for deduplication?
As previously discussed, any data with commonality is a candidate. Typical examples, though by no means an
exhaustive list, include:
VMDKs - virtual machine clones, multiple OS images with limited differences.
Unstructured data, i.e. file servers containing engineering data with versioned instances of files.
Source code directories with limited incremental changes.
What value does Symantec's Storage Foundation dedupe bring?
The data deduplication feature eliminates duplicate blocks in your data by comparing blocks across the
file system. When it finds a duplicate block, it frees the space the duplicate occupied and
creates a pointer to the common block instead.
You can perform post-process, periodic deduplication on a file system to eliminate duplicate data without any
continuous CPU overhead.
Once potential targets (as above) have been identified, you can apply a 'pre-check' to see whether there are
any space-saving benefits to be had from deduplication, and proceed only if dedupe targets are found.
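The value of such a pre-check can be sketched in the same spirit: scan the data, count how many fixed-size chunks are duplicates, and report the fraction of space that deduplication would reclaim. This is a hypothetical helper for illustration, not the Storage Foundation tool itself.

```python
import hashlib

def estimate_savings(data, chunk_size):
    """Estimate the fraction of space saved by deduplicating fixed-size chunks."""
    seen = set()
    total = unique = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        total += 1
        if digest not in seen:
            seen.add(digest)
            unique += 1
    # Saved space = duplicate chunks as a fraction of all chunks.
    return 1 - unique / total if total else 0.0

# Highly repetitive data dedupes well: 16 KB made of the same 4 KB pattern
# yields four identical chunks, so three of the four can be reclaimed.
savings = estimate_savings(b"abcd" * 4096, 4096)  # 0.75
```

A real pre-check would walk the file system rather than a byte string, but the decision it informs is the same: only enable deduplication where the estimated savings justify it.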
The deduplication chunk size is flexible, with power-of-two options between 4k and 128k.
Finally, there is one other benefit of deduplication apart from space saving:
improved read performance, because several processes reading different files that share blocks can be served
from the same common copy.
A common customer concern is the detrimental effect deduplication can have on write performance; this
is not something that impacts Storage Foundation's dedupe technology, since deduplication runs post-process
rather than inline with writes.
As is the common mantra, Symantec has no hardware agenda and realises no benefit from selling more tin; we genuinely want customers to achieve space-saving benefits.
So, is it worth having a further look at what Symantec has to offer? If you want to use your storage more
effectively and get the full benefits of adopting deduplication, then of course the answer is yes!