Tight Storage Budget? – Time to Talk to Symantec

Created: 03 Jul 2012 • Updated: 11 Jun 2014

Storage Tiering is a fairly well-known concept in IT. Like many of these technologies, it can take many years before these cool concepts make it into the mainstream. Thin provisioning is a good example: I remember meeting 3PAR and having them walk me through the technology. Here we are five years later, and while thin provisioning is now very mainstream, customer implementations vary massively, from NetApp's check-box implementation to customers who refuse to over-provision at all for fear of running out of storage. Storage Tiering is, very simply, the ability to move data between different tiers of storage presented to the same host. Hardware vendors provide this in some high-end arrays, where it naturally comes at a high cost and you can only tier within a single enclosure. Over time these technologies have improved; for a long time arrays could only tier at a LUN level, meaning you had to move everything on a given LUN to another higher- or lower-tiered LUN. Technologies like EMC's FAST2 mean you can now shift big chunks instead of whole LUNs, though I have heard of mixed experiences when actually trying to implement FAST2.

I always thought that Storage Tiering was a brilliant idea. Back in the days when I was in support, I would occasionally get a support call from a Storage Foundation customer who was using tiering at the host level. I would set up a test system with several LUNs from different enclosures presented as one filesystem, then simply set the data movement policy to IO temperature, meaning keep the most frequently used data on the fastest tier of storage. Over the years, as I moved into presales, I realized the reason customers didn't use this was the operational expense of configuring it versus the actual savings made on storage. So has anything changed? It's 2012 and Storage Foundation's implementation of Storage Tiering has been around for ten years. The simple answer is yes, it's all changed now. Firstly, and most importantly, we have an implementation of Storage Tiering that understands Oracle, called Smart Tier. Customers can place their huge databases on a VxFS filesystem made up of several tiers of storage, and Smart Tier will move chunks of Oracle data onto the lower tiers as they age. The advantage this brings is that customers can use disks from different enclosures to assemble their tiered storage strategy. Unlike the hardware equivalents, we can also move data proactively ahead of a specific event: for example, an end-of-month billing cycle is about to run, so move all my data files for May to the SSD Tier 1 storage.
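For anyone curious what that looks like at the command line, the outline below is a minimal sketch of building a two-tier VxFS filesystem on the host and attaching an IO-temperature placement policy to it. The disk group, disk, volume, mount point and policy file names are made up for illustration, and exact options differ between platforms and releases, so treat the SmartTier administrator's guide as the authority on syntax.

    # Build one volume per tier from disks in different enclosures
    # (datadg, tier1vol, ssd01, sata01 etc. are hypothetical names)
    vxassist -g datadg make tier1vol 200g ssd01 ssd02
    vxassist -g datadg make tier2vol 800g sata01 sata02

    # Tag each volume with a placement class so the filesystem knows which tier it is
    vxassist -g datadg settag tier1vol vxfs.placement_class.tier1
    vxassist -g datadg settag tier2vol vxfs.placement_class.tier2

    # Combine the volumes into a volume set and lay a single multi-volume
    # VxFS filesystem across both tiers
    vxvset -g datadg make oravset tier1vol
    vxvset -g datadg addvol oravset tier2vol
    mkfs -F vxfs /dev/vx/rdsk/datadg/oravset    # mkfs -t vxfs on Linux
    mount -F vxfs /dev/vx/dsk/datadg/oravset /oradata

    # Assign a placement policy (an XML file describing, for example, an IO
    # temperature rule that keeps hot files on tier1 and relocates cold files
    # to tier2), then enforce it to relocate existing files
    fsppadm assign /oradata /etc/vx/iotemp_policy.xml
    fsppadm enforce /oradata

The policy file itself is where the IO temperature thresholds live, and generating it for you is part of the ease-of-configuration point in the next paragraph.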

The real game changer for me is the ease of configuration: Storage Foundation's Storage Tiering can now be configured through Veritas Operations Manager. This means that if you don't want to script your implementation, you can configure it easily through VOM or set up a template to provision it. I recently set this up and thought to myself: maybe this is now so simple that the payback is there for our customers. So for customers looking to save on a huge storage bill each year, it may be time to talk to Symantec.
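And if you do go the scripting route, the proactive example above (staging a month's data files onto the top tier before a billing run) comes down to running policy enforcement at the right moment. A hedged sketch, reusing the hypothetical /oradata mount and policy from the earlier example:

    #!/bin/sh
    # Hypothetical pre-billing-run step, kicked off from cron or the batch
    # scheduler at month end: re-run enforcement so the files the active policy
    # targets for the SSD placement class are relocated before the workload starts.
    fsppadm enforce /oradata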

http://symantec.dcig.com/2011/01/new-smart-tier-for-oracle-give.html

If you are interested and want to get an idea of the storage savings you could potentially make, please take a look at the Analyser tool, which you can run to see what benefits tiering could bring on your real data:

https://www-secure.symantec.com/connect/downloads/...