Solid-state storage continues to be touted as a way to address storage bottlenecks for running performance-intensive applications. Yet despite rising interest and an uptick in adoption, enterprises are still struggling with how to implement the technology in an optimized manner.
A plethora of new applications, coupled with trends around virtualization and substantial data growth, have upped the ante for high-performance infrastructure that is better poised to meet escalating service-level agreement (SLA) requirements. While CPU compute power and memory performance have flourished thanks to Moore's law and the advent of multicore processors, the performance of traditional hard disk drive (HDD)-based storage has not kept up at a similar pace, resulting in a widening I/O performance gap.
Other data processing challenges that point to the potential of solid state include:
- Data is increasingly decentralized today, making tiering capabilities and temporary data movement important;
- Client sprawl is straining shared storage infrastructure;
- Eight percent of data now requires low latency and high I/O operations per second (IOPS), which existing HDD technology can't deliver;
- Finally, HDD can't keep up with the demands of next-generation applications.
The impressive performance stats are driving mainstream demand for the current generation of solid-state devices, as well as priming the pump for faster devices in the future. International Data Corporation (IDC) found that 72% of organizations are either currently deploying solid-state devices or planning to implement them in the next 12 months. Moreover, IDC expects the amount of NAND solid-state technology being shipped into the enterprise to grow substantially, by 20x, reaching almost 2.9EB annually by 2016.¹
So how is all this solid-state technology being used? One common approach is to combine solid state devices with traditional HDDs as part of a hybrid array to optimize performance, particularly for dynamic tiering use cases or to operate as a cache layer within the array. All-flash arrays, while relatively nascent technology, are also grabbing the spotlight given their potential to boost the performance of particularly high-performance I/O applications like those associated with big data analytics.
Yet solid-state devices alone are not enough to deliver on the potential of the technology. The devices need to be paired with enterprise-grade data management software in order to optimize their available capacity, provide high levels of data protection and continuous application availability, and cost-effectively make use of available tiers of storage. Because solid-state devices are more expensive than HDDs on a dollar-per-gigabyte basis, it's critical that solutions incorporate capabilities such as automatic tiering, snapshots, deduplication, and thin provisioning to leverage the technology in the most cost-effective manner, rather than wholesale migrating all data sets or applications to the new storage medium.
That's where Symantec's Veritas Storage Foundation comes into play. The platform, written specifically to be flash-aware, delivers a set of advanced host-based storage management functionality that allows system administrators to effectively tackle the challenges associated with solid state, whether deployed as part of a high-end enterprise SAN or as an all-flash array. Specifically, Veritas Storage Foundation delivers:
- Visibility to differentiate between multiple types of storage devices provisioned to a host, reducing any risk of errors during provisioning and routine management operations.
- Dynamic Storage Tiering, used to analyze which data and applications would benefit most from residing on solid state and which should remain on regular HDD storage. Symantec's tiering functionality automates the relocation of any or all files in a file system between storage tiers according to administrator-defined policies.
- Automation, because in production environments, the data best suited for solid state and not an HDD changes over time. Storage Foundation continually determines which data is "hot" or most frequently accessed, allocating it to a solid state device, while relegating "cold" or less frequently accessed data to less expensive, less optimized storage.
- Storage efficiency, achieved via use of such technologies as thin provisioning, thin reclamation, deduplication, and compression, and by leveraging capabilities like Linux TRIM primitives. These functions expunge unnecessary data from flash storage while reclaiming capacity after data has been deleted, helping Storage Foundation extend the life of solid-state storage capacity.
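Storage Foundation implements hot/cold relocation inside the file system itself, but the policy idea described above can be illustrated in miniature. The following Python sketch is purely hypothetical (the threshold, function names, and the use of directories as stand-ins for SSD and HDD tiers are all assumptions, not the Veritas API): it classifies files as "hot" or "cold" by recent access count and moves them between tier locations accordingly.

```python
import os
import shutil

# Hypothetical policy knob: number of recent accesses that marks a file "hot".
HOT_ACCESS_THRESHOLD = 3

def classify(access_counts, threshold=HOT_ACCESS_THRESHOLD):
    """Split files into hot and cold sets based on recent access counts."""
    hot = {name for name, count in access_counts.items() if count >= threshold}
    cold = set(access_counts) - hot
    return hot, cold

def relocate(files, src_tier, dst_tier):
    """Move the named files from one tier directory to another.

    The directories stand in for SSD and HDD tiers; a real tiering engine
    like Storage Foundation's relocates data transparently inside the
    file system rather than moving whole files around.
    """
    for name in files:
        src = os.path.join(src_tier, name)
        if os.path.exists(src):
            shutil.move(src, os.path.join(dst_tier, name))
```

Run periodically, such a loop approximates the behavior the article describes: frequently accessed data migrates to the solid-state tier, while cold data is demoted to cheaper HDD capacity.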
In recognition of Storage Foundation's ability to optimize solid state, leading companies in this space are partnering with Symantec to incorporate Storage Foundation technology as part of their solid-state offerings. For example, Symantec has teamed with Violin Memory to integrate Storage Foundation's data management capabilities, including snapshots, cloning, deduplication, replication, and thin provisioning, into its portfolio of arrays. Separately, Symantec is working with Fusion-io to optimize Storage Foundation to run faster and better on flash using Linux TRIM primitives, and the pair are creating deeper integrations between their respective products.
Check out Symantec Veritas Storage Foundation to learn more about how it can help solid-state technology live up to its promise of breaking the storage logjam and enabling today's high-performance data center infrastructure.
¹ IDC, Taking Enterprise Storage to Another Level: A Look at Flash Adoption in the Enterprise, Doc #236366, August 2012