IT organizations today face the difficult challenges of managing exploding data volumes, delivering high service levels, and mitigating business risks while at the same time keeping costs under control. As if that weren't enough, they must do all these things within a data center environment where complexity has grown out of control. This article examines the steps IT organizations can take to gain centralized control across their multi-platform server, storage, and application environments.
Over the past 10 years, organizations have gone from using email as an alternative communications vehicle to depending on it as their most mission-critical application. According to the Enterprise Strategy Group, more than 60 percent of midmarket and enterprise businesses consider email the number one mission-critical business application for their organization.
According to some industry estimates, the volume of email that businesses are storing is increasing by more than 60 percent each year. An analysis conducted by the Radicati Group helps to put that figure in perspective.
The research firm estimates that the average corporate email user sends and receives a total of 84 messages per day, and that the average size of a message without an attachment is about 22KB. By 2008, the firm estimates, an average corporate email user will process up to 15.8MB of data per day. For a company with 1,000 users, that's an average of 10GB per day -- or 200GB per month.
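The arithmetic behind those figures is easy to verify. Here is a quick sketch; note that the roughly 10MB-per-user daily average with attachments, and the roughly 20 business days per month, are inferences from the article's 10GB/day and 200GB/month figures rather than Radicati's own numbers:

```python
# Back-of-the-envelope check of the email-volume estimates.

messages_per_user_per_day = 84
avg_message_kb = 22  # average size of a message without attachments

# Text-only traffic per user per day, in MB
text_only_mb = messages_per_user_per_day * avg_message_kb / 1024
print(f"Text-only traffic: {text_only_mb:.1f} MB/user/day")  # ~1.8 MB

# The 10GB/day figure for 1,000 users implies roughly 10MB per user
# per day once attachments are factored in (an inference, not a
# published Radicati figure).
users = 1000
mb_per_user = 10
daily_gb = users * mb_per_user / 1000  # decimal GB
monthly_gb = daily_gb * 20             # ~20 business days per month
print(f"Company-wide: {daily_gb:.0f} GB/day, {monthly_gb:.0f} GB/month")
```

The gap between the ~1.8MB of plain-text traffic and the ~10MB daily average per user shows why attachments, not message counts, dominate email storage growth.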
Of course, this growing flood of email into the corporate network drives a corresponding rise in hard costs, as it regularly exceeds the available capacity of traditional email gateway systems, mail transfer agents, email storage servers, groupware servers, and network bandwidth.
This explosive growth in data volumes comes at a time when the average enterprise data center is becoming increasingly complex. That's partly because organizations rarely buy all of their servers, routers, switches, and other network hardware and software from a single vendor at one time. If they did, they would be able to implement a truly end-to-end, homogeneous network that works together and provides some form of centralized console for management and administration.
But as IT departments know all too well, networks have a way of evolving on their own. As Peter McKellar, a Symantec group product manager, recently told the trade publication Processor, networks grow over time, picking up and adding whatever piece makes the most sense or provides the best value at the time. The more the network grows, the more cumbersome it can be to manage and secure.
"For example, if a company uses servers from both Sun and HP, they need to use two different volume managers, two different file systems, two different clustering tools, etc.," McKellar said. "For companies that have three or four different hardware vendors and dozens of different application vendors, the list of infrastructure software they must support becomes unmanageable."
But what if organizations were able to gain more visibility and control over their data center storage environments? What if they were able to eliminate numerous point solutions and instead manage their storage infrastructure with one tool? Wouldn't they be in a better position to manage that explosive data growth, optimize storage hardware investments, and adapt to changing business requirements?
A complete solution for heterogeneous storage management would include the following features:
- Increased storage utilization: Storage utilization and capacity management are improved across heterogeneous operating systems and storage hardware. Storage volumes and file systems can be grown dynamically, capacity can be reclaimed, and storage can be provisioned to new applications without any changes required of the end user. Daily, repetitive storage tasks are automated and performed online, including RAID reconfiguration, defragmentation, file system resizing, and volume resizing.
- Dynamic storage tiering: Unimportant or out-of-date files are moved automatically to less expensive storage devices without changing the way users or applications access them. Policies can be defined to move files based on creation date, last access time, owner, size, or name, and the files are moved dynamically without taking the application offline.
- Centralized storage management: Organizations centrally manage their application, server, and storage environments, leading to faster application deployment, higher service levels, reduced risk of human error, and greater visibility throughout the environment. Centralized control over storage administration enables hundreds of systems to be managed from a single console and automates routine tasks to eliminate time-consuming manual processes.
- Multi-vendor hardware infrastructure: This provides enterprises with the freedom to choose industry-leading functionality across platforms without getting locked into proprietary solutions.
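The tiering policies described above are features of the storage software itself, but the underlying policy logic is straightforward to sketch. The following is an illustration only, with hypothetical tier directories and an assumed 90-day idle threshold; a real tiering implementation relocates file data between tiers transparently, without changing the paths that users and applications see:

```python
import shutil
import time
from pathlib import Path

MAX_IDLE_DAYS = 90  # assumed policy threshold -- illustration only

def files_to_demote(tier: Path, max_idle_days: int):
    """Yield files under `tier` whose last access time exceeds the policy limit."""
    cutoff = time.time() - max_idle_days * 86400
    for path in sorted(tier.rglob("*")):
        if path.is_file() and path.stat().st_atime < cutoff:
            yield path

def demote(path: Path, primary: Path, archive: Path) -> Path:
    """Move one file to the archive tier, preserving its relative path."""
    dest = archive / path.relative_to(primary)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(path), str(dest))
    return dest

# Example policy run (hypothetical mount points):
#   primary, archive = Path("/storage/tier1"), Path("/storage/tier2")
#   for f in files_to_demote(primary, MAX_IDLE_DAYS):
#       demote(f, primary, archive)
```

The key difference from true dynamic tiering is that this sketch changes each file's physical path, whereas a multi-volume file system keeps the file's namespace location stable while moving its data underneath.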
With today's IT organizations facing swelling data volumes and increasingly complex data centers, the need for a single tool to manage the storage infrastructure is more pressing than ever. Such a tool must provide these organizations with centralized visibility and control across their multi-platform server, storage, and application environments. Ultimately, it should enable IT organizations to reduce operational costs and capital expenditures across the data center.