It may not have captured many headlines at the time, but a recently announced alliance could wind up having some profound implications for tomorrow’s corporate data center.
Last April, computer heavyweights including IBM, HP, Sun, and Advanced Micro Devices (AMD) temporarily put aside their differences to announce the Green Grid Alliance. The alliance aims to help cut the energy consumption of corporate data centers through education, the establishment of server power measurement standards, and changes in product design. It comes at a time when rising electricity rates, coupled with the growing deployment of power-hungry servers, are causing major concerns about electricity consumption among many large businesses.
Alliance members point to a number of factors that are putting serious pressure on today’s data centers, including:
- The cost per watt of electricity is increasing.
- Data centers continue to deploy additional hardware.
- Storage growth is exploding.
- Incremental power, when available, is even more expensive.
- Federal environmental regulations are increasing.
- Power costs will increase their share of IT spending.
Runaway energy consumption, alliance members agree, is hitting the wall hard.
“The data center energy crisis is inhibiting our clients’ business growth as they seek to access computing power,” says Mike Daniels, senior vice president of IBM Global Technology Services. “Many data centers have now reached full capacity, limiting a firm’s ability to grow and make necessary capital investments.”
That view is in keeping with Gartner Inc.’s prediction late last year that, by 2008, 50% of current data centers would have insufficient power and cooling capacity to meet the demands of high-density equipment.
“With the advent of high-density computer equipment such as blade servers, many data centers have maxed out their power and cooling capacity,” said Michael A. Bell, research vice president for Gartner. “It’s now possible to pack racks with equipment requiring 30,000 watts per rack or more in connected load. That compares to only 2,000 to 3,000 watts per rack a few years ago.”
A report released earlier this year puts the current situation in historical perspective.
According to the report, produced by the Lawrence Berkeley National Laboratory in Berkeley, Calif., and funded by AMD, the amount of electricity used to power the world’s data centers doubled between 2000 and 2005, due mainly to the tremendous increase in demand for Internet services. If current trends continue, the amount of power needed to run the world’s data center servers could increase by an additional 40% by 2010, the report states.
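The report’s headline figures translate into compound annual growth rates that are easy to sanity-check. A minimal sketch of that arithmetic (the calculation below is ours, not the report’s):

```python
# Illustrative arithmetic for the report's figures (not from the report itself).
def cagr(growth_factor: float, years: int) -> float:
    """Compound annual growth rate implied by total growth over a period."""
    return growth_factor ** (1 / years) - 1

# Power use doubled between 2000 and 2005.
print(f"2000-2005: {cagr(2.0, 5):.1%} per year")   # ~14.9%
# A further 40% increase projected for 2005-2010.
print(f"2005-2010: {cagr(1.4, 5):.1%} per year")   # ~7.0%
```

In other words, even the report’s slower projected trajectory still compounds to roughly 7% more server power demand every year.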
The study, using data from International Data Corp., looked specifically at servers used in the world’s data centers, which typically represent 60% to 80% of a data center’s total IT load. Reporting on the study, eWEEK wrote:
“As the demand for new technology grew, the number of installed, low-end volume servers--typically systems under $25,000, which also includes blade servers--increased. This trend seems to have driven the skyrocketing energy consumption of the last five years more than the actual energy usage per server.”
Then there’s the matter of cost.
“The total cost of building a large data center is now on the order of $100 to $200 million, which is sufficient to get the attention of the CEO of most large organizations,” writes Jonathan Koomey, author of the report and a staff scientist at Lawrence Berkeley National Laboratory. “That visibility to corporate management is likely to drive operational and design improvements that should over time . . . spur the adoption of energy metrics and purchasing standards for efficiency of IT equipment within these companies.”
While servers account for a hefty portion of the average data center’s overall power consumption, storage isn’t far behind. In fact, recent statistics compiled by Nemertes Research show storage growing at an average rate of 75% per year. In a number of industries, such as financial services, average storage growth (in gigabytes) is exceeding 100% per year.
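Growth rates like these compound quickly. A small sketch of what 75% annual growth does to a hypothetical starting capacity (the 100 TB base is illustrative, not from the Nemertes figures):

```python
# Sketch of how 75% annual storage growth compounds (illustrative only).
def project(base_tb: float, annual_growth: float, years: int) -> float:
    """Projected capacity after compounding annual growth."""
    return base_tb * (1 + annual_growth) ** years

base = 100.0  # hypothetical starting capacity in TB
for yr in range(1, 4):
    print(f"Year {yr}: {project(base, 0.75, yr):,.0f} TB")  # 175, 306, 536 TB
```

At that rate, capacity requirements more than quintuple in three years, which is why storage efficiency belongs in any data center power discussion.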
Fortunately, there are steps that data center managers at financial institutions can take to start reducing power consumption in existing data centers without making a huge investment—or sacrificing performance or availability.
- Consolidation: According to The Uptime Institute, “between 10% and 30% of servers are dead and could be turned off.” It’s estimated that removing one physical server from service saves $560 annually in electricity costs. Once idle servers are removed, data center managers should consider moving as many server-based applications as feasible into virtual machines.
- Power management: While power management tools are available, administrators don’t always use them. The Rocky Mountain Institute in Snowmass, Colo., estimates that taking full advantage of power management features and turning off unused servers can cut data center energy requirements by about 20%.
- Efficient power supplies: Researchers estimate that inefficient power supplies can waste nearly half of the incoming power before it reaches the IT equipment. Power supplies that attain 80% or higher efficiency even at 20% load are available today, though they cost more. Researchers say, however, that moving to these more energy-efficient supplies reduces both operating and capital costs.
- New standards: A recent issue of Computerworld described several initiatives under way that may help users identify and buy the most energy-efficient IT equipment. For example, a certification program called 80 Plus, which was initiated by electric utilities, lists power supplies that consistently attain an 80% efficiency rating at load levels of 20%, 50% and 100%. The Environmental Protection Agency, meanwhile, is working with Lawrence Berkeley National Laboratory to study ways to promote the use of energy-efficient servers. An Energy Star specification could be in place later this year.
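Two of these quick wins lend themselves to back-of-the-envelope arithmetic. A minimal sketch using the $560-per-server and efficiency figures quoted above; the fleet size and IT load are hypothetical:

```python
# Back-of-the-envelope math for two of the quick wins above.
# The $560/server figure and efficiency percentages come from the estimates
# quoted in the text; the fleet size and IT load below are hypothetical.

SAVINGS_PER_IDLE_SERVER = 560  # dollars per year in electricity

def consolidation_savings(total_servers: int, idle_fraction: float) -> float:
    """Annual electricity savings from decommissioning idle servers."""
    return total_servers * idle_fraction * SAVINGS_PER_IDLE_SERVER

def supply_waste(it_load_watts: float, efficiency: float) -> float:
    """Watts lost in the power supply while delivering a given IT load."""
    return it_load_watts / efficiency - it_load_watts

# A hypothetical 1,000-server fleet, at the low and high ends of the
# Uptime Institute's 10-30% idle range:
for idle in (0.10, 0.30):
    print(f"{idle:.0%} idle: ${consolidation_savings(1000, idle):,.0f}/year")

# Waste delivering a 300 W IT load through a 50%- vs. an 80%-efficient supply:
for eff in (0.50, 0.80):
    print(f"{eff:.0%} efficient: {supply_waste(300.0, eff):.0f} W wasted")
```

Even at the conservative end of these estimates, a mid-sized fleet recovers tens of thousands of dollars a year, which helps explain why consolidation and supply upgrades top most efficiency checklists.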
While data center managers can take these steps now to contain steadily increasing electricity costs, Symantec believes that transforming the data center to meet tomorrow’s business challenges requires a more holistic approach. Simply put, the key to creating an IT infrastructure that promotes business growth and profitability lies in implementing a strategy for optimizing operations across the three essential elements of data center management: protection, standardization, and service delivery.
- Protection: A sound data protection solution should provide for tiered service levels encompassing support for mission-critical applications and remote offices. For leading-edge protection, financial institutions need to develop high availability and recovery strategies to prepare for risks such as system failures, human error, and natural disasters. And they have to test their plans to ensure timely business recovery.
- Standardization: Growing IT complexity is driving up costs and undermining service. The key to mastering the complexity lies in standardizing on a software infrastructure that supports every platform in the data center. The standardization of server and storage management improves IT productivity via the creation of repeatable processes, which leads to greater agility, better service, and the ability to align IT resources with business objectives.
- Service Delivery: Between mergers and acquisitions, new regulations, and the explosive growth in data volumes, the only thing that can truly be anticipated in today’s business environment is constant change. But despite the complications it imposes, change can’t be used as an excuse when IT service levels don’t live up to growing expectations or more stringent service-level agreements (SLAs). To ensure the agility required to drive competitive advantage, organizations need to focus on the operation of their data centers, both within administrative domains and from end to end. And they need to do it with solutions that ensure the availability and performance of business applications.
With energy costs skyrocketing, data volumes growing nearly 60% annually, email traffic soaring, and new laws and governance policies clouding the landscape, today’s data center clearly needs a better system for managing and protecting data.
Symantec believes that, for an optimized data center, organizations need to deploy a standard set of management tools that help them get the most out of their existing infrastructure—tools that promote the efficient utilization of resources; streamline the management of data, storage, and servers; and enhance application performance. That’s a recipe for sustainable growth.