Shrinking IT Budget?
Increasingly, IT departments are expected to do more with less: more projects and initiatives, delivered at the service levels the business has come to expect, on budgets that look much the same as in previous years. This gap is pushing customers to look for new ways to make their data centres more efficient. Generally speaking, IT managers are not trying to "save" money for its own sake; they are, after all, looking to grow their business along with their infrastructure and delivery capabilities. Rather, our customers are pursuing initiatives that drive efficiency up while maintaining a high level of service to the business.
Symantec (formerly VERITAS) is a key data centre vendor to 99% of the Fortune 500. Its broad portfolio of solutions means Symantec can offer customers a vendor-independent view of the data centre, without reliance on any particular flavour of hardware. A major new version of Symantec's Storage Foundation and High Availability solutions will be released on 5 December this year. This new version brings many advantages and benefits unique to Symantec. Below are three key technologies within Storage Foundation that help drive a more efficient data centre without having to tear out existing hardware.
Storage Foundation – Filesystem-level deduplication and compression
Deduplication and compression are built into the Veritas File System. With compression, customers with largely read-intensive workloads (file servers, for example) can achieve huge storage savings simply by switching it on: data can be reduced at ratios as high as 10:1, and storage consumption can fall by up to 70%. Any snapshots taken will also be leaner and more efficient. With deduplication, customers can benefit from deduplication at the filesystem level, whether or not they own expensive deduplication appliances, driving major storage efficiencies. With deduplication turned on, write performance is not impacted, and in some cases read performance actually improves. Deduplication is particularly relevant in virtual desktop environments, where so much of the workload uses shared data. Data centres can expect to reduce their storage footprint by up to 80% simply by enabling deduplication at the filesystem level.
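To illustrate the idea behind filesystem-level deduplication, here is a conceptual sketch in Python; it is not Symantec's implementation, and the block size is an assumption. Identical blocks are detected by hashing and stored once, with the file layout holding references to the shared blocks:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration only


def dedup_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, layout): store maps block hash -> block bytes,
    layout is the ordered list of hashes needed to rebuild the data.
    """
    store, layout = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored only once
        layout.append(digest)
    return store, layout


# Ten identical "virtual desktop" images dedupe down to one copy per block.
image = b"base OS block" * 1000
data = image * 10
store, layout = dedup_blocks(data)
unique = sum(len(b) for b in store.values())
print(f"logical {len(data)} bytes -> physical {unique} bytes")
```

The shared-data pattern in virtual desktop environments is exactly the case where this pays off: the layouts differ per desktop, but most blocks hash to the same values.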
Storage Foundation High Availability – Virtual Business Service
Multi-tier applications are increasingly common and increasingly complex to manage. A Virtual Business Service (VBS) means customers can manage, monitor and report on multi-tier applications from a central console with Veritas Operations Manager (a no-cost option with Storage Foundation, ApplicationHA or DMP standalone). A VBS lets you report on all components in your chosen business service, from a storage utilisation report to an uptime report or a DR fire drill. Fault monitoring is also done automatically at the VBS level: if you lose an HBA, and with it your storage connection, the fault is reported in the context of the whole service rather than a system in isolation. Finally, you can easily manage the several systems that make up your service, stopping and starting all the components at the click of a button. Perhaps you have a billing system comprising a database on AIX on IBM hardware and some custom applications running in VMware virtual machines. A failure of the database will likely render the applications useless, and typically those applications then require recovery. VBS automatically propagates faults, so in this scenario a failure of the back-end database would cause VBS to take the applications offline and restart them once the database has been recovered and restarted. From an operational perspective, the whole multi-tier service can be brought online or offline at the click of a button, driving efficiency and simplicity through ease of management.
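The fault-propagation behaviour described above can be sketched as a small dependency graph. This is a toy Python model under stated assumptions; the `Service` class and tier names are illustrative, not Symantec APIs:

```python
class Service:
    """Toy model of VBS-style fault propagation across application tiers."""

    def __init__(self):
        self.deps = {}      # tier name -> list of tiers it depends on
        self.online = set()

    def add(self, tier, depends_on=()):
        self.deps[tier] = list(depends_on)

    def start(self, tier):
        # Start dependencies first (the database before the apps).
        for dep in self.deps[tier]:
            if dep not in self.online:
                self.start(dep)
        self.online.add(tier)

    def stop(self, tier):
        # Stop dependents first (the apps before the database).
        for child, deps in self.deps.items():
            if tier in deps and child in self.online:
                self.stop(child)
        self.online.discard(tier)

    def fault(self, tier):
        # A fault takes the tier and everything depending on it offline,
        # then restarts the lot in dependency order once recovered.
        was_online = set(self.online)
        self.stop(tier)
        for t in was_online:
            self.start(t)


# Billing service: a database on AIX, two custom apps in VMware VMs.
billing = Service()
billing.add("db")
billing.add("app1", depends_on=["db"])
billing.add("app2", depends_on=["db"])
billing.start("app1")
billing.start("app2")
billing.fault("db")  # apps go offline with the db, then everything restarts
```

The key design point the sketch mirrors is that the service, not the individual system, is the unit of management: faulting `db` stops the dependent apps first and restarts them only after the database is back.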
Storage Foundation High Availability – Fast Failover
One of the key technologies Symantec offers is the ability to fail an application over to another system in under 60 seconds. This is achieved through the use of a clustered filesystem: in a two-node setup, the storage is actively seen and mounted on both servers simultaneously. In the event of a failure, no storage migration or recovery is necessary; all that takes place is an application start-up on the failover server. This gives much greater uptime in environments with very demanding service-level agreements. Fast failover has been widely used for many years in UNIX environments; from December, the technology will be available in the Windows world for the first time. Pilot testing with customers has shown that an average SQL Server failover takes 20 to 30 minutes, while our fast failover solution can bring a SQL Server database up on a failover server in less than five minutes. This will dramatically improve the uptime of applications like SQL Server in Windows environments.
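A minimal sketch of why the clustered-filesystem approach is faster: with the storage already mounted on the standby node, the migration and recovery steps simply disappear. The step lists below are illustrative assumptions, not taken from Symantec documentation:

```python
# Hypothetical step lists contrasting the two failover models described above.

TRADITIONAL_FAILOVER = [
    "detect node failure",
    "deport storage from failed node",
    "import storage on standby node",
    "check and mount filesystem",
    "start application",
]

CLUSTERED_FS_FAILOVER = [
    "detect node failure",
    # Storage is already mounted on the standby, so no migration or
    # filesystem recovery is needed before the application starts.
    "start application",
]

skipped = [s for s in TRADITIONAL_FAILOVER if s not in CLUSTERED_FS_FAILOVER]
print("steps eliminated by a clustered filesystem:", skipped)
```

The eliminated middle steps are exactly the ones that dominate a traditional failover, which is why cutting them brings the total time down so sharply.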
The data centre is a complex place, and while many initiatives like virtualisation aim to increase efficiency, they very often increase the management burden of the infrastructure as well as the storage required. Storage Foundation 6 is a key solution for customers looking to drive efficiency and uptime while easing operational management in their data centres.