Cloud computing and the utilisation imperative
Concerned about the relative costs of cloud computing? Earlier this year the Engineering and Physical Sciences Research Council (EPSRC) and the Joint Information Systems Committee (JISC) released a joint report on the costs of cloud for research. Focused specifically on cloud server infrastructure, the 64-page report is essential reading not just for research organisations but for anyone considering hosted processing resources rather than in-house systems.
So, what does the report tell us? First, that costing the cloud is complicated, not least because of the variety of costing models used by different cloud providers. Numerous factors need to be taken into account, from physical hardware and software – through licensing, electricity and other running costs – to operational overheads, including the people required to procure, deploy and support the resulting infrastructure.
The report incorporates a number of case studies that allow costs of hosted and in-house processing to be compared. Despite differences in physical and virtual server types, core counts and so on, a single keyword lies at their heart: utilisation. Compared server for server, cloud-based resources can cost more than their physical equivalents, but only if the latter are running at high levels of efficiency from the moment they are paid for and plugged in.
By way of comparison, if a cloud-based instance costs £1 per server hour, an equivalent physical server can cost as little as 40 pence per server hour if it runs at 100% utilisation over three years. However, if it runs at 30% utilisation, the cost rises to around £1.40 per server hour. The picture is similar for multiple-server environments such as compute clusters.
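The arithmetic behind these figures can be sketched simply, assuming fixed costs are amortised over the hours the server actually does useful work. This is an illustrative model only: the report's own calculations include overheads this sketch omits, which is why the simple model gives roughly £1.33 at 30% utilisation rather than the report's "around £1.40".

```python
def effective_cost_per_hour(cost_at_full_utilisation: float,
                            utilisation: float) -> float:
    """Cost per *useful* server hour when the machine is busy only
    `utilisation` (a fraction in (0, 1]) of the time.

    Illustrative sketch: fixed costs are simply divided by useful
    hours; real models add operational overheads on top.
    """
    if not 0 < utilisation <= 1:
        raise ValueError("utilisation must be in (0, 1]")
    return cost_at_full_utilisation / utilisation

# The report's example: 40p per server hour at 100% utilisation.
print(f"£{effective_cost_per_hour(0.40, 1.00):.2f}")  # £0.40
# At 30% utilisation the simple model gives about £1.33 per hour,
# in the same ballpark as the report's "around £1.40".
print(f"£{effective_cost_per_hour(0.40, 0.30):.2f}")  # £1.33
```

The point the sketch makes is the same as the report's: below a break-even utilisation (here, 40% against a £1 cloud rate), the in-house server is the more expensive option per useful hour.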
In other words, over time a highly efficient in-house IT infrastructure can cost less than its cloud-based equivalent. This simple statement opens a massive can of worms, however: just how many IT departments can say that they run all of their IT at "highly efficient" levels? This may be seen as an unfair question – operational IT departments often lack input into IT systems design, so perhaps shouldn't be expected to take the heat for inefficient IT. But the question remains, all the same.
Of course, factors such as data risk, compliance and so on need to be taken into account when deciding whether to run a workload in the cloud. Historical deployments (and blame assignment) aside, though, the utilisation question becomes the most important of all when performing cost comparisons. Cloud-based processing offers a viable option where previously it would have been hard to justify new infrastructure spend – and the need to run an analytical task over a period of weeks, say, is as common in financial services and business intelligence as it is in academic research.
At the other end of the scale, the existence of an option in which you pay only for what you use puts existing IT into sharp focus. Cloud computing models may well be less suitable for processing tasks that need to run over a long period – but only if in-house infrastructure has been designed, as well as operated, with efficiency in mind. For IT departments concerned about their relevance in an increasingly hosted world, this should be seen as much an opportunity as a threat.