As we are all aware, information is expanding at a staggering rate. World data in 2010 was estimated at 1.2 zettabytes, expected to rise to 7.9 ZB by 2015 and to 40 ZB by 2020, with something like 30 billion connected devices coming online in the next few years. Against that backdrop, we simply do not have the time or the bandwidth (or the budget) to shift these huge volumes of data around. Yet, potentially, they hold enormous value for the people who want to access them.
This world of ‘Bigger Data’ – rapidly becoming ‘Even Bigger Data’ – presents a massive challenge for all of us: how do we supply Data as a Service while still maintaining control?
Because the reality is that, in these data-driven times, everyone is going to have to consider themselves both a consumer and a provider of Data as a Service, and deal with all of the consequences that brings into play.
In the new world of mega-data, information that would once have been considered beyond...