While I am generally upbeat about the latest developments in technology, it's also my job to be cynical. Symantec’s customers depend on a certain level of realism, so I don’t feel too bad about pointing out some of the downsides and risks.
When it comes to software-defined networking, storage and so on (in a nutshell, the ability to orchestrate and control a widening variety of hardware devices and resources), most potential issues boil down to a single question: can software be trusted?
The answer, as Douglas Adams might say, is "mostly harmless". While software starts simple, it often becomes highly complex and, therefore, very difficult to test. Software designed for enterprise-scale use cases inevitably tends towards complexity, which is where the problems start.
If damage does happen, it can do so in a big way. Some organisations will have experienced the avalanche effect that can take place when a poorly constructed patch is rolled out across the environment. More recently, we have seen downtime issues in cloud services; what is most surprising is that commentators ever saw the major providers as too big to fail in the first place.
So, is software-defined everything heading in the same direction? Will we end up with tales of woe from organisations that have put too much reliance on inadequate management platforms? The answer is, probably, yes: we shall see isolated incidents of major failure (to say otherwise would be to suggest the future will be different to the past).
The advice I'm giving to anyone who asks is, simply, not to put all their eggs in one basket. Pan-enterprise, software-defined resource orchestration is an admirable goal, but while it remains unproven it should be adopted in moderation, for example by focusing on the dynamic management of specific resources.
While Symantec's stance continues to be to protect against the unexpected, let's not create situations we could have avoided.