Ask any industry insider, and they'll tell you that cloud computing is on course to become a staple business technology around the world by the end of the decade. But even after offices forget what they ever did without the cloud, 2012 will be a year worth remembering.
The technology has taken its share of punches over the past 12 months, enduring skeptical attitudes and navigating unforeseen events. But what hasn't killed it has only made it stronger. So as companies eye success in 2013, they should study this history lest they be doomed to repeat it.
Bad news first
This past summer, cloud confidence was shaken as a series of storm systems disrupted Amazon Web Services operations on two separate occasions within a matter of weeks. With big-name subscribers like Netflix, Instagram and Pinterest suffering highly publicized performance issues as a result, it seemed as if the technology took two steps back just as market momentum was driving adoption rates forward.
Months later, researchers cast additional doubt on the cloud's technical trustworthiness with a proof-of-concept attack suggesting that a malicious tenant residing on the same public cloud server could steal a neighboring customer's encryption keys. As a result, companies were once again forced to weigh the risk of breached assets against the undeniable gains in operational efficiency and flexibility.
Finally, cloud vendors struggled to find consensus on equitable pricing strategies. According to InformationWeek editor-at-large Charles Babcock, merely comparing similar offerings between hosts could require dozens of calculations and an advanced mathematics degree. With no standardized definition of service units, nor public disclosure of what they cost, buyers start off at a distinct disadvantage when it comes to cloud cost optimization.
Reasons for hope
While the aforementioned challenges caused their share of anxiety and frustration, these growing pains were rather predictable and rarely severe.
As the Amazon outage saga reminded us all, cloud computing strategies must be grounded in the same realities as any other IT outsourcing project. Companies must determine their level of fault tolerance, establish clear vendor expectations and plan for the worst with intelligent backup and recovery protocols.
Considering the anomalous set of circumstances that knocked a data center's emergency power systems offline, limiting downtime to just a few hours was impressive. But according to Babcock, telecommunications providers could actually be the ones stepping in and throwing their considerable weight behind disaster recovery initiatives in 2013. By constructing failover facilities tied back to primary data centers through private lines, companies like CenturyLink have provided a promising means of safely replicating and transferring data even after nasty weather strikes.
But while risk management will always be a moving target that demands diligent attention, customers could secure some concrete victories in the cloud pricing wars sooner than expected. First and foremost, companies will benefit from the power of choice.
As Babcock noted, Amazon and Rackspace are no longer the only shows in town as we head into 2013. Former colocation and managed service providers have made the short leap into cloud hosting and diversified the marketplace. The emergence of promising open source frameworks has even brought innovation and competition down to the server level. This all adds up to greater leverage and fatter wallets for public cloud tenants and the customers they serve.