In an editorial for Bloomberg Businessweek, IBM supercomputer development chief Dave Turek explains how current data center density models and the explosion of big data are going to combine and completely destroy any IT organization’s operating budget – unless something in our way of thinking changes, and soon.
Turek’s argument, in short, is as follows: Right now, most data centers and IT infrastructures are built by adding storage and processing capacity piecemeal, on an as-needed basis, leading to what Turek calls “acres of slapdash, poorly designed data centers.” The end result is IT spending up to 70 percent of its budget just keeping the lights on – an assertion Turek backs up with a recent study.
Factor in Moore’s Law, and you have denser, faster chips that still require more and more power to cool and run. Power efficiency is often the watchword for the CIO looking to make data center purchases, and Turek claims that the TCO of a new data center is approaching the $1 billion mark.
This is where big data comes in. More and more data is generated by modern services and applications, and as exabyte scale becomes the norm, these “slapdash” data centers simply can’t keep up. Enterprises that want to take advantage of the kind of insights big data can provide are going to have to move to new technologies, like three-dimensionally packed processors and memory chips, with densely packed cables compressed into square inches of silicon.
That’s especially relevant when considering the cost of shuttling exabytes of data around a network. Rather than send big data elsewhere to be analyzed, Turek sees the future of large-scale computing in the ability to examine data at the source. And there’s a real chance to turn crisis into opportunity, he writes, as the smart IT manager can turn that practice into new business models, granting a market edge to early adopters. But no matter how you slice it, there’s a genuine “density dilemma” in the data center, and it’s time for a change, even if that means a complete re-architecting.
Turek never mentions this, but it seems relevant that he wrote the editorial the same week Big Blue is putting its portfolio where its mouth is with new solutions aimed at automatically optimizing and integrating the data center. What’s more, IBM has lately been a huge proponent of data center best practices. IBM may have been founded over a hundred years ago, but it’s clearly giving a lot of thought to how it can leverage its hardware business to meet the next wave of IT challenges.