By John Parkinson
Around the middle of last year, my then-boss asked me what I thought was the most critical strategic decision we had taken in the preceding 18 months.
It wasn't a trick question--he was going to be asked the same thing at an investment conference and wanted some thoughts on the logic behind the answer.
What we settled on was the decision to aggressively go after virtualizing our technology infrastructure, starting with the computing capacity and proceeding rapidly to storage and networking.
Of course, when you work in the mainframe world, virtualization of the computing environment is old hat. Even in high-end UNIX servers, logical partitioning of resources and oversubscription of resource allocation are a couple of decades old (which is ironic if you remember what triggered the development of UNIX in the first place).
But applying the principles of virtualization to everything in the data center is a big step beyond just choosing approaches to abstracting hardware. It's a commitment to a new way of thinking about managing infrastructure.
Which is what makes it hard.
The financial justification is easy. If I can significantly improve the utilization of my deployed assets, I can, over time, reduce the number--and therefore the cost--of owning those assets. Even in storage, I can slow the rate of growth significantly, even if I can't actually stem the tide of bits to be stored.
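The arithmetic behind that justification is straightforward. Here is a minimal sketch of it in Python--the server counts, utilization figures and cost assumptions are purely illustrative, not numbers from this column:

```python
# Hypothetical consolidation estimate -- illustrative assumptions only.
servers_before = 1000        # physical servers, lightly utilized
avg_utilization = 0.10       # assumed average utilization before virtualizing
target_utilization = 0.60    # assumed conservative target for virtualized hosts

# Total work stays constant, so hosts needed scales with the utilization ratio.
hosts_after = int(servers_before * avg_utilization / target_utilization)

# Assumed fully loaded annual cost per server (power, space, support).
cost_per_server_per_year = 5000
annual_savings = (servers_before - hosts_after) * cost_per_server_per_year

print(hosts_after)      # 166
print(annual_savings)   # 4170000
```

Even with a deliberately modest utilization target, the fleet shrinks by more than 80 percent under these assumptions--which is why the financial case tends to make itself.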
I also get benefits from "enforced" standardization and the resulting simplification of the infrastructure. There are major gains from the interconnection fabric, especially if you use blades and virtual connection technology. Just eliminating a lot of the physical interconnections--which are virtualized over fewer, higher-bandwidth links--makes life much simpler. When you have thousands of physical cores to work with, standardizing the software stack to a few virtual machine "images" saves thousands of hours a year in deployment, patching and updating effort.
Balancing the gains, IT staffs have to learn new habits of thought and new operational processes. Many things are easier in a virtualized world, but some aren't. And even the easy things are different from what people are used to.
There's a lot of instant folklore about what works and what doesn't--some of which was once true, but has been fixed, much of which was never true but was a convenient excuse to delay a change. The vendors don't always help--selling futures in a rapidly evolving technology segment is a way of life in the software industry. So some things you need aren't quite there yet and some things that are heavily promoted, you don't really need...
Fight your way through all of that and you get to the core issues that you'll still need to face:
• Highly virtualized environments run much "hotter" than their lightly utilized physical counterparts. It turns out it's easier to burn real estate than power density, but tell that to the space planners when they see half-populated racks everywhere.
• Your applications will have many (hopefully minor) quirks that matter in a virtual environment even if they didn't previously. Expect to test everything.
• The more concentrated your capacity becomes, the worse an outage hits you. Rethink your high-availability and recovery strategies.
• The further ahead of the mainstream you get (we got way ahead), the harder it is to find people who understand what you're doing (and places to go that are ready for you if you need to move in a hurry).
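The concentration point is worth making concrete. Before virtualization, one failed server takes out one workload; afterward, one failed host takes out everything running on it. A quick sketch, using hypothetical workload counts and consolidation ratios rather than anything from this column:

```python
# Hypothetical blast-radius comparison -- illustrative numbers only.
workloads = 300

# Physical world: one workload per server, so a single server
# failure affects exactly one workload.
physical_blast_radius = 1

# Virtualized world: assume 20 VMs per host. A single host failure
# affects all 20 workloads until they restart elsewhere.
vms_per_host = 20
hosts = workloads // vms_per_host
virtual_blast_radius = vms_per_host

print(hosts)                  # 15
print(virtual_blast_radius)   # 20
```

Fifteen hosts are far cheaper to own than 300 servers, but each one is now twenty times more important--which is exactly why the high-availability and recovery strategies have to be rethought rather than carried over.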
Aggressive virtualization was indeed the most critical decision we made--but it wasn't a free win (although it did turn out to be the easy part), and it took longer to get done than it should have.
Now if I could just virtualize power and cooling...
John Parkinson, the former CTO of TransUnion LLC, has been a technology executive and consultant for over 30 years, advising many of the world's leading companies on the issues associated with the effective use of IT. His columns also appear in CIO Insight's print edition.