
K&R Network Solutions Blog

Improving the way you do business since 2001.

Successful virtualization: The “Central Vac” versus the “Dust Buster”

Posted on November 13, 2009


12 November 2009, by Joanne Moretti, SVP Product Marketing & Analyst Relations, CA, Inc.


Data center virtualization is here to stay, as enterprises seek to lower IT, energy and real estate costs while also improving their ability to support changing business demands. One estimate from Forrester Research is that 31 percent of [new server] operating system instances were virtualized in 2008, and that will grow to 54 percent in 2010.

The challenge now is not simply deploying virtualization, but optimizing it: not just for tactical cost reductions, but for more strategic business results. Achieving those results requires holistic management. Operating virtualized environments as just another IT silo can actually result in higher costs and less flexibility. Succeeding with a more strategic approach to virtualized environments requires comprehensive management that unifies virtualized and physical systems.

There is no shortage of tools and utilities from virtualization technology vendors designed to manage virtual partitions, and they can be valuable. But they don't provide the visibility and intelligence needed to manage virtualized services in today's large-scale, complex IT production environments. I think of those platform-specific tools as "dust busters": very much needed for specific spot jobs, but not as effective as a built-in central vacuum system that keeps the entire IT house clean. The "Central Vac" management disciplines, such as Performance and Availability Management, Security and Compliance Management, Change and Configuration Management, IT Asset and Financial Management, and of course the real accelerator of ROI in a virtualized environment, Automation Management, must be deployed comprehensively across virtualized and physical systems to get the full return on the technology.

Why virtualization took off

When virtualization was introduced, commodity servers offered an affordable computing alternative to the mainframe. For high-volume, business critical applications, powerful Unix clusters and mainframes were still the norm. But less complex applications and services could be supported by off-the-shelf hardware. Typically running Linux or Windows, these servers were built to handle one application per box, and their relatively low price meant their numbers grew dramatically across departments as well as inside IT.

In addition to the departmental sprawl that occurred, IT generally provisioned more capacity than necessary to ensure horsepower for peak operating periods or for failover. This "just in case" provisioning, coupled with departmental sprawl, produced massive server sprawl, and as a result many enterprises experienced floor space and power consumption problems. Facilities meant for tens or hundreds of servers suddenly had to accommodate thousands, and new data centers had to be constructed. All of this growth required a huge increase in support staffing costs. Ironically, it became clear that utilization of these resources was extremely low, frequently less than 25% of CPU and memory during normal processing periods.
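A quick back-of-envelope calculation shows why sub-25% utilization made consolidation so attractive. This sketch is purely illustrative; the server counts and target utilization are hypothetical assumptions, not figures from the article.

```python
import math

def hosts_needed(num_servers, avg_util, target_util=0.70):
    """Servers required to carry the same aggregate load
    when packed to a higher target utilization."""
    total_load = num_servers * avg_util      # aggregate normalized CPU demand
    return math.ceil(total_load / target_util)

# Hypothetical example: 1,000 one-app-per-box servers at 25% utilization
before = 1000
after = hosts_needed(before, avg_util=0.25)
print(after)  # 358 hosts, roughly a 3:1 consolidation
```

Even with a conservative 70% target utilization on the consolidated hosts, the same workload fits on roughly a third of the hardware, which is where the floor-space, power, and cooling savings come from.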

This growing amount of hardware meant energy use was also on the rise, both to power the servers and to cool the data centers. As chip clock speeds increased, so did voltage "leaks," which caused dramatic heating problems. Most data centers could not handle the increased cooling load, and retrofitting costs and energy bills soared.

So, enterprises were faced with more hardware than they really needed, consuming more space and energy than they could afford.

Virtualization as a Building Block

Virtualization can lower hardware and energy costs, provide greater flexibility to adapt to changing business conditions and can reduce overall IT risk.

But successfully managed virtualization also offers the opportunity to help transform IT into a service. By delivering the right resources where they are needed whenever they are needed, CIOs begin a successful path toward cloud computing.

The private cloud model proposes that resources, physical and virtual, can be provisioned automatically regardless of environment, OS, or type. Applications and the users they serve get the resources they need, with limited waste. That has long been a vision, but it is one that can be fulfilled today with appropriately managed virtualization that includes robust solutions spanning security, governance, and traditional management.
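The provisioning model described above can be sketched in a few lines: a request for capacity is placed on whichever pooled host, physical or virtual, has the most free headroom. This is a minimal illustration of the idea only; the host names, capacity units, and placement policy are all hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    kind: str        # "physical" or "virtual"
    capacity: float  # normalized CPU capacity
    used: float = 0.0

def provision(hosts, demand):
    """Place `demand` on the host with the most free headroom, or return None."""
    for h in sorted(hosts, key=lambda h: h.capacity - h.used, reverse=True):
        if h.capacity - h.used >= demand:
            h.used += demand
            return h
    return None

# Hypothetical mixed pool: the placement ignores physical vs. virtual,
# which is the point of the private cloud model.
pool = [Host("esx-01", "virtual", 1.0, 0.6),
        Host("blade-07", "physical", 1.0, 0.2)]
chosen = provision(pool, 0.3)
print(chosen.name)  # blade-07: most free headroom
```

The design choice worth noting is that the scheduler sees only capacity and demand, not whether a host is a hypervisor or bare metal; that abstraction is what lets management span both worlds.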

