Rob Buckley – Freelance Journalist and Editor

From theory to practice

How should a company that wants to consolidate its servers go about the process?


It sounds simple enough. A company has a large 'farm' of servers that are typically only running at 20% of their capacity. To save costs, the company would like to move as much of its processing work as it can onto a few high-end servers, and use the remaining ones for other purposes.

Yet how easy is it to re-deploy applications and servers operating in a production environment? And how can an organisation ensure that 'a few' is not so few that its systems cannot cope when things go wrong or when data traffic increases?

“The typical server only has 17% to 22% utilisation,” says Mark Lewis, product marketing manager at Sun Microsystems. “If you have 1,000 servers with fairly low utilisation, that's 800 servers not being fully used at a particular time.” Many companies looking at those statistics, particularly those with data centres or web farms, begin to consider consolidation. But what should their first steps be, and how many servers can they get rid of and replace with bigger versions when, as Lewis warns, “the flip side is that some servers go through peak loads of 100%. Next to one server running at 20%, there may be another running itself to pieces.”

Ian Benn, marketing director at systems and services company Unisys, specialises in server consolidation and says that even the first small steps in consolidation can pay large dividends. “For the most part, step one is for an organisation to standardise servers on one version of an operating system, applications and systems management tools, just as they have done on the desktop. It's low risk, it's medium payoff: you can get a 15% to 20% return on investment. Physical consolidation of servers into one place and redesigning the network can give a 20% return.”

But, says Benn, the way to successful server consolidation is to look at server loads throughout the day and see which servers are running which applications. “The secret is to put all the spare capacity together. There's usually a surge in email activity at 9.30am and far less activity during the rest of the day, so you can share the server with an application whose load is greater at night, say.”
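Benn's approach can be illustrated with a small sketch (not from the article; all load figures are hypothetical): pair two applications whose hourly load profiles are complementary, and check that their combined demand never exceeds the server's capacity.

```python
# Hypothetical hourly load profiles (% of one server's capacity).
# Email surges in the morning; a batch/download job is heavy overnight.
email_load = [60 if h == 9 else 10 for h in range(24)]
batch_load = [10 if 8 <= h <= 20 else 70 for h in range(24)]

# Combined demand if both applications share the same server.
combined = [e + b for e, b in zip(email_load, batch_load)]
peak = max(combined)

print(f"Combined peak load: {peak}%")  # prints "Combined peak load: 80%"
```

Because the two peaks never coincide, the combined load stays below 100% all day, which is the "put all the spare capacity together" principle in miniature.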

Windows Data Center
Benn cites an example: “The first Windows Data Center site in the UK belongs to [UK financial services provider] Abbey National; it runs an application that downloads information to local offices last thing at night and handles email during the day.” He suggests various combinations and rules for running multiple applications on a single server. If an enterprise resource planning application shares a server with other applications, it should always have priority over them, except first thing in the morning, for instance. “You get big savings, big performance improvements,” says Benn, with such application consolidation.

But as more applications are deployed on fewer servers, so the number of possible points of failure decreases – and the chances of one fault knocking out a large number of services increase. Mark Lewis confirms that many businesses thinking of going down this path are “alarmed” at the thought that “they're consolidating multiple platforms onto one single point of failure”.

“Obviously, overall reliability and security are an issue as you put more and more processes onto a single server,” agrees Colin Grocock, new business director, IBM eServer. Along with Sun and Unisys, IBM has been focusing on bringing the fault tolerance of mainframe products to its midrange server lines. Its Enterprise X-Architecture and Project Eliza initiatives are part of its concept of 'autonomic computing' – computers that can manage themselves.

“We have self-healing systems via Eliza. Systems can detect errors before critical memory fails, can even cope if areas of memory fail,” says Grocock. In addition to the hot-swappable power supplies, fans and drives found even on lower-end servers, Eliza's features include internal sensors to detect faults in components.

Eliza is currently available on IBM's pSeries servers, but the Enterprise X-Architecture that IBM is building into its latest servers will be the backbone of future Eliza developments, offering memory mirroring, hot-addable, hot-replaceable memory and diagnostics software that can run at the same time as the operating system.

