Founding principles

Over-complex. Under-utilised. Today's IT architectures are a mess. What technologies will underpin the move to a more efficient model?

The utility computing vision that could trigger the widespread remoulding of IT architectures into central pools of scalable, easily controllable, fault-tolerant and self-adjusting resources is unquestionably attractive. IT directors everywhere would clearly like to dispense with the everyday hassles of server overloads, failover provisioning and resource utilisation management.

But while utility computing remains one of the greatest challenges facing server and storage system vendors, their vision has been crystallising over the past 18 months. Technologies have started to hit the market that take utility computing beyond the proof-of-concept stage.

To achieve the goal of having servers and storage devices on a network appear and act as a single resource, utility computing management requires two fundamentals: a way to monitor the network of systems, and a way to control and manage that network - even when it is made up of a mix of different vendors' hardware, software and operating systems. Equally importantly, utility computing must find a way to isolate the user from all those underlying complexities so that the resources become a single source of scalable computing power.

That is the wider challenge, although vendors have concentrated their early efforts on providing utility computing products and services that pool together their own hardware. To do so, they have modified their existing systems management and monitoring tools to provide control capabilities. IBM, for example, has built the Utility Management Infrastructure, codenamed Blue Typhoon, and Hewlett-Packard has added the Utility Controller to its OpenView software.

According to Peter Hindle, senior technical consultant for HP, the company's Utility Data Centre (UDC) software, in combination with OpenView, maintains a central database of machine specifications and rules so that the UDC can issue the correct configuration instructions to each machine. Says Hindle, “The controller software would know that it needs to issue a particular shell command to a machine running the HP-UX [Unix operating system] to get it to reboot, but would need a completely different instruction for a Windows 2000 machine. And it might need to issue a subtly different command again to a machine running the Sun Solaris operating system.”
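
By way of illustration only, the kind of lookup Hindle describes might resemble the following Python sketch. The machine names, operating system labels and shell commands here are invented, not taken from HP's actual UDC rules.

# Hypothetical sketch: a controller consults a table of per-OS reboot
# commands, much as the UDC consults its database of machine specifications
# and rules. All values below are illustrative.
REBOOT_COMMANDS = {
    "hp-ux": "/usr/sbin/shutdown -r -y 0",        # HP-UX shell command
    "windows2000": "shutdown /r /t 0",            # Windows 2000 equivalent
    "solaris": "/usr/sbin/shutdown -i6 -g0 -y",   # Sun Solaris variant
}

def reboot_command(machine):
    """Return the reboot instruction appropriate to a machine's operating system."""
    try:
        return REBOOT_COMMANDS[machine["os"]]
    except KeyError:
        raise ValueError("no reboot rule for operating system: %r" % machine.get("os"))

# The controller looks up each machine in its (invented) inventory.
inventory = [
    {"name": "web01", "os": "hp-ux"},
    {"name": "app02", "os": "windows2000"},
]
for machine in inventory:
    print(machine["name"], "->", reboot_command(machine))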

When the aim is to maximise utilisation by switching applications between machines, the control mechanism will have to be highly intelligent.

IBM's Jean Lorrain, chief technology executive for e-business hosting, points out that utility computing architectures will have to determine which servers are capable of running which applications. “If one server runs on Intel chips and another on a different chip, you won't be able to redeploy the application that is today running on the Intel server unless you have a version of that software for the other architecture.”
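
In practice that means the controller needs something like a catalogue of application builds keyed by chip architecture, and a check before any redeployment. A minimal, purely illustrative sketch, with invented application and architecture names:

# Hypothetical compatibility check: before moving an application to a server,
# confirm a build exists for that server's chip architecture.
APPLICATION_BUILDS = {
    "billing-app": {"x86"},              # only an Intel build exists
    "web-storefront": {"x86", "sparc"},  # builds for both architectures
}

def can_redeploy(application, target_architecture):
    """True if a build of the application exists for the target architecture."""
    return target_architecture in APPLICATION_BUILDS.get(application, set())

print(can_redeploy("billing-app", "sparc"))     # False: no SPARC build available
print(can_redeploy("web-storefront", "sparc"))  # True: redeployment is possible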

Furthermore, utility computing demands more than just the ability to push applications between different machines. As servers swap roles to meet changes in demand, they may need to have their hard drives completely wiped and their entire operating systems changed as they go from, say, being a web server to an application or database server and back again. This again requires extensive hardware control and resource allocation capabilities if the server is going to be up and running again without delay. Additionally, that kind of operation has to be carried out over a network without an administrator present. In many cases this will require relatively new hardware, so legacy machines may not be easily incorporated into the full utility vision.
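
In outline, such an unattended role change is a wipe, reimage and rejoin sequence. The sketch below is illustrative only, with every step stubbed out, because the real operations (remote power control, network booting, disk imaging) depend entirely on the vendor's hardware-control layer.

# Rough outline of an unattended role change. Image names and steps are
# invented; each function merely records what a real controller would do.
ROLE_IMAGES = {
    "web": "web-server-image-v1",
    "application": "app-server-image-v1",
    "database": "db-server-image-v1",
}

def wipe_disks(server):
    print("%s: wiping local disks" % server)

def install_image(server, image):
    print("%s: network-booting and installing %s" % (server, image))

def return_to_pool(server, role):
    print("%s: registered with the controller as a %s server" % (server, role))

def reprovision(server, new_role):
    """Switch a pooled server to a new role with no administrator present."""
    wipe_disks(server)
    install_image(server, ROLE_IMAGES[new_role])
    return_to_pool(server, new_role)

reprovision("server-17", "database")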

Equally important is a means of optimising application environments. While some degree of control is available from the operating system, finer control is essential. The system needs to be able to automatically instruct, say, a web server to open up the throttle on the bandwidth available to a particular site, or an application server to prioritise the use of the organisation's customer relationship management system between the hours of nine and five - something few applications currently support.
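
The nine-to-five CRM example boils down to a time-windowed policy rule. A hypothetical sketch of how a controller might evaluate one (the policy format and application name are invented):

# Illustrative policy check: prioritise the CRM system during office hours.
from datetime import time

POLICIES = [
    {"application": "crm", "priority": "high", "start": time(9, 0), "end": time(17, 0)},
]

def priority_for(application, now):
    """Return the priority an application server should give an application right now."""
    for policy in POLICIES:
        if policy["application"] == application and policy["start"] <= now < policy["end"]:
            return policy["priority"]
    return "normal"

print(priority_for("crm", time(10, 30)))  # high: inside the nine-to-five window
print(priority_for("crm", time(20, 0)))   # normal: outside the window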

Homeless applications
Although most applications were never designed to offer this level of management, such facilities are now appearing within enterprise application integration (EAI) software. However, for utility computing, a less expensive, ubiquitous and standard method of control is needed. Web services, the XML-based approach for applications to communicate across a network, came to prominence last year as one of the more promising routes to this kind of application-level integration. As a standardised way of creating application interfaces and communicating, the web services approach is likely to be one of the main ways for utility computing to flourish as more than just an advanced systems management mechanism.
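
As a purely hypothetical illustration of what that standardisation buys, a controller could express a capacity request as a small XML document that any application exposing a matching web-service interface would understand. The element names below are invented; a real deployment would use an agreed interface definition.

# Sketch of an XML-based control message of the kind web services make possible.
import xml.etree.ElementTree as ET

def build_capacity_request(application, extra_instances):
    """Build an (invented) XML request asking an application for more capacity."""
    request = ET.Element("capacityRequest")
    ET.SubElement(request, "application").text = application
    ET.SubElement(request, "additionalInstances").text = str(extra_instances)
    return ET.tostring(request, encoding="unicode")

# The same message could be sent to any vendor's software that exposes a
# matching interface, which is the appeal for utility computing.
print(build_capacity_request("crm", 2))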
