Rob Buckley – Freelance Journalist and Editor

Founding principles


Over-complex. Under-utilised. Today's IT architectures are a mess. What technologies will underpin the move to a more efficient model?


The utility computing vision that could trigger the widespread remoulding of IT architectures as central pools of scalable, easily controllable, fault-tolerant and self-adjusting resources is unquestionably attractive. IT directors everywhere would clearly like to dispense with the everyday hassles of server overloads, failover provisioning and resource utilisation management.

But while utility computing is one of the greatest challenges facing server and storage system vendors, over the past 18 months their vision has been crystallising. Technologies have started to hit the market that have taken utility computing beyond the proof-of-concept stage.

To achieve the goal of having servers and storage devices on a network appear and act as a single resource, utility computing management requires two fundamentals: a way to monitor the network of systems, and a way to control and manage that network - even when it is made up of a mix of different vendors' hardware, software and operating systems. Equally importantly, utility computing must find a way to isolate the user from all those underlying complexities so that the resources become a single source of scalable computing power.

That is the wider challenge, although vendors have concentrated their early efforts on providing utility computing products and services that pool together their own hardware. To do so, they have modified their existing systems management and monitoring tools to provide control capabilities. IBM, for example, has built the Utility Management Infrastructure, codenamed Blue Typhoon, and Hewlett-Packard has added the Utility Controller to its OpenView software.

According to Peter Hindle, senior technical consultant for HP, the company's Utility Data Centre (UDC) software, in combination with OpenView, polls a central database of machine specifications and rules so that the UDC can issue the correct configuration instructions to each machine. Says Hindle, “The controller software would know that it needs to issue a particular shell command to a machine running the HP-UX [Unix operating system] to get it to reboot, but would need a completely different instruction for a Windows 2000 machine. And a different HP-UX server might need to issue a subtly different command to a machine running the Sun Solaris operating system.”
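Hindle's description amounts to a rules table keyed by platform. A minimal sketch of that dispatch pattern — the rule entries and machine records here are entirely hypothetical, not HP's actual UDC interface:

```python
# Sketch of a rules-driven controller that looks up the right reboot
# command for each managed machine. Rule table and machine records
# are illustrative only.

REBOOT_RULES = {
    "hp-ux": "/sbin/shutdown -r now",
    "windows-2000": "shutdown /r /t 0",
    "solaris": "/usr/sbin/shutdown -y -i6 -g0",
}

def reboot_command(machine):
    """Return the platform-specific instruction for a machine record."""
    os_name = machine["os"]
    try:
        return REBOOT_RULES[os_name]
    except KeyError:
        raise ValueError(f"no reboot rule for platform {os_name!r}")
```

Given a record such as `{"host": "web01", "os": "hp-ux"}`, the controller would emit the HP-UX shutdown command; an unknown platform raises an error rather than issuing a guess.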

When the aim is to maximise utilisation by switching applications between machines, the control mechanism will have to be highly intelligent.

IBM's Jean Lorrain, chief technology executive for e-business hosting, points out that utility computing architectures will have to determine which servers are capable of running which applications. “If one server runs on Intel chips and another on a different chip, you won't be able to redeploy the application that is today running on the Intel server unless you have a version of that software for the other architecture.”

Furthermore, utility computing demands more than just the ability to push applications between different machines. As servers swap roles to meet changes in demand, they may need to have their hard drives completely wiped and their entire operating systems changed as they go from, say, being a web server to an application or database server and back again. This again requires extensive hardware control and resource allocation capabilities if the server is going to be up and running again without delay. Additionally, that kind of operation has to be carried out over a network without an administrator present. In many such operations, this will require the use of relatively new hardware, so legacy machines may not be easily incorporated into the full utility vision.

Equally important is a means of optimising application environments. While some degree of control is available from the operating system, finer control is essential. The system needs to automatically instruct a web server to, say, open up the throttle on the bandwidth available to a particular site, or for an application server to prioritise the use of the organisation's customer relationship management system between the hours of nine and five - something that is not available in many applications.
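A rule of the kind described — give the customer relationship management system top priority between nine and five — could be expressed as a simple time-window lookup. A sketch, with a hypothetical rule format and application names:

```python
# Sketch of an application-priority rule of the kind described above:
# the CRM system is prioritised between 09:00 and 17:00.
# Rule format and names are hypothetical, for illustration only.
from datetime import time

RULES = [
    {"app": "crm", "start": time(9, 0), "end": time(17, 0), "priority": "high"},
]

def priority_for(app, now):
    """Return the priority an application should get at clock time `now`."""
    for rule in RULES:
        if rule["app"] == app and rule["start"] <= now < rule["end"]:
            return rule["priority"]
    return "normal"
```

At 11:30 the CRM system would be handled as `"high"`; outside the window it falls back to `"normal"` — exactly the kind of time-based policy most applications cannot express today.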

Homeless applications
Although most applications were never designed to offer this level of management, such facilities are now appearing within enterprise application integration (EAI) software. However, for utility computing, a less expensive, ubiquitous and standard method of control is needed. Web services, the XML-based approach for applications to communicate across a network, came to prominence last year as one of the more promising ways for this application-level integration. A standardised way of creating application interfaces and communicating, the web services approach is likely to be one of the main ways for utility computing to flourish as more than just an advanced systems management mechanism.

Web services will also fit into another aspect of utility computing that some vendors favour: grid computing. While the utility computing vision of a single IT resource is shared by many vendors, there is no consensus over the degree of granularity into which workloads should be split. While some favour the redeployment of applications from overtaxed to under-used servers as the lowest level of resource reallocation, grid computing allows an appropriately written application to offload some of the processes involved in the application to other machines on the network; these can then hand the results back to the original machine when complete. Grid proponents argue that this takes fuller advantage of under-used machines than application redeployment and prevents a single machine from becoming overloaded if there are no free servers to come to its assistance.
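The grid pattern described — decompose a job into independent sub-tasks, farm them out, then reassemble the results on the originating machine — can be sketched with a local thread pool standing in for remote grid nodes. The workload here (a sum of squares) is an assumption chosen because it splits cleanly into parallel chunks:

```python
# Sketch of the grid idea: split a job into independent sub-tasks,
# hand them to workers (a thread pool stands in for remote grid
# nodes), and reassemble the partial results when they come back.
from concurrent.futures import ThreadPoolExecutor

def sub_task(chunk):
    # Each "node" computes its partial result independently.
    return sum(x * x for x in chunk)

def run_on_grid(data, workers=4):
    # Split the input into independent chunks, one per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sub_task, chunks)
    # The originating machine reassembles the results.
    return sum(partials)
```

The pattern only pays off when, as noted below, the application genuinely decomposes into parallel tasks; the coordination overhead is pure cost otherwise.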

Although the complexities of creating a working, secure grid are great, the Globus project and vendors such as Platform Computing have been working for the best part of a decade to develop and deploy grid technology. IBM and others use Globus' open source toolkit to incorporate grid computing capabilities into their own software, but grid only works when it is possible to break an application down into many parallel tasks.

Limited grid
Jonathan Eunice, an analyst at industry research group Illuminata, says the much higher input-output and application performance requirements of commercial data processing environments will limit business users' enthusiasm for grid computing. “We need to stop talking as though the traditional database application is going to be distributed over a grid,” he argues. The performance overheads of dividing up the database processing tasks and then reassembling them would make that wholly impractical, he says.

Nevertheless, analyst group Gartner predicts that grid computing will become a mainstream business technology by 2008.

Some utility proponents argue that in certain cases, rather than trying to manage a large number of servers that can be re-tasked according to changes in demand, it is easier, from a systems management point of view, to have a small number of powerful servers that can be 'partitioned' into smaller virtual servers. It also means that applications that cannot be split over servers can have a highly powerful server to run on and can swap partitions if their virtual server collapses. “There are some things you'd want to use lots of small servers for, but for some applications you need 'big iron' [centralised mainframes],” says IBM's Jean Lorrain.

This partitioning is available both in hardware and software, depending on the vendor. Sun, notably, claims that Solaris 9 on its servers is the first operating system to provide both.

Sun also offers dynamic partitioning so partitioned servers can 'grow' their resources when needed to meet particular demand-peaks. This dynamic partitioning can even be arranged according to rules so that an SAP system's virtual server automatically has more resources during the day, while an email server gets more resources first thing in the morning.
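Rules like these can be imagined as a time-of-day lookup that shifts resource shares between partitions. A sketch with invented share figures (the percentages and partition names are assumptions, not Sun's actual rule syntax):

```python
# Sketch of schedule-driven partition sizing: shift CPU share between
# an SAP partition and a mail partition by hour of day.
# All figures are hypothetical, for illustration only.
def cpu_shares(hour):
    """Return CPU-share percentages per virtual server for a given hour."""
    if 6 <= hour < 9:          # email gets a boost first thing
        return {"sap": 40, "mail": 60}
    if 9 <= hour < 18:         # SAP takes priority during the working day
        return {"sap": 70, "mail": 30}
    return {"sap": 50, "mail": 50}   # off-peak: split evenly
```

A scheduler evaluating this table each hour would grow the SAP partition through the day and hand resources back to the mail partition in the early morning, with no administrator involved.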

Healing power
By putting all the applications on one server, however, organisations are at risk if the server collapses for whatever reason.

Hardware monitoring, hot-swappable memory and processors, redundant fans, power supplies and other components are all designed to provide the reliability needed in such situations. IBM is going one step further with its 'autonomic computing' initiative, designed to provide self-healing, self-protecting, self-optimising and self-regulating computers that are able to ward off attacks and fix their own system faults.

But autonomic computing is still a work-in-progress. Self-optimising databases, for instance, are a nice idea, but “it is a black art how databases are tuned,” says Surajit Chaudhuri, head of data management and exploration at Microsoft. “It is tough to ship a tuning guru with every database.”

Such issues aside, HP's Hindle is adamant that utility computing is the future for organisations that want ease of management and the chance to get greater use out of existing hardware. “If you want to fine-tune everything and get the last drop of performance out of your servers, then utility computing may not be for you.” But if your business falls into the other camp, then it is time to get started, he advises.

