The anatomy of utility IT
The fifth utility, April 2002
Is it really possible to create a 'worry-free' system that offers as much power as needed on demand?
Most large organisations have more than enough servers and resources to meet the average demands placed on them. There are failover servers in case production servers break down, and additional servers for when peak demand exceeds capacity. The result? More servers than are actually needed, many sitting idle much of the time. Even more frustrating for server administrators is when overworked servers buckle under traffic loads while under-utilised servers are unable to pick up the slack.
In 2002, a new vision has emerged – 'utility computing'. The idea is that computers linked together over networks will pool their processing power in order to provide all the computing resources that an organisation needs, whenever it needs them. Servers will be just servers – not dedicated web servers, database servers or application servers – ready to be deployed as demand requires.
But what is actually needed to create a seamless computing pool, and to enable IT managers to take an end-to-end view of the network and its resources? Is it really possible to create a 'worry-free' system that can allocate computing resources to different tasks according to need?
Large systems and software suppliers such as Hewlett-Packard (HP), IBM and Sun Microsystems say it is. But the approaches these suppliers are taking to utility computing differ widely. In broad terms, they can be split into two categories: 'top-down' and 'bottom-up'.
The bottom line
The bottom-up approach takes a large number of servers and provides server administrators with the ability to share loads among them, reallocating tasks as requirements change. This is the approach that companies such as HP and Dell are taking. An extra dimension to this approach is 'grid computing' – not just allocating servers to particular tasks, but giving them parts of tasks to process before integrating the results.
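The grid idea is easiest to see in miniature. The sketch below, written in Java (the language Hindle later says HP's own abstraction layer is largely built in), splits a job into parts, hands each part to a separate worker standing in for a pooled server, and then integrates the partial results. Everything here is illustrative; it shows the split-process-merge pattern, not any vendor's product.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch of the grid pattern: split a task into parts,
// farm the parts out to pooled workers, then integrate the results.
public class GridSketch {
    public static void main(String[] args) throws Exception {
        final long[] data = new long[1000000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        final int servers = 4; // threads stand in for pooled servers
        ExecutorService pool = Executors.newFixedThreadPool(servers);
        List<Future<Long>> partials = new ArrayList<Future<Long>>();

        int chunk = data.length / servers;
        for (int s = 0; s < servers; s++) {
            final int from = s * chunk;
            final int to = (s == servers - 1) ? data.length : from + chunk;
            // Each "server" processes only its own part of the task.
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int i = from; i < to; i++) sum += data[i];
                    return sum;
                }
            }));
        }

        long total = 0;
        for (Future<Long> part : partials) total += part.get(); // merge
        pool.shutdown();
        System.out.println("total = " + total);
    }
}
```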
Systems management tools such as HP's OpenView suite have long given users the ability to view, monitor and control network resources and infrastructure. However, the utility approach goes further. It gives both managers and administrators far greater control over individual components of the infrastructure. Indeed, the primary raison d'être of the utility computing concept is to be able to remotely re-task a server, even to the extent of changing its operating system, as requirements vary.
This is not a simple technological issue. Beyond monitoring, administrators need to be able to issue commands appropriate to heterogeneous hardware, ranging from relatively 'dumb' devices such as firewalls to complex mainframes and high-end servers.
HP's Utility Data Center (UDC) is one of the few systems currently available that attempts to address these needs. Based on the company's OpenView systems management suite and a dedicated utility controller system, UDC adds a resource abstraction layer (RAL) to each server; the RAL effectively functions as a driver.
“The bulk of [the RAL] is written in Java,” explains Peter Hindle, senior technical consultant at HP. “That makes us hardware-agnostic. [The RAL] responds to a relatively small set of commands, and passes them down to the physical hardware.”
Once the RAL receives simple instructions from the UDC, says Hindle, it consults a database of instructions to find those appropriate for the hardware to which the instruction is being sent: “The resource abstraction layer needs to be written for a particular system and is configuration-specific. The instructions to change a mount point on a two-way server, for example, may be different from those on an eight-way server. The RAL has to know that for a particular box, it needs a particular script for a particular command.”
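Hindle's description implies a simple structure: a small generic command vocabulary on one side, and a configuration-specific lookup of scripts on the other. Below is a minimal sketch in Java (the language Hindle says the bulk of the RAL is written in) of how such a mapping might work. The class name, command set and script paths are hypothetical, not HP's implementation; the OS-reprovisioning command illustrates the re-tasking described earlier.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a resource abstraction layer: generic
// commands are translated into configuration-specific scripts,
// so the controller never needs to know which box it is driving.
public class ResourceAbstractionLayer {
    // Keyed by (hardware configuration + generic command).
    private final Map<String, String> scripts =
        new HashMap<String, String>();
    private final String config;

    public ResourceAbstractionLayer(String config) {
        this.config = config;
        // Illustrative entries: the same generic command resolves to
        // different scripts on a two-way and an eight-way server.
        scripts.put("2-way:CHANGE_MOUNT_POINT", "/opt/ral/2way/remount.sh");
        scripts.put("8-way:CHANGE_MOUNT_POINT", "/opt/ral/8way/remount.sh");
        scripts.put("2-way:PROVISION_OS", "/opt/ral/2way/reimage.sh");
        scripts.put("8-way:PROVISION_OS", "/opt/ral/8way/reimage.sh");
    }

    // Receives a simple instruction from the controller and passes the
    // configuration-specific equivalent down to the physical hardware.
    public void execute(String command) throws Exception {
        String script = scripts.get(config + ":" + command);
        if (script == null) {
            throw new IllegalArgumentException(
                "No script for " + command + " on a " + config + " box");
        }
        Runtime.getRuntime().exec(script).waitFor();
    }
}
```

The point of such a design is that all hardware knowledge lives in the lookup table; the utility controller only ever speaks the small generic vocabulary, which is what allows it to treat heterogeneous servers as interchangeable.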
