The anatomy of utility IT
- The fifth utility, April 2002
Is it really possible to create a 'worry-free' system that offers as much power as needed on demand?
Hardware can be integrated into a UDC environment provided that it physically fits into HP's custom-built UDC racks (no simple matter for a mainframe, although HP executives claim the company has already integrated high-end Sun Microsystems servers, among others, into UDCs), that an OpenView client exists for it, and that HP's services division or the customer's own developers can write a RAL for it.
'Dumb' equipment, meanwhile, poses a different problem for the utility approach – security. Firewalls, for example, are not designed to be reconfigured remotely across a network without considerable safeguards, and some settings should not be controllable without direct physical access. While HP has written software for accessing a wide variety of such hardware, a direct connection via an RS-232 serial port is still needed for full redeployment capabilities, says Hindle. As a result, networking hardware distributed over a wide geographic area cannot come under the UDC umbrella.
Jim Cassell, an analyst at Gartner, agrees that HP's experience of working with both hardware and software gives it a head start, perhaps as much as 18 months in some cases. “It's the only company with a tangible technology for heterogeneous networks,” he claims.
Heterogeneity is a major challenge for other suppliers, as Lance Osborne, product marketing manager at Dell's Enterprise Systems Group, concedes. “To work with heterogeneous systems, you need wide experience with both hardware and software, which gives HP an advantage,” he says.
Dell's president and COO Kevin Rollins has publicly said that the utility approach is “an old idea given a new name”. However, that has not deterred Dell from making significant efforts in this area, says Osborne. The company's OpenManage software is capable of remotely deploying whole operating system images to Dell hardware, and will offer the same capabilities for other vendors' systems by the end of 2002, he says.
For the bottom-up utility computing approach to work, spare server capacity must be available for re-tasking, otherwise the unified system will simply divert resources from one processing-hungry task to another.
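The constraint described above can be sketched in a few lines of code. This is purely illustrative – the class and method names are invented for the example, not taken from HP's or any other vendor's software – but it captures the rule: a workload is only placed on a spare server, and if none is spare the request is refused rather than diverting resources from a running task.

```python
class ServerPool:
    """Toy model of a utility-computing pool (illustrative names only)."""

    def __init__(self, servers):
        # Map each server name to the task it is running (None = spare).
        self.servers = {name: None for name in servers}

    def assign(self, task):
        """Re-task a spare server, or refuse rather than divert a busy one."""
        for name, running in self.servers.items():
            if running is None:
                self.servers[name] = task
                return name
        # No spare capacity: re-tasking now would simply rob one
        # processing-hungry task to feed another.
        return None

pool = ServerPool(["web1", "web2"])
print(pool.assign("billing-batch"))  # "web1" – a spare server is re-tasked
print(pool.assign("payroll"))        # "web2"
print(pool.assign("reporting"))      # None – the pool is fully utilised
```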
The HP approach to utility computing still leaves resources under-utilised, as Hindle admits. “If you must have your system tuned to the nth degree, you don't want UDC, because that fine-tuning is your key factor in terms of running your system. You're prepared to pay for it and sacrifice flexibility to get it.”
So-called grid computing, which Gartner's Cassell predicts will become part of vendors' offerings by 2007, overcomes that issue by using the spare resources on a server for individual parts of tasks rather than whole tasks. Applications developed to take advantage of grid computing can distribute calculations and parts of tasks to other servers on a network or even on the Internet and then collate the answers to complete the task.
But, as Cassell points out, that will require applications to be rewritten, and many enterprise applications will not benefit from the huge processing gains grid computing offers. “You need to be able to parallelise your application so that parts of the same calculation can be performed out of order if necessary,” explains Cassell. “That's a lot of work and most programs just don't need that kind of processing capability. It's really only the mathematically and computationally intensive applications that will benefit.”
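The parallelisation Cassell describes can be sketched with standard Python tools. This is a minimal illustration of the pattern, not any vendor's grid product: a calculation is split into independent chunks, the chunks are farmed out to workers (a thread pool stands in for remote servers here), and the partial answers are collated – the order in which parts complete does not affect the result.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One independent piece of the larger calculation."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def grid_style_sum(n, parts=4):
    """Split sum of squares over [0, n) into chunks and collate the results."""
    step = n // parts
    # Each chunk could, in principle, run on a different server.
    chunks = [(i * step, (i + 1) * step if i < parts - 1 else n)
              for i in range(parts)]
    with ThreadPoolExecutor() as pool:
        # Parts may finish out of order; summing collates them regardless.
        return sum(pool.map(partial_sum, chunks))

# The distributed answer matches the plain serial calculation.
assert grid_style_sum(1000) == sum(i * i for i in range(1000))
```

Only workloads that decompose this cleanly – mathematically and computationally intensive ones, in Cassell's terms – see the benefit; an application whose steps depend on each other's results cannot be split up this way.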
