Charging ahead
- Article 1 of 2
- Advanced IT architectures, November 2004
Utility computing presents a big problem: who pays for what?
The two main approaches to utility computing - internal and external - have different charging and usage models. But the complexities of developing the charging models are common to both.
The internal model of utility computing relies on resources being dynamically deployed to serve different tasks or applications as their demands change. But where such systems are distributed throughout the organisation, individual business units may be reluctant to hand over their resources to 'the pool', especially if the systems were originally purchased out of their own budgets. Overcoming this resistance may require the IT department to develop some form of dynamic 'charge-back' system that measures who has been using the resources and for how long, and bills them appropriately.
Vendors have yet to develop a sufficiently robust automated charge-back mechanism. And because resources can, in theory, be allocated and reallocated to meet rapidly changing demand, charging models need to be sophisticated enough to track each change and bill for it appropriately.
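A minimal sketch of what such a charge-back mechanism might do, assuming a simple hourly-rate model: each time a resource is reallocated, the interval it spent with a department is recorded, and the bill aggregates those intervals per department. All rates, department names and figures here are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical internal rates - illustrative only
RATES_PER_HOUR = {"server": 2.50, "storage_tb": 0.40}

@dataclass
class Allocation:
    department: str
    resource: str   # key into RATES_PER_HOUR
    hours: float    # time the resource was held before being reallocated

def charge_back(allocations):
    """Aggregate usage-based charges per department."""
    bills = {}
    for a in allocations:
        cost = RATES_PER_HOUR[a.resource] * a.hours
        bills[a.department] = bills.get(a.department, 0.0) + cost
    return bills

usage = [
    Allocation("marketing", "server", 10),    # server held by marketing for 10h
    Allocation("finance", "server", 14),      # then reallocated to finance for 14h
    Allocation("finance", "storage_tb", 24),  # finance also held 1 TB for a day
]
print(charge_back(usage))
# marketing owes 25.00; finance owes 44.60
```

The hard part in practice is not the arithmetic but the metering: capturing every reallocation event reliably enough that departments trust the resulting bill.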
One solution is to use a model closer to internal time-share than utility charge-back. IT services firm PA Consulting has developed an approach for an internal market, where departments with surplus resources can advertise on an intranet a description of the resource that they have on offer, a price, service-level details and the time for which the resource is available.
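The internal-market approach described above could be modelled as a simple matching problem: departments post listings with a description, price, service level and availability window, and buyers filter them against their own requirements. This is a sketch under assumed data structures, not PA Consulting's actual system; every field and figure is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResourceListing:
    department: str
    description: str
    price_per_hour: float
    service_level: str   # e.g. "99.9% uptime, business-hours support"
    available_from: int  # illustrative hour offsets rather than real dates
    available_until: int

listings = [
    ResourceListing("finance", "4-CPU batch server", 3.00, "best effort", 0, 48),
    ResourceListing("hr", "2 TB SAN storage", 0.50, "99.9% uptime", 12, 72),
]

def find_offers(listings, needed_from, needed_until, max_price):
    """Match a buyer's time window and budget against advertised surplus."""
    return [l for l in listings
            if l.available_from <= needed_from
            and l.available_until >= needed_until
            and l.price_per_hour <= max_price]

for offer in find_offers(listings, needed_from=0, needed_until=24, max_price=5.0):
    print(offer.department, "-", offer.description)
```

The appeal of this model is that pricing is set by the advertising department up front, sidestepping the need for fine-grained automated metering.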
But current models for licensing software in a utility environment present a further problem. Many licences are based around the number of processors used and do not take into account whether those are real or virtual processors, for example. Developing a suitable payment structure may require extensive negotiation with the supplier - a further barrier to adoption.
In the external approach to utility computing, an outsourcer dynamically provides as many resources as are needed and then charges the organisation a fee based on usage. While this means an organisation only pays for what it uses and does not have to over-provision to cope with fluctuations in demand, it has drawbacks. Buyers no longer know in advance how much software and hardware will cost, since the price varies with usage.
Again, pricing models are complicated by software. While storage can be charged per gigabyte per day, or at similar rates, and servers at per-processor, per-hour rates, how should operating system usage be measured? Should pay-per-click be adopted for software, or a simple flat rate per user per month?
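The trade-off between the two software pricing models in question can be made concrete with some back-of-envelope arithmetic. All prices and usage figures below are invented for illustration; the point is only that usage-based costs swing with demand while a flat rate is predictable.

```python
# Two hypothetical software pricing models - figures are illustrative only.

def pay_per_use(clicks, price_per_click=0.02):
    """Usage-based charge: cost tracks activity, so it varies month to month."""
    return clicks * price_per_click

def flat_rate(users, fee_per_user=15.0):
    """Flat charge: predictable, but paid regardless of actual usage."""
    return users * fee_per_user

# A 100-user department in a heavy month versus a light month
heavy = pay_per_use(clicks=120_000)  # 2400.0
light = pay_per_use(clicks=20_000)   #  400.0
flat = flat_rate(users=100)          # 1500.0
```

Under these assumed figures the usage-based bill is cheaper in a quiet month and dearer in a busy one, which is exactly the budgeting uncertainty that makes buyers wary.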
Some vendors already offer utility pricing: Computer Associates offers a monthly payment schedule based on variable metrics, such as terabytes or transactions; Akamai's EdgeComputing customers can commit to a fixed monthly fee, plus an additional variable fee based on the number of server requests. Nevertheless, these remain rare exceptions, and early adopters of utility computing will have almost as many issues with new pricing models as they do with the technology itself.
