
Pay-as-you-go computing

Computing was never meant to be difficult or worrisome. Utility computing proposes customers forget all about the infrastructure, and instead pay only for its use, as they do with electricity. But Nirvana is some way off.
By Carel Alberts, ITWeb contributor
Johannesburg, 15 May 2003

Computing on demand has been called grid computing (which is a building block of on-demand computing), utility computing, policy-based computing and many other things. It is the buzz of the new millennium, and most agree it provides a sound value proposition, though it is not an easy or inexpensive one to achieve.

Although it comprises a myriad of technological components, solutions and services, it is in essence a business concept that represents a basic shift in the provisioning of IT services: metered use of computing services supplied from a utility-like grid, rather than selling customers discrete IT solutions for every need.

Prophesied in such terms, it is not surprising that it has been met with the usual mix of near-religious fervour and scepticism. The fervour comes, predictably, from industry vendors, who view it as a choice field to play in as IT spend continues to falter. The scepticism comes from customers, who want to see a clear value proposition and road map, case studies and results.

Understanding its value is easy enough. Although there are different variants of the on-demand model, the basic premise is metered usage and provisioning of IT services as and when needed. The infrastructure powering the service includes, as usual, hardware, operating systems, servers and applications, which may take the form of a shared computing pool or an on-site/off-site centre. Its attraction lies in removing the operational burden from the customer, and with it the drain on valuable time.

Customers only interface with technology in the sense of using computing cycles when they need them. This means they don't have to worry about acquiring and setting up an infrastructure, expanding and maintaining it, or about security, storage, networks, resilience and other services. They can get on with running their businesses.

It's a flexible approach: one need not acquire more equipment than is necessary at any given moment. It's also cost-effective. December may demand peak computing cycles while May does not; one takes, and pays for, only as much processing power as is needed.
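A rough, back-of-the-envelope illustration of that cost argument follows. The figures and rates below are hypothetical, purely for illustration: a company that owns enough capacity to cover its December peak pays for that capacity all year, while metered usage tracks actual demand month by month.

```python
# Illustrative only: hypothetical figures comparing a fixed, peak-sized
# infrastructure with metered, pay-as-you-go provisioning.

monthly_demand = {                      # CPU-hours needed per month (made up)
    "Jan": 400, "Feb": 350, "Mar": 380, "Apr": 360, "May": 300, "Jun": 320,
    "Jul": 340, "Aug": 330, "Sep": 410, "Oct": 450, "Nov": 600, "Dec": 900,
}

rate_per_cpu_hour = 10          # hypothetical metered rate (currency units)
owned_cost_per_cpu_hour = 6     # hypothetical cost of owned capacity
owned_capacity = max(monthly_demand.values())   # must be sized for December

# Owning: you pay for peak capacity every month, whether you use it or not.
owned_total = owned_capacity * owned_cost_per_cpu_hour * len(monthly_demand)

# Metered: you pay only for what you actually consume.
metered_total = sum(monthly_demand.values()) * rate_per_cpu_hour

print(f"Owned (peak-sized) infrastructure: {owned_total}")
print(f"Metered, pay-as-you-go usage:      {metered_total}")
```

Even at a higher nominal rate per unit, the metered bill can come out lower, because the owned infrastructure sits largely idle for eleven months of the year.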

The ASP, outsourced or hosted concepts of using IT are secondary to the vision of an integrated view of the enterprise IT infrastructure and its metered provisioning, usage and billing.

Andrew Fletcher, server spokesman, HP SA

In one possible manifestation of this model, a managed/outsourced/hosting service provider manages the data centre. Or, if one chooses to "use IT as a business differentiator", in IBM's words, and so owns one's own infrastructure, one could kit up with as much infrastructure as the busiest months require and sell off spare computing power in lean months. Another way to do it is to pool computing resources in a distributed environment, suitable for like-minded research entities.

"Whoever has control over the data centre is really irrelevant," says Andrew Fletcher, Hewlett-Packard SA server spokesman. "The ASP, outsourced or hosted concepts of using IT are secondary to the vision of an integrated view of the enterprise IT infrastructure and its metered provisioning, usage and billing."

From complexity to value

The computing on demand model has great attraction for many because, proponents say, the complexity and heterogeneity of the IT systems currently in use at large enterprises should not be the worry of the enterprise itself. That is what the integration, IT resource management and/or hosting partner offering this service is for.

Dinner party cues

Quick definition: Computing on demand involves paying for IT resources, often processing power, on a per-usage basis, metered automatically.
Quick link: IBM Q&A
Who's who? IBM has invested $10 billion in it, HP is said by Gartner to enjoy leadership and Sun is in an advanced pilot project in North America.
The projects: Sun has N1, IBM's autonomic computing project is named Project eLiza and its programme is called e-Business on Demand, and HP has pulled together solutions under the Adaptive Enterprise moniker.
Different names: Call it computing on demand, utility computing, grid computing or policy-based computing. Gartner, which calls it the latter, says the phenomenon will be mainstream in 2005, and has potential to save companies untold amounts of money.
Although Google uses a computing pool of PCs to power its search engine, this is not true computing on demand. Mainframe transaction costs work out lower than on PCs, and true data-crunching environments require better infrastructure.
Computing on demand is not application service provision (ASP). ASP is a narrow concept involving a service provider buying software and hosting it on its servers, and then charging for its use. Neither is it outsourcing proper - neither model need apply to computing on demand, which refers merely to an integrated utility-like infrastructure and its metered usage for payment.
IBM signed the first customer for computing on demand, Petroleum Geo-Services (PGS), which uses massive server farms to crunch sonar data in its hunt for oil deposits. PGS will outsource a third of the supercomputing capacity it needs to IBM.

Complexity of current information systems is probably the chief cause of IT's inability to realise its potential. "With the current layer-upon-layer of computing platforms in evidence at many large enterprises, this is proving very difficult," says Dave Austin, Europe, Middle East and Africa field product marketing manager for Citrix Systems.

"The mainframe wasn`t overtaken by the mini computer or client server or the PC. Wave merely settled on foregoing wave, and hence we have a very complex scenario at many companies, something CIOs want to simplify and consolidate." (Citrix champions a whole new market of its own, the access infrastructure field, which envisions the accessibility of such an integrated enterprise, anywhere, any time, securely and from any platform, and as such has bearing on the on-demand world as defined here.)

Perhaps as a result of this inability to see value, along with other factors such as ever-tighter budgets, the buying public has for some time now reined in IT spend. The main research groups, such as Meta Group and Gartner, expect no more than 4% growth in worldwide IT spend this year. The recurring litany from analysts is that customers are concerned with getting more value out of what they have and will only spend in exceptional cases.

A different angle on the same problem of return on investment is HP's assertion that CIOs are demanding more accountability and transparency of business processes, and hence clamour for a closer link to be forged between IT and business processes. This link, integration into one view of the enterprise, is a crucial underpinning of the process of becoming ready for computing on demand.

The first key element in achieving IT utility involves combining technologies, products, services and solutions to deliver and support a continuously available environment, ensuring the stability and efficiency of your business in the face of change.

Andrew Fletcher, server spokesman, HP SA

And extraction of value from existing systems is what the computing on demand blueprint proposes. Its value lies in the suggestion that what customers already have (in terms of enterprise systems) should be integrated to provide a completely transparent computing utility, which can be used at will, with this use accounted for accurately.

It puts forward a variety of ways in which incumbents and newcomers in enterprise IT user circles can draw on an integrated pool of IT, either in an on-site, finished solution, or outsourced.

Does anyone offer it today?

Glibly used, the term computing on demand makes no mention of the utter complexity that goes into preparing such an environment, the process of readying customers for it, or the enormous scope of solutions it may entail.

No vendor will deny that we are some years away from mainstream acceptance of the idea, or from the industry's readiness to offer it. According to Gartner, "policy-based computing will become mainstream in 2005".

Many IT vendors, especially the traditional end-to-end hardware, software and services companies like IBM, Sun Microsystems and HP, have made noises in this regard for a few years now. Each can lay claim to some form of leadership, and all are in particularly good positions to offer utility computing, given their "total solutions" capabilities, ie hardware, services, consulting and operating system expertise.

In this regard, Dave Botha, IBM marketing executive, remarks that full-house vendors have the best chance to pull off this end-vision of computing. This is because of the horizontal integration capabilities the giants possess, and the many environments they play in. "A consortium will find it difficult to project manage such a complex venture," he says.

IBM invested $10 billion in what it calls e-business on demand (EBOD) last year; according to spokespersons, it is the company's largest infrastructural bet on the future yet. At its basis is the value proposition that "agile" organisations of whatever size should be able to respond quickly to competitive threats and market opportunities while getting on with their knitting. EBOD will provide a flexible, variable, resilient and secure environment, customers are told, but IBM is not about to make it sound easy, or cheap. "We are currently in the integration phase of all the IT components in some customers, and this takes time," says Botha.

Botha adds that since most applications are written to middleware these days, integration can be an easier task than before, but many companies running on legacy systems have a rather tougher road ahead of them.

IBM's pet project, running alongside the EBOD direction, is called eLiza. It is about "autonomic computing", and seeks to develop "self-healing" technology, which in turn promises resilience, security and flexibility - without the customer having to worry about acquiring or maintaining any of it.

Hewlett-Packard quotes Gartner Forum findings, which state that the company has a head start on the competition of some 18 months. Last week, it tied up an on-demand portfolio of 12 solutions, some of which have been in existence for years, says spokesman Andrew Fletcher, calling the initiative HP Adaptive Enterprise.

Sun says it is at advanced pilot project status in North America, but Jan Dry, Sun SA spokesman, admits the concept is nowhere near mainstream acceptance, nor are many customers ready to step into such a framework today.

This hasn't stopped other vendors from getting on the bandwagon too. Veritas has made its own announcements in this regard.

Better known as a storage management software maker, Veritas Software will soon release products from two acquired companies, "which further its plans to provide more on-demand computer services". Veritas' purchase of Precise Software Solutions gives it products that detect performance problems, while Jareva Technologies brings server automation tools that allow users to move servers between applications based on demand.

In the second half of this year, Veritas plans to ship its service manager software that will help companies more precisely track and allocate technology costs.

Citrix Systems' MetaFrame suite of tools, providing integrated access to enterprise resources, is mirrored to some extent in other vendors' computing on demand efforts. HP approaches the management of enterprise resources with its OpenView suite, IBM with Tivoli, Computer Associates with Unicenter and Sun with its Grid Engine software.

Computer Associates last week unveiled its strategy for on-demand computing and introduced new network management products to support it. Through Unicenter, CA promises on-demand computing benefits without requiring a complete overhaul of infrastructure (owing to its interoperability claims). The vendor's network, security and storage management products will support the effort.

While CA claims cross-platform capabilities, HP and other vendors profess that their efforts will also include interoperability.

Finally, local start-up WebTec is delivering its subscription-based software, and has signed a US distribution agreement for it. The software, the company says, offers time-based use of and charging for software, as well as managed services. WebTec also offers application management, invoicing and payment collection, and bills customers automatically. The flexibility this provides will suit customers whose licensing requirements change before the traditional annual renewal comes around, or those who need software for once-off projects only.

WebTec does not offer an application service provider option, saying it is too bandwidth-intensive, and does not assuage customers' security fears.

The company will, like all the above vendors, market its solution to a channel of service providers, including Internet service providers and managed service providers as well as systems integrators.

Sun, IBM and HP's blueprints

So, who are the more vocal vendors in this area?

We have a very complex scenario at many companies, something CIOs want to simplify and consolidate.

Dave Austin, Europe Middle East and Africa field product marketing manager, Citrix Systems

In Sun's case, the data centre architectural blueprint is called N1, which, much like every other serious computing on demand blueprint, makes the data centre work like a single system; in other words, it provides one logical computing pool. "It turns once-isolated resources into one pool of virtual resources, so they can be re-allocated in minutes and used in more flexible ways. Further, N1 does this over a wide array of heterogeneous devices from Sun and other vendors," says Sun's Dry.

The N1 architecture was the first to spell out the stages that precede computing on demand (a rough sketch of how they might fit together appears after the list). They are:

* Foundation resources (infrastructure);

* "Virtualisation" (the software-based "tying up" of all resources, of whatever description, in one place for seamless access);

* Provisioning (mapping business services onto the virtual resource pool, ie capturing all service requirements, describing them in software and implementing the design from the pooled resources);

* Policy (rules defining performance objectives for a service) and automation; and

* Monitoring (of usage on a per-service basis, not a per-box basis, which makes for more accurate accounting and management reporting).
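To make those stages a little more concrete, here is a minimal, hypothetical sketch of how they might fit together. It is not Sun's N1 software; the class, resource figures and policy limits are invented purely for illustration, but it shows a virtualised pool, policy-bound provisioning of services against it, and per-service (rather than per-box) metering.

```python
# A minimal, hypothetical sketch of the stages listed above: a virtualised
# pool of resources, policy-driven provisioning of services against it,
# and per-service (not per-box) usage metering. None of this reflects
# Sun's actual N1 software; names and numbers are invented.

from dataclasses import dataclass, field

@dataclass
class VirtualPool:                       # "virtualisation": one logical pool
    total_cpus: int
    allocated: dict = field(default_factory=dict)   # service -> CPUs granted
    usage_log: dict = field(default_factory=dict)   # service -> CPU-hours used

    def free_cpus(self):
        return self.total_cpus - sum(self.allocated.values())

    def provision(self, service, cpus_needed, policy_max):
        # "Provisioning" plus "policy": grant resources only within the
        # limits defined for this service, and only from what is free.
        grant = min(cpus_needed, policy_max, self.free_cpus())
        self.allocated[service] = self.allocated.get(service, 0) + grant
        return grant

    def meter(self, service, hours):
        # "Monitoring": record usage per service for accurate accounting.
        used = self.allocated.get(service, 0) * hours
        self.usage_log[service] = self.usage_log.get(service, 0) + used
        return used

pool = VirtualPool(total_cpus=64)                # foundation resources
pool.provision("payroll", cpus_needed=8, policy_max=16)
pool.provision("seismic-analysis", cpus_needed=40, policy_max=48)
print(pool.meter("seismic-analysis", hours=24))  # CPU-hours to bill
```

The point of metering per service is that a given business service's December peak and May trough are accounted for as they occur, regardless of which physical boxes did the work.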

IBM is starting to deliver some components related to its autonomic computing vision (self-managing infrastructure), including technology formerly known as Storage Tank, according to reports. The components include a number of storage virtualisation applications, shown publicly for the first time last week.

The products enable scalable and secure management of data storage and migration, a transition to virtualised storage, and the pooling of storage capacity from different array vendors into a single file space. This technology is a complete virtualisation solution.

The company acknowledges that virtualisation technology requires a leap of faith on the part of customers before they will bring it into their data centres.

It says self-configuration may be initiated to adjust the allocation of resources or in response to faults, at run-time or at boot (the self-healing aspect), and is important when IT services are provided automatically as needed.

Products put forward by IBM to help customers move towards autonomic capabilities include WebSphere Application Server, for building e-business applications; Tivoli software, to manage infrastructure; and Storage Server, code-named Shark, to allow customers to configure their systems and manage their information more easily.

HP advocates the adaptive enterprise and an IT consolidation journey towards a computing utility. HP's Fletcher says co-locating and integrating hardware, data and applications are all necessary steps toward building a truly "adaptive infrastructure".

"The first key element in achieving IT utility involves combining technologies, products, services and solutions to deliver and support a continuously available environment, ensuring the stability and efficiency of your business in the face of change. You maintain service levels while you reduce costs," says Fletcher.

"The second key element is the ability to match resource capacity to service demands in real-time (mapping business processes to IT services).

"The third key element is the ability to monitor and control resource health, track use, and report on infrastructure operations that impact the business. When a system has a single management station, overall management of the cluster is significantly simpler - further decreasing downtime."

Fletcher says HP's IT consolidation services will help in this regard: from assessing goals and needs with an IT consolidation value workshop, to developing an investment justification and architectural blueprint, to detailed design, implementation, and ongoing management and support.

Ignore it at your peril

Computing on demand provides clear value as an end-vision of IT, which will become a utility from which customers draw computing power. Its attraction lies in the idea that IT's complexity, cost, maintenance and suitability should not be the concern of customers.

However, this clear value belies the complexity that goes into achieving it. Customers could need up to five years to ready their current systems for it, the end-to-end vendors are at different stages of delivering a full enterprise offering, and one can expect this to be mainstream only in a few years' time.

Yet to ignore computing on demand could be disastrous. Whatever your company's size, the preponderance of information today means companies must react to opportunities and threats with ever greater alacrity, rather than at the ponderous pace that comes with managing a complex, heterogeneous environment oneself.
