IBM's Project eLiza: closing the gap between autonomic and grid computing
IBM's billion-dollar autonomic computing project (eLiza) is bringing closer the reality of commercial grid computing, which will allow businesses to plug into computing infrastructure power in the same way as they plug into electricity, water and telecommunications grids today.
Grids enable data to be stored in and shared across geographically dispersed servers, obviating the cost of huge data centres. The storage servers, and those accessing them, feed off and contribute to the collective infrastructure power of the grid, spreading the processing and traffic load.
A grid's collective computing power is, therefore, far greater than even the largest proprietary corporate system can achieve.
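The load-spreading the article describes, with data stored across dispersed servers rather than one data centre, can be sketched in code. This is purely an illustration; the server names and the hash-based placement scheme are invented assumptions, not how any particular grid works.

```python
import hashlib

# Hypothetical grid nodes; names are invented for illustration.
SERVERS = ["johannesburg-1", "london-3", "newyork-2", "tokyo-5"]

def place(key: str, servers=SERVERS) -> str:
    """Map a data item to a server by hashing its key, so storage
    and traffic load spread evenly across the grid's nodes."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Each item lands deterministically on some node; the grid's capacity
# is the sum of all nodes, not that of a single corporate system.
placements = {k: place(k) for k in ["weather/2002-05", "scan/881"]}
```

Because placement is a pure function of the key, any node can compute where an item lives without consulting a central directory.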
Computing grids used for scientific and medical purposes, such as global weather watching and national breast cancer analysis, are already in place.
Commercial grids are not yet a reality because of the massive pipeline and throughput capacity they require, and the systems needed to manage them.
Says Iqbal Hassim, IBM SA's enterprise server group executive: "Grids will connect heterogeneous resources anywhere in the world. It will be impossible to manually manage the complexity of such vast, interconnected systems.
"Autonomic computing - whereby a system heals, secures and maintains itself on a policy-driven basis - is the only way to manage them."
IBM claims to be the only technology company committed to end-to-end autonomic computing.
Project eLiza has already delivered the industry's first self-healing, self-managing and self-protecting servers and desktop and notebook computers. The focus is now on extending those capabilities to entire systems.
"There would simply be no way to manually trace a fault or restore lost data if a grid went down," says Hassim.
"So, grid technology must guarantee non-stop operation.
"To do that, it must be self-healing - using extra servers for built-in redundancy and reassignment of spare memory.
"It must have an inherent immune system that makes it self-protecting against hacking, viruses or physical attack.
"And, it must be self-managing - maximising its resources by predicting faults or bottlenecks in time to move workloads from one part to another."
The vastly increased redundancy needs of autonomic computing have a cost implication.
But, Hassim says, the rapidly falling price of hardware will more than pay for the redundancy.
"Chips are cheaper to make now and they deliver far more power with far less heat and use of electricity.
"Also, we can pack ever more data on to ever smaller storage media. So we`re saving space and, therefore, dropping overhead costs.
"When you add in the savings on human skills that autonomic computing makes possible, the business case for grid computing is very compelling."
But how do humans keep control of autonomic systems?
"That`s where standards and policies come in. The business rules that commercial grids will be programmed to follow will dictate what they do and why they do it.
"Which is why it is crucial, upfront, to get the business rules and operating standards for grids right."
IBM is closely involved in Project Globus, a multi-vendor group that is setting the international standards for grid computing.
These standards will ensure that heterogeneous servers from multiple vendors talk to each other as though they were one server.
As for the risk of terrorists or criminals turning autonomic systems and commercial grids into tools of economic destruction, Hassim says a grid's behavioural policies will be captured in firmware rather than software.
"Anyone with malevolent intent would have to get at the hardware guts of the system to override its programmed policies.
"At that point, the system`s self-protecting faculties would kick in.
"So, while it is possible to re-programme any computer, autonomic or not, with an autonomic system your protection is that much the greater."