Two closely related ICT tactics will become even more prevalent - cost reduction, and the consolidation or centralisation of servers, which takes in the scale-out and scale-up strategies.
We believe the market will see an increase in the number of edge servers deployed across the organisation.
Manoj Bhoola, business group manager, Microsoft SA
Scale-out seems to be the more popular of the two strategies and centres on harnessing the combined processing power, flexibility and increased fault-tolerance of multiple smaller, commodity servers. Scale-up is the more traditional approach, centring on building high-end and powerful capabilities into a single server.
Both approaches have their benefits. Scale-out doesn't require a massive upfront purchase of hardware, and it lends itself to a transitional model. With scale-up, buying an extremely powerful server capable of replacing the majority of other server hardware in the organisation is almost a prerequisite.
At the same time, ever-stronger server resources are being built on the back-end, in many cases using clustering and virtualisation to build what the front-end sees as a single, powerful computing resource, with technology then used to split this "single server resource" into smaller chunks.
The reason the market is so focused on consolidation at present is, quite simply, that administering multiple server resources costs more. Reducing the number of systems (whether logical, through clustering and using a scale-out approach; or physical, by using a scale-up approach) can cut maintenance costs.
Maintenance costs
Manoj Bhoola, business group manager of servers and tools for Microsoft South Africa, says a recent Accenture study found that much of organisations' IT budgets is being spent on manually maintaining servers.
"The focus is on reducing that figure through automation and management," Bhoola continues. "Accenture believes the 70% of their IT budgets that organisations are spending on maintenance can be reduced to 55% through the use of sound and proven management and automation technologies."
He says this calculation takes into account the initial investment and maintenance costs of such a strategy over a three-year period.
Yet consolidation isn't taking place throughout enterprises. "We believe the market will see an increase in the number of edge servers deployed across the organisation," Bhoola says, "with the majority of the consolidation taking place in the data centre. But the existence of edge servers will depend on workload, namely whether applications can work well across a WAN or whether keeping performance levels high will mean the deployment of an edge server at a branch level.
"For example, groupware can work perfectly across a WAN, but applications dependent on databases and management tools require a node at each branch office. And in terms of what is being done with Active Directory, each node at a branch can act as an edge server and synchronise with a central server easily."
Memory musings
It seems that when Intel came to market with its Itanium 64-bit offering a couple of years ago, it was a little early. Linux is making a stronger play in the server space on numerous levels and 64-bit isn't as distant as many might have thought.
Why is the market moving to 64-bit computing?
Joe Ruthven, business development manager for Linux and open source at IBM South Africa, says that in his experience, the move to 64-bit stems from the same reasons the market moved from 8-bit to 16-bit and 32-bit computing. "It's the way the advancement in technology needs to go."
Applications are becoming more and more memory-hungry, and 32-bit technology is hamstrung by its 4GB memory limit. Depending on what 64-bit processor technology the server's architecture is built on, this memory limitation moves to about 1TB, a figure that should be more than enough for virtually any business application today.
However, AMD and Intel have also produced processors that graft 64-bit memory addressing onto the established 32-bit x86 architecture.
Intel's offerings include a pure 64-bit processor in Itanium and processors with 64-bit extensions in the form of any chips with an EM64T (Extended Memory 64-bit Technology) tag.
From AMD, the offering is Opteron, which apart from giving customers access to a 64-bit memory architecture, has a couple of other tricks to speed up performance even more.
Starting in the Intel camp, Frans Pieterse, channel applications engineer, says introducing 64-bit memory technology can mean a huge performance increase, but does not guarantee one. "Factors like whether the BIOS, O/S, drivers and applications are aware of EM64T are critical.
"Intel has made tools available that end-users and developers can use to maximise performance by 'porting' to this new architecture. With the right coding modifications, we've seen anything from a 9% to a 30% increase in performance with EM64T. Besides the larger memory space, performance can also be gained by tweaking applications and operating systems to make use of technologies prevalent on today's new Intel processors, like Enhanced SpeedStep (power saving) and demand-based switching," he says.
Intel has not abandoned its plans for Itanium, though. "Itanium is gaining good momentum in the high-end, back-end server space at the moment, yet Intel sees EM64T and Itanium coming together in 2007. By then, there will be no price delta between the two technologies and existing, average 32-bit customers will be able to make the switch cost-effectively," says Pieterse.
AMD's tack
Jan Guetter, PR manager for EMEA at AMD, says the reason 64-bit computing has become big news in the market is that there's now a trustworthy "transitional" approach available to customers. "With massive investments in x86 architectures, customers have been looking for a natural progression to 64-bit that doesn't use a proprietary instruction set," says Guetter.
While AMD-64 primarily offers customers access to a larger memory space, AMD's edge over the competition, in Guetter's opinion, is that its architecture removes bottlenecks traditionally prevalent in the x86 architecture.
"The first of these is the memory controller, which is now resident on the processor itself. The second is the front side bus (FSB), which has been replaced with HyperTransport. AMD calls the combination of these two features its Direct Connect architecture, since it directly connects all of the components within a system and increases the internal bandwidth to remove bottlenecks," Guetter says.
"Direct Connect is our differentiator, since it's unique to AMD and the competition's offerings still need to utilise the FSB, which increases latency and slows down other performance aspects," he says. "There is only so much data an FSB can accommodate."
Guetter says there are also benefits to using AMD's approach when scaling up: the Direct Connect architecture ensures that each time a processor is added to the system, a high-speed dedicated channel exists between processors, and each processor's bandwidth is added to the system's total.
Guetter agrees that performance is closely tied to whether the application and operating system are aware of 64-bit technology. "Generally, with a 32-bit OS and 32-bit applications, the performance is not any different. The good news is that with AMD-64 customers still have the benefits of Direct Connect to draw on.
"And if a customer is using a 64-bit O/S and 32-bit applications, the memory per application is increased to 4GB per application. With a 32-bit O/S, a 32-bit application could only use a maximum of 2GB of memory at any one time."
Guetter says another reason the common architecture play is so important is that server virtualisation (building a proverbial "single computing resource" through multiple physical computers) requires a common hardware infrastructure, but a heterogeneous software infrastructure. In a nutshell, it needs multiple software types running on a common hardware platform.
"While virtualisation is available today, it has always been difficult to distribute workloads effectively and have the right level of performance available. With Opteron, AMD can do this cost-effectively - both VMware and Xen (software that allows multiple operating systems to run on a single hardware resource simultaneously) are certified for use with AMD-64. We're planning to design new functionality into Opteron (called Pacifica) that makes this virtualisation even simpler and more effective," he concludes.
Porting needed
Ruthven says IBM is already there on the virtualisation front, touting its Power 5 processor (a true 64-bit processor) and its Open Power, designed exclusively for use with Linux. "One of IBM's virtualisation plays centres on something we've developed for Open Power called micro partitioning, which allows for partitioning to take place at a chip level," he says.
Essentially, this allows every Power 5 processor (with a maximum of four processors per Open Power server) to be partitioned into tenths and thus to run 10 independent instances of Linux per processor. "This means customers potentially have 40 independent Linux servers running on a single server and can then, to boot, balance the load between them and, on the fly, allocate more processing power where required. The best part, however, is that this takes place at a hardware level and not through software.
"With typical software-based tools the customer still has a single underlying O/S controlling the hardware. The problem with this is that if the O/S fails, the entire server and all other 'virtual servers' on it fail. With 40 separate instances of Linux running directly on the hardware there is no dependency on software, so should one of the servers crash, the other 39 stay up unaffected," Ruthven explains.
He says Open Power is priced so aggressively that IBM believes it has the perfect platform for customers to consolidate, run new applications and exploit 64-bit architecture to its full potential.
But nothing is ever perfect. "The downside is that customers can only run Linux applications that support the Power architecture. So it's not so much to do with Intel and RISC technologies as with the fact that code is compiled differently. Applications must be Linux for Power compatible," he says.
Out of the roughly 6 000 Linux applications IBM has on its software list from the ISV community, 70% are Linux on Intel compatible and only 30% are Power on Linux compatible.
So IBM needs to rectify the application scenario, which means porting. "The porting process depends on the application," Ruthven says, "and sometimes this simply means a recompile for Power on Linux. In most cases, however, it entails a rewrite of between 5% and 7% of the code."
At LinuxWorld, IBM announced a solution to ease this process, codenamed "Chiphopper" and officially known as IBM eServer Application Advantage for Linux.
"To a great degree, the codename gives it away. Chiphopper is an offering that gives ISVs access to the tools and resources needed to take applications that are already ported to Linux on Intel, and quickly make the changes needed to ensure compatibility on Linux for Power. Quite importantly, Chiphopper is not software - software tools are part of the solution, but intellectual capital and services make up a big part of the equation," he says.
"Participation in the programme costs nothing and ISVs are provided with access to IBM consultants in our innovation and porting centres worldwide."