Growing and consolidating

In the world of server technology, consolidation is still a major trend, while 64-bit computing is poised for mass adoption.
By Brett Haggard, ITWeb contributor
Johannesburg, 18 Apr 2005

Two closely related ICT tactics will become even more prevalent - cost reduction, and the consolidation or centralisation of servers, which encompasses the scale-out and scale-up strategies.

We believe the market will see an increase in the number of edge servers deployed across the organisation.

Manoj Bhoola, business group manager, Microsoft SA

Scale-out seems to be the more popular of the two strategies and centres on harnessing the combined processing power, flexibility and increased fault-tolerance of multiple smaller, commodity servers. Scale-up is the more traditional approach, centring on building high-end and powerful capabilities into a single server.

Both approaches have their benefits. Scale-out doesn't require a massive upfront purchase of hardware, and it lends itself to a transitional model. With scale-up, buying an extremely powerful server capable of replacing the majority of other server hardware in the organisation is almost a prerequisite.

However, ever-stronger server resources are being built on the back-end, in many cases using clustering and virtualisation so that the front-end sees a single, powerful computing resource. Virtualisation is then used to split this "single server resource" into smaller logical chunks.

The reason the market is so focused on consolidation at present is, quite simply, that administering multiple server resources costs more. Reducing the number of systems (whether logical, through clustering and using a scale-out approach; or physical, by using a scale-up approach) can cut maintenance costs.

Maintenance costs

Manoj Bhoola, business group manager of servers and tools for Microsoft South Africa, says a recent Accenture study found that a large portion of organisations' IT budgets is spent on manually maintaining servers.

"The focus is on reducing that figure through automation and management," Bhoola continues. "Accenture believes the 70% of their IT budgets that organisations are spending on maintenance can be reduced to 55% through the use of sound and proven management and automation technologies."

He says this calculation takes into account the initial investment and maintenance costs of such a strategy over a three-year period.

Yet consolidation isn't taking place throughout enterprises. "We believe the market will see an increase in the number of edge servers deployed across the organisation," Bhoola says, "with the majority of the consolidation taking place in the data centre. But the existence of edge servers will depend on workload, namely whether applications can work well across a WAN or whether keeping performance levels high will mean the deployment of an edge server at a branch level.

"For example, groupware can work perfectly across a WAN, but applications dependent on databases and management tools require a node at each branch office. And in terms of what is being done with ActiveDirectory, each node at a branch can act as an edge server and synchronise with a central server easily."

Memory musings

It seems that when Intel came to market with its Itanium 64-bit offering a couple of years ago, it was a little early. Linux is now making a stronger play in the server space on numerous levels, and 64-bit isn't as distant as many might have thought.

Why is the market moving to 64-bit computing?

Joe Ruthven, business development manager for Linux and open source at IBM South Africa, says that in his experience, the move to 64-bit stems from the same reasons the market moved from 8-bit to 16-bit and 32-bit computing. "It's the way the advancement in technology needs to go."

Applications are becoming more and more memory-hungry, and 32-bit technology is hamstrung by its 4GB memory limit. Depending on what 64-bit processor technology the server's architecture is built on, this memory limitation moves to about 1TB, a figure that should be more than enough for virtually any business application today.
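
The arithmetic behind that ceiling is simply the width of a memory address. As a minimal, hypothetical C sketch (written for this article, not taken from any vendor), the program below prints the pointer width and the address space it implies; built 32-bit it reports the 4GB limit, built 64-bit it reports a vastly larger figure:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* The address space is bounded by pointer width: 2^32 bytes (4GB)
           on a 32-bit build; a 64-bit build lifts the architectural limit
           far beyond the roughly 1TB that current platforms expose. */
        double addressable = (double)UINTPTR_MAX + 1.0;

        printf("pointer width: %u bits\n", (unsigned)(sizeof(void *) * 8));
        printf("addressable bytes: %.0f (about %.0f GB)\n",
               addressable, addressable / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }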

AMD and Intel have, however, produced processors that offer 64-bit memory capabilities on top of the 32-bit x86 processing architecture.

Intel's offerings include a pure 64-bit processor in Itanium and processors with 64-bit extensions in the form of any chips with an EM64T (Extended Memory 64-bit Technology) tag.

From AMD, the offering is Opteron, which, apart from giving customers access to a 64-bit memory architecture, has a couple of other tricks to speed up performance even more.

Starting in the Intel camp, Frans Pieterse, channel applications engineer, says introducing 64-bit memory technology can mean a huge performance increase, but does not guarantee one. "Factors like the awareness of the BIOS, O/S, drivers, and applications to the existence of EM64T are critical.

"Intel has made tools available that end-users and developers can use to maximise performance by 'porting` to this new architecture. With the right coding modifications, we`ve seen anything from a 9% to a 30% increase in performance with EM64T. Besides the larger memory space, performance can also be gleaned from tweaking the applications and operating systems to make use of technologies prevalent on today`s new Intel processors like enhanced speedstep (power saving) and demand-based switching," he says.

Intel has not abandoned its plans for Itanium, though. "Itanium is gaining good momentum in the high-end, back-end server space at the moment, yet Intel sees EM64T and Itanium coming together in 2007. By then, there will be no price delta between the two technologies and existing mainstream 32-bit customers will be able to make the switch cost-effectively," says Pieterse.

AMD's tack

Jan Guetter, PR manager for EMEA at AMD, says the reason 64-bit computing has become big news in the market is that there's now a trustworthy "transitional" approach available to customers. "With massive investments in x86 architectures, customers have been looking for a natural progression to 64-bit that doesn't use a proprietary instruction set," says Guetter.

While AMD-64 primarily offers customers access to a larger memory space, AMD's edge over the competition, in Guetter's opinion, is that its architecture removes the bottlenecks traditionally prevalent in the x86 architecture.

"The first of these is the memory controller, which is now resident on the processor itself. The second is the front side bus (FSB), which has been replaced with hyper transport. AMD calls the combination of these two features its Direct Connect architecture, since it directly connects all of the components within a system and increases the internal bandwidth to remove bottlenecks," Guetter says.

"Direct Connect is our differentiator, since it`s unique to AMD and the competition`s offerings still need to utilise the FSB, which increases latency and slows down other performance aspects," he says. "There is only so much data an FSB can accommodate."

Guetter says there are also benefits to using AMD's approach when scaling up. The Direct Connect architecture ensures that each time a processor is added to the system, a dedicated high-speed channel exists between processors and each processor's memory bandwidth is added to the system.
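
A rough, illustrative way to see that scaling claim (the bandwidth figures in this sketch are assumed for the sake of the example, not AMD specifications): a shared front side bus delivers the same peak bandwidth no matter how many processors contend for it, while per-socket memory controllers add bandwidth with every socket.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed, illustrative peak figures only. */
        const double shared_fsb_gbs = 6.4;  /* one bus shared by all CPUs    */
        const double per_socket_gbs = 6.4;  /* one memory controller per CPU */

        for (int sockets = 1; sockets <= 4; sockets++)
            printf("%d socket(s): shared FSB %.1f GB/s, "
                   "per-socket controllers %.1f GB/s\n",
                   sockets, shared_fsb_gbs, per_socket_gbs * sockets);
        return 0;
    }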

Guetter agrees that performance is closely tied to whether the application and operating system are aware of 64-bit technology. "Generally, with a 32-bit OS and 32-bit applications, the performance is not any different. The good news is that with AMD-64 customers still have the benefits of Direct Connect to draw on.

"And if a customer is using a 64-bit O/S and 32-bit applications, the memory per application is increased to 4GB per application. With a 32-bit O/S, a 32-bit application could only use a maximum of 2GB of memory at any one time."

Guetter says another reason the common architecture play is so important is that server virtualisation (building a proverbial "single computing resource" through multiple physical computers) requires a common hardware infrastructure, but a heterogeneous software infrastructure. In a nutshell, it needs multiple software types running on a common hardware platform.

"While virtualisation is available today, it has always been difficult to distribute workloads effectively and have the right level of performance available. With Opteron, AMD can do this cost-effectively - both VM-Ware and XENSource (pieces of software that allow multiple operating systems to be run on a single hardware resource simultaneously) are certified for use with AMD-64. We`re planning to design in new functionality for Opteron (called Pacifica) that makes this virtualisation even simpler and more effective," he concludes.

Porting needed

The analyst's view of processors

AMD and Intel`s introduction of 64-bit extensions to x86 processors brings new choices to the entry-level and midrange server markets. However, enterprises must not only weigh the processor decision, but also the choice of operating system (OS) and applications.
There are therefore five categories of entry-level and midrange processor technology:
1. x86 processors running a 32-bit OS. This represents a clear majority of high-volume, industry-standard servers installed today, and ranges from one-way Pentium servers to eight-way Xeon servers, running predominantly Linux or Windows Server. Over five years, many of these servers will move into categories 2 and 3, as 64-bit extensions become mainstream in x86 CPUs. Category 1 currently addresses most high-volume server workloads, but ERP and groupware, which are increasingly shared memory-intensive, will gradually outgrow this category.
2. x86 CPUs with 64-bit extensions running a 32-bit OS. This new category is driven by AMD's Opteron and Intel's 64-bit extended Xeon processors. They feature enhancements including 64-bit addressing, but can also run 32-bit OSs, which dominate the high-volume market. However, when doing so, 64-bit addressing extensions are not available. Still, we expect performance advantages in 32-bit mode over non-extended x86 CPUs.
Two-way and four-way configurations will represent the bulk of this market. This server category satisfies many HPC workloads and commercial 32-bit workloads that exceed the capacity of standard four-way systems.
3. x86 CPUs with 64-bit extensions running a 64-bit OS. Another new category driven by the abovementioned processors, but use of a 64-bit OS (Linux, Solaris, or Windows) renders addressing extensions fully functional. Applications port to a flat 64-bit addressing space with improvements over 32-bit addressing space, and 32-bit x86 programs have efficient use of up to 4GB of memory. This category also provides strong performance for 32-bit x86 applications. Expect two- and four-way configurations to dominate this market and 90% of the high-volume server market to move in this direction by year-end 2008 (0.7 probability). This category addresses HPC, trading applications, memory-intensive applications, and low-end RISC Unix offloads to Linux.
4. Itanium processors running a 64-bit OS. Intel's Itanium competes with RISC processors and is not a short-term replacement for x86 architecture processors. So Itanium will only run a 64-bit OS (normally Linux or Windows) and is not as efficient at running 32-bit x86 applications as category 3 servers. However, Itanium has stronger reliability, availability and serviceability. We still see Itanium servers as a strong competitor to RISC servers and expect Itanium to be most successful in four-way configurations and up. This category addresses database serving, high-end HPC demanding strong floating point performance, data warehousing, fault tolerant applications, high-end ERP and other OLTP applications.
5. RISC processors running a 64-bit OS. Both IBM's Power and Sun's Sparc processors fall into this category, plus several legacy architectures like PA-RISC, Alpha and MIPS. While most new RISC business is Unix-oriented, there are also users of proprietary OS environments like OpenVMS, OS/400 and NSK. Many of these will have to migrate within five years. In most cases, they run a 64-bit OS (eg RISC Unix or a proprietary OS) and also offer efficient running of 32-bit applications spawned by earlier generations of the processors. The RISC market is under attack from x86 servers at the low-end, and from Itanium servers at the high-end. As with Itanium, RISC will be most successful four-way and up. The server category satisfies existing high-end Unix workloads, database serving, ERP, data warehousing, high-end OLTP, and high-end HPC applications.
Over time we see 32-bit processing being phased out in the server space and see strong traction for all four forms of 64-bit processor technologies.

Ruthven says IBM is already there on the virtualisation front, touting its Power 5 processor (a true 64-bit processor) and its Open Power, designed exclusively for use with Linux. "One of IBM's virtualisation plays centres on something we've developed for Open Power called micro partitioning, which allows for partitioning to take place at a chip level," he says.

Essentially, this allows every Power 5 processor (with a maximum of four processors per Open Power server) to be partitioned into tenths and thus to run 10 independent instances of Linux per processor in the server. "This means customers can potentially have 40 independent Linux servers running on a single server and, on top of that, balance the load between them and allocate more processing power on the fly where required. The best part, however, is that this takes place at a hardware level and not through software.

"With typical software-based tools the customer still has a single underlying O/S controlling the hardware. The problem with this is that if the O/S fails, the entire server and all other 'virtual servers` on it fail. With 40 separate instances of Linux running directly on the hardware there is no dependency on software, so should one of the servers crash, the other 39 stay up unaffected," Ruthven explains.

He says Open Power is priced so aggressively that IBM believes it has the perfect platform for customers to consolidate, run new applications and exploit 64-bit architecture to its full potential.

But nothing is ever perfect. "The downside is that customers can only run Linux applications that support the Power architecture. So it's not so much to do with Intel and RISC technologies as with the fact that code is compiled differently. Applications must be Linux for Power compatible," he says.

Out of the roughly 6 000 Linux applications IBM has on its software list from the ISV community, 70% are Linux on Intel compatible and only 30% (about 1 000 apps) are Linux on Power compatible.

So IBM needs to rectify the application scenario, which means porting. "The porting process depends on the application," Ruthven says, "and often this simply means a recompile for Linux on Power. In most cases, however, it entails a rewrite of between 5% and 7% of the code."
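
As a hypothetical example of the sort of code that falls into that 5% to 7% (the snippet is invented for illustration, not drawn from IBM's porting data), source that assumes x86 byte ordering compiles cleanly on Power but behaves differently, because Power is big-endian where x86 is little-endian:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* The same source builds on both platforms, but the first byte in
           memory differs: 0x78 on little-endian x86, 0x12 on big-endian
           Power.  Assumptions like this are typical of what a port rewrites. */
        uint32_t value = 0x12345678u;
        uint8_t first_byte;

        memcpy(&first_byte, &value, 1);
        printf("first byte in memory: 0x%02x\n", first_byte);
        return 0;
    }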

At LinuxWorld, IBM announced a solution to ease this process, codenamed "Chiphopper" and officially known as IBM eServer Application Advantage for Linux.

"To a great degree, the codename gives it away. Chiphopper is an offering that gives ISVs access to the tools and resources needed to take applications that are already ported to Linux on Intel, and quickly make the changes needed to ensure compatibility on Linux for Power. Quite importantly, Chiphopper is not software - software tools are part of the solution, but intellectual capital and services make up a big part of the equation," he says.

"Participation in the programme costs nothing and ISVs are provided with access to IBM consultants in our innovation and porting centres worldwide."
