Sidetracked by SDN

Software-defined networking is making headlines, but is it all it's cracked up to be?

By Andy Robb, Technology specialist at Duxbury Networking.
Johannesburg, 10 May 2013

The publicity surrounding SDN - software-defined networking - is ramping up. Already this year, vendors have announced plans to market SDN-enabled infrastructures. And considerable press attention has been given to the belief that SDN is 'the next wave' and the 'next big thing' in networking.

It's not. While SDN might promise network administrators simplicity, ease of use and a host of other benefits, in reality these are not new goals. The networking industry has been working towards achieving them since the late 1990s and early 2000s.

Back then, network consolidation was the key objective. It centred on Ethernet and IP, which replaced a multitude of protocols and data transport technologies such as FDDI and ATM.

Consolidation was necessary because networking infrastructure decisions were being driven by these technologies rather than by the business requirements of end-users. At the time, technology 'evangelists' were promoting their alliances and trying to adapt application demands to suit their offerings. The focus was on technology rather than on the customer.

It couldn't last. The IT industry had to change, and as it became more customer-centric, so networking infrastructures evolved. Components and technology platforms became standardised, leading to easier set-up and development. At the same time, network design was simplified.

Looking for trouble

It's surprising that there are now factions within the networking industry seemingly hell-bent on reintroducing complexity.

Let's put the SDN protagonists under the spotlight. Their philosophy - which centres on the promise of software capable of manipulating an underlying heterogeneous environment of networking, servers and storage - might appear sound at first glance. But can SDN handle how applications and services flow across the network without introducing costly complexity into the equation?

SDN adoption is being driven by the Open Networking Foundation, a body founded to promote SDN standardisation through the adoption of the OpenFlow communications protocol. OpenFlow's inventors consider their creation the enabler of SDN. They believe all network switches, no matter the manufacturer, will in future be managed through OpenFlow.
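
To make that model concrete, here is a minimal sketch - purely for illustration - of how a controller application might push a forwarding rule down to any OpenFlow 1.3-capable switch, written against the open-source Ryu controller framework (one of several OpenFlow controllers). The class name, port numbers and priority are arbitrary assumptions, not taken from any vendor's implementation.

    # Illustrative sketch only: install a forwarding rule on any OpenFlow 1.3
    # switch via the open-source Ryu framework. Ports and priority are
    # arbitrary assumptions, not drawn from any vendor's product.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class SimpleForwarder(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath        # the switch that just connected
            parser = datapath.ofproto_parser
            ofproto = datapath.ofproto

            # Match everything arriving on port 1 and send it out of port 2.
            match = parser.OFPMatch(in_port=1)
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]

            # The controller, not the switch firmware, decides the forwarding
            # behaviour and pushes it down as a standard flow-mod message.
            mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                    match=match, instructions=inst)
            datapath.send_msg(mod)

Run under ryu-manager, the same application would, in principle, programme any switch that speaks standard OpenFlow 1.3 - which is precisely the vendor-neutrality OpenFlow's backers promise, and precisely what the fragmentation described below puts at risk.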

However, many SDN-supporting vendors are not implementing OpenFlow. Some believe it is not mature enough, while others see more benefit in innovating traditional networking features in their switches. Yet others are enhancing OpenFlow with proprietary protocols for functions specific to their offerings.

As switch vendors move towards individual implementations, iterations and understandings of SDN technology and the protocol, the potential for added network complexity and marketplace confusion will increase.

For example, for there to be an effective interface between the technology planes - the hardware data plane and the software control and management planes - a considerable amount of 'intelligence' will be needed in future generations of switches.

As a result, custom ASIC (application-specific integrated circuit) designs will be needed, rendering today's cheaper, off-the-shelf switches redundant, to be replaced with custom-designed alternatives.

Stuck

The proliferation of proprietary technologies will encourage vendor 'lock-in', which will drive prices even higher as customers will no longer have a choice of supplier.

In reality, OpenFlow cannot be relied on to deliver an open, standards-based interface, so users opting for SDN will be forced to choose a specific vendor's solution.

There is a significant danger the IT industry could put itself in a position to relive the WiMax experience. WiMax - Worldwide Interoperability for Microwave Access - is a wireless communications standard designed to deliver last-mile wireless broadband access as an alternative to cable and DSL. It was created by the WiMax Forum, formed in 2001 to promote conformity and interoperability of the standard. Despite the standard's ratification, very few vendors' products were able to interoperate.

Customers were thus locked into a WiMax solution from a single vendor. One of the problems was that the standard took so long to ratify that most vendors had 'pre-standard' solutions, which they were unable to migrate to a point where they were truly interoperable. This is why the technology was never broadly adopted and was, in effect, stillborn.

This scenario is mirrored by what we are seeing today in the telecommunications industry with LTE (long-term evolution), marketed as 4G LTE - a standard for high-speed wireless data communication for mobile phones and data terminals.

In fact, many service providers offering 4G services are already locked into specific vendors and their particular interpretations of LTE. They will not be able to go to market and adopt more cost-effective alternatives should these appear.

Based on these and similar scenarios, it is safe to say a proprietary solution is almost always followed by an increase in costs. This is the inevitable fate of SDN.
