Unlocking the data centre
By Bala Pitchaikani, Vice-President of Product Line Management at Dell Force10 Networks, San Jose, California
Meeting the needs of increasingly demanding data centre applications may require more flexibility than users can gain by following the set recipe of a major networking vendor.
In fact, the largest data centres in the world are built from open solutions, using open protocols and open usage models (such as those defined by consortia like the Open Data Centre (ODC) Alliance), combined in ways that recipe-based configurations can't support.
This points to a need for best-of-breed solutions in the data centre, but many data centre operators are afraid to stray from the prescribed path defined by the largest network vendors.
This not only hinders business agility, but also data centre optimisation and elasticity. By opting for open, standards-based equipment, however, data centre operators can gain flexibility, agility, and elasticity, and, above all, the ability to deliver virtual data centres with self-provisioning capabilities to their customers.
New data centre models are forcing network architects to rethink the way they build data centres. For years, data centre architecture has used a monolithic, chassis-based core with a 2- or 3-layer design, but today new models are emerging.
Today's vendors are promoting the use of a distributed core, or a distributed Clos fabric. Distributed core architecture eliminates the traditional hierarchy of data centre switching elements and in turn provides high-performance, non-blocking communication between compute and storage nodes. Built with the right products, distributed core architecture can yield tremendous performance benefits, while also proving extremely cost-effective and scalable for large-scale deployments.
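The scaling properties of such a fabric are easy to see with some back-of-envelope arithmetic. The sketch below sizes a two-tier leaf-spine Clos and shows how equal-cost multi-path (ECMP) hashing spreads flows across the spines; the port counts and the CRC32 hash are illustrative assumptions, not any vendor's specifications.

```python
# Back-of-envelope sizing for a two-tier (leaf-spine) distributed Clos core.
# Port counts below are illustrative assumptions, not any vendor's specs.
import zlib

def leaf_spine(spine_ports: int, leaf_ports: int, uplinks_per_leaf: int):
    """Return (max_leaves, server_ports, oversubscription) for a leaf-spine Clos.

    Each leaf connects once to every spine, so the spine port count bounds
    the number of leaves; the rest of each leaf's ports face servers.
    """
    max_leaves = spine_ports
    downlinks = leaf_ports - uplinks_per_leaf
    oversubscription = downlinks / uplinks_per_leaf  # 1.0 => non-blocking
    return max_leaves, max_leaves * downlinks, oversubscription

def ecmp_uplink(src_ip, dst_ip, src_port, dst_port, proto, n_uplinks):
    """Pick an uplink by hashing the flow 5-tuple, as ECMP does: each flow
    takes one deterministic path, and flows spread across all spines."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % n_uplinks

# With 64-port spines and leaves, and half of each leaf's ports as uplinks,
# the fabric reaches 64 leaves and 64 * 32 = 2048 non-blocking server ports.
print(leaf_spine(64, 64, 32))
```

Note that the hierarchy never appears in the arithmetic: capacity grows by adding leaves (or spines), not by buying a larger chassis.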
Vendors have taken different approaches to providing distributed core solutions. Most of the large pure-play networking vendors have taken a monolithic approach to a distributed architecture. In other words, the fabric is distributed but it must be managed as a complete entity, and it doesn't interoperate with other vendors' products.
For example, Juniper's QFabric uses its own proprietary fabric protocol, which makes it non-interoperable across vendors and requires the use of its own management solution (QFabric Director). Cisco's FabricPath and Brocade's VDX take similarly closed approaches. In these cases, at best, the vendor has extended standards until they become a locked approach - vendors offer proprietary protocols (FabricPath and QFabric), proprietary operating systems (IOS and Junos), and proprietary silicon (EARL, EARL2, IP2, and I-Chip). Contrast this with an open standards approach, where components can be mixed and matched as the customer requires.
Corporations and IT departments are being asked to follow dual-vendor strategies, and the only way to unlock the data centre is for the customer to have the power to mix and match at least two vendors. Large pure-play networking vendors may have something that looks and feels like a distributed core, but its proprietary nature and/or cost-prohibitive form factors limit customer choice and, as such, prevent them from building an efficient and scalable data centre.
Contrary to popular belief, not all data centres are created equal - in fact, each data centre is unique. This leads to a need for tremendous flexibility when building out the data centre. For example, if a customer has to pick two vendors and one vendor's products aren't performing well, the customer shouldn't have trouble replacing that vendor with a different one. The customer should be able to swap in a different vendor in a day or so; if it takes longer than that, it's a sign of a proprietary system.
Naturally, customers do appreciate reference designs and market-specific examples of how a data centre should be built to handle a specific application. But even here, there are differences between the way a lock-in vendor approaches this (for example, Cisco Validated Designs, CVD v2.0) and the way an open standards vendor such as Dell Force10 approaches it.
The largest vendors talk about distributed cores and fabrics, but the customer must adapt to what the vendor is offering, rather than adapting the vendor's offering to meet their own needs. It's akin to going to a restaurant and being told what to eat rather than being able to choose from the menu.
Open standards-based vendors offer plug-and-play components (or base architectural building blocks) that are like Lego blocks, allowing the customer to assemble them as they see fit. A vendor may create reference architectures (orthogonal to validated designs) based on customer input provided to open consortia (such as the ODC Alliance and OpenStack), reflecting the vendor's and customer's joint understanding of that specific market. In such a case, the customer sees Lego blocks arranged in a specific configuration to suit a specific requirement (for example, delivering a virtualised application). However, the customer can easily and quickly rearrange the basic building blocks and/or the components to customise the design of the data centre.
Customers are being pushed to provide agility and non-stop service. They need visibility, monitoring, correlation, and management capabilities so they can orchestrate the network the way they want. These customers need an open standards fabric that they can flex how and when they want. They also need a fabric that can be managed by open orchestration systems, so they are not forced to purchase an orchestration solution from the network equipment vendor (which locks them into that vendor's proprietary solution).
Having the flexibility to mix and match Lego blocks is critical. This gives vendors the opportunity to pre-assemble Lego blocks into an architecture the customer may want, while maintaining the flexibility to rearrange those blocks to meet specific needs.
Following are the criteria for an unlocked data centre:
* Open architectures. An open systems infrastructure relies on open, standards-based technology (such as Equal-Cost Multi-Path (ECMP) routing, Virtual Routing and Forwarding (VRF)-lite, VLANs, and NIC teaming) for interfaces, interconnect (Remote Direct Memory Access (RDMA), Data Centre Bridging (DCB)), the control plane, and other aspects of network operations. The network thus supports any computing (hypervisors, Edge Virtual Bridging (EVB), vSwitch) or storage (NAS - NFS/CIFS, iSCSI, Fibre Channel) solution that also supports open standards. This approach gives data centre owners unrivalled flexibility in choosing components that suit their specific needs. No two data centres are exactly the same, so by providing this flexibility, an open systems approach meets varied data centre needs as no other approach can.
* Open systems architectures include switches for the core and the top-of-rack. In the core, data centre owners should be able to choose either a traditional hierarchical architecture designed for traditional data centres (using core-aggregation) or a next-generation distributed architecture optimised for fabric deployments. Both architectures must be standards-based and capable of advancing any existing environment with higher performance and lower cost structures.
* Open automation. One of the most common objections to open systems is the inability to manage them as a single entity. Open automation provides standards-based automation (using DHCP, SNMP, NETCONF, or RESTful (Representational State Transfer) APIs) for data centre operations such as bare-metal provisioning, configuration management, and monitoring. Data centre managers can automate other control or monitoring functions with standard scripting, using Perl or Python. Automation is critical for data centres of any size, as it allows operators to dynamically stitch together network, compute, and storage resources, using object-oriented constructs for self-provisioning, multi-tenancy, and so on. For maximum choice, automation shouldn't force users down a single-vendor path. Open automation allows users to automate with their choice of solutions.
* With the adoption of virtualisation, data centres have become more responsive and efficient, but also more complex. IT managers must now manage hundreds to thousands of virtual machines and their associated storage and networks. Data centre infrastructure must be more responsive, quickly adapting to changes in application requirements. Additionally, server, storage, and network infrastructure can no longer be managed as separate silos, but rather as a single, dynamic environment. While large, dedicated installations tend to use single-image servers to maximise computation, virtualisation is enabling capabilities to be offered to broader audiences at lower total expense. Open automation addresses these management challenges using industry standards and common industry technology, allowing IT managers to deploy virtualised environments using best-of-breed technology. Standards such as Edge Virtual Bridging (802.1Qbg) will be instrumental in providing data centres with complete manageability of virtualised resources.
* Open ecosystems. Open ecosystems bring together leading makers of standards-based go-to-market solutions and technology to offer unrivalled flexibility and choice in the selection of best-of-breed solutions for the data centre. Open ecosystems are a critical and necessary element to unlocking the full potential of data centre deployments. Simply put, there is safety in numbers; in other words, the more people trying to solve problems and innovate, the better. Ultimately, choice comes from having the broadest ecosystems.
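The kind of standard scripting the open-automation criterion describes can be very plain. The sketch below renders per-switch configurations from a vendor-neutral template, the configuration-management half of bare-metal provisioning; the hostnames, addresses, and inventory are hypothetical examples, and in practice the rendered text would be delivered over one of the open protocols named above (NETCONF, SNMP, or a RESTful API) rather than printed.

```python
# A minimal sketch of standards-based provisioning automation using only the
# Python standard library. The inventory and template contents are
# illustrative assumptions, not any vendor's actual configuration syntax.
from string import Template

# Per-switch base configuration as a vendor-neutral template; any tool that
# speaks the open protocols (NETCONF, SNMP, REST) can deliver the result.
BASE_CONFIG = Template("""\
hostname $hostname
interface mgmt0
 ip address $mgmt_ip/24
vlan $tenant_vlan
 name tenant-$tenant
""")

# Hypothetical inventory; in a real deployment this would come from
# DHCP-based zero-touch discovery or an orchestration database.
INVENTORY = [
    {"hostname": "leaf-01", "mgmt_ip": "10.0.0.11", "tenant": "acme", "tenant_vlan": 100},
    {"hostname": "leaf-02", "mgmt_ip": "10.0.0.12", "tenant": "acme", "tenant_vlan": 100},
]

def render_configs(inventory):
    """Render one configuration per switch, keyed by hostname."""
    return {sw["hostname"]: BASE_CONFIG.substitute(sw) for sw in inventory}

configs = render_configs(INVENTORY)
print(configs["leaf-01"])
```

Because the template and transport are both open, swapping the switch vendor means changing only the template text, not the automation around it.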
Whether building conventional or distributed core architectures in the data centre, customers need the flexibility to choose the specific compute, storage and networking systems that best fit their needs. While major networking vendors want to enforce a specific design, an open systems approach provides the flexibility to build a data centre that precisely matches customer needs.
Bala Pitchaikani is the Vice-President of Product Line Management at Dell Force10 Networks (San Jose, California). http://www.force10networks.com