
Fibre-Channel -- the basic building block for storage area networking

By Johan du Plessis
Johannesburg, 01 Jun 1998

SCSI as we know it today has been around for almost two decades. It has changed and evolved from a humble, though very fast for its day, asynchronous transfer rate of 2.5 MB/s to the blistering 80 MB/s of Ultra-2 SCSI. Most people agree that it is a complicated standard, as it depends on connector and cabling specifications that change with every revision. It also seems that the names of these connectors vary according to who you talk to.

Recently a side branch was added to the evolution of SCSI: serial SCSI. The main reason is that parallel SCSI is pushing the performance limits of what the physical components (cables and connectors) are capable of. Serial transmissions are by their very nature more robust, and with the silicon available today there is no longer a performance penalty for going serial rather than parallel. Serial SCSI started with two formats: SSA and FC-AL. SSA is IBM's standard, while FC-AL is developed by various manufacturers (HP, Intel, Adaptec, Seagate, Gadzoox, Raidtec and many others) according to standards defined by the FC-AL community.

Fibre Channel Arbitrated Loop is not as complex as it may sound, as it is structured as a set of hierarchical functions, with FC-0 as the hardware interface layer and FC-4 as the high-level protocol layer. The arbitrated loop has various control functions built into it, and transmissions are divided into different classes of service:

Class 1: Acknowledged Connection Service
Class 2: Acknowledged Connectionless Service
Class 3: Unacknowledged Connectionless Service
Class 4: Fractional Bandwidth Connection Oriented Service
Class 6: Simplex Connection Service

Class 1 provides true connection service. The result is circuit-switched, dedicated-bandwidth connections. Fibre Channel's advantage is connection setup and tear-down measured in microseconds. An end-to-end path between the communicating devices is established through the switch.
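As a rough illustration (not an implementation of any real Fibre Channel stack), the Class 1 life cycle of connect, transfer and disconnect can be modelled as a tiny state machine. All class and method names below are invented for the sketch:

```python
# Toy model of Fibre Channel Class 1 service: a dedicated, circuit-switched
# path is set up through the switch, used exclusively, then torn down.
# Names and behaviour are simplified for illustration only.

class ToySwitch:
    def __init__(self):
        self.dedicated = {}          # source port -> destination port

    def connect(self, src, dst):
        # Class 1 setup: reserve an end-to-end path; the link is now dedicated.
        if src in self.dedicated:
            raise RuntimeError(f"{src} already holds a dedicated connection")
        self.dedicated[src] = dst

    def send(self, src, frame):
        # Frames travel only over the established circuit, in order and at
        # full bandwidth; an acknowledgement confirms delivery.
        dst = self.dedicated[src]    # KeyError if no circuit exists
        return f"ACK from {dst} for {frame!r}"

    def disconnect(self, src):
        # Tear-down frees the path for other traffic.
        del self.dedicated[src]

switch = ToySwitch()
switch.connect("N_Port_A", "N_Port_B")
print(switch.send("N_Port_A", "block 0"))   # acknowledged delivery
switch.disconnect("N_Port_A")
```

The point of the sketch is simply that the only per-connection cost sits in `connect` and `disconnect`; once the circuit exists, every `send` rides the reserved path.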
Since the only overhead for Class 1 is connection setup and tear-down, it is an efficient service for large data exchanges. Fibre Channel Class 1 service provides an acknowledgement of receipt for guaranteed delivery. Class 1 also provides full bandwidth and guaranteed delivery for applications like image transfer and storage backup and recovery. Some applications use the guaranteed delivery feature to move data reliably and quickly without the overhead of a network protocol stack.

Class 2 is a connectionless service, independently switching each frame and providing guaranteed delivery with an acknowledgement of receipt. As with traditional packet-switched (connectionless) systems, the path between two interconnected devices is not dedicated. The switch multiplexes traffic from N_Ports and NL_Ports without dedicating a path through the switch. Class 2 credit-based flow control eliminates the congestion found in many connectionless networks. If no buffer space is available, a "Busy" is sent to the originating N_Port, which then re-sends the message. This way, no data is arbitrarily discarded just because the switch is busy at the time. Typical Class 2 frame latency is less than one microsecond, making it ideal for shorter data transfers like those in most business applications.

Class 3 is a connectionless service similar to Class 2, but no confirmation of receipt is given. This unacknowledged transfer is used for multicasts and broadcasts on networks and for the storage interface on a Fibre Channel loop. The loop establishes a logical point-to-point connection and reliably moves data to and from storage. Class 3 Arbitrated Loop transfers are also used for IP networks. Some applications use logical point-to-point connections without a network layer protocol, taking advantage of Fibre Channel's reliable data delivery.

Class 4 is a fractional bandwidth, connection-oriented service.
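Before looking at Class 4 in detail, the Class 2 busy/retry behaviour described above can be sketched in a few lines. The buffer size, names and retry logic here are invented for illustration, not taken from any FC implementation:

```python
from collections import deque

# Toy sketch of Class 2 credit-based flow control: when the switch port has
# no buffer space it answers "Busy" instead of dropping the frame, and the
# originating N_Port simply re-sends.

class ToySwitchPort:
    def __init__(self, buffers=2):
        self.queue = deque()
        self.buffers = buffers       # available buffer credits

    def receive(self, frame):
        if len(self.queue) >= self.buffers:
            return "Busy"            # no credit: the sender must retry
        self.queue.append(frame)
        return "ACK"                 # Class 2 acknowledges each frame

    def drain(self):
        self.queue.popleft()         # switch forwards a frame, freeing a buffer

def send_with_retry(port, frame, max_tries=5):
    # The N_Port retries on "Busy"; nothing is arbitrarily discarded.
    for _ in range(max_tries):
        if port.receive(frame) == "ACK":
            return True
        port.drain()                 # stand-in for waiting until space frees up
    return False

port = ToySwitchPort(buffers=1)
assert send_with_retry(port, "frame-1")   # accepted on the first attempt
assert send_with_retry(port, "frame-2")   # first attempt gets "Busy", retry succeeds
```

The design point is that back-pressure replaces packet loss: a congested switch defers the sender rather than silently discarding frames, which is what distinguishes Class 2 from most connectionless networks of the time.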
Virtual connections are established with bandwidth reservation for a predictable quality of service (QoS). A Class 4 connection is bi-directional, with one virtual circuit (VC) operational in each direction, and supports a different set of QoS parameters for each VC. These QoS parameters include guaranteed bandwidth and latency. A quality of service facilitator (QoSF) function within the switch manages and maintains the negotiated quality of service on each VC. A node may reserve up to 256 concurrent Class 4 connections. Setup of the QoS parameters and setup of the connection itself are separate functions of Class 4.

When a Class 4 connection is active, the switch paces frames from the source node to the destination node. Pacing is the mechanism the switch uses to regulate available bandwidth per VC. This level of control permits congestion management for a switch and guarantees access to the destination node. The switch multiplexes frames belonging to different VCs between the same or different node pairs. Class 4 service provides in-order delivery of frames, and its flow control is end-to-end with guaranteed delivery. Class 4 is ideal for time-critical applications like video.

Class 6 is similar to Class 1, providing simplex connection service. However, Class 6 also provides multicast and pre-emption. Class 6 is ideal for video broadcast applications and real-time systems that move large quantities of data.

Fibre Channel has an optional mode called Intermix. Intermix allows the reservation of full Fibre Channel bandwidth for a dedicated (Class 1) connection, but also allows connectionless traffic within the switch to share the link during idle Class 1 transmissions. An ideal application for Intermix is linking multiple large file transfers during system backup. During a Class 1 file transfer, a Class 2 or 3 message can be sent to the server to set up the next transfer.
Upon completion of one transfer, the next will immediately start, increasing efficiency.

Basically, Fibre Channel is a fast, flexible and reliable way to interconnect storage and servers, all of which allows us to start building storage area networks. The SAN is a next-generation storage architecture that is a practical reality right now. What a SAN offers is the facility to create a separate network for all storage components, instead of having storage attached to a server, as was traditionally the case with parallel SCSI. This development allows us to do things with storage that IT and storage managers could previously only dream of.

Storage can be located where it is convenient; it no longer has to reside right next to the server. Users can have a server room and a separate data storage room or rooms. If fibre optic cabling is used, an offsite mirror of valuable data can be set up that is easily implemented and managed. With 126 nodes available on a loop, it is quick and easy to expand capacity: just connect another disk subsystem to the loop (1 terabyte on one controller is possible using 9 GB disks). If an intelligent FC-AL hub or switch is used, this can be done live. Because servers and clients access data from the SAN rather than from an individual server, server clustering software can provide server redundancy. There are many other components that go into a SAN, but the basic foundation is FC-AL. A storage solution based on FC-AL implemented today can be grown into an enterprise SAN over the next few years.
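The capacity figure quoted above is simple arithmetic. A quick check, assuming decimal gigabytes and the 9 GB drives typical of the era:

```python
# FC-AL addressing allows 126 NL_Port nodes on one loop. With 9 GB drives,
# one terabyte (1000 GB, decimal) on a single controller needs:
drive_gb = 9
drives_needed = -(-1000 // drive_gb)   # ceiling of 1000 / 9

print(drives_needed)                   # 112 drives
assert drives_needed == 112
assert drives_needed <= 126            # comfortably within one loop's 126 addresses
```

So roughly 112 of the 126 loop addresses already carry a terabyte, which is why the article can claim the figure for a single controller.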


Editorial contacts

Brian Streak
GBS Marketing
(011) 781-2126
gbsbrian@global.co.za
Johan du Plessis
Storgate Africa
(011) 234-0400
johan@seagate.co.za