Intel's support of the peer-to-peer networking architecture caused a stir at Intel's developer forum in San Jose, California last week. Although the vendor qualified its statements with demonstrations and showcases of the technology at work, it left a wake of confusion as to the definition of peer-to-peer networking, and left a few attendees wondering whether Intel has forgotten the lessons learnt from years of peer-to-peer security and management nightmares.
According to the MCSE: Networking Essentials study guide, "peer networks are defined by a lack of central control over the network. There are no servers in peer networks; users simply share disk space and resources, such as printers and faxes, as they see fit".
The guide notes that disadvantages of peer-to-peer networking include weak and intrusive security; lack of central management, which makes large peer networks hard to work with; additional load on computers; a requirement for users to administer their own computers; no central point of storage for file archiving; and the inability of peers to handle as many network connections as a server.
In other words, peer-to-peer networking may work well in small organisations, but to use it in the enterprise space, as is being promoted by Intel, is simply not practical.
Rash decision
My first instinct was to dismiss Intel's peer strategy entirely. After four days of Intel's conference, however, I began to understand that the confusion stems not so much from a rash architectural decision as from an incorrect definition of peer-to-peer networking. Intel's understanding of peer networking differs greatly from the IT community's conventional definition. When Intel says "peer-to-peer", it should say "distributed computing".
Suddenly the strategy starts to make sense. Distributed computing models have proved useful in military, academic and commercial computing fields. The concept is that processing tasks or data are spread around an organisation, or a number of organisations, through a broker, thus utilising unused processing power and storage capacity to their fullest extent.
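To make the broker concept concrete, here is a minimal, hypothetical sketch in Python of a broker handing work units to whichever peers have spare cycles. All names and the stand-in workload are my own illustrative assumptions, not anything Intel has published.

```python
import queue
import threading

class Broker:
    """Hypothetical broker: queues work units and hands them to idle peers."""

    def __init__(self, work_units):
        self.tasks = queue.Queue()
        for unit in work_units:
            self.tasks.put(unit)
        self.results = []
        self.lock = threading.Lock()

    def run_peer(self, peer_id):
        # Each peer pulls tasks until the queue is drained, contributing
        # its otherwise idle processing power to the shared job.
        while True:
            try:
                unit = self.tasks.get_nowait()
            except queue.Empty:
                return
            result = sum(x * x for x in unit)  # stand-in for real work
            with self.lock:
                self.results.append((peer_id, result))

broker = Broker(work_units=[range(1000), range(2000), range(3000)])
peers = [threading.Thread(target=broker.run_peer, args=(i,)) for i in range(3)]
for p in peers:
    p.start()
for p in peers:
    p.join()
print(broker.results)
```

Here threads stand in for separate machines; in a real deployment the queue would sit behind a network protocol, which is precisely where the security and administration questions below arise.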
Intel's prime example is Napster, the free MP3 file-sharing system. Another distributed architecture in use today is SETI@home, a system that harnesses unused MIPS in computers around the world to scan the skies for extraterrestrial signals. Both have proved unmitigated successes, allowing previously untapped resources to be used for the common good of both the individual (in Napster's case) and the organisation (SETI).
The concept of boxing and selling this architecture is quite brilliant, but Intel will have a few hurdles to leap before it can convince many IT managers to implement such a system in their environments.
Insecure
First and foremost is security. Distributed architecture is by its very nature insecure. You are forced to trust multiple machines, often not even in your organisation, with data and processing tasks that may be vital to your company's business. The trust relationships set up in such an environment are tricky to control and monitor, especially since they cannot be administered from a central location.
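One common defence in systems of this kind is redundant dispatch: hand the same work unit to several peers and accept a result only when a majority agree, so no single untrusted machine can quietly poison the output. SETI@home validates results with this sort of redundancy. A hedged sketch, with all names illustrative:

```python
from collections import Counter

def accept_result(peer_results, quorum=2):
    """peer_results: list of (peer_id, result) tuples for one work unit.
    Returns the majority result, or raises if no quorum is reached."""
    tally = Counter(result for _, result in peer_results)
    result, votes = tally.most_common(1)[0]
    if votes >= quorum:
        return result
    raise ValueError("no quorum: peers disagree, work unit must be re-run")

# Three peers report on the same unit; one is faulty or malicious.
print(accept_result([("peer-a", 42), ("peer-b", 42), ("peer-c", 99)]))  # -> 42
```

The cost, of course, is that every work unit is computed several times over, which eats into the very spare capacity the architecture is meant to harvest.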
Administration is the next headache. If an important part of a database is stored on a machine that goes down on the other side of the globe, that database will be incomplete, and so corrupted. If one machine processing data is a 286 and requires two days to perform a task that a mainframe handles in two minutes, correlation of results can be thrown off schedule. The adage "a chain is only as strong as its weakest link" is particularly true in a distributed computing environment.
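The straggler problem at least admits a simple mitigation: give each work unit a deadline and reassign it when a peer misses it. A rough sketch, assuming a per-task timeout chosen purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def straggler(unit):
    time.sleep(3)       # simulates the underpowered 286-class machine
    return unit * 2

def healthy_peer(unit):
    return unit * 2

pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(straggler, 21)
try:
    result = future.result(timeout=1)     # don't wait forever on one peer
except TimeoutError:
    # Deadline missed: reassign the work unit to a healthier machine.
    result = pool.submit(healthy_peer, 21).result()
print(result)                             # -> 42, delivered on schedule
pool.shutdown(wait=False)                 # abandon the straggler's thread
```

This keeps the job on schedule, but it is another layer of bookkeeping that a single server simply never needs.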
Despite the difficulties involved in distributed architectures, they are certainly not impossible to implement. Intel uses a distributed architecture internally to run calculations that would otherwise take a great amount of time. Boeing has also used a similar system to tap into supercomputers around the US for calculations regarding wing design.
Intel admits that its "peer-to-peer" architecture will remain complementary to the prevalent client/server design. However, I doubt that we will see Intel's dark horse rolled out in any great quantities while client/server still fills the needs of most organisations.