Standards

Modern networking, including the Internet and related technologies, is a success story of open standards. Around 1987, when IBM launched the PS/2 (Personal System/2), a war was being waged for the rapidly growing small-business computer market, fueled by significant government purchasing. IBM was poised to dominate the PC space with aggressive tactics for preventing competition and a new bus architecture called Micro Channel Architecture (MCA). MCA was a closed system: any company that wanted to use it had to pay royalties, and a few manufacturers did. In response, the “Gang of Nine” (a group of manufacturers led by Compaq) banded together and created an extension of the ISA architecture called EISA. This was a more open architecture that allowed for significant competition. Although IBM arguably had the superior technology, operating system, and business applications, it could not compete with an open marketplace that drove prices down and produced a furious pace of innovation. So the battle raged, and by 1992 IBM had conceded and begun supporting EISA as well. Out of this rose the “Wintel” partnership of Microsoft and Intel, and flexible, open platforms became the tradition in the business (leave it to Apple, Inc. to once again challenge that notion).

The rapidly growing peripheral market created by the PC war, together with the new EISA motherboard standard, gave rise to a rapidly emerging technology known as client/server computing. This technology challenged IBM even further by threatening to move shared resources, common in a mainframe scenario, to a PC environment. New companies emerged offering communications methods, protocols, and software to accommodate client/server, or PC networking, environments. The rising star of this movement was a small company based in Provo, Utah, that became the dominant PC networking company with its product NetWare. By 1992, most companies entertaining a PC network were using Novell NetWare. Around this time a greater requirement emerged for a standards-based approach to network communications. As more and more applications made their way to the PC world, the need to define a model for communications grew. There were many reasons for this; probably the most significant was the U.S. Government’s desire to purchase PC networks on a major scale. Along with this came the requirement to build a standard that could grow an industry.

Closed systems, like IBM’s, traditionally provided everything a business needed for an application: the mainframe, terminals, cabling, switching, software, and, in many cases, even the operators of the system. The emergence of an open system, one that a company could install and integrate with its PC purchases to produce a centralized server capable of handling common business applications, was significant. Out of this climate came the adoption of a standard for an open system of communications protocols. Although many organizations author, influence, or steer the networking and communications industry, the most significant standards body is the International Organization for Standardization (ISO) and its participating member countries (represented by bodies such as ANSI, the American National Standards Institute). This body eventually produced what is known as the Open Systems Interconnection (OSI) model, which for the most part still provides a standard for communications in open systems today.

Although the OSI model is not a practical implementation (ISO actually did produce a protocol suite, which almost no one used), it is a concept that is for the most part adhered to in the open systems world. Although each manufacturer and software developer has its own interpretation, the concepts and terminology used in this model are universal. One of the greatest advantages of this model is the concept of layered communications. When the responsibilities of network communications are broken into layers, the effort required to bring products to market is significantly reduced. For example, if you were developing a software package that would communicate across an open system network, imagine the burden if you had to test every possible networking peripheral ever manufactured in order to guarantee that your product would work. Instead, the OSI model separates the job of interfacing with hardware into lower-layer protocols, so the application developer never has to deal with the hardware directly.
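You can see this separation of responsibilities in any modern socket API. In the hypothetical sketch below (Python, using a loopback echo exchange), the application code hands bytes to the transport layer and never touches network cards, drivers, or cabling; the lower layers handle delivery:

```python
# A minimal loopback echo exchange using Python's socket API.
# Illustration only: the application hands bytes to TCP (transport
# layer) and lets the lower layers handle the actual delivery.
import socket
import threading

def echo_server(sock):
    # Accept one connection and echo back whatever arrives.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, network")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # hello, network
```

Notice that nothing in this program depends on whether the traffic crosses Ethernet, Wi-Fi, or (as here) the loopback interface; that is exactly the burden the layered model lifts from the application developer.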

Also important is the model itself, well visualized in the Wikipedia article on the OSI model. In network communication, each layer at a client “communicates” directly with the corresponding layer at the server. This is done by adding information to the data being sent across the network. Each layer adds its own information as the data passes down through the layers, until eventually the application data is transferred via the network media. When the data arrives at its destination (the server), the information added at each layer is stripped away and the data is passed up to the next layer.
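This add-on-the-way-down, strip-on-the-way-up process is called encapsulation, and it can be sketched in a few lines of Python. This is a toy model, not a real protocol stack: the layer names follow the OSI model, but the bracketed “headers” are invented for illustration.

```python
# Toy model of OSI-style encapsulation: each layer prepends its own
# header on the way down and strips it on the way up. Header contents
# are invented for illustration; the physical layer is the "wire".
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link"]

def send_down(payload):
    """Wrap the payload in one header per layer, top to bottom."""
    frame = payload
    for layer in LAYERS:
        frame = f"[{layer}]{frame}"
    return frame  # what goes onto the physical medium

def receive_up(frame):
    """Strip headers in reverse order on the receiving side."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]"
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

wire = send_down("GET /index.html")
print(wire)              # outermost header is the data link layer's
print(receive_up(wire))  # GET /index.html
```

The point of the sketch is the symmetry: the receiver’s data link layer reads only the data link header, the network layer reads only the network header, and so on, which is what lets each layer “communicate” with its peer without knowing anything about the layers above or below it.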

As a necessary ritual in any networking student’s life, you must memorize the OSI layers, the responsibility of each layer, and sample protocols that work at each layer; know which devices in a network function at a given layer; and be able to understand the OSI terminology that is used frequently in discussions. The article referenced above covers most of those concepts. In my networking fundamentals class I will provide a walk-through of various Internet protocols and explain how each accomplishes the functionality of a particular OSI layer.
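As a study aid, the seven layers with sample protocols and devices can be laid out in a small table. The placements below follow common textbook conventions (some, like TLS at the presentation layer, are debated, so treat this as a mnemonic rather than gospel):

```python
# Quick-reference mapping: layer number -> (name, sample protocols,
# typical device). Placements follow common textbook conventions.
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "FTP", "SMTP"],    "gateway"),
    6: ("Presentation", ["TLS", "MIME"],            "gateway"),
    5: ("Session",      ["NetBIOS", "RPC"],         "gateway"),
    4: ("Transport",    ["TCP", "UDP"],             "firewall"),
    3: ("Network",      ["IP", "ICMP"],             "router"),
    2: ("Data Link",    ["Ethernet", "PPP"],        "switch"),
    1: ("Physical",     ["10BASE-T", "RS-232"],     "hub"),
}

# Print the stack top-down, the way it is usually drawn.
for num in sorted(OSI_LAYERS, reverse=True):
    name, protocols, device = OSI_LAYERS[num]
    print(f"{num}: {name:<12} e.g. {', '.join(protocols)} ({device})")
```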
