We begin by explaining the basics of information network media, topology, and protocols as they apply to most networks at this time [Cotton86].
Typical media for networks include optical fibers, shielded and unshielded twisted wire pairs, coaxial cables, electromagnetic radio emanations, and ultrasound. These currently have approximate cost and performance characteristics as specified in the following table, but the extremely rapid changes in technology currently underway will undoubtedly make these figures invalid in the near future (they have changed twice during the last year). Bandwidth is given in bits per second (BPS), initial and ongoing costs are rated from 0 to 5, 0 being the cheapest for a typical installation.
Media             Cost   Bandwidth   Ongoing
---------------------------------------------
fiber optics       2       50M BPS      2
shielded pair      1       10M BPS      1
unshielded pair    0        1M BPS      0
coaxial cable      3      400M BPS      5
radio              5     1000M BPS      4
ultrasound         4       30K BPS      3
Fiber optics are difficult for an attacker to tap (although by no means impossible) compared to the other technologies; radio is more difficult to destroy but more susceptible to noise; unshielded wires have noise problems; and ultrasound is very susceptible to noise. Current computer-to-computer communications rarely take advantage of performance over 10M BPS, both because hardware (disks, processors, etc.) that uses this much bandwidth is expensive, and because applications requiring this bandwidth on an ongoing basis are rare. For voice data, each channel requires about 40Kbaud, and for video data about 10MHz is required per signal.
Networks are typically connected in a 'star', 'tree', 'bus', 'ring', or 'mesh' topology, or combinations thereof. When combinations are used, they often require special 'gateway' machines to allow 'internet' communications.
A star network consists of a central node to which all other nodes are connected. Because all communications are centralized, the network fails when the star node fails, and policies which attempt to control these networks typically depend heavily on the protection provided by the star node. Redundant star nodes are often used to increase availability, and in some cases graceful degradation may be attained in this manner.
A tree network typically has sets of nodes connected to a common 'parent', with sets of parents themselves connected to common parents, and so on. Such networks typically have a 'root' node through which the highest level parents communicate. Responsibility is distributed within each branch of the tree, and routing of information is quite straightforward. Parents closer to the root have greater responsibility and thus potentially more severe protection effects.
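The straightforward routing in a tree can be sketched as follows: each node need only know its parent, and a route is found by climbing from both endpoints to their lowest common ancestor. The node names and topology below are illustrative assumptions, not drawn from any particular network.

```python
# Sketch of routing in a tree network: each node knows only its parent,
# so a route climbs from each endpoint to the lowest common ancestor.
# The topology and node names here are illustrative.
parent = {"a1": "p1", "a2": "p1", "b1": "p2", "p1": "root", "p2": "root"}

def path_to_root(node):
    """Chain of ancestors from node up to (and including) the root."""
    chain = [node]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def route(src, dst):
    """Route src -> dst via the lowest common ancestor."""
    up = path_to_root(src)
    down = path_to_root(dst)
    common = next(n for n in up if n in down)
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

print(route("a1", "b1"))  # climbs to the root, then descends to b1
```

Note that the route between two nodes under the same parent never reaches the root, which is why responsibility is naturally partitioned by branch.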
A bus network consists of a common medium to which all nodes are connected. Communications on these networks require either a means of allocating the bus to a given node at a given time, or a method to allow detection of simultaneous use (called a 'collision') and rescheduling of communications for a retry. A bus allows all parties to observe or affect all communications, which means the entire network depends on proper behavior by all parties. Multiple busses are typically used to cure this problem (e.g. each bus only linking subsets of all nodes, no node on all busses, no bus on all nodes, at least two links between each pair of nodes with no other node on both busses, etc.).
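The collision-and-retry discipline can be sketched with a toy time-slotted model: in each slot a node transmits with some probability, a lone sender succeeds, and simultaneous senders collide and must retry later. The node count and transmit probability are illustrative assumptions.

```python
import random

# Toy model of a shared bus with collision detection and rescheduling:
# in each time slot each node transmits with probability p; if exactly
# one node sends, the frame gets through; if two or more send at once,
# a collision is detected and all of them retry in a later slot.
def simulate_bus(nodes=4, slots=200, p=0.2, seed=1):
    rng = random.Random(seed)
    successes = collisions = 0
    for _ in range(slots):
        senders = [n for n in range(nodes) if rng.random() < p]
        if len(senders) == 1:
            successes += 1      # exactly one sender: transmission succeeds
        elif len(senders) > 1:
            collisions += 1     # simultaneous use: everyone backs off
    return successes, collisions

s, c = simulate_bus()
print(s, c)
```

Even this crude model shows the tradeoff: raising the per-slot transmit probability increases contention, so the collision rate, not the raw bandwidth, bounds useful throughput on a shared bus.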
A ring network typically consists of a set of processors through which each communication must pass. Messages are sent from processor to processor until they return to their original source, at which point they are eliminated or replaced with a new message. Ring networks typically send data in only one direction, although bidirectional rings also exist. In a ring network, every node must handle all data, and this causes substantial reliability and protection problems.
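The forwarding rule above can be sketched directly: a message hops to the next node until it arrives back at its source, and the hop trace shows that every node on the ring handles it. The ring size and node numbering are illustrative.

```python
# Sketch of one-directional ring forwarding: a message hops from node
# to node and is removed when it returns to its source. The returned
# trace shows that every node on the ring handles the message.
def ring_deliver(num_nodes, src, dst):
    """Forward a message around the ring; return the nodes it visits."""
    visited = []
    node = src
    while True:
        node = (node + 1) % num_nodes   # unidirectional: always "next"
        visited.append(node)
        if node == src:
            break                        # back at the source: remove it
    return visited, dst in visited

trace, delivered = ring_deliver(5, 1, 4)
print(trace, delivered)
```

The trace makes the protection problem concrete: the destination is reached, but so is every other node, and the failure of any single node on the path breaks delivery for the whole ring.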
A mesh network is a network with arbitrary connections between nodes. There may be multiple paths between nodes, routing decisions for end to end transmission are typically required, and a high degree of fault tolerance is typically possible.
Protocols are sets of rules that specify how parties (e.g. nodes in a network) interact. Protocols were historically designed through painstaking mental effort, but as networks and applications became more diverse, protocols became more complex, and problems like ambiguity and incompleteness caused major failures [Bochmann77]. Testing protocols for compliance with specifications also became much more difficult with increased complexity, and is now a major problem. Formal methods for design, analysis, and testing of protocols became necessary to resolve these problems.
Research has been carried out for more than a decade on formal protocol specification, design, implementation, and testing [Sunshine79] [Merlin79] [Sabnani85]. General formalisms consisting of program segments, transition models, and hybrid combinations of these provide for clear and concise descriptions of protocols [Hailpern83]. Several types of finite state machines (FSMs) have been investigated as formal models [Danthine82], but the number of states required quickly becomes excessive. A hybrid model combining FSMs and abstract program features has been shown to effectively limit the number of states [Bochmann77] [Sundstrom77]. State transitions are used to model control aspects of protocols while variables and data are handled by a programming model. A hybrid form of model using a form of FSMs [Sunshine78] appears feasible in the modeling of distributed environments.
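The hybrid style can be sketched in miniature: control aspects are captured by an explicit state-transition table, while data such as the sequence number is handled as an ordinary program variable. The stop-and-wait sender below is a simplified illustration of the style, not a model taken from the cited work.

```python
# Sketch of a hybrid FSM/program protocol model: control flow is a
# small state-transition table, while data (the sequence number) lives
# in an ordinary variable instead of being folded into the state space.
class StopAndWaitSender:
    TRANSITIONS = {
        ("idle", "send"): "waiting",
        ("waiting", "ack"): "idle",
        ("waiting", "timeout"): "waiting",
    }

    def __init__(self):
        self.state = "idle"
        self.seq = 0            # data aspect: handled as a variable

    def event(self, ev):
        nxt = self.TRANSITIONS.get((self.state, ev))
        if nxt is None:
            raise ValueError(f"no transition for {ev!r} in state {self.state!r}")
        if self.state == "waiting" and ev == "ack":
            self.seq ^= 1       # flip the sequence bit on acknowledgement
        self.state = nxt
        return self.state
```

Folding the sequence number into the FSM would double the number of control states; keeping it as a variable is exactly the state-limiting effect attributed to the hybrid models above.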
Although formalisms for specification of protocols are coming into widespread use, testing of protocols is still a substantial problem. Exhaustive search of protocol states and transitions is quite complex [Palmer86] and is analogous to the problem of proving program correctness, which is well known to be intractable [Garey79]. Efforts to generate test sequences were initially adapted from programming language and sequential circuit tests [Linn83] [Sarikaya82], but complexity problems have made these methods ineffective for all but the simplest protocols. A procedure for automatically generating test sequences for protocols is under development [Sabnani85] and may make formal protocol testing feasible.
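The source of the complexity can be sketched with a small count: the joint state space of several communicating machines grows multiplicatively, so exhaustive search becomes exponential in the number of machines. The machine sizes below are illustrative.

```python
from itertools import product

# Sketch of the state-explosion problem in exhaustive protocol testing:
# the joint state space of several independent 3-state machines grows
# multiplicatively, so exhaustive search is exponential in the number
# of machines. The sizes here are illustrative.
def joint_state_count(num_machines, states_per_machine=3):
    """Count all joint states of num_machines independent FSMs."""
    single = range(states_per_machine)
    return len(list(product(single, repeat=num_machines)))

sizes = [joint_state_count(n) for n in (1, 2, 3, 4)]
print(sizes)  # grows as 3^n
```

A real protocol adds message queues and data variables on top of this product, which is why test-sequence generation rather than exhaustive search is the practical direction noted above.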
Protocols are typically designed for the application at hand by considering communication requirements and hardware features. Two popular approaches are token passing and contention detection and retry. Token passing networks are networks in which a token is passed from information system to information system, with the system holding the token in control of communications to the next system. If the system in control of the token abuses the privilege, network failure may result. Contention networks allow multiple systems to share a single medium, with the systems contending for its use. Again, a lack of cooperation may cause the network to fail.
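The token passing discipline and its failure mode can be sketched together: the token moves round-robin and only the holder transmits, but a node that hoards the token stalls everyone else. The node count and the hoarding behavior are illustrative assumptions.

```python
# Sketch of token passing: the token moves from system to system, and
# only the current holder may transmit. A misbehaving holder that never
# passes the token on halts the entire network's progress.
def token_rounds(num_nodes, rounds, hoarder=None):
    """Return the transmit schedule; a hoarding node halts progress."""
    schedule, holder = [], 0
    for _ in range(rounds * num_nodes):
        schedule.append(holder)      # the holder controls the medium
        if holder == hoarder:
            break                    # abuse of the token: no handoff
        holder = (holder + 1) % num_nodes
    return schedule

print(token_rounds(3, 2))             # fair rotation
print(token_rounds(3, 2, hoarder=2))  # node 2 keeps the token
```

The contrast between the two schedules is the point of the paragraph above: token passing gives deterministic, collision-free access, but only so long as every party cooperates.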
Hierarchical structuring is often used to reduce the complexity of protocol design. In this way, many of the protocol functions may be made independent of the methods of communication, and thus an economy of scale may be afforded as well as a reduction in the complexity of protocol generation and verification. Gateways are often used to communicate between networks, either because of limitations in media or because of differences in protocol. Gateways act as protocol converters that convert information from the sending network to the hierarchical level at which information from the communicating networks is commensurate (i.e. translatable into a common language), and then reconvert it to the form required for the receiving network. Gateways present high levels of exposure because they can usually cause harm to all connected networks, whereas machines which are not gateways can only affect other networks transitively through gateways.
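The convert-then-reconvert role of a gateway can be sketched as two translations through a common intermediate form. The frame layouts of the two networks below are illustrative assumptions, not real protocol formats.

```python
# Sketch of a gateway as a protocol converter: a frame from network A
# is lifted to a common intermediate form, then re-encoded in the form
# network B expects. Both frame layouts here are illustrative.
def net_a_to_common(frame):
    """Net A (assumed) uses 'dst|src|payload' strings."""
    dst, src, payload = frame.split("|", 2)
    return {"dst": dst, "src": src, "payload": payload}

def common_to_net_b(msg):
    """Net B (assumed) uses (header-dict, payload) tuples."""
    return ({"to": msg["dst"], "from": msg["src"]}, msg["payload"])

def gateway_a_to_b(frame):
    return common_to_net_b(net_a_to_common(frame))

print(gateway_a_to_b("host9|host2|hello"))
```

Because every frame crossing between the networks passes through these conversion routines, a flaw or compromise here reaches both sides at once, which is the exposure noted above.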