Information has been critical to warfighting throughout recorded history. Over 5000 years ago, there were spies, well defined command and control structures, supply and logistics systems, documented strategic planning, and mechanical cryptographic systems. [Kahn67] Numerically inferior forces with an informational advantage have historically dominated in military conflict [Arquilla] because of what is now called the force multiplier provided by that advantage. Better battlefield intelligence and communications lead to a fighting pace and efficiency that often overwhelms an enemy; better strategic knowledge leads to better directed weapons design; morale depends on the availability and content of information from home; and psychological operations center on impacting the enemy's human information processing. These information factors and many others have had significant impacts on the outcome of wars from Biblical times [Moses] through to today, [Campen92] and they will likely continue to impact warfare for the indefinite future. [Cohen93-2]
If this has been true throughout history, why is it that there is a pressing need to reconsider this issue in a different light today?
The answer lies in the fundamental changes in information systems and the new ways in which people have come to depend on them over the last several years. Just as the industrial age led to fundamental changes in the way wars were waged, the information age is now leading to fundamental changes in the way wars are waged. [Tofler93]
The Gulf War is a recent example of how current US warfighting doctrine depends on and stresses information infrastructure. It is likely that the Gulf War was a unique experience in that there was no apparent attempt by the Iraqis to disrupt the information infrastructure the DoD put in place during Operation Desert Shield. A series of extraordinary efforts by military and civilian personnel in the Middle East, in the continental US, and throughout the world, created a temporary infrastructure capable of letting US forces fight as they had trained. [Campen92] A prime planning concern for the future should be getting enough of an infrastructure in place to handle a similar situation and acquiring the capability to support the multi-theater scenario called out in current defense guidance. [BotUp93]
Statements of new US military doctrine have been promulgated to reflect these new realities. These writings on doctrine explicitly address the role of information in modern warfare and speak to the resulting offensive and defensive aspects of information warfare.
The central role of information in warfare, as in the economies of modern information age societies, will continue into the future. As one author put it: ``In the best circumstances, wars may be won by striking at the strategic heart of an opponent's cyber structures, his systems of knowledge, information, and communication.'' [Arquilla]
The DoD is dependent on information for all aspects of its operation. Historically, components of the DoD have implemented stovepiped information systems designed to fulfill special needs. This has resulted in a coordination problem in joint operations because integrating the diverse information stored in these stovepiped systems is difficult and time consuming, and thus limits the tempo of operations. To fully exploit the advantages of information in warfare, and to reduce the costs associated with information processing, duplicative systems, and redundant data entry, the DoD has made the doctrinal and policy decision to move toward a globally integrated Defense Information Infrastructure (the DII). [NMSD-94]
The complexity, scope, and timeliness requirements of DoD information processing are exemplified by some of the applications supported by the DII:
The accomplishment of military functions, both direct combat operations and support, depends to varying degrees upon the availability and accuracy of information. For example, most activities in modern warfare depend on the reliable communication of command and control and situational information. Many military activities rely on timely, assured access to accurate position, environment, logistics, medical, personnel, or financial information. This dependency is not a static function of information content. Rather, employment of particular military weapons or operational tactics at a particular operational tempo depends on the assured availability of a certain quantity and quality of information at a particular time.
By analogy, information requirements are equivalent to petroleum budgets required to maintain a particular operational tempo. If either the information or the petroleum is unavailable, the desired operational tempo will not be obtained. (This analogy is not perfect in that once petroleum is used, it is gone, while information is not consumed in its application.)
In short, nearly every component of the US military and the infrastructure upon which it depends are highly dependent on information and information systems.
Horizontally and vertically integrated command, control, communications, and computer automation for joint and combined forces operations are pivotal to US military force. [NMSD-94] The DII concept was created to: [DMRD918-92]
* Provide a consolidated global information infrastructure.
* Provide robustness and resiliency to DoD information services.
* Revolutionize information exchange.
* Properly and transparently manage information on a global scale.
* Reduce information technology burdens on operational and functional staffs.
The creation of the DII will enable DoD operational and functional staffs to access, share, and exchange information worldwide. It will include such improvements as end-to-end information support services, standardized data definitions, and interconnection of all voice, data, imagery, and video communications and computing systems. To remain reliable and transparent, centralized network and system management and diagnostic capability will be put in place. To reduce lifecycle costs, the DII will consolidate or integrate data centers, maintain widely-available communications networks, use commercial off-the-shelf (COTS) and government off-the-shelf (GOTS) products, and centralize acquisition and technical control of these elements. To improve efficiency, redundant data entry will be eliminated, and standardized training will be used.
The cost and efficiency advantages brought about by implementing the DII will increase the DoD's dependency on the DII. If elements of the DII are not available, information is inaccurate, or the DII does not properly provide required functional or information transfer capabilities, time will be lost and overall mission effectiveness will be diminished.
Understanding the information assurance challenge requires distinguishing between information assurance issues, which relate to all information and information systems, and secrecy issues, which relate to classified or sensitive but unclassified data. Classified or sensitive but unclassified data is controlled based on its content: because knowledge of it might be useful in ways that could adversely affect US interests or actions, because release could be a violation of US privacy laws, or because release could result in the assumption of financial risk. Information assurance requirements apply to all information, and are based on use rather than content.
Some assert that existing policies and standards that guide protection of data sensitivity are not adequate for addressing information assurance. [Clark87] There is a need to consider information assurance in defensive information warfare planning.
It would be easy to assume that information assurance is already provided by existing fault tolerant computing standards and practices such as protection against random noise, [Bellcore91] [NCSD3-8] lightning, [MIL-HDBK-419] RF noise, [RS-252-A] loss of packets, [ISO8208] and other transient factors that cause disruptions in information systems. Unfortunately, intentional attackers are not accurately modeled by the statistical models of faults used to develop existing reliability standards. (See note 1)
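The gap between statistical fault models and directed attack can be illustrated with a small simulation. The sketch below uses invented fault probabilities and is not a model of any fielded system: triple redundancy with majority voting nearly eliminates independent random faults, but does nothing against an attacker who induces the same (common-mode) fault in all replicas at once.

```python
import random

def majority(vals):
    """Return the most common value among the replicas."""
    return max(set(vals), key=vals.count)

def random_faults(p=0.01, trials=100_000):
    """Failure rate of 3-way majority voting under independent random faults."""
    random.seed(0)
    failures = 0
    for _ in range(trials):
        # Each replica independently faults (value 1) with probability p.
        replicas = [1 if random.random() < p else 0 for _ in range(3)]
        failures += majority(replicas) != 0
    return failures / trials

def directed_attack():
    """An attacker corrupts all three replicas identically."""
    replicas = [1, 1, 1]          # common-mode fault: voting cannot help
    return majority(replicas) != 0

print(random_faults())    # small, on the order of p^2 -- redundancy works
print(directed_attack())  # True -- redundancy defeated outright
```

The statistical standard "succeeds" here by design; the same mechanism fails completely the moment faults are correlated by intent rather than by chance.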
``Most communication channels incorporate some facilities designed to ensure availability, but most do so only under the assumptions of benign error, not in the context of malicious attack.'' [NRC91] (note 6, p100)
The field of `high assurance' computing addresses information systems for the most critical applications (e.g., life support systems, flight controls, nuclear warhead detonation). Unfortunately, building `perfect' systems is far too costly and resource intensive for the wide variety of systems and networks found in the DII, and the approach only adequately addresses certain types of very well defined control applications. (See note 2)
For the sorts of general purpose systems in the DII, there are classes of attacks that cannot be perfectly defended against. Two well known examples are computer viruses [Cohen86-2] and exploitation of covert channels. [Lampson73] If the DoD spends its resources on trying to implement perfect solutions to these problems, it will surely fail and go bankrupt in the process, but the DoD cannot simply ignore these and other similar problems, because they present a real and identifiable threat to national security and directly impact readiness and sustainability of US forces.
Feasible solutions will not be perfect. Rather, they should responsibly trade cost with protection. DISA should support analysis of cost effectiveness to avoid unnecessary duplication and to provide a uniform basis for comparison.
Current US defenses against disruption depend almost entirely on human prevention, detection, differentiation, warning, response, and recovery. Detection of most disruption attacks comes only when people notice something is going wrong. In many cases, detection never occurs, while in other cases, detection takes several months. Differentiating natural, accidental, mischievous, and malicious disruption is a manual process, and the root cause is often undetermined or misidentified as accidental. Warning has to be properly controlled to prevent false positives and false negatives, and depends on forensic analysis. Response commonly takes from hours to days, and is almost entirely manual. Recovery too is almost always a manual process, takes from hours to days, and is often performed improperly. (See note 12)
Human attack detection has several problems besides the limited response time and large numbers of false negatives. Perhaps the most important problem is the expectation of breakage and the inability to differentiate properly between breakage and malicious attack. Another problem is the tendency to detect fewer faults over time in an environment where faults are commonplace. [Molloy92] This can be exploited by an attack wherein the number of disruptions is slowly increased while the human operator becomes increasingly insensitive to them. Enhanced training improves performance, but humans are clearly still limited, particularly when it comes to detecting subtle attacks characterized by the coordination of numerous seemingly different and dispersed events, and attacks designed to exploit the reflexive control aspects of human behavior. [Giessler93]
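The habituation effect described above can be sketched numerically. In this illustration (all rates and thresholds are invented for the example), a detector that continually re-baselines on recent history, the way a habituating operator does, never flags a slow ramp in disruption rate, yet immediately flags an abrupt jump to the same final level.

```python
def adaptive_detector(rates, alpha=0.1, threshold=2.0):
    """Flag time steps where the rate exceeds threshold * a running baseline.

    The baseline is an exponentially weighted average of past rates, so it
    drifts toward whatever is currently 'normal' -- modeling habituation.
    """
    baseline = rates[0]
    alarms = []
    for t, r in enumerate(rates):
        if r > threshold * baseline:
            alarms.append(t)
        baseline = (1 - alpha) * baseline + alpha * r
    return alarms

slow_ramp = [1.0 + 0.05 * t for t in range(100)]  # creeps up to ~6x normal
abrupt    = [1.0] * 50 + [6.0] * 50               # same 6x, all at once

print(adaptive_detector(slow_ramp))  # [] -- the ramp is never flagged
print(adaptive_detector(abrupt))     # first alarm at t=50
```

The attacker who ramps slowly stays inside the detector's drifting notion of normal; the abrupt attacker is caught at once.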
Automated tools for detecting misuse in computer systems and local area networks are currently emerging, and this technology is rapidly approaching commercial viability. [Denning86] The most advanced misuse detection systems include localized responses to statistical anomalies and rule-based response to known attack patterns. \ORG/ should enhance computer misuse detection systems to cover broader ranges of attacks, systems, and responses at the wide area network and infrastructure levels. (See note 14)
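The two response styles mentioned above can be combined in a single detection loop. The toy sketch below is illustrative only, not any fielded misuse detection system; the event types, rules, and thresholds are invented.

```python
# Hypothetical known-attack signatures paired with rule-based responses.
KNOWN_PATTERNS = [
    ("repeated_login_failure", "lock account"),
    ("privilege_escalation",   "alert operator"),
]

def check_event(event, rate_history, window=20, sigma=3.0):
    """Return a response for one event: rule-based first, then statistical."""
    # Rule-based response: match the event against known attack patterns.
    for pattern, response in KNOWN_PATTERNS:
        if pattern in event["type"]:
            return response
    # Statistical response: flag rates far outside recent experience.
    recent = rate_history[-window:]
    if len(recent) >= window:
        mean = sum(recent) / len(recent)
        var = sum((x - mean) ** 2 for x in recent) / len(recent)
        if event["rate"] > mean + sigma * var ** 0.5:
            return "log anomaly"
    return "ok"

print(check_event({"type": "privilege_escalation", "rate": 1}, []))
print(check_event({"type": "file_read", "rate": 100}, [1.0] * 20))
```

Rule-based matching handles attacks that have been seen before; the anomaly check provides some coverage of novel attacks, at the cost of false positives.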
Well trained intentional attackers understand the common assumptions made by designers of information and secrecy systems, and explicitly design attacks to exploit the weaknesses resulting from these assumptions. Protective techniques that work against statistically characterized events are rarely effective against directed attack, and techniques designed to provide secrecy are rarely effective against disruption. One relatively limited study of the impact of malicious node destruction on a structure that works very well against random destruction found that preventing intentional attacks with standard fault tolerant computing techniques may require an order of magnitude increase in costs. [Minoli80] Studies and demonstrations of computer viruses in secrecy systems approved for DoD use have demonstrated that these systems are ineffective against disruption. [Cohen94-2]
Current system reliability estimates do not account for deliberate software corruption. [NRC91] (p 55) Telecommunication networks can fail from software malfunction, failures can propagate in operations or control systems, [Falconer90] (p32) and system availability estimates seem to overlook this cascading effect. As an example, telephone networks are supposedly designed for something like 5 minutes of downtime per year, [Gray91] and one company advertises that if 800 service fails, restoration is guaranteed in under 1 hour. Yet in a single incident in 1990, the AT&T (American Telephone and Telegraph) 800 network was unavailable for over 4 hours, [Falconer90] which by itself consumed the expected outage budget for nearly the next 50 years! Considering that a similar failure brought down telephones in several major cities for several days in 1991, [FCC91] there appears to have been a flaw in this availability analysis. (See note 3)
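The availability arithmetic behind this example is simple to check:

```python
# A network engineered for ~5 minutes of downtime per year "spends"
# decades of that budget in a single 4-hour outage.
budget_min_per_year = 5        # design downtime budget, minutes/year
outage_min = 4 * 60            # one 4-hour outage, in minutes

years_of_budget = outage_min / budget_min_per_year
print(years_of_budget)  # 48.0 -- nearly 50 years of budget in one incident
```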
According to a National Research Council report: ``As computer systems become more prevalent, sophisticated, embedded in physical processes, and interconnected, society becomes more vulnerable to poor systems design, accidents that disable systems, and attacks on computer systems. Without more responsible design, implementation, testing, and use, system disruptions will increase, with harmful consequences for society. They will also result in lost opportunities from the failure to put computer and communications systems to their best use.'' (The opening paragraph of [NRC91] )
Reliance on any offensive capability the US might have as a defense against disruption of DoD information systems would be misplaced. This is because of two features of non-physical offensive information warfare technologies: vulnerabilities can be exploited by small, mobile, hard to identify, physically distributed groups of individuals located anywhere in the world, [Boorman88] and it is not possible to determine with certainty whether or not an attack is underway [Cohen86-2] or to identify the source of an attack that is known to be under way. [Cohen84-2]
Offensive capabilities can theoretically be used in one of two ways to defend against attacks: pre-emptively or responsively.
Regardless of the power, speed, and accuracy of the offense, the DoD will require an adequate defense if the US is to prevail in a hostile information warfare environment.
The DII design includes information processing components, the DISN transmission segment, the DISN network management segment, and the DISN services segment. [DMRD918-92]
Today's information processing components consist largely of low-assurance computer systems. Every general purpose DoD and civilian computer system tested for information assurance so far has proven vulnerable to disruption. [Cohen94-2] Many existing DoD information processing components do not even meet nominal business operational control requirements common throughout industry. For example, a recent GAO audit to determine whether controls in large data centers were adequate to assure data integrity showed:
``... that both (Cleveland and Indianapolis) DITSO Centers had serious deficiencies (that would) allow any knowledgeable user to gain access to pay data, and to add, modify, or destroy it, or accidentally or intentionally to enter erroneous data, without leaving an audit trail.'' [OIG-93-002]
A degree of assurance in existing DoD systems is provided by their physical isolation from an integrated network. As the DoD moves toward a networked DII, \ORG/ should assure that DoD decision makers understand that these newly connected systems are vulnerable to disruption of a wider variety from more sources, and make suitable investments in information assurance to offset the increased risk.
The vast majority of current communications devices, systems, and networks used in military support systems do not provide high assurance. (See note 8)
``Just how vulnerable our networks have become is illustrated by the experiences of 1988: There were three major switching center outages, a large fiber optic cable cut, and several widely reported invasions of information databases by so-called computer hackers.'' [NRC91] (p2) One outage in 1991 impacted millions of customers and temporarily disrupted air traffic control centers in New York (which caused slowdowns in much of the northeastern US and across the nation).
Many of the legacy systems being integrated into the DISN network management components consist of proprietary designs. The assurance in these systems, as in the information processing components, is provided, in large part, by their physical security. As the DoD moves toward a consolidated network management system for the DII, it will magnify the potential vulnerability of its network management segment to disruption and introduce the potential of causing widespread damage.
Current DII services consist almost entirely of electronic messaging, file transfer, bulletin boards, and directory systems. [DISN-arch] These services are predominantly implemented with low-assurance computer systems that process unclassified information. As the DoD moves toward a networked DII, users will have a high degree of flexibility in selecting services through integrated network management, advanced intelligent network techniques, and common signaling systems. [DISN-conops] This flexibility will make users more dependent on these services to accomplish their mission, and it will also make these services more vulnerable to disruption from a wider variety of sources. (See note 9)
Existing components of the DII have well known and easily exploited vulnerabilities to disruption, but even if these components were individually strengthened against disruption, they would not necessarily provide information assurance when networked together. Combining otherwise assured systems into a networked environment can lead to an overall system that is not assured. In one case, two systems that were independently safe against corruption by a particular computer virus were both disrupted by that virus when they were networked together. The cause was a mismatch between the way integrity was implemented and the way peer-to-peer communications works in modern networks. [Cohen94-2] There is still no overall theory of how to safely connect network components, but in the limited cases where connection safety is understood, unsafe connections should be avoided. [Cohen87] [Cohen87-2]
Simply bolting together a variety of information security features does not solve the protection problem. To get synergistic benefits from combining information assurance features, they have to be properly combined, and this is not yet a well understood phenomenon. [Cohen94-2] In most cases, rather than enhancing protection by combining features, the entire system is only as strong as the weakest link.
The people who architect the DII must come to understand this issue and exploit that understanding to provide adequate information assurance.
Clearance processes do not detect people who turn against the US after they are cleared, people who have breakdowns, people subjected to extortion, or many other ``insider threats''. Many sources claim that the majority of computer crimes come as a result of an authorized person using that authority inappropriately. Although sufficient evidence is not available to support this contention, there is clearly a potential for ``soft-kill'' harm from an insider that is greater than from an outsider because the insider has fewer barriers to bypass in order to succeed.
The current DII design assumes that insiders act properly to a large extent. A proper infrastructure design should not make such an assumption or depend on it for meeting design criteria. \ORG/ should ensure DII design criteria explicitly address the insider threat.
Elements of the US information infrastructure are also highly vulnerable to physical attack. For example, several authors have noted that US telecommunications capabilities could be disabled for a substantial period of time by proper placement of 20 or fewer small explosive devices. Information processing facilities often depend on easily disrupted public utilities for electrical power and water. Further, exterior structures such as heat exchangers, whose destruction can halt computer operations very efficiently, are often inadequately protected against physical attack.
Although this is a vital area to be covered, it is not within the realm of this study to address it, except in one way. By the very nature of the information assurance challenge in a military context, a reasoned response would be designed to detect and react to disruption regardless of the cause. The warning component of an information assurance system provided for the DII should clearly indicate any set of disruptions that appear to be part of a coordinated attack, and help orchestrate a coordinated defense. (See note 15)
It is enlightening to examine the current US Government standards base upon which open systems are now being acquired. [OSE] The DoD standards document begins with a list of protection service standards, including some that seem to be information assurance standards needed to fulfill requirements of the DII. Unfortunately, almost none of the list of service standards is currently specified:
Service Standard        Status
----------------------------------------------------
Authentication          Not Available - In Process
Access Control          Not Available - In Process
Non-Repudiation         Not Available - In Process
Confidentiality         Not Available - In Process
Integrity               Not Available - In Process
Auditing                Not Available - In Process
Key Management          Not Available - In Process
Most of the `Not Available - In Process' items are specified as ``This work is still in the early stages and is not yet of practical use. It should not be referenced in a procurement.'' Further, there is no clear migration path, so there is no defined way for the designers of the DII to even plan for their future inclusion. Notice that ``availability of service'' is not even on the list of standards to be developed!
By way of reference, the ISO (International Organization for Standardization) standard upon which this list was based was in approximately the same incomplete state about 10 years ago, when the protection addendum to the ISO standard was newly created. To date, no significant progress has been made in these areas, and no current ``open system'' COTS products provide substantial coverage of these areas.
In the October, 1993 crisis in Russia, the members of the dissolved parliament escalated to military action by ordering their supporters to take over the Mayor's office across the street, the television station across town, another major telecommunications center, and the Kremlin a few blocks away, in that order. The takeover of the Mayor's office in downtown Moscow was essentially unopposed (only warning shots were fired), but when it came to the television station, the battle became fierce. The other targets were never even threatened. Can there be any question that the Russian leadership on both sides understood the import of information as the key to victory?
Infrastructure has been a major target at least since WWII, when the allies targeted German ball bearing factories. [Dupuy70] This was not only because ball bearings were used in tanks, aircraft, and naval craft, but because they were used in the machinery that made machinery.
Information and information systems are the ball bearings of the information age. Both military and civilian operations depend on this technology at almost all levels. Information technology is used to design information systems, to direct telephone calls and data transmission, to control individual radios and building security systems, and to keep accurate time. Each of these information technologies is vital to the DII.
Information infrastructure is a low risk, high payoff target for disruption. (See note 10)
There are many publicly available examples of the US dependency on both military and commercial information technology, including recently published examples from wartime military operations.
The US Army's Chief of Staff called Desert Shield/Storm the ``knowledge war''. [Campen92] (p ix) The House Armed Services Committee said ``...acquiring support systems consistent with high-tech weapons may be more important than buying the next generation plane or tank.'' [Campen92] (p xxi) According to another author, ``...it is very surprising that very extensive use had also to be made of the international commercial networks, Intelsat and Inmarsat''. [Ansen92] Still another author wrote ``DISA and CENTCOM learned a valuable lesson: A viable information systems architecture requires the total integration of commercial and military communications systems...''. [Slupik92]
Logistics data passing over local and wide area computer networks also became vital. Regarding Marine Corps operations: ``Supply and maintenance information, ... soon came to be seen as critical to the success of the operation. ... these systems had to operate in the same environment as the systems that (performed command and control) functions.'' [Pierce92]
Real US information warfare vulnerabilities are commonly described in both fictional and factual books, articles, and other media. (See note 4)
Ideas about the use of software for military and civil infrastructure attack have been published in the military, computer science, and popular press, so this concept is common knowledge among many computer literate people. Many examples include specific mentions of military targeting. Here are two:
Publicly available sources indicate that well over 30 nations have the capabilities required to launch successful disruption attacks against the DII, that several nations have active programs directed toward understanding and preparing capabilities for information infrastructure attack, and that several relatively small independent organizations have demonstrated substantial attack capabilities. (See note 11)
One paper presented to the Naval Postgraduate School in August, 1993, and available to the public claims that with 20 people and $1,000,000 the author can bring the US to its knees. [Steele93] Other expert claims range from $100,000 and 10 people for large scale DII disruption over a period of weeks, to $30,000,000 and 100 people for total information infrastructure disruption resulting in multi-year recovery time. [KILLALL]
Information warfare can be practiced by small private armies, terrorist organizations, drug lords, and even highly motivated individuals of modest means. This may represent a fundamental shift away from the notion that the hostile nation state is the major threat the US has to be concerned with. [Drucker93] [Tofler91] [Creveld91]
The People's Republic of China has a group headed by Yue-Jiang Huang that has produced both internal and international hardware enhancements to personal computers for protecting against many forms of disruption. This group is also doing substantial work in the use of non-linear feedback shift registers for both secrecy and integrity applications.
In Russia, there is at least one group working on disruption prevention, detection, and response systems. This group, based at the Central Research Institute ``Center'' in Moscow, is working on new hardware architectures that provide enhanced integrity protection and limited availability protection against general classes of malicious threats. They seem to have an emphasis on computer viruses, but far more general application can be made of their architecture. [Titov92]
Research groups in Israel regularly publish results on their research in international journals, and several groups have started work on protection of information systems against general classes of malicious corruption. [Goldreich] [Shamir88]
An Australian research group directed by Bill Caelli and centered at Queensland University of Technology is concentrating a substantial amount of effort in the design of high integrity networks capable of withstanding malicious disruption. They also have people working on cryptographic integrity techniques and key management systems with revocation for use in systems similar to the DII.
At least one Canadian author has published work on limits of testing and coding spaces against malicious disruption attacks as well. [Khasnabish89]
A German research team at the University of Hamburg has gone a step further than most groups in this area by forming a database of parts of computer viruses. They essentially break the thousands of known viruses into component parts (i.e. self-encryption, find file, hide in memory, attach to victim, etc.) and store the partial programs in a database. Many known viruses have common components, but there are on the order of several hundred of each different component part. This gives them both the capability to detect and automatically analyze many viruses in very short timeframes, and the capability to generate on the order of 10^20 different viruses automatically by mixing techniques together.
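The rough combinatorics behind the 10^20 figure are easy to reproduce. The component names and per-component counts below are illustrative assumptions (the text says only "on the order of several hundred" of each), not the contents of the Hamburg database:

```python
# Assumed: ~300 interchangeable variants of each of 8 component types.
variants_per_component = {
    "self_encryption":  300,
    "find_file":        300,
    "hide_in_memory":   300,
    "attach_to_victim": 300,
    "trigger":          300,
    "payload":          300,
    "stealth":          300,
    "replication":      300,
}

# Picking one variant per component multiplies the counts together.
total = 1
for count in variants_per_component.values():
    total *= count

print(f"{total:.1e}")  # 6.6e+19, i.e. on the order of 10^20 distinct viruses
```

The point is not the exact figure but the multiplicative growth: a few hundred variants of each of a handful of components is enough to make exhaustive signature-based defense hopeless.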
Several other countries have started to publish papers in the information assurance areas, and although there is no apparent evidence for massive efforts, it seems that the international interest in this field has increased substantially since the Gulf War.
Many publications on computer security identify the most common source of intentional disruption as authorized individuals performing unauthorized activities. The normal clearance procedure has not proven effective in eliminating this threat, and it is therefore prudent to take measures to protect against, detect, and respond to insider attacks.
Accidental disruption is also commonly caused by insiders acting imprudently, and it is sometimes very difficult to differentiate between accidental and intentional disruption in this context. This implies that more stringent techniques may have to be applied to observe insider behavior and reliably trace the specific actions of individuals in order to detect patterns indicative of intent.