Obtaining information assurance will require the application of resources and hard work. It will not come about as a serendipitous feature of the development of an information infrastructure based on open systems. The longer this matter resides on the back burner or is treated as a matter of academic interest, the greater the eventual costs will be to add resiliency to the infrastructure. Ultimately, neglect of this matter could result in major economic loss, the loss of military capability, and military defeat.
\ORG/ should strive to ensure that senior decision makers come to understand that the assured availability and integrity of information are essential elements of US military readiness and sustainability so that they will provide adequate resources to meet this looming challenge.
Military capability is: ``The ability to achieve a specified wartime objective (win a war or battle, destroy a target set). It includes four major components: force structure, modernization, readiness, and sustainability.'' [JCS102]
Readiness assessment generally involves such factors as people authorized and on hand, their skills and training; operational status of equipment, the time to repair, degree of degradation; training status of units, recency of field exercises, command post training; and other more detailed factors. In the age of information warfare, everyone in the military must recognize that the readiness status of forces, units, weapons systems, and equipment depends on the status of the information infrastructure. An assessment of readiness should include such questions as:
* Are there enough information workers and managers on hand?
* Are they properly trained in detecting and reacting to information attacks?
* How recently have they undergone defensive information warfare training?
* What is the readiness status of the DII?
* How much stress can the DII take at this time?
Currently, the DoD appears unable to take comfort in the answers to these questions. Training programs to prepare information workers for the prevention of attack, detection of intentional attacks, differentiation of malicious from mischievous from accidental disruption, and the recovery steps to undertake do not exist. Worse, there is no analysis indicating how many people with what sorts of training and skills are required to operate successfully in an information warfare environment.
The DoD depends on the DII at least as much as it depends on the logistics structure for battle readiness, and yet the DoD does not treat them in the same light. The DoD must assess information assurance as a readiness issue, it must incorporate DII readiness into the overall military readiness assessment, and it must treat DII readiness as a component critical to overall battle readiness. \ORG/ should undertake an awareness campaign that brings these concerns to the attention of OSD Principal Staff Assistants, the Military Departments and Services, and Defense Agencies.
In any conflict against an information warfare opponent, the DII will take battle damage. In order to continue fighting under this sort of attack, the DII must automatically detect, differentiate, warn, respond, and recover from disruption. There must be enough redundancy to meet bandwidth requirements during anticipated levels of disruption, sufficient firewalls to prevent disruption from spreading, sufficient mechanisms to make recovery and reconstitution of the DII feasible in an appropriate time frame, and sufficient training and support to allow that reconstitution to safely take place. In order to meet budget constraints, the DoD must find ways to do this at a tolerable cost. (See note 14)
It is not reasonable to expect that technicians will be able to detect, differentiate, warn, respond, and devise work-arounds for each attack in real-time, and in the case of remote components, they may be unable to gain access to do these things at reasonable cost. For this reason, the designers of the DII must devise mechanisms that are as nearly automatic as feasible, and have built-in resiliency that, at a minimum, puts these mechanisms into known and controllable state sequences when they become ineffective over a period of time. This is very similar to the requirements on remote space exploration vehicles, except that the DII must be designed to behave in this fashion even during hostile attack and at a far lower cost.
In order to spend money wisely and still be properly prepared, DISA must ensure that the DII retains flexibility to adjust to changes in doctrine and strategy over the next 20 years. Compare US warfighting in 1973 to 1993. Predicting 2013 is not a simple matter. Rather than trying to make a 20-year prediction and hinging enormous amounts of money on being right, \ORG/ should promulgate design guidance that ensures a DII capability that is flexible enough to adapt with the times. Fortunately, information systems are easily made flexible, but unfortunately, that flexibility leads to vulnerability to disruption. The designers of the DII must devise information assurance techniques that allow flexibility without increasing vulnerability.
Most current information protection policies include requirements for availability and integrity, but these features are always mentioned alongside secrecy. When such policy is translated into implementation, the information assurance elements are usually ignored. A recent example is the draft DISN specification: the top-level goals give almost equal emphasis to availability, integrity, and secrecy, [DISN-conops] but in the design process, information assurance is often deemphasized in favor of secrecy. [DISN-security] There appear to be two reasons for this, and top-level attention is required to resolve them:
In order to assure that information assurance is adequately addressed, policy makers should separate the information assurance requirements from the secrecy requirements, and make it explicit in policy documents that they are separate and different.
To assure that information assurance is properly and consistently practiced, \ORG/ should develop a set of information assurance standards for the DII that address disruption. (See note 5)
There are substantial differences between designing a typical information system and designing a good information infrastructure, and the techniques normally used in information system design are often less than ideal in infrastructure design. One of the most glaring examples of these differences is in the tradeoffs between efficiency and effectiveness. (See note 13) In designing typical information systems, good designers almost always choose to do things for efficiency, while good infrastructure designers almost always choose to do things for long term effectiveness.
\ORG/ should ensure that the top-level technical managers responsible for designing and operating the DII understand the issues of infrastructure design as opposed to typical system design and can help make design decisions that will satisfy the changing requirements over the lifetime of the infrastructure.
In order to transition existing systems into the DII while providing appropriate information assurance, DISA must first understand the weaknesses of these legacy systems, and then find ways to provide these systems with the information assurance features required in order to operate in the DII environment.
A key step in this process is performing a threat assessment which can be used as a baseline for vulnerability analysis. If properly done, such a threat assessment will bring to light a variety of new threats and threat sources that have not historically been considered in DoD vulnerability analysis.
Once the threat assessment is completed, vulnerability analysis of the most common classes of system can begin in order to create baseline vulnerability assessments of the major classes of systems without performing an expensive and unnecessary exhaustive analysis of each system on a piecemeal basis.
While vulnerability analysis is underway, mathematical lifecycle cost and coverage analyses of potential defensive measures against identified threats in different classes of environments can be performed. As vulnerability assessments become available, the results of these assessments can be used in conjunction with defensive measure analysis to identify minimum cost protective measures required to cover identified threats.
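The cost and coverage analysis described above can be framed as a weighted set-cover problem: choose a set of defensive measures whose combined coverage spans the identified threats at minimum cost. The following is a minimal sketch of the standard greedy approximation; the threat names, measure names, and costs are purely illustrative, not drawn from any actual DoD analysis.

```python
# Greedy weighted set-cover sketch: pick the defensive measure with the
# best cost-per-newly-covered-threat ratio until every threat is covered
# (or no remaining measure covers anything new).

def select_measures(threats, measures):
    """measures: {name: (cost, set_of_threats_covered)}.
    Returns (chosen_measures, threats_left_uncovered)."""
    uncovered = set(threats)
    chosen = []
    while uncovered:
        best, best_ratio = None, None
        for name, (cost, covers) in measures.items():
            newly = covers & uncovered
            if not newly:
                continue
            ratio = cost / len(newly)
            if best_ratio is None or ratio < best_ratio:
                best, best_ratio = name, ratio
        if best is None:          # remaining threats have no defense
            break
        chosen.append(best)
        uncovered -= measures[best][1]
    return chosen, uncovered

# Illustrative inputs only.
threats = {"insider", "virus", "line-cut", "jamming"}
measures = {
    "integrity-checks": (3.0, {"insider", "virus"}),
    "route-diversity":  (5.0, {"line-cut", "jamming"}),
    "av-scanning":      (2.0, {"virus"}),
}
chosen, leftover = select_measures(threats, measures)
```

The greedy heuristic is not guaranteed optimal, but it scales to large threat and measure inventories and makes the minimum-cost-coverage reasoning above concrete.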
As threats, vulnerabilities, and defensive measures are made available to program managers, they can make risk management decisions and implement appropriate controls in keeping with budget and other constraints.
\ORG/ should undertake a substantial study of existing and planned DII components in order to understand their vulnerabilities to offensive information warfare and determine appropriate actions to provide information assurance during the interim period before the DII and enhanced components are fully developed. Specifically:
There are some limited but proven scientific theories about vulnerability to intentional disruption, [Cohen94-2] [Cohen91] and these theories can be used to form hypotheses about potential information assurance problems. From these hypotheses, DISA should sponsor the development of experiments to confirm or refute the existence of actual vulnerabilities, provide immediate awareness of their existence to information assurance personnel, and form approaches to removing or reducing their impact on the DII.
Something that should be clear from the start is that it will be infeasible to analyze software in most legacy systems for potential vulnerabilities, because the DoD has over 500 million lines of customized software in operation today, and the vast majority of it has never been examined for information assurance properties. With that much unexamined software, it is prudent to assume that malicious logic weapons have been implanted.
One way to enhance assurance in networked legacy systems at a very low cost is to provide an external misuse detection capability at the network level. These sorts of enhancements can provide substantial protection improvement at minimal cost, remain flexible enough to be adapted as the DII expands, and can provide a backbone for long term automated detection and response.
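A network-level misuse detection capability of the sort described above can begin with something as simple as thresholding suspicious events per source over a sliding time window. The following sketch assumes an illustrative event format (host, timestamp, failed-login flag) and arbitrary threshold values; a real capability would monitor many event types and feed an automated response backbone.

```python
# Minimal sliding-window misuse detection sketch: flag any source host
# whose failed-login count exceeds a threshold within a time window.
from collections import deque, defaultdict

class MisuseDetector:
    def __init__(self, threshold=5, window=60.0):
        self.threshold = threshold        # failures tolerated per window
        self.window = window              # window length in seconds
        self.events = defaultdict(deque)  # host -> timestamps of failures

    def observe(self, host, timestamp, failed):
        """Record one login event; return True if host looks hostile."""
        if not failed:
            return False
        q = self.events[host]
        q.append(timestamp)
        # Discard failures older than the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold

det = MisuseDetector(threshold=3, window=30.0)
alerts = [det.observe("10.0.0.9", t, failed=True) for t in (0, 5, 10, 45)]
# Three failures within 30 seconds raise an alert; the later, isolated
# failure does not, because the earlier ones have aged out of the window.
```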
In the course of assessment, improved procedures, standards, and documents should be generated to capture and disseminate the limited expertise currently available in this field. A mentor program might also be used to develop more expertise in this area.
According to one recent report, [FCC-NRC-93] human procedural error is the root cause of 30-40 percent of failures in digital cross-connect systems, making it a greater source of disruption than any other single cause. Many industry studies show similar results for other classes of information systems and networks. One report claimed that over 80 percent of reported intrusions could have been prevented by proper human procedures. [Bellcore90] Another author, posting to the ``risks'' forum, argued that the lack of information flowing from the current CERT (Computer Emergency Response Team) allowed numerous disruptions to occur that could otherwise have been prevented, detected, or corrected. [Neumann95]
``High reliability organizations are defined as high-risk organizations designed and managed to avoid catastrophic accidents. The organization is high-risk due to the high complexity of the technology. Examples include air traffic control and nuclear reactors. ... increasing numbers of serious errors will occur in high-reliability organizations, ... data is lacking on ways to avoid exceeding human capacity limits, and ... design and management strategies to allow safe operation are not understood. ... These organizations have several distinguishing characteristics in common: hypercomplexity; tight coupling of processes; extreme hierarchical differentiation; large numbers of decision makers in complex communication networks (law of requisite variety is cited); higher degree of accountability; high frequency of immediate feedback about decisions; compressed time factors measured in seconds; more than one critical outcome that must happen simultaneously.'' Another study is cited to show that designers are often unaware of the human limits to operating such systems. ``However, as Perrow points out... Designers tend to believe that automatic controls reduce the need for operator intervention and errors, while operators frequently override or ignore such controls due to the constraints...''. [Roberts89]
\ORG/ must ensure that the role of the human components of information assurance is resolved in order to properly protect the DII. There are generally three strategies for improving this situation:
* Automate more human functions.
* Improve human performance.
* Use redundancy for integrity.
It is generally beneficial to automate functions for enhanced reliability whenever automation enhances performance, reduces cost, or provides other desired benefits. Unfortunately, while the DoD spends a lot of money on enhancing automation for other tasks, one of the areas where automation is severely lacking is protection management. A simple example is the lack of administrative tools in most timesharing computer systems. Systems administrators are expected to keep systems operating properly, and yet:
``Research has shown that performance of certain types of control room tasks increases if the operator has some knowledge of the functioning of the process.'' [Ivergard89]
Improving human performance is most often tied to motivation, training, and education, and again, there is woefully little of this in the information assurance area. Educational institutions do not currently provide the necessary background to make training easy, [Cohen94-2] and existing training programs in information assurance are not widely incorporated in the military. These areas must be addressed if DISA is to provide information assurance for the DII.
In order for the DII to react properly to malicious disruption, it must be able to prevent disruptions where possible, and detect and respond appropriately to disruptions when prevention is not possible. In plain terms, the operators of the DII must be able to manage the damage. During periods of substantial disruption, there are likely to be more tasks to perform than bandwidth available to perform them. In an economic model of a high demand, low supply situation, the value of services naturally increases, and usage decisions change to reflect the relative values.
\ORG/ should prepare, for Joint Chiefs of Staff (JCS) approval, an analogue of this economic theory for warfighting priorities so that, as the network manager, DISA can design a priority assessment and assurance scheme under which the value of information passed through the degraded DII is higher per bit than that passing through the non-degraded DII. The JCS needs to specify metrics for, assess the value of, and assign priority to information as a function of its value at that time, and the DII must use these metrics to prioritize its behavior. A sound start in this area could be achieved by developing a military version of the commercially oriented ``Guideline for Information Valuation''. [ISSA93]
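The value-per-bit idea above can be sketched as a simple triage rule: under degraded bandwidth, send the highest value-density traffic first. The message names, values, and sizes below are illustrative stand-ins for the JCS-specified metrics, not actual priority assignments.

```python
# Value-per-bit triage sketch: given messages with an assigned value and
# a size in bits, fill the degraded link with the densest traffic first.

def triage(messages, capacity_bits):
    """messages: list of (name, value, bits). Returns the names selected,
    greedily by value per bit, within the bandwidth budget."""
    ranked = sorted(messages, key=lambda m: m[1] / m[2], reverse=True)
    sent, used = [], 0
    for name, value, bits in ranked:
        if used + bits <= capacity_bits:
            sent.append(name)
            used += bits
    return sent

# Illustrative traffic mix under a 9,000-bit budget.
msgs = [("logistics-update", 10, 5000),
        ("targeting-data",   90, 3000),
        ("status-report",     5, 8000)]
selected = triage(msgs, capacity_bits=9000)
```

Under this rule the average value per bit actually carried by the degraded link is higher than in normal operation, which is exactly the property the recommendation calls for.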
If the priority assessment scheme is not a fully automatic process, the DII may have a profound problem in reacting in a timely fashion. The first problem is that if people have to react, they are inherently limited in their reaction time. If the attack is automated, and people's reaction times limit the defense, it may be possible to design attacks that vary at a rate exceeding the human ability to respond. A knowledgeable attacker who understands reflexive control may exploit this to create further disruption by misleading the people into reflexive response, and exploiting those responses to further the attack. [Giessler93] A fully automatic response may have similar reflexive control problems except that it is potentially more predictable and normally far faster. This is where design flexibility must also come into play.
Information assurance issues must be flexibly prioritized and adapted as needed in order for the DII to behave properly over the range of operating and disrupted conditions. The metrics associated with information should be evaluated differently in different situations, and should include such factors as time, value, criticality, locality, and redundancy. Each of these values should have an effect on the manner in which the DII prioritizes activities, while each should be controlled by different mechanisms to assure that an attacker cannot circumvent a single mechanism and exploit this to dominate activities.
Even in the most dire of circumstances, unconditional pre-emption should not be the method of choice for prioritizing scarce services. The problem is that pre-emption results in service denial for the pre-empted, and if the assessment of priorities is not accurate, it may be highly desirable to apply some, albeit reduced, bandwidth toward all legitimate needs. It would be preferable to have a scheme whereby higher priorities have a higher probability of domination of resources at any given time, but over any significant period of time, even the lowest priority process has a reasonable expectation of some limited service. This concept is often called `graceful degradation'.
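The probabilistic scheme described above resembles what is sometimes called lottery scheduling: each priority class holds tickets in proportion to its priority, so high-priority traffic usually wins any given draw, yet even the lowest-priority class retains a nonzero expectation of service. A minimal sketch, with illustrative class names and ticket counts:

```python
# Lottery-scheduling sketch of graceful degradation: ticket-weighted
# random selection dominates toward high priorities without ever
# starving low ones.
import random

def pick(classes, rng):
    """classes: {name: tickets}. Draw one winner, ticket-weighted."""
    total = sum(classes.values())
    draw = rng.uniform(0, total)
    for name, tickets in classes.items():
        draw -= tickets
        if draw <= 0:
            return name
    return name  # fallback for floating-point edge cases

rng = random.Random(42)                       # fixed seed for repeatability
classes = {"command": 80, "logistics": 15, "admin": 5}
wins = {name: 0 for name in classes}
for _ in range(10000):
    wins[pick(classes, rng)] += 1
# Over many draws, service shares approach the 80/15/5 ticket ratios,
# but every class, including "admin", receives some service.
```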
A more fundamental issue that must be resolved is how to prioritize between the basic information assurance measures. If it is better to have wrong information than no information, then availability is more important than integrity. If it is better to have no information than wrong information, then integrity is more important than availability. The former appears to be the case from a standpoint of infrastructure recovery, where even low integrity information may assist in service restoration. The latter appears to be more appropriate when making strategic or tactical decisions where a decision based on corrupt information can be fatal.
In most modern databases, it is a simple matter to make undetected modifications. Whereas an outage would be noticed and cause a response, and modern database techniques detect inconsistencies in a database, there is no protection provided in most modern databases for erroneous data entered through the legitimate database mechanism or malicious modification by a knowledgeable attacker. Subtle corruptions typically produce a different sort of failure, such as a missile defense system detecting hostile missiles as friendly, or an airplane flipping upside down as it enters the southern hemisphere. In DoD logistics, command and control, and medical databases, such an error can not only be fatal, but can cause the DoD's automated information systems to be used as a weapon against it.
Prioritization in the DII will involve both communication and computation, and the prioritization schemes must meld together in a suitable fashion across these boundaries. Furthermore, many of the computation components of DII will not be under the operational control of DISA. For example, embedded systems interacting with the DII will have to interact in specific ways in order to assure that no mismatch occurs, and the DII will have to be able to deal effectively with intentional mismatches created to disrupt interaction between communication and computation resources.
Most current network protection strategies are based on the concept that all of the systems in the network behave properly, and many local area network protocols are based on well behaved hardware devices and software products in all of the nodes. When connecting these networks to global systems, imperfectly matched protocols or processes can snowball, causing widespread disruption. The priority assessment scheme must not be based on trusting the network components and must be designed to detect and react properly to limit the spread of network-wide disruptions regardless of their specific characteristics. There are some theories for addressing protocol inconsistencies, but new basic understandings are needed at the process and infrastructure levels. \ORG/ must promulgate standards that provide assurance based on the assumption of malicious components, and not based solely on lists of known attacks.
Information workers cannot be expected to react properly in combat unless they are properly prepared for defensive information warfare. This involves several key actions:
In the long term, education and training for defensive information warfare must rest upon a well conceived, articulated, implemented, and tested body of strategy, doctrine, tactics, techniques, and procedures. In turn, this body of knowledge must be based, in large measure, on a fairly detailed knowledge of the offensive capabilities available to potential adversaries and the nature of possible attacks on the information infrastructure. In the short term, however, there are several actions that should be undertaken to mitigate disruptions of the information infrastructure.
As a first priority, \ORG/ should make everyone associated with the operation, management, and maintenance of the DII familiar with the concept of information assurance and the nature of likely disruptions, and should ensure that they undergo regular training and awareness drills to reinforce this familiarity. Primary emphasis should be given to proper prevention, detection, differentiation, warning, response, recovery, analysis, and improvement.
The operators of the elements of the DII must be trained to consider, as a matter of course, the possibility that there are hostile disruptions being undertaken, and that the DoD is unaware of them. Without awareness, advanced training, and education, the human elements of the DII are unlikely to be able to detect attacks unless and until advanced technology-based warning enhancements are implemented. Even then, awareness, advanced training, and education play a vital role in installing, maintaining, and using the automation.
As a second priority, \ORG/ should ensure the provision of similar training and awareness to DII users. While this training may be more narrow in scope, it is essential that the users of the DII be aware of the information assurance issues, how their function can be impacted by DII disruption, what they should do to avoid causing disruption, and what they should do in the event of disruption.
The Defense Agencies, CINCs, and Military Services should make extensive use of simulation capabilities in training individuals and units. This training should be reinforced through the conduct of frequent readiness drills and exercises. These drills and exercises may initially be conducted as stand-alone events, but must eventually be integrated into command post and field exercises involving the forces that use the information processed and disseminated by the DII.
\ORG/ should undertake efforts to include information assurance in the curricula of technical and professional courses of instruction offered throughout the DoD. Information assurance should be embedded in all courses related to information systems, sciences, and management, and courses concentrating on information assurance should be offered as a part of the required curriculum for military students concentrating on computer or information science or engineering.
Another area to which these results can be readily applied is the current effort to implement the NII. There is considerable overlap between the DII and the NII. Both depend in large part on the public switched telecommunications network and on the telecommunications and computer manufacturing sector. The primary difference is one of focus; the DII is focused on the national security mission and the NII is focused on national economic progress, itself an element of national security.
``The benefits of the NII for the nation are immense. An advanced information infrastructure will enable US firms to compete and win in the global economy, generating good jobs for the American people and economic growth for the nation. As importantly, the NII can transform the lives of the American people - ameliorating the constraints of geography, disability, and economic status - giving all Americans a fair opportunity to go as far as their talents and ambitions will take them.'' [Brown93]
But this will only be true if the NII can get the right information to the right place at the right time. Recent studies have shown that US industries lose billions of dollars per year because of disruptions in their information systems, [Ballou92] [Crockett90] and the loss is increasing year by year.
``In addition, it is essential that the Federal government work with the communications industry to reduce the vulnerability of the nation's information infrastructure. The NII must be designed and managed in a way that minimizes the impact of accident or sabotage. The system must also continue to function in the event of attack or catastrophic natural disaster.'' [Brown93]
The US economy now depends for its very survival on the information infrastructure. With the inclusion of new services including national health care, access to state and local government information, financial records, and health records, under the promise of the NII, that dependence will grow.
As a nation, the US not only gets involved in military struggles with other nations, but with the emergence of a global economy, the US is in a constant economic struggle with the rest of the world. Even though economic opponents may not be as likely to use physically destructive methods to win the economic war, they already use information weapons against us, and are increasingly pursuing national policies to this end.
Many of the same techniques that will provide information assurance to the DII will be directly applied to the NII to help assure the availability and integrity of the national infrastructure. Just as standards for secrecy have been promulgated to industry, it is likely that the standards for information assurance applied to the DII will become de facto industry standards, and will have a positive impact on national competitiveness for many years to come.
Without careful analysis, it would be easy to bankrupt the Department of Defense in an attempt to `armor-plate' the DII with ad-hoc after-the-fact enhancements. For example, according to industry sources about 20% employee overhead is required for systems administration of integrity protection in a typical banking operation. If the DoD were to add 20% to all staff that use computers just to maintain integrity, the cost would run into billions of dollars per year, and this would not provide availability of services or cover the overall integrity of the DII. \ORG/ should undertake a careful analysis to determine the cost-effectiveness of information assurance techniques on a class-by-class basis. This effort should be undertaken at the earliest possible time in order to afford the greatest cost savings.
There is a great deal of historical data that strongly supports the contention that the DoD should spend money on information assurance now rather than waiting until the DII is widely implemented and operational. Many experts in information protection indicate that after-the-fact protection is much less effective, much more expensive, rarely adequate, and hard to manage. The data from several significant studies indicates that the costs associated with addressing information assurance now may be as much as several orders of magnitude less than addressing it once the integrated DII is widely operating. (See note 7)
Most DoD legacy systems were not designed to provide information assurance in an environment like the DII. Substantial data supports the conclusion that the costs of retrofit for information integrity in most DoD legacy systems would be a factor of 100 more than it would have been during the original system specification. [Boehm81] This implies that for the same cost as providing information assurance to one legacy system, the DoD can provide information assurance to 100 systems of the same scale now in the specification phase. A conclusion of this study is that except in situations where a high cost retrofit is deemed vital or low-cost enhancements are possible, automated information assurance features should only be implemented by altering the specifications and designs of systems still in development, and by implementing network-based information assurance that can cover numerous legacy systems at reasonably low cost.
Under this plan, automated information assurance features would be phased in over a 5-10 year period, based on normal system replacement cycles. Substantial immediate improvement will be attained by implementing network-based protection features and training DoD's information workers in defensive information warfare, and over the long term, information assurance will reach desired levels at reasonable cost. This time lag in technical enhancement will also give \ORG/ time to sponsor much needed research and development that will lead to far better and more cost effective information assurance technologies than those available today.
As this study pointed out earlier, designing `perfect' information assurance for the DII is infeasible. In the opposite extreme, providing minor information assurance enhancements can be quite inexpensive, even in legacy systems. For example, adding a cryptographic checksum to database records to assure that they have not been externally tampered with costs almost nothing, and substantially mitigates risks from all but the most serious attackers. An important subject for further study should be determining the `knee point' in the cost vs. protection tradeoff for both legacy systems and systems still in the design phase. By doing this analysis, the DoD will be able to implement the most cost effective protections first, and only implement very expensive and marginally beneficial enhancements in cases where very high integrity and availability requirements are called for.
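The per-record cryptographic checksum mentioned above can be sketched with a keyed hash (HMAC) over each record's fields: verification fails the moment any field is altered outside the legitimate mechanism. The key, record layout, and field names below are illustrative; in practice the key would be managed outside the database itself.

```python
# Per-record cryptographic checksum sketch: an HMAC over a record's
# fields detects external tampering at negligible storage cost.
import hashlib
import hmac

KEY = b"example-key-kept-outside-the-database"  # illustrative key only

def seal(record: dict) -> str:
    """Compute a keyed checksum over the record's sorted fields."""
    data = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hmac.new(KEY, data.encode(), hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    """Constant-time comparison of the stored tag against a fresh seal."""
    return hmac.compare_digest(seal(record), tag)

row = {"item": "spare-part-7", "qty": 40, "depot": "east"}
tag = seal(row)
ok_before = verify(row, tag)

row["qty"] = 400            # simulated malicious modification
ok_after = verify(row, tag)  # tampering is now detectable
```

This defeats tampering that bypasses the database's legitimate interface; an attacker who also controls the key or the verification path requires stronger measures, which is where the cost-versus-protection knee point analysis applies.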
Based on these cost factors, it is the conclusion of this study that the most cost effective overall approach to providing information assurance in the DII will be to:
The cost to the US of a DII with inadequate information assurance that sustains significant battle damage in a war can be as high as military defeat. But the cost of implementing information assurance frivolously could bankrupt the nation. The DoD must make prudent financial decisions about information assurance, while implementing as much cost-effective protection as feasible over a reasonable period of time.