Case Studies

The rather extensive case studies are summarized in the body of the text. The main bodies of the larger reports are included here for completeness, to more thoroughly support the conclusions, and to give deeper insight to readers interested in the details.

Case Study 3: XYZ Corporation

The Site Visit

The site visit was used to interview key members of XYZ staff and to personally view the operations. The remainder of this assessment is based on this single visit, and its accuracy and completeness are limited by that constraint.

The site visit involved interviews with several different groups of staff members, walkthroughs of several key areas and typical workspaces, and minimal off-hours investigation.

Comments on the facts gathered during the site visit are indicated in specially formatted areas such as this. These comments would not normally be included in a report to a client, but they should give the reader a sense of what goes through the analyst's mind while listening to this information.

Operations Overview

XYZ Corporation has about 100,000 full-time employees.

Information systems at XYZ Corporation consist of approximately:

50,000 Unix Workstations
100,000 PCs running DOS
50,000 terminal emulators
1000 External client dial-in systems
10,000 Authorized at home dial-in systems

This is probably far too many home dial-in systems for an operation of this sort, and it may make effective protection very expensive.

Physical security is staffed by a poorly paid and transient outside guard force directed by an internal staff member.

Low-cost guards are susceptible to bribes, tend to be transitory, and usually aren't willing to risk much for information protection. They tend to miss work, miss scheduled walkarounds, do not remain very alert when performing their duties, rarely notice and report minor inconsistencies, and are not highly motivated. It is also easy to get hired into such a position with minimal background checks, and once in place, it is easy to get assignments guarding high-valued information assets.

Information Technology (IT) staff accounts for about 12,000 full-time members (about 12 percent of the total work force).

There is no comprehensive and well-defined corporate information protection policy. It is the feeling of staff members that the general corporate policies regarding honesty and integrity are adequate to cover this issue.

This is a typical situation that is generally perceived as unimportant when, in fact, it is a key element in detecting a more serious protection problem. Not only is there no comprehensive policy, but people think they are covered by some other policy. The effect is that protection falls between the cracks, and this misperception indicates just how inadequate the training and education process in the organization is.

There are no training or awareness programs in information protection, and protection is currently handled by line managers who independently make piecemeal improvements as they determine a need, using their own budgets.

Not surprisingly, there is no training or awareness program and responsibility is pushed to the lowest levels of management. This is a strong reflection on the lack of policy.

Internal audit covers some top-level elements of the IT department, but only performs an in-depth protection audit when specifically requested and paid for by a line manager. This is rare indeed. For example, most systems generate no audit trails; of the systems that do generate audit information, almost none are periodically examined; and except for the electronic funds transfer system, none are examined or analyzed in real time.

Typically, internal auditors are understaffed for this sort of work, and should not be tasked with it, but we see that nonauditors don't appreciate this and expect auditors to do tasks normally associated with systems administrators. The auditors are likely to expect the systems administrators to do this, and so this vital protection issue slips through the cracks.
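Even without dedicated audit staff, a minimal automated pass over authentication logs would close part of this gap. The following is a sketch only; the log format and the `FAILED_LOGIN` event name are assumptions for illustration, not the format of any system at XYZ:

```python
from collections import Counter

def flag_failed_logins(log_lines, threshold=5):
    """Count failed-login events per user ID and return any user
    at or above the threshold -- a minimal periodic audit pass."""
    failures = Counter()
    for line in log_lines:
        # Assumed record format: "<timestamp> <event> <user>"
        parts = line.split()
        if len(parts) == 3 and parts[1] == "FAILED_LOGIN":
            failures[parts[2]] += 1
    return sorted(u for u, n in failures.items() if n >= threshold)

sample = [
    "1994-01-03T02:14 FAILED_LOGIN jsmith",
    "1994-01-03T02:15 FAILED_LOGIN jsmith",
    "1994-01-03T02:16 FAILED_LOGIN jsmith",
    "1994-01-03T02:17 FAILED_LOGIN jsmith",
    "1994-01-03T02:18 FAILED_LOGIN jsmith",
    "1994-01-03T09:00 LOGIN mbrown",
]
print(flag_failed_logins(sample))
```

Even a daily run of something this simple would detect the kind of repeated probing that currently goes entirely unexamined.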

About one quarter of XYZ Corporation staff members are replaced each year. Many of the employees with major responsibility for information protection have been on staff for less than two years.

This is not good. Financial industries have high turnover rates, but it is important that people tasked with protection be longstanding employees because it makes it more difficult for outsiders to penetrate the layers of protection by getting and holding positions of increasing responsibility. If you are willing to risk 10 years in prison to take a few million dollars, why not spend a few years as an employee, get paid along the way, and then take tens of millions of dollars?

XYZ corporation currently uses consultants for much of its information technology work, and is rapidly increasing its use of outside resources for this purpose.

There is a major movement toward outsourcing information technology today, but there is also a great risk in that it places people over whom you have no direct control in a position to cause great harm.

Group 1

The physical security force consists of a contract guard force and is directed by a single responsible staff member. This staff member sets and implements policy, which he states as protecting facilities. Equipment is only peripherally involved. The value of information is ignored by the guard force.

Individual staff members should not be setting policy on their own. Policy must come from top-level management. Protecting buildings is not an adequate policy for protecting valuable information assets, and this is reflected in the lack of attention to the value of information.

There are no written physical security standards or procedures. Informal and undocumented standards are said to exist, but differing actions of different guards in response to the same events indicate that the standards are inadequate to the task.

The guards each take their responsibilities differently, so that with minimal probing, it is simple to find guards who will allow or even facilitate entry and exit.

In most locations, the guards are only concerned with perimeter security. Although there are walkthroughs at night, there is no specific notion of what to look for in these walkthroughs and since employees often work at all hours, the guards don't react inquisitively to people sitting at desks in the middle of the night typing into computers.

The guards ignore protection of wire rooms, computer disks and tapes, temperature and environmental controls, and other special facilities or components that are important from an information protection perspective.

Some outside doors have card-key or coded entry pad access, but almost every employee knows the entry codes and during the day, these doors are almost always left propped open to facilitate traffic.

Anyone in a decent looking suit who acts like they belong can come and go unnoticed.

No theft prevention devices are used.

Since entry is not prevented, no physical security is used for computing facilities, and no theft prevention devices are used, it would be simple to enter a facility, place several backup tapes in your pocket, and leave unnoticed. Similarly, it would be easy to enter a facility with a floppy disk (perhaps containing a program to initiate communications with the Internet or a custom-made virus intended to disrupt the global network), place the disk in a networked computer, run the program, and walk away with the disk. The damage could be severe and protracted, and the attacker could go completely unnoticed.

The staff member tasked with physical security responsibility is apparently unaware of any incidents of physical intrusion or theft involving information systems.

Other staff members reported several minor incidents to us. This indicates that the physical security team is out of touch with the rest of the organization and that there is inadequate collection and reporting of data in this area. This is probably a reflection on inadequate policies and procedures.

There are no testing programs for guards or physical security systems, and no alarms to detect illicit entry.

This reflects a lack of protection policy which should require regular testing of all components of the protection system.

Sweeps for physical intrusion and bugging devices are not performed and there is no consideration of emanations security.

Since these sorts of attacks are common against high-value targets, it would be advisable to have regular sweeps of select areas where key meetings are held.

Pay scales for guards range from $6 to $12 per hour, and the guard force has a turnover rate of about 50 percent per year.

Bribery is almost certain to work against some of these employees, and resentment is likely. Consider that they are guarding the offices of people who, in some cases, make more money each day than they do in a year.

During the site visit, we were able to enter facilities without badges or passes by simply walking in next to employees. We were never questioned when we entered an employee's desk area during lunch and typed commands on their computer, which was logged in. Once inside the outer perimeter, no guards were seen, and access to areas such as the mail room and rooms containing file servers went unchallenged.

This seems to be a reflection of a general lack of concern for protection that is deeply embedded in the corporate culture.

Physical security is funded on a per square-foot basis, and no priority is assessed to the value of the assets being protected in the assignment of protection to those assets.

This is a very common situation and is incongruous with rational allocation of resources in an information intensive environment.

Line managers dictate physical security requirements in their own areas. The cost for physical security is also budgeted out of a line manager's operating budget.

The natural result is that physical security is inconsistent and there is a reward for inadequate physical security in the sense of increased budget for other operational requirements. Since the security of each area depends on the security of surrounding areas, those who spend less lower the level of protection for neighboring areas and get some benefit from neighboring areas with increased protection.

Physical security is generally viewed as a service organization that is supposed to react to incidents and not try to prevent them.

The reactive approach in physical security presumes that incidents are detected, but in practice, such attacks as the placement of listening devices and the tapping of cables are rarely detected without an explicit search. For example, almost none of the listening devices commonly used by law enforcement are ever detected.

The personnel department and physical security department don't communicate effectively, so an ex-employee's physical access is typically unrestricted for a substantial period of time.

This is a symptom of inadequate protection management.
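A simple periodic reconciliation between the personnel department's active-employee list and the badge system would surface this problem quickly. The following sketch assumes, purely for illustration, that both departments can export plain lists of employee identifiers:

```python
def stale_badge_holders(active_employees, badge_holders):
    """Return badge holders who no longer appear on the personnel
    department's active-employee list, so their physical access
    can be revoked promptly."""
    active = set(active_employees)
    return sorted(person for person in badge_holders if person not in active)

# Illustrative data only
hr_list = ["alice", "bob"]
badge_list = ["alice", "bob", "carol"]
print(stale_badge_holders(hr_list, badge_list))
```

The point is not the code but the process: running such a comparison weekly would replace the current "substantial period of time" with a bounded, auditable delay.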

Physical security receives reports of about 25 stolen laptop computers and 50 other stolen hardware components per year. No follow-up investigation is done and no suspect has ever been arrested.

Apparently, there is not even a deterrent effort. One wonders what the physical security force does.

Group 2

Data is transferred to clients via direct-line dial-out from PCs using standard modems. This is implemented by users with almost no knowledge of information protection who must respond to the promises of salespersons. There is a general lack of control over this sort of process.

This is not likely to be very important because the employees sending information out have legitimate access to the information. As a mechanism to leak information, this is probably less effective than hand carrying a backup tape out of the facility.

Poor passwords are common, passwords are commonly placed on computer screens, and the Crack password guessing program commonly used by attackers guesses 75 percent of passwords in a few hours.

This indicates a lack of training and education, a lack of effective protection management, and a general disregard for corporate information assets.

Viruses infect local area networks about once per week. So far, they have all been cleaned up by commercial products and they are more of a nuisance than anything else.

This indicates very poor controls and inadequate user awareness. Even a moderate effort should reduce this rate dramatically. The idea that they are only a nuisance indicates that the people charged with protection are unaware of the serious nature of this threat.

Group 3

The corporate computing environment consists of:

• A mainframe computer that clears all financial transactions worldwide. The mainframe uses the RACF security package to enhance operating system access controls.

Centralized systems tend to lead to severe damage during downtime, but are easier to maintain than a distributed version of the same thing. For a large financial institution highly dependent on computers, it would seem appropriate to have a backup site in a physically secure location. RACF is highly vulnerable to attack, has minimal integrity and availability controls, and has many widely published weaknesses.

• A global network provides voice, data, and video communications over leased lines between offices in key cities throughout the world.

This has the potential for abuse because it tends to distribute trust very widely. With billions of dollars at stake, there had better be some additional coverage for this network.

• 50,000 UNIX workstations run a variety of applications. About 1 percent of these systems are file servers, and many of the file servers also act as gateways between network segments.

There's nothing fundamentally wrong with this configuration, but the use of file servers as gateways introduces a lot of potential for harm unless they are properly secured.

• 100,000 DOS-based PCs are in place. About 90 percent are networked to Novell file servers, and about 10 percent are used as terminal emulators to access mainframes.

All of these systems are almost certainly vulnerable to viruses and all manner of insider attacks, and if any of them are connected to the outside world, they may all be vulnerable to external attack.

• 60,000 terminal emulators are connected via the network or telephone lines to the mainframe.

These are of special concern because the mainframe is responsible for all of the corporation's financial assets and its protection package has known weaknesses. The sheer number of access points leads to concerns that auditing and detecting abuse are likely to be inadequate.

• An unknown number of modems are connected through central sites and auxiliary ports on computers throughout the corporation.

Unknown numbers of external connections spread throughout the corporation indicate a severe lack of control. It is likely that many of these access points are being probed for weaknesses and that, if weaknesses are found, they are being exploited on a regular basis. It is very hard to audit and detect abuse on connections when you do not even know they exist.

• Several ISDN and T1 connections link networks at client and user sites to XYZ Corporation's global TCP/IP network.

This extends the protection requirement to all of the interconnected systems at all of these sites. There should almost certainly be a secure gateway used to connect all of these sites in order to prevent the extension of global privilege to these locations. A comprehensive real-time auditing and audit analysis capability should also be used for these connections.

About 60,000 people at XYZ Corporation have user identities, but the total number of user identities reported by systems administration is several times this many. Apparently, the same user gets different identities on different systems throughout the corporation.

This indicates poor control and inadequate central administration. Multiple user identities also introduce numerous entry points for attackers, many of which are probably not used very often by their legitimate owners.
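One way to begin regaining control is to correlate account records across systems and flag people holding multiple identities, since each extra identity is an extra entry point. The sketch below assumes, for illustration only, that account records can be exported as (owner, system, user ID) tuples:

```python
from collections import defaultdict

def owners_with_multiple_ids(records):
    """records: iterable of (owner, system, user_id) tuples.
    Return owners holding more than one distinct user ID across
    the organization -- candidates for consolidation or review."""
    ids = defaultdict(set)
    for owner, system, user_id in records:
        ids[owner].add(user_id)
    return sorted(owner for owner, id_set in ids.items() if len(id_set) > 1)

# Illustrative data only
records = [
    ("smith", "payroll-sys", "jsmith"),
    ("smith", "trading-sys", "smithj"),
    ("brown", "payroll-sys", "mbrown"),
]
print(owners_with_multiple_ids(records))
```

Rarely used duplicate identities found this way could then be disabled, shrinking the attack surface without disrupting legitimate users.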

XYZ Corporation currently allows dial-in by vendor maintenance personnel and has no special mechanism to control this activity.

This is a very common entry point for attackers. For example, most PBX systems and timesharing systems can be easily taken over by well-publicized methods over maintenance channels. Furthermore, the privileges available from maintenance channels are generally sufficient to do a great deal of harm in a very short time.

Electronic mail is used throughout the organization for internal memos, altering policies, making hiring and firing decisions, and most other aspects of business operations. This includes changing access to information systems.

Electronic mail is very handy, but it is also very easily forged and examined in most current environments. Without a secure mail system, it is easy to forge memos, examine mail traffic between specific parties to track corporate strategy and tactics, look for specific sorts of memos to learn about potential scandals, and disrupt operations by misdirecting or relabeling mail.

When connecting to the mainframe computer and UNIX machines, a minimal notice that unauthorized users are not permitted is placed on the login screen, but no other information or technique is currently used to notify or discourage potential attackers or legitimate users of restrictions or responsibilities. Remote logins proceed as plaintext transactions containing a user ID and password, which are normally used for all access by an individual throughout the organization.

In New York State (where most financial institutions have major offices) notice is required if civil cases against attackers are to proceed. The lack of adequate notice makes many forms of entry impossible to successfully litigate. Also, the use of plaintext passwords from remote sites opens them up to attack. Additional precautions may be called for.

XYZ Corporation is connected to the Internet by way of Unix-based computer systems designed to act as gateways between the Internet and XYZ's internal computing facilities. These systems allow bidirectional computer mail, Telnet terminal access, and File Transfer Protocol (FTP) from XYZ Corporation to other Internet sites. They are not supposed to allow incoming access other than via electronic mail.

Unfortunately, this sort of protection is rarely done correctly, and requires at least one full-time expert in order to operate the gateway with reasonable protection. For example, the FTP protocol can often be exploited to allow other activities, electronic mail can be used to send in an attack which then opens a channel for external telnet access, and the sorts of activities performed over this channel for such an organization rarely justify its unlimited access to internal networks.

PCs are operated in Novell Netware networks and as standalone computers. There is no added protection on most PCs, but there are site licenses to Norton Antivirus, McAfee Antivirus, and Norton Encryption, none of which are used very uniformly or updated often. No evaluation process was undertaken to determine the effectiveness of these products when purchased.

Many big companies purchase from major providers of anti-virus software without realizing that this is very risky. The first problem is that many of the attackers writing viruses explicitly design them to bypass the most popular products. Another problem is that in large organizations, there is a higher probability of entry by viruses, and thus it is more likely that viruses undetected by these packages will enter and spread unnoticed. A third problem is that in large financial organizations, there is enough potential for financial gain to warrant writing a custom virus for a specific attack, and these products are useless against such an attack. In some cases, these defenses actually spread viruses faster than they would spread without these mechanisms in place.

There is an effort to provide remote-control over networks to LAN servers and PCs. Some of these systems are already operated on a remote controlled basis without any special protection to prevent active tapping. LANs are uniformly used throughout the corporation with no special protection.

Remote control is a very interesting feature, but the potential for abuse is staggering. In essence, this permits any other user to act as if they were sitting at your computer.

XYZ Corporation implements databases on UNIX-based machines and PCs using an off-the-shelf database product. In the case of UNIX-based machines, file servers operate as database servers.

File servers are now seen to perform network gateway services, file services, and database services. This introduces further dependence on these few computers. A way to exploit this might be to overload the file server with unimportant requests, thus tying up the gateway and disrupting database access. If this database is like most, data integrity will suffer as distributed locking mechanisms time out and computers attached to these databases may not even report these errors or perform retries.

Servers are rebooted monthly to force users to reauthenticate. Users are unwilling to enter a password to use a system and request staff members to disable screen lockout programs requiring password entry.

Clearly, this environment permits any insider to access another user's account. This is probably viewed as unimportant because, despite the longstanding security statistic that insiders are more of a threat than outsiders, most people don't believe their fellow employees launch attacks. On the other hand, it also gives maintenance people and guards who are hired by outside firms unlimited access during off hours. Perhaps even more importantly, this environment displays a lack of concern about information protection.

The mainframe is used to store financial transactions and to allow employees and clients to execute transactions. Mainframe data is controlled by a commercial database product whose protection is inadequate and which the staff does not depend on. The mainframe operates under the RACF security package, which is used to protect the libraries used to generate and manipulate application data. Prepackaged software is used to perform database structural modifications. This software includes an automatic test and installation facility that detects errors in field use and other similar problems.

Upon detailed examination, this test system is likely to be inadequate for protection purposes. It probably only verifies that tables are properly accessed and not that the proper operations are performed on them or that only authorized programs manipulate them.

Applications are written in a special language. Several thousand applications programs exist and all requests to add or modify libraries are subject to approval. Backout mechanisms exist for program changes. Ad hoc testing is performed by IT staff responsible for program alterations, and a general attitude of quality assurance is a part of the data center environment. No other change control is in place.

Accurately controlling several thousand programs requires a substantial effort and adequate controls had better be in place to assure that the proposed changes match the actual changes. Because of the high value of the data and the ease of attack from an authorized program, it is vital to have a program of strong change control, but they apparently have none. Ad hoc testing is not adequate for this type of application. At a minimum, there should be a very well structured testing program including regression analysis.
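A minimal form of change control is to verify that what is installed in production matches what was approved, for example by comparing cryptographic digests of program files. The following is a sketch under stated assumptions: the use of SHA-256 and the record layout are chosen for illustration, not drawn from XYZ's environment:

```python
import hashlib

def unapproved_changes(approved_hashes, installed_programs):
    """approved_hashes: {program_name: SHA-256 hex digest of the
    version that passed change approval}.
    installed_programs: {program_name: bytes actually in production}.
    Return program names whose installed bytes do not match an
    approved digest -- including programs never approved at all."""
    flagged = []
    for name, blob in installed_programs.items():
        digest = hashlib.sha256(blob).hexdigest()
        if approved_hashes.get(name) != digest:
            flagged.append(name)
    return sorted(flagged)

# Illustrative data only
approved = {"payroll": hashlib.sha256(b"approved v1").hexdigest()}
installed = {"payroll": b"approved v1", "ledger": b"unreviewed code"}
print(unapproved_changes(approved, installed))
```

Run nightly across the several thousand application programs, a comparison like this would guarantee that proposed changes and actual changes cannot silently diverge.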

There have been several thefts of laptop computers and other computer hardware, including the theft of several backup tapes. Other thefts may be taking place, but it is impossible to tell because most small equipment is purchased under line manager budgets and there is no comprehensive inventory control that crosses organizational boundaries.

The lack of a good inventory control system means that there could be systematic attacks underway that go undetected because there is no method to correlate them.

This group is aware of many protection problems and has requested funding from management to correct the known deficiencies, but to date, no funding has been forthcoming. Furthermore, they are generally too busy keeping systems operational to spend the time required to implement information protection.

One of the things this organization doesn't seem to realize is that by spending more time on protection, they will save time spent in keeping systems operational. For example, a reasonable change control system would make emergency change backouts rare instead of common, and good protection management would eliminate a lot of extra work required to administer thousands of systems independently.

The staff is aware of many of the current protection problems but has no resources to pursue solutions. Poor password selection is a common problem and no enhancement has been made here either. In a simple test using a public domain password guessing program, over three-fourths of the passwords were guessed within only a few hours. About one-fourth of the passwords were the user ID, the user's first or last name, or their employee number.

There is a pattern here indicative of sloppy overall controls, lax policy and procedure, and poor management of the protection function. The password examples are typical.
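Weak passwords of the kind found here can be flagged automatically without running a full cracking tool. The sketch below checks each password against the obvious guesses noted above; the account-record field names are assumptions for illustration:

```python
def weak_passwords(accounts):
    """accounts: list of dicts with keys user_id, first, last,
    emp_no, password. Flag accounts whose password matches the
    user ID, the user's first or last name, or the employee number
    (case-insensitively) -- the failures observed in the test above."""
    flagged = []
    for a in accounts:
        guesses = {a["user_id"].lower(), a["first"].lower(),
                   a["last"].lower(), a["emp_no"].lower()}
        if a["password"].lower() in guesses:
            flagged.append(a["user_id"])
    return flagged

# Illustrative data only
accounts = [
    {"user_id": "jsmith", "first": "John", "last": "Smith",
     "emp_no": "12345", "password": "Smith"},
    {"user_id": "mbrown", "first": "Mary", "last": "Brown",
     "emp_no": "67890", "password": "kx9#Qw2p"},
]
print(weak_passwords(accounts))
```

Such a check, run at password-change time rather than after the fact, would eliminate the one-fourth of passwords that are trivially derivable from the employee record.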

More than 500 people from all around the world perform systems administration and all of them have unlimited access to systems throughout the organization. This is done to allow remote management and because no simple technological solution is currently in place to allow better control.

Any of these people could modify the passwords on all of these systems and thus deny services worldwide for at least several hours. With a more concerted effort (e.g., typing a simple command on each of those systems), information could be wiped out worldwide in a matter of an hour or less, and it would take at least a day to get even a few of these systems operational. Massive loss of financial data could result, the organization may not be able to meet the clearing requirements for completing transactions for several days, and the long-term implications could be devastating in such an institution.

Many automated administrative functions are implemented as command scripts. These scripts are stored on file servers and used to automate mundane management tasks on many systems. These scripts are launched remotely and in many cases contain user identities and passwords required to perform restricted functions. They are commonly left readable on file servers in order to allow them to be centrally maintained and run from remote computers without the need to allow those computers special access.

These scripts can thus be read by anyone on the network and the user identities and passwords can be exploited to break into highly sensitive areas in vital databases.
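Scripts with embedded credentials can at least be located automatically so they can be secured or rewritten. The sketch below scans script text for common credential assignments; the naming convention matched here is an assumption for illustration, and a real sweep would need to match whatever conventions XYZ's scripts actually use:

```python
import re

# Assumed convention: credentials embedded as NAME=value assignments
CRED_PATTERN = re.compile(r"\b(PASSWORD|PASSWD|USER)\s*=\s*\S+", re.IGNORECASE)

def scripts_with_credentials(scripts):
    """scripts: {script_name: script_text}. Return names of scripts
    containing what look like embedded credentials, so they can be
    prioritized for remediation."""
    return sorted(name for name, text in scripts.items()
                  if CRED_PATTERN.search(text))

# Illustrative data only
scripts = {
    "backup.sh": "USER=dbadmin\nPASSWORD=secret\ntar cf /dev/rmt0 /data",
    "clean.sh": "rm -f /tmp/scratch.*",
}
print(scripts_with_credentials(scripts))
```

Finding these files is only the first step; the longer-term fix is to remove the credentials entirely in favor of restricted service accounts.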

There is no special network protection such as encryption or physical security in place.

This leaves the entire global network open to (1) observation with public domain software, (2) corruption with somewhat more effort, and (3) disruption with widely publicized techniques. Unless special precautions are taken, outsiders as well as insiders will be able to exploit this in any number of ways, ranging from theft of long haul communications services, to transfer of fungibles, to theft of confidential information.

Group 4

Group 4 provides redundant communications lines in case of failures on primary circuits, and locally manages telecommunications through a variety of PBXs.

The use of redundant communications increases resiliency against disruption, but care must be taken that the apparent redundancy is real. For example, in this case, the supposedly redundant lines are provided by the same carrier through the same cables. Any common mode failure such as a cable cut would sever the redundant pair, leaving no communication channels.

PBX break-ins occurred during the last three holiday weekends, despite the best efforts of the staff to prevent them. Nevertheless, the staff members charged with this effort are now convinced that their system will not be broken into again.

It seems almost certain that further break-ins will occur. If the last two attempts at repair were inadequate, why should we believe that they did it right this time? Upon more detailed study, it is likely that more entry points will be found.

There is no cryptographic or physical protection for voice or data lines, but staff believe that the protocols used to transmit data are so complex that nobody could ever decode the information. The main concern of the staff is the increasing use of wireless communications.

If this were true, then how would they be able to decode the information at the other end of the lines? The staff members appear to believe that because they don't know how the protocols work, nobody else can figure them out, but of course this is foolish. This demonstrates their lack of knowledge in this field and indicates a need for a strong program of education.

The major concern of this group is the broad access to data. Physical (i.e., paper-based) information is seen as most critical.

This is not a very healthy attitude in the information age. Clearly, the value of the information in electronic form is far higher than the paper information in this business.

Dial-in lines are seen as a more serious threat than other telecommunications lines, but no risk assessment has been done to associate numbers with these risks.

Without a detailed analysis, this is guesswork, and there are good reasons to believe it could be wrong. For example, if the Internet gateway were bypassed, it would provide a very high bandwidth connection to the global network.

There is concern over electronic transfers of fungibles which are easy to initiate and alter from anywhere on the internal network. There is no protection against an active tapper in the network.

Combined with the serious concerns about unlimited access to the internal network, this appears to be a very serious threat indeed.

Leased lines and microwave radio links are used for high-speed communications circuits between offices. There is no encryption or authentication on any communications line other than an authentication used for UNIX systems.

Without encryption and authentication, it may be impossible to determine if any systematic attacks are underway.

Several toll frauds that exploit loopholes in PBX systems have been successful. They were detected after the fact by the presence of large telephone bills on holiday weekends. The loss from each incident was in excess of $100,000. The detection of these incidents by excessive telephone bills indicates that other toll frauds may be ongoing at levels below the threshold of detection. For example, it would be an easy matter to take $1,000 per day during regular business hours in telephone tolls without detection.
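Frauds below the threshold of a shocking monthly bill can be caught by comparing each day's toll charges against a running baseline instead of waiting for the invoice. The following is a sketch only; the data layout and the two-times-baseline trigger are assumptions for illustration:

```python
from statistics import mean

def suspicious_days(daily_costs, multiplier=2.0):
    """daily_costs: chronologically ordered list of (day, cost).
    Flag any day whose toll cost exceeds multiplier times the
    running mean of all prior days -- a crude but cheap anomaly check."""
    flagged = []
    history = []
    for day, cost in daily_costs:
        if history and cost > multiplier * mean(history):
            flagged.append(day)
        history.append(cost)
    return flagged

# Illustrative data only
costs = [("Mon", 100), ("Tue", 110), ("Wed", 90), ("Thu", 105), ("Fri", 400)]
print(suspicious_days(costs))
```

A per-trunk or per-extension version of the same comparison would also catch the steady $1,000-per-day leak described above, since that traffic would raise one line's baseline far out of proportion to its peers.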

Group 5

Background checks are performed on all new employees to comply with corporate policy and government regulations.

This is good, but upon deeper questioning, it was determined that this does not apply to short term consultants or outsourced work. Since these comprise a substantial portion of the staff working on information technology, additional effort is required.

Employees complete a form and an outside service provider does background checks of education, credit history, and military service records over the last seven years.

It is pretty easy to create a false history for many of these checks, especially if it is planned as a means for infiltrating an organization that has enormous potential for insider financial abuse.

Employees start to work while background checks are done, and the checks are usually completed in two or three weeks. No background check has ever resulted in an employee being terminated.

It is hard to believe that a thorough background check of an average of over 20,000 people per year (given a two-year average turnover and 100,000 employees) has never come up with any case where an employee had to be terminated. Something is not right here.

All new employees are subjected to a drug screening by urinalysis. If a new employee does not pass the drug test, it is retaken. Four failed tests result in termination.

It is hard to believe that none of the 100,000 employees uses illicit drugs. Since nobody has ever been terminated, but several employees have failed the first several tests, we suspect that after a few failures, employees resign rather than face the potential of being fired.

During new employee orientation, employees are asked to sign a certificate of compliance with the corporate policy statement which mentions the value of information but has no explicit protection information. No training on information protection is performed during orientation.

Signing a compliance certification related to a general document is not a very strong assurance measure. Most people who are presented with these documents don't thoroughly read them before signing, and few ask questions about the contents. I have been repeatedly astonished to find that people are surprised when I read documents before I sign them. When asked questions, they invariably say that I'm the first person who has asked them a question, and they have to refer the document to someone else for interpretation.

Equipment signed out to an ex-employee is supposed to be returned before the final paycheck is sent and a procedure is in place to implement this policy.

The problem is that equipment is not usually signed out to an employee, so there is no tracking to assure that this procedure reflects reality.

Procedures are also used to remove access and authorization from information systems.

This directly contradicts other statements that no such procedure is in place, and it is at odds with the reports of internal auditors. It seems that the beliefs of this group do not match the reality seen by other groups. This implies a serious management problem.

No attempt has been made to link particular jobs with information sensitivity or access or to include protection responsibilities in job descriptions or personnel evaluations.

How can employee access possibly be removed when there is no linkage between employees and their information-related job functions?

Exit interviews are conducted, with particular attention paid to employees terminated for cause. According to staff members responsible for this function, there is no reason to believe that ex-employees are motivated to do harm.

History shows that employees terminated for cause are indeed likely to do harm. This indicates a lack of adequate training and education.

Group 6

The most desirable situation would be an environment in which authorized users could perform legitimate operations unhindered and unauthorized users could not. The less effort required by the user for protection, the better.

We all wish this were possible, but in today's environment, it is not.

Substantial access problems exist for staff members who travel. Entry requirements are different in different sites, and being identified as a legitimate employee sometimes presents a problem. A single universal identifier would be highly desirable.

This is another common problem. Although it is technically feasible to solve this problem, legal restrictions on the export and import of these technologies prevent universal identification without a great deal of effort.

In current database systems, access to the entire database is either permitted or denied with no middle ground for limited access.

This means that when several people use the same database, there is no way to audit or control individual behavior. For example, the database used to store sales commission data is accessible by all of the sales people, so a clever salesperson could take credit for other people's work or assign themselves select accounts.

Viruses consistently disrupted one area.

It is hard to understand why the people in this area haven't resolved this situation. It clearly indicates inattention to the problem.

A bomb destroyed offices in one city. A second bomb was found by chance at a new site.

This obviously indicates a serious concern that has to be addressed.

The personnel department does not provide data quickly enough to allow accounts to be managed properly when people change jobs or leave the organization.

The result is potential access to or modification of sensitive information by unauthorized people.

Consultants are responsible for many of the trusted systems, and there are no standards or bonding requirements placed on them.

Outsourcing is becoming popular, but the implications of this to information protection may be severe. This issue should be seriously examined.

Illegal copies of software are used throughout the organization.

This potentially introduces civil and criminal liability, and a concerted effort should be made immediately to resolve this problem. The clear implication is that the legal department is not adequately involved in protection, and that policy, standards, procedures, and training are inadequate.

Confidential information is thrown out without concern for proper destruction.

This indicates inadequate policy and procedures.

Group 7

In light of the recent World Trade Center bombing in New York, business continuity planning is now being completed for all corporate facilities worldwide.

Late is better than never.

There is substantial concern about implementing such a plan properly, since it depends heavily on the inputs from business units about what is critical to them. There is currently no way to test the accuracy of the assessments made by the people in business units.

Experience shows that this sort of activity works best in the context of an overall information protection program. In this context, more cooperation and better testing can be achieved.

Several systems have on-line audit trails implemented, but the vast majority of systems do not use the audit capabilities available to them. Audit reduction tools are used periodically to analyze the available audit data. These tools generate reports of potential violations. All current information systems have been examined at least once. No audit trail or attempt at analysis is in place in any system other than those explicitly listed in the audit reports.
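As a sketch of what such audit reduction does, the following toy filter scans raw records and reports only those matching a simple violation pattern. The record fields and threshold are hypothetical illustrations, not taken from XYZ's actual tools:

```python
# Minimal audit-reduction sketch: condense a raw audit trail into a
# short report of potential violations (here, repeated login failures).
# Field names and the failure threshold are assumptions for illustration.
from collections import Counter

def reduce_audit(records, max_failures=3):
    """Return users whose login failures reach the reporting threshold."""
    failures = Counter(r["user"] for r in records
                       if r["event"] == "login_failure")
    return [user for user, n in failures.items() if n >= max_failures]

sample = [
    {"user": "alice", "event": "login_failure"},
    {"user": "alice", "event": "login_failure"},
    {"user": "alice", "event": "login_failure"},
    {"user": "bob",   "event": "login_success"},
]
print(reduce_audit(sample))  # only alice reaches the threshold
```

The point of such tools is volume reduction: an administrator reads a handful of flagged entries rather than the full trail, which is also why only the most obvious attacks tend to surface.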

This helps address select attacks on select systems, but does not reflect an overall audit policy, probably has inadequate testing, almost certainly detects only the most obvious attacks, and will almost certainly be bypassed by most serious attackers.

User identities are audited periodically, and in every audit to date, unauthorized user identities have been detected.

It appears that systems administrators wait for auditors to detect problems and then react to them rather than proactively maintaining an adequate protection posture.

Many critical bookkeeping computers have no detection capabilities.

Modified books would not be detected unless they triggered financial changes detected by financial audit. For example, shifting profits and losses between clients or commissions between employees would not be detected unless they were so large as to attract obvious attention.

Information system related purchases are not tracked and for that reason, accurate inventory control is impossible.

Numerous incidents have been detected or reported:

• An audit for illegal copies of software in a small percentage of the organization's computer systems indicated that $10 million worth of illicit PC software is probably in use throughout the corporation. The illegal copies were removed as they were detected.
• The fact that this much illegal software was detected indicates inadequate policy, awareness, training, education, and procedures. It also introduces the potential for very large civil and criminal penalties.
• Virus incidents are detected about once per week.
• This confirms previous reports and indicates a lack of adequate attention to long-term prevention.
• An attempt to tap the CEO's PC was detected by chance.
• This indicates that a more serious problem may exist.
• A laser printer disappeared one day, and observation cameras intended to detect such behavior showed nothing.
• This suggests an insider theft.
• Password guessing is fairly common, and in at least one case an attempt was definitively determined to be an illicit attempt at access.
• This should result in serious action, but according to previous interviews, nobody has been fired for any such incident. This seems to imply inadequate personnel policy.
• There was one case of e-mail being used to get a computer operator to illicitly transfer more than $1 million. The money was eventually recovered and the person was caught.
• This clearly indicates inadequate standards and procedures.
• Fungibles are commonly sent to the wrong account. Detected errors are repaired and reported monthly to the managers in charge of those areas but no further action has ever been taken.
• This indicates inadequate integrity controls.
• Forged accounts payables have been used to move fungibles.
• Clearly, there is an insider threat.
• One copy of PC-based usage logging detected unauthorized access on the average once per month. No follow-on investigation was carried out.
• This could easily indicate a systematic attempt to examine data in information systems throughout the organization, but without more auditing, there may never be a definitive determination.
• There have been instances of terminated employees and consultants entering facilities without authorization and for unexplained reasons.
• Unexplained reasons means that they were never questioned and inadequate follow-up was done. Entry by terminated employees indicates a lack of adequate physical controls.
• Intruder incidents have occurred at several facilities. At least one entry was made through a maintenance elevator.
• This indicates inadequate physical perimeter control.
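The $10 million software estimate in the incident list above is a sampled-audit extrapolation. A minimal sketch of that arithmetic, with the sample fraction and sampled value assumed for illustration (the report gives only the final estimate):

```python
# Sketch of a sampled-audit extrapolation like the one behind the
# $10 million illicit-software estimate. The sample fraction and the
# value found in the sample are assumptions chosen for illustration.
machines_total = 150_000           # PCs plus workstations, from the overview
sample_fraction = 0.02             # "a small percentage" (assumed 2%)
illicit_value_in_sample = 200_000  # $ of illegal software found (assumed)

estimated_total = illicit_value_in_sample / sample_fraction
print(f"Extrapolated corporate-wide exposure: ${estimated_total:,.0f}")
```

The estimate is only as good as the sampling: if the sampled machines are unrepresentative, the corporate-wide figure can be badly off in either direction.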

Tapes used for server backups are stored in the same physical location as the servers they back up, and are not physically protected from theft or alteration.

This indicates a lack of adequate education, policy, procedures, and standards.

More than one quarter of policy and procedure documents are out of date at any given time.

This shows that the personnel department's attempts to get everyone to sign policy statements are not effective. Improved policies, procedures, and management supervision are required.

Group 9

On the mainframe, passwords are set to a minimum length of four characters and a maximum length of eight. Vendor-supplied passwords are left in place, and passwords are commonly programmed into hot keys.

All other mainframe security depends on this authentication process, and yet four-character passwords are used. This is the system that stores all financial data. This indicates a poor policy decision and inadequate standards and procedures.

Transfers of stocks and bonds are essentially unprotected, even though the financial values of these transactions can be as high as several billion dollars.

An outsider can guess a four-character password within only a few minutes, and is then able to transfer billions of dollars worth of stocks and bonds. This is clearly not a very good situation. It certainly indicates inadequate policy, standards, procedures, technical safeguards, training, education, and an overly lax attitude.
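The weakness of four-character passwords is a matter of simple arithmetic. A sketch, assuming a lowercase-only alphabet and an illustrative guess rate:

```python
# Why four-character passwords fail: the keyspace is small enough to
# exhaust quickly. The alphabet and guess rate are assumptions; with a
# dictionary of common passwords, success typically comes far sooner.
import string

alphabet = len(string.ascii_lowercase)  # 26, assuming lowercase-only
keyspace = alphabet ** 4                # every four-character password

guesses_per_second = 10                 # even a slow, throttled dial-in rate
hours_to_exhaust = keyspace / guesses_per_second / 3600

print(f"{keyspace:,} possible passwords")       # 456,976
print(f"{hours_to_exhaust:.1f} hours to try them all")
```

Even an exhaustive search completes in well under a day at a throttled rate; guided guessing against common passwords is what makes the few-minute attacks described above plausible.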

Users have expressed a desire for automatic terminal inactivity lockout, but security deliberately took this capability out.

Perhaps insecurity is a better name for this group.

No formal risk assessment has ever been done.

Why am I not surprised?

The information provided during the site visit contained several inconsistencies and seemingly unsubstantiated reports of incidents. In order to clarify these issues, we asked the point of contact to follow up. Every incident was denied, including those that had already been publicized in the press and reported to the police. Internal management sources assert that the denials of attacks are false.

The staff members are becoming defensive and are probably nervous about their jobs. They figure that denial of everything is the safest position.

Findings

When taken on their own, the many shortcomings identified in this report are cause for concern, and some of them justify immediate action. But when taken in combination, they reflect an overall lack of attention to an area that will likely have a great impact on the future of XYZ Corporation.

Protection Management

Protection management is inadequate. This inadequacy appears to stem from the management of information technology by line managers and a general ignorance of the issues of information protection. Even when members of technical staff reported incidents and potential problems, management did little or nothing to address these problems.

The inadequacy of protection management is reflected in many ways, but perhaps the most obvious one is the lack of an incident reporting system. The result is that even when incidents occur, the knowledge of the incident is so localized that nobody can see an overall trend. There could have been over a thousand locally detected incidents in the last year, and management would never be aware of it. Without knowledge of the nature and extent of a protection problem, it is not possible to coordinate an effective protection program.

The many lapses we found reflect most clearly on inadequate protection management. Until the protection management problem is properly addressed, great risk will remain for the organization as a whole.

The list of protection management problems is essentially endless. The only way these problems are likely to be resolved in a meaningful way is by a concerted effort by top-level management to put an appropriate management structure in place, place an appropriate person at the top of that management structure, and provide adequate independent assurance that the structure operates properly.

Protection Policy

A formal information protection policy should be established. At a minimum, this policy should include:

• Protection management personnel and responsibilities
• Individual, organizational, and corporate responsibilities
• Incident reporting and investigation requirements
• Protection requirements in all areas listed in this report
• Sanctions for policy violations
• Exceptions to the policy

Standards and Procedures

Current procedures are the product of individual initiative. They are not consistent across organizational lines, and thus are not standards. Overall standards and procedures should be developed by consolidating existing procedures where possible and developing new ones where appropriate.

In order to be effective, standards and procedures must be:

• Documented in writing
• Detailed enough to allow measurement of compliance
• Practical and understood by those who implement them
• Of real value to the business units they apply to
• Tied to protection awareness, training, and education
• Facilitated by knowledgeable and properly trained people
• Periodically reviewed and revised to meet changing situations

Emphasis on the need for standards and procedures is a management responsibility and should be implemented through the protection management structure created with the implementation of a protection policy.

Documentation

There is a notable lack of necessary documentation. The lack of documentation is evident both corporate-wide and within the business units we observed. The existence of select pieces of documentation does not reflect an overall approach to creating, maintaining, distributing, and using documentation.

At a minimum, documentation should include:

• System hardware and software configurations
• Formal risk assessments
• Software acceptance testing
• Protection countermeasures
• Protection test and evaluation
• Contingency plans
• Disaster recovery plans
• Incident response guidelines
• Software change control
• Proper use of licensed software

Documentation should be located where it will be used, clearly identified, and backed up by accessible copies in disaster recovery and regional sites.

Protection Audit

The internal audit department has made a substantial effort to regularly audit information systems despite a serious lack of adequate funding and personnel. Internal auditors have done a good job of identifying obvious deficiencies, but in order for audit to become an effective protection tool, a great deal more must be done. Internal audit had the most comprehensive list of incidents and was more aware of the overall protection posture than any other group interviewed. Auditors also had a good sense of where inadequacies existed and how they might be dealt with.

This suggests that internal auditors should play a key role in improving information protection and that they should be intimately involved in the creation of all aspects of the protection program.

External protection audit should be performed at least once per year to assure that internal audit teams are properly performing their function.

Technical Safeguards

Almost all of the appropriate technical safeguards required for effective protection in this environment are lacking. Furthermore, the environment actively promotes practices which cripple the effectiveness of existing technical safeguards. Current safeguards don't detect or prevent even a small percentage of known incidents. It is possible, perhaps even likely, that there are numerous ongoing attacks at this time and that technical safeguards are not acting to alert anyone of their presence.

Appropriate technical safeguards must be determined and implemented as soon as possible. The highest exposures, such as the ability to transfer large amounts of fungibles, should be defended against immediately, while other technical safeguards should be implemented over time in concert with newly developed protection policy.

Incident Response

Incident response is reactive and ad hoc, and this is inadequate to the task at hand. An incident response team should be put in place and centralized reporting and response should be implemented with all haste. This team should maintain a database of incidents and assure that all incidents are resolved in a timely fashion.

Testing

Testing needs to be enhanced. Although minimal testing is done according to vendor specifications, this only verifies that software is properly configured and compiled, and does not verify that it operates properly. A rigorous test program for all aspects of information protection should be instituted as part of the process of developing new protection.

Physical Protection

Time did not permit full investigation of a number of physical security and facilities issues reported by staff members; however, it is clear that physical protection is inadequate.

If physical protection is to be effective, there must be an atmosphere that challenges outsiders and an organizational will to have physical security. XYZ Corporation must evaluate the atmosphere it wishes to maintain in conjunction with the protection to be afforded to determine an appropriate mix of physical security measures.

Personnel Issues

There is a sense in the personnel department that there are adequate policies and procedures in place with respect to information protection issues, but this is not supported by the facts.

Personnel department enhancements that support information protection should be implemented, including but not limited to enhanced new employee training, employee awareness programs, new standards and procedures for employee termination, and improved communication with other departments.

Legal Considerations

The legal department is supposed to keep track of both domestic and international requirements. They apparently do not. Notice and consent is required in many jurisdictions in order to enforce restrictions on use. The legal department has apparently not made any effort to have this requirement enforced. Due diligence requirements are not met by current corporate policies and procedures. Widespread use of illegal copies of software indicates a lack of attention to legal requirements. The list goes on. The legal department should be trained in areas related to information protection and should be tasked with providing appropriate advice regarding all aspects of the protection program.

Protection Awareness

No protection awareness program is currently in place. A concerted effort should be made to implement a comprehensive program of protection awareness. This program should include at a minimum:

• One hour of training per quarter for all staff members
• Coverage of all of the protection issues discussed in this report
• Detailed instructions and documents on how to detect and respond to incidents
• Explanation and discussion of corporate protection policies
• Discussion of personal and corporate responsibilities
• Discussion of recent protection-relevant events in the media

Training and Education

There is no substantial training and education in information protection at XYZ Corporation today. A concerted effort should be made to provide adequate training and education to all personnel with information protection responsibilities. This effort should, at a minimum, include training and education in relevant specialty areas for:

• Managers
• Auditors
• Programmers
• Guards

Organizational Suitability

XYZ Corporation seems well suited to the introduction of information protection at this time. Current restructuring efforts make the environment amenable to change. At the same time, staff members strongly resist even minimal effort in support of information protection. If adequate protection is to be attained, either these staff members have to change, or the corporation has to pay for protection that requires no effort by these people. The latter will almost certainly be too expensive to attain.

Staff levels may require some adjustment. Typical figures for properly protected environments are 20 users to each systems administrator. There may be special features of the corporate environment that affect these figures, but none were observed.

A Plan of Action

We recommend a comprehensive realignment of the corporation's information protection posture. To accomplish this realignment, the following four-phase plan is proposed.

Phase 1

Extreme exposures should be immediately addressed:

• Secure the fungibles transfer capability. The current exposure is so great that a single incident could severely damage the corporation and there is currently no meaningful protection in place to prevent such a loss.
• Make a comprehensive set of backups and store them off-site. This is necessary in order to assure that existing information assets are not lost due to accident or abuse.
• Install a corporate-wide real-time audit analysis and reduction system. This system should automatically analyze audit trails currently being generated, generate and analyze audit trails not currently being generated, and detect a wide variety of known attack patterns as they occur.
• Perform a sampled protection audit of critical systems. This is necessary in order to determine at a detailed level how systems currently in place are being used, whether there are widespread ongoing attacks, what weaknesses are poorly addressed by administrative controls, and what vulnerabilities must be covered in each class of system to provide adequate protection. A complete audit would be far too time-consuming, so random samples are used to provide statistical data.
• Place minimal sensors and alarms in key areas. This is required in order to provide some level of assurance that equipment and information critical to ongoing operation is not tampered with or stolen.
• Perform a comprehensive audit for illegal copies of software and either remove the software or purchase legitimate licenses. This is necessary in order to address possible criminal liability, law suits, and negative publicity.
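The real-time audit analysis recommended above can be sketched as a stream of audit events matched against known attack patterns as they arrive. The event shapes and patterns here are hypothetical illustrations, not a design:

```python
# Sketch of real-time audit analysis: each incoming audit event is
# checked against a catalog of known attack patterns, and matches raise
# alerts immediately. Event fields and patterns are illustrative only.
KNOWN_PATTERNS = {
    "su_failure": lambda e: e["type"] == "su" and not e["ok"],
    "off_hours_transfer": lambda e: (e["type"] == "transfer"
                                     and e["hour"] not in range(8, 18)),
}

def analyze(event):
    """Return the names of attack patterns matched by one audit event."""
    return [name for name, match in KNOWN_PATTERNS.items() if match(event)]

alerts = analyze({"type": "transfer", "hour": 2, "ok": True})
print(alerts)  # a fungibles transfer at 02:00 triggers an alert
```

A production system would of course need far richer patterns and correlation across events, but the principle is the same: detection at the moment of attack rather than weeks later on a telephone bill.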

Phase 2

Planning should be initiated as soon as possible to provide long-term protection suitable to the need.

• Form a comprehensive corporate information protection policy and implement it throughout the corporation.
• Form an information protection advisory council. The purpose of this council is to act as a focal point for making policy decisions about information protection, investigating reports on protection incidents, hiring an appropriate corporate staff member to manage long-term information protection requirements, and assisting that staff member in carrying out their duties.
• Find a cost effective set of long-term protective measures suitable to the environment.
• Make information protection part of the overall planning process of the corporation. Involving protection as early as possible in the planning process yields large cost savings and improved protection.
• Train staff members in the technologies implemented during the first phase of this effort.

Phase 3

After the first two phases are completed, a substantial set of technical, procedural, and personnel measures should be implemented to provide proper protection.

Phase 4

After appropriate protective measures have been instituted, the fourth and ongoing phase of activity will begin. In this phase, protection will be maintained and updated to reflect changes in environment, technology, and organizational priorities.

Case Study 4: The Alma System

Vulnerabilities and Protection Techniques

Three categories of vulnerability will be considered: leakage of classified data, corruption of information, and denial of services.

Other aspects of protection will not be considered in this study, except insofar as they relate to these issues of concern.

Alma secrecy requirements are fairly well understood in a general sense and are addressed by existing regulations. The other issues are less well understood and are not adequately covered by regulations. This assessment will primarily address the threats that are realistic over the operational lifetime of Alma, and regulations will be largely ignored in our analysis.

There are several disturbing vulnerabilities in Alma as it currently exists and in light of its potential deployment. These are briefly outlined here and detailed more fully below:

• Alma was originally designed to operate in a Tempest environment. It was not designed to withstand any sort of threat in a non-Tempest environment. The move from a Tempest environment to an open environment creates numerous and varied vulnerabilities, ranging from problems of securing the hardware and software during distribution to maintaining operational safeguards against field threats.
• Alma was designed as a prototype and, as in many such cases, is now being deployed operationally. The prototype requires some rework to improve performance, audit high-level events, improve systems administration, augment database labeling and integrity, and improve reliability, as well as for other reasons not related to protection.
• The use of standard protocols for sharing information presents a number of vulnerabilities that may weaken Alma protection and enable attackers to succeed with a very low workload. The use of these protocols has to be examined in some depth to determine if the vulnerabilities presented by them are worth the expense of covering them with additional safeguards and, if so, to determine whether they can continue to be used in some cases or must be completely redone.
• The data exchange between Alma and other systems has not been adequately addressed, either from a performance standpoint or from an integrity standpoint. The low-bandwidth digital diodes currently in use should probably be replaced by high-bandwidth devices capable of handling Ethernet speeds while providing protection against covert channels. There are also cases where redundant data entry is being performed and in which the time differences between Alma entry and other entry are inadequately addressed. Several other interface issues are also likely to cause problems in the future.
• Protection administration is inadequately addressed for a widely distributed set of Alma systems. Although the current administrator appears to be quite well versed and probably performs his duties quite well, there is no proper mechanism for training, testing, and assisting in the systems administration process. History has shown this to be a key area of vulnerability.
• Integrity is not adequately addressed in Alma. There are almost no safeguards against corruptions in the LAN, in individual systems, of personnel, of software, of data, of protection functions, of peripheral devices, or of external communications.
• Auditing is inadequately handled. The current mechanism does not provide a realistic way for the systems administrator to read and understand audit data, but only a simplistic way to view it.
• Availability is not adequately addressed. There are several single points of failure that could seriously cripple or completely disable Alma, including a single corrupt systems administrator, a single corrupt program propagating through the network, and various jamming techniques.
• Rapid adaptation over time to new situations and inputs should be considered. It is likely that Alma will be interfaced to many new input sources over its life cycle, and each of these connections introduces a potential protection problem for Alma.
• Future networking enhancements are likely, and there is no protection plan in place for dealing with such changes. If Alma is networked with other systems using imperfectly matched protection techniques, this networking is likely to introduce vulnerabilities to both environments.
• Field deployment introduces new problems of secure distribution. If rapid field deployment is permitted, security provisions must cover distribution, installation, and maintenance of all hardware and software components.

In the remainder of this report, threats will be designated with boldface abbreviations and protection techniques with [bracketed italics] abbreviations. These abbreviations will then be used in the tables for ease of presentation. For vulnerabilities, abbreviations begin with capital letters as follows to designate our major areas of concern:

T=Tempest
R=Retrofit/Redesign
P=Protocol
D=Data exchange
I=Integrity
A=Audit
AV=Availability
F=Field deployment

Tempest Problems

The move from a Tempest environment to a non-Tempest environment changes everything about the hardware and software vulnerabilities.

• \A{T-SWdist} Software and hardware distribution have to be secured in the Alma deployment process because anybody in the environment could place a device or a disk in a shipping container and have a reasonably high degree of assurance that it would be inserted into the system and used. [Peterzell]
• \A{T-HWleak} A transmitter/receiver in a computer or LAN hardware device could be exploited to cause arbitrary denial, corruption, leakage, or accounting failure. [Whidden]
• \A{T-defect} The use of off-the shelf hardware is a vulnerability in itself because of the ease of modifying standard components, interrupting the commercial distribution environment, the possibility of a known defect being exploited, the possibility of jamming to cause selective denial, etc. [Alexander] [SUN]
• \A{T-Ebeam} An incoming energy beam could be used to cause heating of select components causing false input signals, reduced reliability, or even complete denial of services.
• \A{T-HWjam} Energy signals of sufficient strength can jam LAN cables, serial cables, keyboard connectors, mouse cables, and other electromagnetic devices. [Brewer]
• \A{T-bugs} Short-wavelength radar can be exploited to listen to larynx movements and observe subliminal speech commonly used by operators while entering data such as passwords, cryptographic keys, and other vital information. [Henderson]
• \A{T-emanate} It would be a simple matter to plant or drop Van Eck devices proximate to an Alma system and through them to retransmit emanations to extract timely mission specific information. [VanEck]
• \A{T-IOjam} Jamming can be effective against the cryptographically covered radio links used in current Alma sites, which can disable portions of the network.
• \A{T-target} The emanations from Alma systems could be exploited to guide weapons against Alma installations, thus easing the targeting problem for enemies. [Valkenburg]
• \A{T-visual} Video display outputs, keystrokes entered at keyboards, and lip movements could be observed with cameras at visible or invisible frequencies and that data could be retransmitted or magnified by observers at substantial distances (including airborne, satellite, ground-based, and underground observation sites). [Preston2]
• \A{T-conduct} Conducted emanations over power lines can be exploited to extract data from computer systems. [VanEck]
• \A{T-physical} Clothes, gifts, or other items shipped into the environment or distant observation posts could be used to include audio or data receivers/transmitters to leak information from the environment. This includes keyboard sounds which yield input sequences, voices, etc. [WireTap]
• \A{T-audio} Audio output devices can be exploited to transmit audio versions of electronic data at frequencies outside human hearing. These could be observed and retransmitted, or directly observed at considerable distance. Audio input devices could be exploited to introduce commands in a similar fashion, producing a bi-directional high-speed covert data channel. [Shannon] One technique used to find certain types of hidden microphones is to use two ultrasonic generators at different frequencies, with a difference frequency in the audible range, which combine in the microphone and produce an audible tone. The same principle might be used to produce audible "voice" commands in microphones used for voice control of systems. Likewise, the ability to produce a small varying voltage on microphone leads could accomplish this. Either technique would bypass normal authentication procedures for a workstation in use. Microwave pulses with the correct timing pattern could probably generate enough voltage to simulate keyboard output to the workstation. It may also be possible to detect and interpret the ultrasonic horizontal sweep signal from monitors to reproduce the display through a tent using a directional microphone. Directional microphones are useful to about 300 feet for conversation in tents, or perhaps farther with speech signal processing.
• \A{T-keys} Cryptographic device emanations may be exploited to detect the cryptographic keys or secret data during operation. Keys could be further exploited to introduce false information into the network. [Smulders]
• \A{T-EMP} EMP attacks operate far more easily against non-tempest equipment. [Fulghum] [Fulghum2]
• \A{T-Radar} Normal air traffic control radar can cause interference and intermittent failures in computer equipment within a radius of several hundred yards. [AvWeek]
• \A{T-future} In the future, the governments currently engaged in these sorts of activities will likely expand considerably on these attacks. [DISAdoc]

Two basic techniques are involved in reducing the threat from tempest attacks. One is to reduce the emanations that can enter or leave an area, and the other is to introduce sufficient noise so as to make the detection or modification of signals too difficult to attain at a reasonable price.

\D{noise} Noise can easily be introduced into the Alma environment to reduce the emanations threat, but the amount of noise required to reduce existing threats to acceptable levels is not yet known. This analysis would require the use of special-purpose hardware devices to test for radiated and corruptive emissions effects in a real Alma environment under simulated or actual load conditions. This would require several weeks of time and the use of specialized equipment. Once proper levels are determined, hardware noise generation devices can be designed and implemented to generate appropriate noise characteristics so as to reduce signal-to-noise ratios appropriately. An appropriate plan would also be required to guide and train Alma installers on how and where to place these devices to achieve proper effect. The estimated effect would be about one-third as effective as emanations controls, and no reasonable amount of improvement is feasible beyond that.
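The signal-to-noise arithmetic behind a noise countermeasure can be sketched in a few lines. The power levels and the 10 dB recovery threshold below are illustrative assumptions, not measured Alma values; the real analysis requires the field measurements described above.

```python
def required_noise_mw(signal_mw: float, existing_noise_mw: float,
                      target_snr_db: float) -> float:
    """Additional noise power (mW) needed at an observer to push the
    signal-to-noise ratio down to target_snr_db or below."""
    # SNR(dB) = 10 * log10(signal / noise)  =>  noise = signal / 10^(SNR/10)
    total_noise_needed = signal_mw / (10 ** (target_snr_db / 10))
    return max(0.0, total_noise_needed - existing_noise_mw)

# Illustrative numbers only: a 1 mW emanation over 0.01 mW ambient noise,
# against an observer assumed to need at least 10 dB SNR to recover data.
extra = required_noise_mw(1.0, 0.01, 10.0)  # 0.09 mW of added noise
```

The same calculation, run against measured emanation levels, would size the noise generators discussed above.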

There are also devices available which sample both emanated and conducted signals from workstations, and adjust to output proper levels of confusing signals. They require a wire loop around the workstation and cost several hundred dollars each.

\D{perimeter} Emanations can also be reduced by increasing the distance between sources and sensors. Protective measures could be put in place so as to reduce the emanations threat by searching and securing large perimeters on an ongoing basis. This is the least reliable of the techniques considered here. Although it has low initial costs, it involves substantial staff time and training in order to be effective, substantially increases perimeter sizes and search requirements, and is far less likely to be effective than the technical alternatives.

\D{shield} Emanations and introduced signals can also be greatly reduced by providing a tempest protection environment specially designed for field use. Training would be required for installers, and a limited production run would be required to produce enough suitable hardware to supply numerous Alma systems. For a multilevel secure version of Alma, we estimate the cost of shielding at $20,000 per 100 sq. ft. of floor space in a deployed setting. Assuming that only classified components of Alma require coverage, this cost could be limited to under $40,000 per installation. There is an added cost of about $10,000 for the first implementation, associated with finding and assessing vendors, testing the implementation to assure that it meets the specifications, training installers, and so on. For a single-level Alma implementation, the area to be secured would likely be at least three to four times as large, for a cost of at least $150,000 per installation.

Material designed for portable shelters is available to shield emissions from observation and targeting by RF missiles. If this were used several feet outside a tent, it might reduce Tempest problems enough to be worthwhile, provided the main threat comes from elevations below 20 or 30 degrees.

Prototype Conversion

Although effective protection may be feasible without modifying the Alma software, there are some substantial opportunities to save time and effort, reduce vulnerabilities, and enhance performance through this effort.

• \A{R-dbase} A major problem in the Alma design is that there is no single interface to the underlying (Oracle) database. Since each Alma call to the database comes from a different module, there is no central way to introduce auditing at the Alma call level, authenticate database contents, label data by security level, or encrypt data stored in the database.
• \A{R-flex} There is a potential vulnerability in only allowing one underlying database (Oracle). A flaw or unknown limitation in that system, a vendor-placed intelligence threat, a protection problem, a problem in Oracle distribution, a business failure, or other similar problems are a serious threat to Alma's future, and the DoD should move toward database independence as much as possible. Moving the database calls into an independent module allows vendors to compete for price and function in future Alma installations and allows Alma to adapt to more environments more quickly.
• \A{R-X11} The X11 interface to Alma has not been thoroughly examined and approved by the NSA, and the protection mechanisms added to X11 are not adequate from an audit or integrity perspective. [Faden] [Epstein]
• \A{R-stress} From a reliability standpoint, it is likely that under high-load conditions and other stresses, the current Alma system will have substantial failures. Even if current needs are met under stringent test conditions, it is highly likely that as time passes, the nature and scope of things Alma will be asked to do will greatly increase. By doing this work now, the DoD will clear the path for future enhancements to Alma functionality and scope of operations, while increasing current system reliability and reducing dependence on single-vendor database solutions. [Spencer]
• \A{R-MLS} Redesign will be critical if a multilevel secure version of Alma is to be implemented, as this is required in order to effectively separate data of different levels, cover them with cryptography, and track their movement through the network.

\D{redesign} A redesign of Alma for improved auditing, performance, reliability, authentication, and classification would cost on the order of $1,500,000, and would dramatically reduce the overall cost of securing the Alma environment.

\D{redoMLS} If the redesign were made toward a multilevel secure Alma implementation, the redesign cost would likely be about $500,000 higher.

Both of these are one-time costs.

Protocol Issues

Alma currently uses five types of protocols, each of which has little or no protection capability as currently implemented.

• \A{P-TCP} The TCP-IP protocol suite has known vulnerabilities, including but not limited to the ease of corruption and spoofing, ease of service denial, ease of leakage, and traffic analysis. [Bellovin] These vulnerabilities stem from a lack of cryptographic integrity checking, hardware limitations of the media, the inability of TCP-IP to exploit redundancy for reliability, a lack of cryptography for secrecy, and scheduling that exploits low channel usage. [Futcher]
• \A{P-NFS} NFS is also used in Alma and this introduces additional problems including but not limited to disk-caching-induced protection failures, different protection algorithms used for remote versus local file access, and limited local denial of services when remote file systems become unavailable. [Cohen-93] NFS also has no coverage against the sorts of vulnerabilities specified for TCP-IP.
• \A{P-YP} The network protocols currently used for remote authentication are inadequate to provide proper protection, again due to the same sorts of deficiencies as TCP-IP, but also including inadequate auditing capabilities, inadequate protection against password guessing attacks, no protection against downloading password files and attacking them off-line, and inadequate protection against spoofing of automated updates to local file system copies. [Jobusch]
• \A{P-dbase} The database-specific protocols used for performing remote database access also have the same problems just described for TCP-IP.
• \A{P-IO} A different protocol suite is used for communications with peer systems and this suite may also have similar problems.

The protocol vulnerabilities are particularly important in light of the lack of strong physical security in the deployment of Alma. Any physical breach of the network or any of the computing components introduces the possibility of corruption, denial, and leakage.
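The kind of cryptographic integrity checking whose absence makes these protocol attacks easy can be illustrated with a minimal keyed-checksum sketch. The key handling and message format here are hypothetical simplifications for illustration, not a proposed Alma design.

```python
import hmac, hashlib, os

KEY = os.urandom(32)  # stand-in for a managed shared secret per link

def seal(message: bytes) -> bytes:
    """Append a keyed checksum so tampering in transit is detectable."""
    return message + hmac.new(KEY, message, hashlib.sha256).digest()

def open_sealed(packet: bytes) -> bytes:
    """Verify the checksum; raise if the packet was altered or forged."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return message

packet = seal(b"status update: unit 7 ready")
assert open_sealed(packet) == b"status update: unit 7 ready"
```

A forger without the key cannot produce a valid tag, so corruption and spoofing become detectable even over an untrusted LAN; detection alone does not restore availability, which the redundancy measures discussed later must address.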

\D{newprotocols} The cost of rewriting protocols would be far too high to be of great value. An estimated $1,250,000 would be required to augment the existing protocols with even a modicum of protection, and this would negatively affect long-term reliability, performance, and compatibility.

\D{mls} + \D{shield} By moving to a multilevel secure version of Alma with cryptographic LAN protection and Tempest protection for classified areas, the secret-level components of the Alma system could be secured. Shielding costs were discussed earlier; MLS software adds about $1,000 to the cost of affected workstations, and cryptography is already used between Alma areas.

\D{cmds} Another partial solution to the protocol problem would be the use of network-based computer misuse detection systems to detect protocol attacks in real-time. The cost of this protection would be on the order of $250,000 plus $20,000 per Alma site.

Data Exchange

Alma data exchange with other systems generally falls into five categories: peer exchanges with other systems now existing and to be implemented in the future; read-only inputs from unclassified systems and future systems; write-only outputs to other systems; downgrading of information from other systems; and downgrading of information from Alma to unclassified systems. At present, each of these exchanges is a problem.

• \A{D-peer} Peer-to-peer exchanges are not covered by any protection at present. There are no audit records of these exchanges except as stored through editable Alma database entries, which means that problems cannot be traced to their source and audit trails cannot now be automatically and reliably analyzed for signs of abuse. There is no integrity checking to allow the contents of messages to be properly assured, which means that protection relies entirely on people noticing and correcting errors. This may be effective in some areas where targeting information only makes sense with proper routing information, but in other areas, such as the peer-to-peer connections with other systems and the future links to soon-to-be peer systems, it is, and will likely remain, ineffective unless it is addressed. Furthermore, merely detecting a problem without being able to correct it produces either reduced integrity or denial of services.
• \A{D-up} Read-only inputs from unclassified systems are currently limited to 2,400 baud incoming serial lines implemented as digital diodes. The current implementation suffers from low performance and poor reliability because it is over a serial line and because the digital diode is implemented in a non-optimal fashion. Bandwidths up to 19.2K baud could probably be reliably attained by improvements to the digital diodes currently in use, but higher (i.e., 1-10M baud) connections require hardware that does not yet exist. There is also no assurance that this information cannot contain interpreted code which could result in Alma corruption. Any information that is automatically interpreted or can be caused to be automatically interpreted by misuse of Alma should have additional constraints placed upon it including, but not limited to, syntax and semantics checking, and restraint to a limited function environment.
• \A{D-up} Write-only exchanges are currently assured by the recipient and the line used to perform output. If these connections are not currently protected, Alma should implement protection to assure that no protocol attacks can have an effect on Alma and to assure that Alma does not receive any information via covert channels that it should not receive. There should also be some sort of internal Alma cryptographic coverage to assure that if signals go out over the wrong wire or if the cryptographic hardware covering the transmission should fail, signals cannot be easily read by an enemy in a time frame that could harm a mission.
• \A{D-down} Information downgraded for use by Alma from TS systems is currently sent without assurance that it cannot contain information that could be interpreted by Alma so as to cause corruption. This risk should be handled in the same manner as the risk of read-only inputs from unclassified systems is handled.
• \A{D-down} Information downgrades from Alma to base-management systems are not currently provided. In order to provide these downgrades, it will be necessary to implement a guard application and enhance internal Alma controls to assure that only unclassified data in unaltered form is downgraded.
• \A{D-reenter} Redundant data entry is currently used because of the inability to downgrade Alma data to other systems. In addition, the one-way links from current unclassified systems cause integrity problems with database entries because there is no accurate date and time information about when events took place, only about when the data was entered. Thus, a more current status in Alma can be overwritten when a less timely entry is subsequently made in an unclassified system and that information is transmitted to Alma. By allowing automated downgrades in Alma, we can eliminate much of the incoming data from other systems and some of the redundant data entry into those systems. This both lowers costs and enhances integrity.
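The event-time versus entry-time problem in the last item can be sketched as a simple update rule: compare when the event happened, not when it was typed in. The record format and field names are hypothetical.

```python
from datetime import datetime

def apply_update(record: dict, new_status: str, event_time: datetime) -> dict:
    """Apply a status update only if it reflects a more recent event.

    Comparing event time (when it happened) rather than entry time
    (when it was keyed in) keeps a late re-entry from overwriting a
    more current status, as described above."""
    if record.get("event_time") is None or event_time > record["event_time"]:
        return {"status": new_status, "event_time": event_time}
    return record  # stale update; keep the newer status

rec = apply_update({}, "en route", datetime(1994, 5, 1, 12, 0))
rec = apply_update(rec, "delayed", datetime(1994, 5, 1, 9, 0))  # stale, ignored
```

With only entry timestamps available, no such comparison is possible, which is the integrity problem the one-way links create.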

The most important items for providing enhanced performance and reliability while providing strong protection for data exchanges are the implementation of high-speed digital diodes and a secure automated downgrade facility.

\D{p-diode} For medium-speed (i.e., 19.2K baud) digital diode applications, we can easily enhance the existing digital diode design at fairly low cost and over a very short time frame. Roughly one person-month would be required to implement and test a digital diode of this sort ($10,000), and the cost per unit would be on the order of $100 each (assume 5 per site) in quantities of several hundred. This would be a reasonable solution for most of the Alma interfaces with external systems. A medium-speed digital diode would also be fairly easy to get approved by security certification groups because it would be very similar to the existing approved design.
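Although a digital diode is a hardware device, its transfer logic can be illustrated in software: data flows one way only, so redundancy must substitute for acknowledgment. The framing and repeat count below are illustrative assumptions, not the approved design.

```python
import zlib

def diode_send(frame: bytes, emit, repeats: int = 3) -> None:
    """Send each frame several times with a checksum appended. The diode
    has no return path, so repetition substitutes for the
    acknowledgments a two-way protocol would use."""
    tagged = frame + zlib.crc32(frame).to_bytes(4, "big")
    for _ in range(repeats):
        emit(tagged)

def diode_receive(tagged: bytes):
    """Accept a frame only if its checksum verifies; corrupted copies
    are dropped, since no retransmission can be requested."""
    frame, crc = tagged[:-4], tagged[-4:]
    return frame if zlib.crc32(frame).to_bytes(4, "big") == crc else None
```

A fuller design would add sequence numbers so the receiver can suppress the duplicate copies; the essential point is that nothing, not even an error report, flows back against the diode.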

\D{e-diode} For high-speed (i.e., Ethernet) digital diode applications, the cost will likely be far greater in design time, in price per unit in quantity, and in time to get approval. Design time would likely be on the order of six person-months ($60,000) and would result in an operational prototype. Depending on the design, unit cost (not including design costs) will likely be in the range of $500 to $1,500 per unit in quantity 100. The approval process will likely take longer because this is not a simple enhancement to an existing approved technology, but the principles are the same, and approval should not be excessively difficult to attain.

\D{ccs} \D{guard} \D{guard-HW} \D{cmds} \D{mls} Portions of this problem could alternatively be covered with enhanced change detection and control via cryptographic checksums \D{ccs}, automated guards \D{guard}, and the use of CMDS or a similar misuse detection system \D{cmds} in conjunction with a multilevel secure implementation of Alma \D{mls}. A custom Alma guard for automated real-time downgrading would cost about $500,000. If guard hardware \D{guard-HW} were also required (i.e., in a non-MLS version of Alma), it would raise the price by $10,000 per installation.

Protection Administration

Protection administration is a universal problem that has never been adequately addressed in off-the-shelf systems. In Alma, at least the following problems exist today:

• \A{PA-doc} \A{PA-man} \A{PA-train} Current practices are not well documented \A{PA-doc} + \A{PA-man}, and no training program \A{PA-train} is in place for the several hundred systems administrators who may be managing Alma systems when they become widely used. A protection manual should be developed to address this issue, and a training program should be designed to provide training and education.
• \A{PA-Tprot} \A{PA-Tauthen} \A{PA-Taudit} \A{PA-Tpwd} \A{PA-Tadmin} Current tools are inadequate to assure to a reasonable degree that workstations in Alma are properly configured and used. This problem includes, but is not limited to, the inability to verify and correct protection settings \A{PA-Tprot}; inadequate assurance in the distribution of authentication databases \A{PA-Tauthen}; inadequate audit analysis \A{PA-Taudit}; inadequate password complexity assurance \A{PA-Tpwd}; and high complexity of administration \A{PA-Tadmin}. [Jobusch]
• \A{PA-Aaccess} \A{PA-Ainteg} \A{PA-Apasswd} \A{PA-Aspoof} \A{PA-Aproc} \A{PA-Aactive} \A{PA-Aexec} \A{PA-Afiles} \A{PA-Aattack} Inadequate alarms are currently available for real-time or even delayed detection of attack or abuse. This includes such areas as illicit access attempts \A{PA-Aaccess}; integrity problems \A{PA-Ainteg}; password guessing attempts \A{PA-Apasswd}; spoofing attempts \A{PA-Aspoof}; process and file system problems \A{PA-Aproc}; activity and inactivity \A{PA-Aactive}; execution of unusual programs \A{PA-Aexec}; access to unusual files \A{PA-Afiles}; and known attack patterns \A{PA-Aattack}.

\D{newdoc} \D{newman} \D{newtrain} \D{training} Alma documentation \D{newdoc} + \D{newman} and training \D{newtrain} can be addressed by about 5 person-months of effort ($50,000) in conjunction with several person-weeks of effort by current administrators. This is a one-time cost. As further enhancements to Alma are made, documentation and training can be upgraded with an additional 10 percent level of effort. Once manuals and training techniques are developed, training can also be provided for time and materials, or select Alma personnel can be used for training purposes as desired \D{training}. This is estimated to cost about $5,000 per Alma system per year.

\D{cmds} \D{simptools} \D{newtools} Systems administration tools should be augmented to include the capabilities listed above. This might best be done by a combination of CMDS or a similar system \D{cmds}, which is very strong in alarm generation and audit consolidation, and a set of simple tools \D{simptools} and new tools \D{newtools} for augmenting administration. Simple tools are commonly available, and with only about $10,000 of effort they can be integrated. More effective tools would require about $100,000 of effort.

\D{mls} \D{simptools} \D{newdoc} In a multilevel secure version of Alma \D{mls}, both simple tools \D{simptools} and protection documents \D{newdoc} are provided, which should reduce these costs significantly, leaving only the Alma-specific protection techniques to be implemented.

\D{redesign} \D{cmds} \D{mls} \D{redoMLS} \D{custaudit} Most of the auditing and control problems are addressed by the redesign of Alma \D{redesign}, the implementation of CMDS \D{cmds}, and an MLS operating environment \D{mls} + \D{redoMLS}. Without this option, the cost of covering audit \D{custaudit} will be fairly extreme (on the order of $2,500,000 initial cost, plus ongoing systems administrator time that could otherwise be spent on useful work). Such an implementation would be inadequate without at least the minimal redesign of Alma \D{redesign}.
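As one concrete example of the missing tool support, the password complexity assurance noted above (\A{PA-Tpwd}) amounts to a check of the following sort. The specific length and character-class rules are illustrative assumptions, not Alma policy.

```python
import string

def password_ok(pw: str, min_len: int = 8) -> bool:
    """A minimal complexity check of the sort the assessment finds
    missing: minimum length plus several distinct character classes."""
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    used = sum(any(c in cls for c in pw) for cls in classes)
    return len(pw) >= min_len and used >= 3
```

Run at password-change time, a check like this blocks the weak choices that make the off-line guessing attacks described under \A{P-YP} practical.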

Integrity Protection

Integrity is widely ignored in the computing community and Alma is no exception. The following broad categories of integrity problems exist in Alma today:

• \A{I-lan} LAN corruption, forgery, active tapping, and aliasing are all possible in Alma. Without additional coverage of the LAN, any physical attack on hardware devices can potentially result in widespread denial of services, corruption, leakage, and audit destruction. This includes hardware manufacture, test, distribution, assembly, shipping to forward installations, installation, and use. This problem can only be addressed by very good hardware and personnel security throughout the process, or by the use of cryptographic authentication of LAN traffic. [Abrams] [CohenShort]
• \A{I-SW} Software corruption \A{I-SW} and data corruption \A{I-data} inside Alma computer systems are not covered. Database data corrupted intentionally or by hardware error is undetectable. Corruption of system-critical files, including audit trails, password files, and configuration files, is not covered, and any of these can result in widespread corruption. Corruption in the operating system, support systems, or applications software is likewise not covered and can have the same result. Any corruption in one system can easily propagate throughout Alma, extending the effect to widespread denial, leakage, corruption, and repudiation. Effective protection requires, at a minimum, built-in integrity checking in an integrity shell mode. [CohenMod]
• \A{I-prot} Protection facility corruption is also possible in the Alma systems, including, but not limited to, modification of protection settings on files and the resulting effect of these modifications on devices and local and remote file systems. To properly address this problem would require substantial enhancement of the Unix systems to include mandatory access controls or the movement to MLS or other similar Unix environments at greatly increased cost. A fall-back position would be to create a set of controls including internal monitoring software on each file server, and the enhancement of the Alma and X11 software to provide additional protection setting checks and real-time warnings.
• \A{I-phys} Peripheral devices are often treated as limited function devices, but most modern peripherals contain general-purpose computers and, in some cases, they can be exploited to attack systems. Network printers are particularly common as sources of denial, corruption, and leakage. Other peripherals can sometimes be exploited so as to modify the operating system in memory. This does not currently appear to be a serious threat to Alma, but as technology appears and is integrated into the environment, this may become a serious problem. Protection techniques include tracking and securing peripheral hardware from the womb to the tomb and digital diodes to restrict activity to output or input as appropriate to the device. Medium bandwidth digital diodes seem most appropriate to this application. [Klossner] [Stanley] [Fellows]
• \A{I-HW} Hardware integrity has been addressed elsewhere.
• \A{I-IO} External communications and data integrity have been addressed elsewhere.
• \A{I-inputs} Inputs from other systems should have limitations on input syntax and semantics, and labeling, authentication, and encryption should be applied to all input from all sources both before its arrival and as it is integrated into the environment. This would integrate well into the effort to secure external communications and move Alma from a prototype to a production environment.
• \A{I-other} Other input sources including software distribution, backup tapes, and other maintenance media represent avenues for the introduction of corruption and the leakage of data. Policies and procedures should be in place for the use of these peripherals, and technical controls should be considered including backup encryption and authentication, secure software distribution via encryption and authentication, and adequate change control over software. [Preston] [Duff]
• \A{I-transitive} Transitive effects should be considered in all integrity analysis since the current design of Alma is highly integrated and allows very rapid dissemination of information throughout the system, and thus a corruption will rapidly have widespread effect. [Stoll]
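The integrity shell approach cited above can be sketched as a checksum check performed before any program is trusted or run. The class and method names are hypothetical, and a real implementation would protect the checksum database itself cryptographically rather than hold it in plain memory.

```python
import hashlib

class IntegrityShell:
    """Sketch of an integrity-shell check: a program runs only if its
    current checksum matches the one recorded when it was trusted."""

    def __init__(self):
        self.trusted = {}  # name -> checksum recorded at trust time

    def register(self, name: str, content: bytes) -> None:
        self.trusted[name] = hashlib.sha256(content).hexdigest()

    def check(self, name: str, content: bytes) -> bool:
        """True only for known programs whose content is unchanged."""
        return self.trusted.get(name) == hashlib.sha256(content).hexdigest()

shell = IntegrityShell()
shell.register("report.sh", b"echo ok")
assert shell.check("report.sh", b"echo ok")
assert not shell.check("report.sh", b"echo ok; leak-data")
```

Because any corruption changes the checksum, propagation of the kind described under \A{I-transitive} is stopped at the first system that refuses to run the altered program.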

\D{ccs} \D{embedccs} \D{mls} The majority of the software problems require cryptographic authentication \D{ccs}, which can be attained for costs on the order of $150,000 in software and five person-months of effort for an additional $50,000. In untrusted systems, this capability should be embedded in other facilities \D{embedccs}, which increases cost by an estimated $250,000. At present, there is no alternative to this solution for attaining enhanced integrity other than using an MLS \D{mls} to reduce the need to embed protection as heavily. With an MLS, many of these requirements can be reduced because the MAC capabilities can be used to prevent corruption (e.g., system software is kept at system low).

Auditing

In the current environment, no systems administrator can effectively use the audit data provided by the system to detect or counter attacks either in real-time or on a post-mortem basis.

• \A{A-OS} Operating system audit data is too voluminous and difficult to understand to permit meaningful analysis by a systems administrator. A minimal effort of three person-months could eliminate this problem to a substantial extent. [Lunt]
• \A{A-lan} Network-based attacks cannot be reliably detected without combining information across platforms, and there is no facility to provide this capability. This can only be addressed by a network monitor. [Snapp]
• \A{A-dbase} Database auditing is inadequate because it is meaningless at the level at which audits occur and does not associate actions with people and understandable Alma events. A more rational approach would be to provide audit trails in the interface between Alma and Oracle or whatever other database may eventually be used. This would be a natural side effect of the movement to a production environment described earlier.
• \A{A-Alma} Alma-level auditing is non-existent and will be vital to detecting correlations, or the lack thereof, between user actions and database and operating system events. Effective application would also require correlation between Alma events and database events by a network monitor.
• \A{A-extern} No auditing of external information sources or sinks is provided, which makes tracking attacks to their source very difficult. All incoming data should be audited, authenticated, marked with its source and security level, covered by a cryptographic checksum to detect corruption, and encrypted if it is classified.
• \A{A-meaning} No mechanism exists to combine the various levels of audit that are or eventually may be available into a form that permits meaningful review. Combining audit information from multiple levels is one of the key ways to assure proper operation, and this can only be done effectively with a network monitor.
• \A{A-monitor} Network activity monitoring is non-existent in Alma. It would be necessary in order to detect network-based attacks, statistical anomalies indicative of many physical attacks, and other similar scenarios, assuming they were not otherwise covered. A network monitoring and analysis system would be required to cover this vulnerability unless it is covered by other protection mechanisms such as LAN encryption and authentication.

It is clear that if auditing is to be effective in detecting attacks and acting to mitigate problems that occur, a substantial effort toward audit automation will be necessary, if only to provide administrators with effective means of dealing with the volume. Other aspects of auditing cover various vulnerabilities, but may not be the most cost-effective covering technique or may be best applied in conjunction with other techniques.

\D{cmds} \D{redesign} \D{custaudit} It appears that misuse detection \D{cmds} in conjunction with the Alma redesign \D{redesign} is the best solution to this problem. The alternative requires custom auditing and analysis \D{custaudit}, which would likely cost on the order of $2,500,000 and end up less effective and less flexible over the Alma life cycle.
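The audit consolidation that a network monitor would perform can be sketched as a time-ordered merge of per-source audit streams. The stream format, timestamps, and event names below are hypothetical.

```python
import heapq

def consolidate(streams):
    """Merge several time-ordered audit streams into one chronological
    view, tagging each event with its source system, so cross-platform
    patterns become visible to a single reviewer."""
    def tagged(source, events):
        for timestamp, event in events:
            yield (timestamp, source, event)
    return list(heapq.merge(*(tagged(s, ev) for s, ev in streams.items())))

timeline = consolidate({
    "os":    [(100, "root login"), (140, "write /etc/passwd")],
    "dbase": [(120, "update targets"), (150, "delete audit rows")],
})
# The merged timeline interleaves the two sources in time order, making
# the login / database-update / file-write sequence reviewable as one event chain.
```

This merge step is only the foundation; a misuse detection system adds pattern matching and alarms on top of the consolidated stream.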

Availability

Availability is not adequately addressed. There are several single points of failure that could seriously cripple or completely disable Alma, including:

• \A{AV-admin} A single corrupt systems administrator could bring down any given Alma installation with ease in a very short time frame and keep it down for an extended period. Similarly, administrative error could disrupt services for several hours. [Peterzell]
• \A{AV-prog} A single corrupt program propagating through the network could easily disrupt services, corrupt data, leak secrets, or repudiate responsibility. This could be sustained over a long period of time and cause great damage.
• \A{AV-power} A sustained power failure or a failure of the uninterruptible power supply could bring an entire Alma network down. Alma currently uses both an uninterruptible power system and backup generators. These should be retained and properly managed. [Fulghum]
• \A{AV-HWfail} A single hardware failure in a processor or an IO port could cause substantial damage to the current Alma implementation. This should be addressed for deployed versions.
• \A{AV-lanfail} A LAN failure could bring down a substantial portion of Alma in the current design. This should be covered by proper redundancy.
• \A{AV-jam} Jamming techniques and other similar attacks could be used to cause Alma to fail for sustained periods. [AvWeek2] [Eastlund]

\D{decenter} \D{ups} \D{multilan} \D{multiserver} \D{redesign} The availability issue requires that systems administration of an Alma network be decentralized \D{decenter}, that emergency power be provided in a distributed fashion \D{ups}, that multiple networks be implemented to cover single failures \D{multilan}, and that the database be replicated on multiple servers \D{multiserver}. Redundant servers are already planned. Decentralizing network administration requires a slight redesign of the Alma networking administration scheme, at a cost of about $20,000. Redundant emergency power would double the cost of current uninterruptible power supplies. A generator should also be included with every Alma system as a backup and should be tested regularly for proper operation and adequate fuel supply. Multiple networks in Alma must be addressed if Alma is to be implemented as an MLS system, and the addition of redundant networks would cost only about $2,000 per Alma system. Efficient redundant file servers would require the Alma redesign effort described earlier \D{redesign} and would be included in that effort.

In the current computing environment, rapid adaptation of operational systems to differing sources of input is vital to operational continuity in deployed situations and as technologies and capabilities change. This presents an ongoing design problem for Alma, which will likely require retrofits throughout its life cycle because of its high degree of interoperation with other systems. The current design permits this sort of adaptation in a fairly straightforward manner, but to facilitate protection in such an environment, these considerations should be included in any redesign effort. This requirement also contributes to the increased cost of using nonstandard protocols and the savings afforded by an Alma redesign at this time. Otherwise, no specific coverage is indicated.

Future Networking Enhancements

Future networking enhancements are likely and there is no protection plan in place for dealing with this issue. If we allow Alma to be networked with other systems using imperfectly matched protection, we may introduce vulnerabilities to both environments that neither had without the other. It is advisable that every time a system is networked to Alma, a similar analysis be performed for that system, and that an analysis of the interactions of the mechanisms on these different systems be performed for each such connection. Otherwise, no specific enhancement is required for this problem.

Field Deployment Issues

Field deployment introduces new problems of secure distribution:

• \A{F-distrib} Secure distribution of hardware and software must be used to prevent the hardware and software from being corrupted at the point of manufacture, during shipment, or at any other step in the process. Without secure distribution, arbitrary leakage, corruption, denial, and repudiation can be easily attained by an attacker at very low cost.
• \A{F-install} Secure installation requires that corruption during installation be prevented and that at installation, the integrity of the secure distribution and installation be verifiable.
• \A{F-maint} Secure maintenance requires that protection be maintained during maintenance as well as normal system activities. This includes secure backups, secure reconfiguration and restoration, secure channels for updates, suitably cleared maintenance personnel, secure supplies and supply channels, and secure change control over hardware and software modifications.

\D{noTro} Hardware and software must be securely distributed from the manufacturer to prevent the introduction of Trojan horses into the system. This cost is nominally covered by secure distribution costs and a slight fee for increased assurance from vendors.
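One concrete form of the integrity checking implied by secure distribution is a cryptographic checksum computed before shipment and verified on receipt. A minimal sketch: the choice of SHA-256 and the delivery of the expected digest over a separate trusted channel are assumptions for illustration, not part of the original design.

```python
import hashlib

def sha256_of(path, chunk=65536):
    """Compute the SHA-256 digest of a file, reading in chunks so
    arbitrarily large distributions can be checked."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify(path, expected_digest):
    """Compare a received file against a digest delivered over a
    separate trusted channel; a mismatch indicates corruption in transit."""
    return sha256_of(path) == expected_digest
```

The digest must travel by a channel the attacker cannot modify along with the package itself, or the check provides no assurance.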

\D{preconfig} Field-deployed systems should be preconfigured at a central site to avoid the protection problems associated with performing full installations in the field.

\D{sectransit} The fully configured system should then be broken down into components, labeled for reassembly, and packaged for physically secure transit to the field location. Only fully sealed packaging should be used for distribution because of the ease of introducing Trojan horses in relatively small and innocuous looking packages.

\D{field-proc} Field installation should consist of physically securing the site and introducing all required Tempest, power conditioning, and other components before computers are introduced. Computers and related components should be assembled last and only removed from their packaging in the secured areas. We estimate that this will require about $5,000 of additional packaging and other personnel costs for each distribution.

\D{replacements} Replacement parts should be included in the distribution, and an additional secure distribution capability should be provided for further replacement parts. None of these requirements has substantial direct costs not already borne in Alma deployment. They require only good procedural controls and training.

Summary

The following tables summarize by listing each vulnerability, the sets of protection techniques that cover it, and the cost of each protection technique, broken into initial cost, per-unit cost, and total cost amortized over 50 units. A '*' indicates that a solution applies only to the MLS architecture, a '+' indicates that a solution applies only to a non-MLS architecture, and a '!' indicates items that apply only to systems without shielding.
# Vulnerabilities and Protection Techniques

{ll|} Vulnerability & Protection \hline
\A{T-SWdist} & \D{field-proc}
\A{T-HWleak} & \D{noTro} + \D{sectransit}
\A{T-defect} & \D{noTro} + \D{sectransit}
\A{T-Ebeam} & \D{shield}
\A{T-HWjam} & \D{shield}
\A{T-bugs} & \D{shield}
\A{T-bugs} & \D{noise}
\A{T-emanate} & \D{shield}
\A{T-emanate} & \D{noise}
\A{T-IOjam} & \D{shield}
\A{T-target} & \D{noise}
\A{T-target} & \D{shield}
\A{T-visual} & \D{noise}
\A{T-visual} & \D{shield}
\A{T-conduct} & \D{shield}
\A{T-physical} & \D{shield}
\A{T-audio} & \D{shield}
\A{T-keys} & \D{noise}
\A{T-keys} & \D{shield}
\A{T-EMP} & \D{shield}
\A{T-Radar} & \D{shield}
\A{T-future} & \D{shield}
\A{R-dbase} & \D{redesign}
\A{R-flex} & \D{redesign}
\A{R-X11} & \D{redesign}
\A{R-stress} & \D{redesign}
\A{R-MLS} & \D{redoMLS}
\A{P-TCP} & \D{mls}+\D{shield}
\A{P-TCP} & \D{newprotocols}
\A{P-NFS} & \D{mls}+\D{shield}
\A{P-NFS} & \D{newprotocols}
\A{P-YP} & \D{mls}+\D{shield}
\A{P-YP} & \D{newprotocols}
\A{P-dbase} & \D{mls}+\D{shield}
\A{P-dbase} & \D{newprotocols}
\A{P-IO} & \D{mls}+\D{shield}
\A{P-IO} & \D{newprotocols}
\A{P-IO} & \D{cmds}
\A{D-peer} & \D{cmds}
\A{D-peer} & \D{mls}
\A{D-up} & \D{e-diode}+\D{guard}
\A{D-up} & \D{p-diode}+\D{guard}
\A{D-up} & \D{mls}+\D{guard}
\A{D-down} & \D{e-diode}+\D{guard}
\A{D-down} & \D{p-diode}+\D{guard}
\A{D-down} & \D{mls}+\D{guard}
\A{D-reenter} & \D{mls}+\D{guard}
\A{PA-doc} & \D{mls}
\A{PA-doc} & \D{newdoc}
\A{PA-man} & \D{newman}
\A{PA-train} & \D{newtrain}
\A{PA-Tprot} & \D{newtools}
\A{PA-Tprot} & \D{mls}
\A{PA-Tprot} & \D{cmds}
\A{PA-Tauthen} & \D{ccs}
\A{PA-Taudit} & \D{cmds} + \D{mls}
\A{PA-Taudit} & \D{custaudit}
\A{PA-Tpwd} & \D{simptools}
\A{PA-Tpwd} & \D{mls}
\A{PA-Tadmin} & \D{newtools}
\A{PA-Aaccess} & \D{mls} + \D{cmds}
\A{PA-Aaccess} & \D{newtools}
\A{PA-Ainteg} & \D{ccs}
\A{PA-Apasswd} & \D{mls}
\A{PA-Apasswd} & \D{cmds}
\A{PA-Aspoof} & \D{mls}
\A{PA-Aspoof} & \D{cmds}
\A{PA-Aspoof} & \D{custaudit}
\A{PA-Aproc} & \D{newtools}
\A{PA-Aproc} & \D{cmds}
\A{PA-Aproc} & \D{custaudit}
\A{PA-Aactive} & \D{newtools}
\A{PA-Aactive} & \D{cmds}
\A{PA-Aactive} & \D{custaudit}
\A{PA-Aexec} & \D{newtools}
\A{PA-Aexec} & \D{cmds}
\A{PA-Aexec} & \D{custaudit}
\A{PA-Afiles} & \D{newtools}
\A{PA-Afiles} & \D{cmds}
\A{PA-Afiles} & \D{custaudit}
\A{PA-Aattack} & \D{newtools}
\A{PA-Aattack} & \D{cmds}
\A{PA-Aattack} & \D{custaudit}
\A{I-lan} & \D{cmds}
\A{I-SW} & \D{ccs}
\A{I-SW} & \D{cmds} (weak)
\A{I-data} & \D{ccs} (weak)
\A{I-data} & \D{cmds}
\A{I-prot} & \D{cmds}
\A{I-prot} & \D{custaudit}
\A{I-phys} & \D{cmds}
\A{I-phys} & \D{custaudit}
\A{I-HW} & \D{cmds}
\A{I-HW} & \D{noTro} + \D{sectransit}
\A{I-IO} & \D{custaudit}
\A{I-IO} & \D{cmds}
\A{I-inputs} & \D{guard}
\A{I-other} & \D{cmds} (weak)
\A{I-other} & \D{custaudit}
\A{I-transitive} & \D{ccs}
\A{A-OS} & \D{cmds}
\A{A-OS} & \D{custaudit}
\A{A-lan} & \D{cmds}
\A{A-dbase} & \D{redesign} + \D{ccs}
\A{A-dbase} & \D{redesign} + \D{cmds}
\A{A-dbase} & \D{redesign} + \D{custaudit}
\A{A-Alma} & \D{redesign} + \D{ccs}
\A{A-Alma} & \D{redesign} + \D{cmds}
\A{A-Alma} & \D{redesign} + \D{custaudit}
\A{A-extern} & \D{guard}
\A{A-meaning} & \D{cmds}
\A{A-monitor} & \D{cmds}
\A{AV-admin} & \D{decenter}
\A{AV-prog} & \D{ccs}
\A{AV-prog} & \D{cmds}
\A{AV-power} & \D{ups}
\A{AV-HWfail} & \D{multilan} + \D{multiserver}
\A{AV-lanfail} & \D{multilan} + \D{multiserver}
\A{AV-jam} & \D{multilan} + \D{multiserver}
\A{F-distrib} & \D{noTro} + \D{sectransit}
\A{F-install} & \D{noTro} + \D{preconfig}
\A{F-maint} & \D{noTro} + \D{sectransit}

# Protection Techniques and Their Costs

{l|r|r|r} Protection & Initial Cost & Unit Cost & at 50 units \hline
\D{ccs} & 200,000 & 0 & 4,000
\D{cmds} & 250,000 & 20,000 & 25,000
\D{custaudit}+ & 2,500,000 & 0 & 50,000
\D{decenter} & 20,000 & 0 & 400
\D{e-diode}+ & 60,000 & 1,000 & 2,200
\D{embedccs}+ & 250,000 & 0 & 5,000
\D{field-proc} & 0 & 5,000 & 5,000
\D{guard} & 500,000 & 0 & 10,000
\D{guardHW}+ & 0 & 10,000 & 10,000
\D{mls} & 0 & 2,000 & 2,000
\D{multilan} & 20,000 & 2,000 & 2,400
\D{multiserver} & n/a & n/a & n/a
\D{newdoc} & 50,000 & 200 & 1,200
\D{newman}+ & 50,000 & 200 & 1,200
\D{newprotocols}! & 1,250,000 & 0 & 25,000
\D{newtools} & 100,000 & 0 & 2,000
\D{newtrain} & 40,000 & 5,000 & 5,800
\D{noTro} & n/a & n/a & n/a
\D{noise} & 10,000 & 20,000 & 20,200
\D{p-diode}+ & 10,000 & 500 & 700
\D{perimeter}! & n/a & n/a & n/a
\D{preconfig} & n/a & n/a & n/a
\D{redesign} & 1,500,000 & 0 & 30,000
\D{redoMLS}* & 500,000 & 0 & 10,000
\D{replacements} & n/a & n/a & n/a
\D{sectransit} & n/a & n/a & n/a
\D{shield}* & 10,000 & 40,000 & 40,200
\D{shield}+ & 10,000 & 150,000 & 150,200
\D{simptools}+ & 20,000 & 0 & 400
\D{training} & 20,000 & 5,000 & 5,400
\D{ups} & n/a & n/a & n/a

Analysis Method

The chosen analysis technique is designed to find the minimum cost covering for the vulnerabilities considered. This technique uses a covering table like the example depicted here (this sample depiction is not accurate):

Analytical Technique

{l|c|c|c|c|c|c|} Vulnerability,Protection & Integrity & CMDS & Guard & Noise & Walls & ... \hline
Downgrade Leakage & yes & no & yes & no & no & ... \hline
Upgrade Leakage & yes & no & no & no & no & ... \hline
Program Corruption & yes & yes & no & no & no & ... \hline
Network Corruption & no & yes & no & no & no & ... \hline
Tempest & no & no & no & yes & no & ... \hline
... & ... & ... & ... & ... & ... & ... \hline

At least one ``yes'' is required in each row if we are to ``cover'' each vulnerability with a protective technique. The following algorithm can be used to derive a list of candidate covers:

1. If only one yes covers any given row, the technique in that column is required in order to cover the corresponding vulnerability. Select the column with that yes as ``necessary''. Since all rows covered by the necessary column are now covered, remove those rows from further consideration. In the preceding example, CMDS and Integrity checking are necessary because they are the only covers of Network Corruption and Upgrade Leakage, respectively. Since these techniques also cover all other vulnerabilities listed except for Tempest, all of the other rows are removed.
2. If there are columns left without a yes, remove them from the matrix. In this example, Guard is removed, since it provides no required coverage. We are now left with only trade-offs between competing techniques.
3. For each remaining technique, if any one technique costs less than or the same as another technique and covers everything the other technique covers, remove the more expensive technique. This selects in favor of techniques with equal or superior coverage and lower or equal cost.
4. For each remaining technique, select that technique, treat it as necessary under the preceding criteria, and proceed through these steps until there is nothing left to cover. This process generates the set of all possible minimum cost covers.
5. Add the costs of the items selected for each minimal cost cover, and select the cover with the lowest total cost. This produces the lowest cost single cover.

The correctness of this technique will not be shown here, but it is very similar to covering problem solutions used in other fields and follows the same theoretical process.
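For a covering table of this size, the procedure can be cross-checked by brute force: enumerate every subset of techniques and keep the cheapest one whose combined coverage includes all vulnerabilities. A sketch using the illustrative (not actual) table and plausible costs from the tables above:

```python
from itertools import combinations

# Illustrative covering table: each technique maps to the set of
# vulnerabilities it covers; costs are taken from the cost table above.
covers = {
    "Integrity": {"Downgrade", "Upgrade", "ProgCorrupt"},
    "CMDS":      {"ProgCorrupt", "NetCorrupt"},
    "Guard":     {"Downgrade"},
    "Noise":     {"Tempest"},
}
cost = {"Integrity": 4000, "CMDS": 25000, "Guard": 1000, "Noise": 20200}
vulns = set().union(*covers.values())

def min_cost_cover(covers, cost, vulns):
    """Exhaustively search all technique subsets for the cheapest one
    whose combined coverage includes every vulnerability."""
    best, best_cost = None, float("inf")
    names = list(covers)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if set().union(*(covers[t] for t in combo)) >= vulns:
                c = sum(cost[t] for t in combo)
                if c < best_cost:
                    best, best_cost = set(combo), c
    return best, best_cost

best, total = min_cost_cover(covers, cost, vulns)
# Integrity and CMDS are necessary (sole covers of Upgrade Leakage and
# Network Corruption), Noise is needed for Tempest, and Guard adds nothing.
```

Exhaustive search is exponential in the number of techniques, which is why the stepwise reduction described in the text matters for larger tables.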
\vbox{\centering \small
# MLS Implementation Protection Techniques and Their Costs

{l|r|r|r} Protection & Initial Cost & Unit Cost & at 50 units \hline
\D{ccs} & 200,000 & 0 & 4,000
\D{cmds} & 250,000 & 20,000 & 25,000
\D{decenter} & 20,000 & 0 & 400
\D{field-proc} & 0 & 5,000 & 5,000
\D{guard} & 500,000 & 0 & 10,000
\D{mls} & 0 & 2,000 & 2,000
\D{multilan} & 20,000 & 2,000 & 2,400
\D{multiserver} & n/a & n/a & n/a
\D{newdoc} & 50,000 & 200 & 1,200
\D{newtools} & 100,000 & 0 & 2,000
\D{newtrain} & 40,000 & 5,000 & 5,800
\D{noTro} & n/a & n/a & n/a
\D{noise} & 10,000 & 20,000 & 20,200
\D{preconfig} & n/a & n/a & n/a
\D{redesign} & 1,500,000 & 0 & 30,000
\D{redoMLS} & 500,000 & 0 & 10,000
\D{replacements} & n/a & n/a & n/a
\D{sectransit} & n/a & n/a & n/a
\D{shield} & 10,000 & 40,000 & 40,200
\D{training} & 20,000 & 5,000 & 5,400
\D{ups} & n/a & n/a & n/a }

\vbox{\centering \small
# nonMLS Implementation Protection Techniques and Their Costs

{l|r|r|r} Protection & Initial Cost & Unit Cost & at 50 units \hline
\D{ccs} & 200,000 & 0 & 4,000
\D{cmds} & 250,000 & 20,000 & 25,000
\D{custaudit}* & 2,500,000 & 0 & 50,000
\D{decenter} & 20,000 & 0 & 400
\D{e-diode} & 60,000 & 1,000 & 2,200
\D{embedccs} & 250,000 & 0 & 5,000
\D{field-proc} & 0 & 5,000 & 5,000
\D{guard} & 500,000 & 0 & 10,000
\D{guardHW} & 0 & 10,000 & 10,000
\D{mls} & 0 & 2,000 & 2,000
\D{multilan} & 20,000 & 2,000 & 2,400
\D{multiserver} & n/a & n/a & n/a
\D{newdoc} & 50,000 & 200 & 1,200
\D{newman} & 50,000 & 200 & 1,200
\D{newprotocols}! & 1,250,000 & 0 & 25,000
\D{newtools} & 100,000 & 0 & 2,000
\D{newtrain} & 40,000 & 5,000 & 5,800
\D{noTro} & n/a & n/a & n/a
\D{noise} & 10,000 & 20,000 & 20,200
\D{p-diode} & 10,000 & 500 & 700
\D{perimeter}! & n/a & n/a & n/a
\D{preconfig} & n/a & n/a & n/a
\D{redesign} & 1,500,000 & 0 & 30,000
\D{replacements} & n/a & n/a & n/a
\D{sectransit} & n/a & n/a & n/a
\D{shield} & 10,000 & 150,000 & 150,200
\D{simptools} & 20,000 & 0 & 400
\D{training} & 20,000 & 5,000 & 5,400
\D{ups} & n/a & n/a & n/a }

\SSS{The Covering Table Results}

The covering tables for MLS and non-MLS designs with and without shielding were generated from the Alma data previously described. These results are summarized by the following cost table:

{l|r|r|r|r|r|} Design Choice & Initial cost & Unit cost & ea.(1) & ea.(10) & ea.(50) \hline
MLS w/shield & 3,140,000 & 74,000 & 3,214,000 & 388,000 & 136,800
MLS w/o shield & 4,390,000 & 54,000 & 4,444,000 & 493,000 & 141,800
NonMLS w/shield & 2,690,000 & 184,200 & 2,874,200 & 453,200 & 238,000
NonMLS w/o shield & 3,940,000 & 54,200 & 3,994,200 & 448,200 & 133,000

The key to understanding the trade-offs for substantial numbers of Alma systems seems to lie in assessing two issues:

1. Is it worth forgoing the coverage of shielding for a savings of $3,000 per Alma? (This is the cost difference between the lowest cost non-shielded version of Alma with coverage and the lowest cost shielded version of Alma with coverage.)
2. Is the extra operating expense of running a system-high (non-MLS) Alma more than the $3,000 saved on each Alma by using a non-shielded non-MLS implementation?

It seems clear that the expense of running system-high will greatly exceed $3,000 per system, and that the cost of shielding the classified portions of Alma is far less than the potential benefit in reducing attacks.
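The per-unit figures in the cost table follow from amortizing the initial cost over the number of units and adding the per-unit cost, i.e., ea.(n) = initial/n + unit. A quick check against the table's figures:

```python
def per_unit_cost(initial, unit, n):
    """Initial cost amortized over n units, plus the per-unit cost."""
    return initial / n + unit

# Figures from the cost table above, e.g. the "MLS w/shield" row:
assert per_unit_cost(3_140_000, 74_000, 1) == 3_214_000
assert per_unit_cost(3_140_000, 74_000, 10) == 388_000
assert per_unit_cost(3_140_000, 74_000, 50) == 136_800
```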

Case Study 5: The DoD and the Nation as a Whole

Throughout the rest of this case study, the term we is used. In the original report, this referred to the information security organization within DISA for which the analysis was performed, but in my opinion, the responsibility lies with all of us to assure that these things get done.

Information Assurance Is a Military Readiness Issue

We should strive to ensure that senior decision-makers come to understand that the assured availability and integrity of information are essential elements of U.S. military readiness and sustainability so that they will provide adequate resources to meet this looming challenge.

Military capability is ``the ability to achieve a specified wartime objective (win a war or battle, destroy a target set). It includes four major components: force structure, modernization, readiness, and sustainability.'' [JCS102]

• a. force structure - Numbers, size, and composition of the units that comprise our Defense forces; e.g. divisions, ships, air wings.
• b. modernization - Technical sophistication of forces, units, weapon systems, and equipment.
• c. readiness - The ability of forces, units, weapons systems, or equipment to deliver the outputs for which they were designed (includes the ability to deploy and employ without unacceptable delays).
• d. sustainability - The ability to maintain the necessary level and duration of operational activity to achieve military objectives.

Readiness assessment generally involves such factors as people authorized and on hand, their skills and training; operational status of equipment, the time to repair, degree of degradation; training status of units, recency of field exercises, command-post training; and other more detailed factors. In the age of information warfare, everyone in the military must recognize that the readiness status of forces, units, weapons systems, and equipment depends on the status of the information infrastructure. An assessment of readiness should include such questions as:

• Are there enough information workers and managers on hand?
• Are they properly trained in detecting and reacting to information attacks?
• How recently have they undergone defensive information warfare training?
• What is the readiness status of the information infrastructure?
• How much stress can the infrastructure take at this time?

Currently, the DoD appears unable to take comfort in the answers to these questions. Training programs to prepare information workers for the prevention of attack, detection of intentional attacks, differentiation of malicious from mischievous from accidental disruption, and the recovery steps to undertake do not exist. Worse, there is no analysis indicating how many people with what sorts of training and skills are required to operate successfully in an information warfare environment.

The DoD depends on the DII at least as much as it depends on its logistics structure for battle readiness, and yet the DoD does not treat them in the same light. The DoD must assess information assurance as a readiness issue. It must incorporate information infrastructure readiness into the overall military readiness assessment, and it must treat DII readiness as a component critical to overall battle readiness. A recent awareness campaign has had a substantial effect on the top levels of government; however, the awareness must be spread throughout the DoD in order to have lasting effect.

National Planning Should Reflect Information Age Warfare

\BOX{In any conflict against an information warfare opponent, the information infrastructure will take battle damage. Whether the war is military or economic, and whether the weapons are bombs or computer viruses, in order to continue as a nation under this sort of attack, the NII must automatically detect, differentiate, warn, respond, and recover from disruption.}

There must be enough redundancy to meet bandwidth requirements during anticipated levels of disruption, sufficient firewalls to prevent disruption from spreading, sufficient mechanisms to make recovery and reconstitution of the NII feasible in an appropriate time frame, and sufficient training and support to allow that reconstitution to safely take place. In order to meet budget constraints, we must find ways to do this at a tolerable cost.

It is not reasonable to expect that technicians will be able to detect, differentiate, warn, respond, and devise work-arounds for each attack in real-time, and in the case of remote components, they may be unable to gain access to do these things at reasonable cost. For this reason, the designers of the NII must devise mechanisms that are as nearly automatic as feasible, and have built-in resiliency that, at a minimum, puts these mechanisms into known and controllable state sequences when they become ineffective over a period of time. This is very similar to the requirements on remote space exploration vehicles, except that the NII must be designed to behave in this fashion even during hostile attack and at a far lower cost.
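The notion of falling back into known and controllable state sequences can be sketched as a simple watchdog: a component that misses its health deadline is stepped down a fixed fallback chain rather than left in an undefined state. The state names and the deadline here are hypothetical, not part of any actual NII design:

```python
# Hypothetical fallback chain, ordered from normal operation to a safe
# restart. Each missed health-report deadline steps one state further down;
# a timely report recovers the component to NORMAL.
FALLBACK = ["NORMAL", "DEGRADED", "ISOLATED", "SAFE_RESTART"]

def next_state(state, seconds_since_heartbeat, deadline=1.5):
    """Advance one step down the fallback chain on each missed deadline;
    return to NORMAL as soon as a heartbeat arrives in time."""
    if seconds_since_heartbeat <= deadline:
        return "NORMAL"
    i = FALLBACK.index(state)
    return FALLBACK[min(i + 1, len(FALLBACK) - 1)]

state = "NORMAL"
for silence in [0.5, 3.0, 3.0, 0.2]:
    state = next_state(state, silence)
# Trace: NORMAL -> NORMAL -> DEGRADED -> ISOLATED -> NORMAL
```

The point is not the particular states but that every reachable condition, including sustained silence under attack, ends in a state an operator can reason about and recover from.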

We Must Retain Flexibility

In order to spend money wisely and still be properly prepared, the NII must retain flexibility to adjust to changes in direction and applications over the next 20 years. Compare U.S. war fighting in 1975 to 1995. Predicting 2015 is not a simple matter. Compare business computing over the same time frame. Rather than trying to make a 20-year prediction and hinging enormous amounts of money on being right, we should use designs that ensure an NII capability that is flexible enough to adapt with the times. Fortunately, information systems are easily made flexible, but unfortunately, that flexibility leads to vulnerability to disruption. The designers of the NII must devise information assurance techniques that allow flexibility without increasing vulnerability.

Information Assurance Policies And Standards are Needed

Most current information protection policies include requirements for availability and integrity, but these features are always mentioned along with secrecy. When this policy is translated into implementation, the information assurance elements are usually ignored. An example of this is the recent draft versions of the Defense Information Systems Network (DISN) specification. The top-level goals include almost equal emphasis on these three elements of information assurance, [DISN-conops] but in the design process, there is often a deemphasis of information assurance and an emphasis on secrecy. [DISN-security] There seem to be two reasons for this, and top-level attention is required in order to resolve them:

• Information assurance is usually brought up in conjunction with protection of classified information. Even though these areas are distinctly different, they are specified, discussed, and addressed together.

In order to assure that information assurance is adequately addressed, policy makers should separate the information assurance requirements from the secrecy requirements, and make it explicit in policy documents that they are separate and different.

• There are no information assurance standards explicitly referenced in top-level specifications. When specifications are translated into implementations, standards influence a large part of the design process. Standards are commonly viewed as checklists that have to be met, and where no standards are specified, there is no checklist, and thus no features are implemented.

To assure that information assurance is properly and consistently practiced, we should develop a set of information assurance standards for the NII that address disruption.

We should engage in a program of education to ensure that the top-level technical managers responsible for designing and operating the NII understand the issues of infrastructure design as opposed to typical system design and can help make design decisions that will satisfy the changing requirements over the lifetime of the infrastructure.

In order to transition existing systems into the NII while providing appropriate information assurance, we must first understand the weaknesses of existing systems, and then find ways to provide these systems with the information assurance features required in order to operate in the NII environment.

A key step in this process is performing a threat assessment which can be used as a baseline for vulnerability analysis. If properly done, such a threat assessment will bring to light a variety of new threats and threat sources that have not historically been considered.

Once the threat assessment is completed, vulnerability analysis of the most common classes of systems can begin in order to create baseline vulnerability assessments of the major classes of systems without performing an expensive and unnecessary exhaustive analysis of each system on a piecemeal basis.

While vulnerability analysis is underway, mathematical life-cycle cost and coverage analyses of potential defensive measures against identified threats in different classes of environments can be performed. As vulnerability assessments become available, the results of these assessments can be used in conjunction with defensive measure analysis to identify minimum cost protective measures required to cover identified threats.

As threats, vulnerabilities, and defensive measures are made available to program managers, they can make risk management decisions and implement appropriate controls in keeping with budget and other constraints.

Technical Vulnerability Should Be Assessed

NII planners should undertake a substantial study of existing and planned NII components in order to understand their vulnerabilities to offensive information warfare and determine appropriate actions to provide information assurance during the interim period before the NII and enhanced components are fully developed. Specifically:

• Perform disruption oriented assessments to identify potential vulnerability.
• Perform safe and authorized experiments to more precisely assess the extent to which accidental and intentional disruption has been addressed in the NII components in place today.
• Analyze the overall NII in conjunction with these analytical and experimental results to assess overall NII vulnerability to disruption today.
• Determine methods by which existing and proposed NII components can or should be cost effectively upgraded or replaced over time to provide enhanced information assurance for the NII.

There are some limited but proven scientific theories about vulnerability to intentional disruption, [Cohen-Wiley] [Cohen-Unix] and these theories can be used to form hypotheses about potential information assurance problems. From these hypotheses, the government and key industry members should sponsor the development of experiments to confirm or refute the existence of actual vulnerabilities, provide immediate awareness of their existence to information assurance personnel, and form approaches to removing or reducing their impact on the NII.

Something that should be clear from the start is that it will be infeasible to analyze software in most existing systems for potential vulnerabilities. The DoD alone has more than 500 million lines of customized software in operation today, and the vast majority of it has never been examined for information assurance properties. With that much unexamined software, it is prudent to assume that malicious logic weapons have been implanted.

One way to enhance assurance in networked systems at a very low cost is to provide an external misuse detection capability at the network level. These sorts of enhancements can provide substantial protection improvement at minimal cost, remain flexible enough to be adapted as the NII expands, and can provide a backbone for long term automated detection and response. Such systems exist today and anyone with a substantial network should consider using them. [Denning86] [Lunt88] [Lunt88-2] [Heberlein] [Proctor]
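In the spirit of the statistical approaches cited, [Denning86] a network-level misuse detector can be as simple as flagging sources whose event rates deviate sharply from the population norm. A toy sketch; the records, threshold factor, and use of the mean are illustrative assumptions, and real detectors build far richer per-user and per-host profiles:

```python
from collections import Counter

# Hypothetical network audit records: (source_host, event) pairs.
events = (
    [("h1", "login_fail")] * 3
    + [("h2", "login_fail")] * 40
    + [("h3", "login_fail")] * 2
)

def flag_anomalies(events, factor=2):
    """Flag any source whose event count exceeds a multiple of the
    mean count across all sources."""
    counts = Counter(src for src, _ in events)
    mean = sum(counts.values()) / len(counts)
    return [src for src, n in counts.items() if n > factor * mean]
```

The advantage of placing such checks at the network level is exactly the one the text describes: they can be deployed without examining or modifying the millions of lines of existing software behind them.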

In the course of assessment, improved procedures, standards, and documents should be generated to capture and disseminate the limited expertise currently available in this field. A mentor program might also be used to develop more expertise in this area.

According to one recent report, [FCC-NRC] the root cause of 30 to 40 percent of failures in digital cross-connect systems is human procedural error, which causes more disruption than any other single source. Many industry studies show similar results for other classes of information systems and networks. One report claimed that more than 80 percent of reported intrusions could have been prevented by proper human procedures. [Bellcore] Another author posted to the Risks forum that the lack of information from the current computer emergency response team (CERT) allowed numerous disruptions that could otherwise have been prevented, detected, and corrected. [Risks]

High-reliability organizations are defined as high-risk organizations designed and managed to avoid catastrophic accidents. The organization is high-risk due to the high complexity of the technology. Examples include air traffic control and nuclear reactors. ``\dots increasing numbers of serious errors will occur in high-reliability organizations, \dots data is lacking on ways to avoid exceeding human capacity limits, and \dots design and management strategies to allow safe operation are not understood. \dots These organizations have several distinguishing characteristics in common: hypercomplexity; tight coupling of processes; extreme hierarchical differentiation; large numbers of decision makers in complex communication networks (the law of requisite variety is cited); higher degree of accountability; high frequency of immediate feedback about decisions; compressed time factors measured in seconds; more than one critical outcome that must happen simultaneously.'' Another study is cited to show that designers are often unaware of the human limits to operating such systems. However, as Perrow points out, ``Designers tend to believe that automatic controls reduce the need for operator intervention and errors, while operators frequently override or ignore such controls due to the constraints \dots''. [Roberts]

We have to assure the resolution of the role of human components of information assurance to properly protect the NII. There are generally three strategies for improving this situation:

• Automate more human functions.
• Improve human performance.
• Use redundancy for integrity.

It is generally beneficial to automate functions for enhanced reliability whenever automation enhances performance, reduces cost, or provides other desired benefits. Unfortunately, while we spend a lot of money on enhancing automation for other tasks, one of the areas where automation is severely lacking is protection management. A simple example is the lack of administrative tools in most timesharing computer systems. Systems administrators are expected to keep systems operating properly, and yet:

• There are typically millions of protection bits that have to be set properly to prevent disruption and there are virtually no effective or supported tools to help set, validate, verify, or correct them. [Cohen-Unix]
• The DoD requires systems administrators of many systems to examine audit trails daily for signs of abuse, but it is virtually impossible for people to detect intentional disruption by this process, and the time and effort consumed in this activity is quite substantial. [5200.28] According to one report, audit records for a system with seven users executing an average of one command per minute over a period of six hours result in 75 Mbytes of audit data! [Proctor]
• Current audit analysis requirements don't require real-time analysis or response. Even automated audit reduction tools are inadequate in today's environment if they cannot act in near real-time, because disruptions can spread through a network at a very high rate unless response times are very short. For example, one AT&T switching system will disrupt the local central office unless failures are detected and responded to within 1.5 seconds of their occurrence. [Pekarske]
• Local area network administration tools are just now emerging, and the few tools that are commercially available open unlimited opportunity for intentional disruption. Some of the most powerful tools for network analysis are available for free and allow even an unsophisticated user to observe network packets. In many current local area networks (LANs), this allows passwords to be observed as they are entered.
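The first bullet above notes that administrators must keep millions of protection bits correct with virtually no tool support. As a hedged illustration of the kind of simple administrative tool that is lacking, the following sketch scans a directory tree for one narrow policy violation — regular files writable by any user. A real tool would check a full policy across owners, groups, and special bits; this example and its policy are assumptions for illustration only.

```python
import os
import stat

def find_world_writable(root):
    """Walk a directory tree and report regular files whose mode bits
    allow writing by any user -- one tiny slice of the millions of
    protection bits an administrator must keep set correctly."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if stat.S_ISREG(mode) and (mode & stat.S_IWOTH):
                findings.append(path)
    return findings
```

Even a checker this small changes the administrator's task from inspecting bits by hand to reviewing a short list of exceptions.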

"Research has shown that performance of certain types of control room tasks increases if the operator has some knowledge of the functioning of the process." [Ivergard]

Improving human performance is most often tied to motivation, training, and education, and again, there is woefully little of this in the information assurance area. Educational institutions do not currently provide the necessary background to make training easy, [Cohen-Wiley] and existing training programs in information assurance are not widely incorporated in the military or industry. These areas must be addressed if we are to provide information assurance for the NII.

In order for the NII to react properly to malicious disruption, it must be able to prevent disruptions where possible, and detect and respond appropriately to disruptions when prevention is not possible. In plain terms, the operators of the NII must be able to manage the damage. During periods of substantial disruption, there are likely to be more tasks to perform than bandwidth available to perform them. In an economic model of a high demand, low supply situation, the value of services naturally increases and usage decisions change to reflect the relative values.

It would be prudent to create an analogy to this economic theory for NII priorities, so that network managers can design a priority assessment and assurance scheme in which the value of the information passed through the degraded NII is higher per bit than that passing through the non-degraded NII. Someone must specify metrics for, assess the value of, and assign priority to information as a function of its value at that time, and the NII must use these metrics to prioritize its behavior. A sound start in this area could be achieved by developing a national version of the commercially oriented Guideline for Information Valuation. [ISSA]

If the priority assessment scheme is not a fully automatic process, the NII may have a profound problem reacting in a timely fashion. The first problem is that if people have to react, they are inherently limited in their reaction time. If the attack is automated and people's reaction times limit the defense, it may be possible to design attacks that vary at a rate exceeding the human ability to respond. A knowledgeable attacker who understands reflexive control may exploit this to create further disruption by misleading the people into reflexive responses, and exploiting those responses to further the attack. [Giessler] A fully automatic response may have similar reflexive control problems, except that it is potentially more predictable and normally far faster. This is where design flexibility must also come into play.

Priorities Should Be Properly Addressed Over Time and Circumstance

Information assurance issues must be flexibly prioritized and adapted as needed in order for the NII to behave properly over the range of operating and disrupted conditions. The metrics associated with information should be evaluated differently in different situations and should include such factors as time, value, criticality, locality, and redundancy. Each of these values should have an effect on the manner in which the NII prioritizes activities, while each should be controlled by different mechanisms to assure that an attacker cannot circumvent a single mechanism and exploit this to dominate activities.

Even in the most dire of circumstances, unconditional pre-emption should not be the method of choice for prioritizing scarce services. The problem is that pre-emption denies service to the pre-empted, and since the assessment of priorities may not be accurate, it may be highly desirable to apply some, albeit reduced, bandwidth toward all legitimate needs. It would be preferable to have a scheme whereby higher priorities have a higher probability of dominating resources at any given time, but over any significant period of time, even the lowest priority process has a reasonable expectation of some limited service. This concept is often called "graceful degradation".
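The probabilistic scheme described above resembles lottery scheduling: each pending request holds tickets in proportion to its priority, and each service slot goes to the holder of a randomly drawn ticket. The following is a minimal sketch of that idea (the ticket counts and request names are illustrative assumptions, not part of any NII design):

```python
import random

def pick_next(requests, rng=random):
    """Lottery-style selection from a list of (request, tickets) pairs.
    High-priority requests hold more tickets and so usually win, but
    every request with at least one ticket has a nonzero chance of
    service -- no one is unconditionally pre-empted."""
    total = sum(tickets for _, tickets in requests)
    draw = rng.uniform(0, total)
    running = 0.0
    for request, tickets in requests:
        running += tickets
        if draw <= running:
            return request
    return requests[-1][0]  # guard against floating-point edge cases
```

Over many draws, a request holding 10 tickets out of 100 receives roughly a tenth of the service slots rather than none, which is exactly the graceful degradation the text calls for.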

Criticality of Function Should be Properly Addressed

A more fundamental issue that must be resolved is how to prioritize between the basic information assurance measures. If it is better to have wrong information than no information, then availability is more important than integrity. If it is better to have no information than wrong information, then integrity is more important than availability. The former appears to be the case from the standpoint of infrastructure recovery, where even low-integrity information may assist in service restoration. The latter appears more appropriate when making strategic or tactical decisions, where a decision based on corrupt information can be fatal.

In most modern databases, it is a simple matter to make undetected modifications. An outage would be noticed and would trigger a response, and modern database techniques detect internal inconsistencies, but most modern databases provide no protection against erroneous data entered through the legitimate database mechanism or against malicious modification by a knowledgeable attacker.
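One way such modifications could be made detectable is to attach a keyed digest to each record, computed and verified outside the database's own mechanisms. The sketch below uses an HMAC for this purpose; the record layout, key handling, and canonicalization scheme are illustrative assumptions, not a description of any particular database product, and a deployed design would also need to protect the key and guard against record deletion and replay.

```python
import hmac
import hashlib

def seal(record: dict, key: bytes) -> str:
    """Compute a keyed digest over a record's fields so that a later
    modification made outside the legitimate, key-holding path can be
    detected on the next verification pass."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify(record: dict, key: bytes, sealed: str) -> bool:
    """Return True only if the record still matches its stored digest."""
    return hmac.compare_digest(seal(record, key), sealed)
```

An attacker who alters a field without knowing the key cannot produce a matching digest, so the corruption is flagged instead of silently propagating into logistics or command decisions.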

Subtle corruptions typically produce a different sort of failure, such as a missile defense system detecting hostile missiles as friendly or an airplane flipping upside down as it enters the southern hemisphere. [Risks]

In DoD logistics, command and control, and medical databases, such an error can not only be fatal, but can cause the DoD's automated information systems to be used as a weapon against it. In the national power grid, such a failure could literally wipe out electrical service throughout the country.

Priorities Should Interact Properly Across Components

Prioritization in the NII will involve both communication and computation, and the prioritization schemes must meld together in a suitable fashion across these boundaries. Furthermore, many of the computation components of the NII will not be under the operational control of the network managers. For example, embedded systems interacting with the NII will have to interact in specific ways in order to assure that no mismatch occurs. The NII will have to be able to deal effectively with intentional mismatches created to disrupt interaction between communication and computation resources.

Most current network protection strategies are based on the assumption that all of the systems in the network behave properly, and many local area network protocols depend on well-behaved hardware devices and software products in all of the nodes. When these networks are connected to global systems, imperfectly matched protocols or processes can snowball, causing widespread disruption. The priority assessment scheme must not be based on trusting the network components, and must be designed to detect and react properly to limit the spread of network-wide disruptions regardless of their specific characteristics. There are some theories for addressing protocol inconsistencies, but new basic understandings are needed at the process and infrastructure levels. We must promulgate standards that provide assurance based on the assumption of malicious components, and not based solely on lists of known attacks.

We Should Train for Defensive Information Warfare

Information workers cannot be expected to react properly under stress unless they are properly prepared for defensive information warfare. This involves several key actions by those who manage components of the NII:

• NII components must act together to develop proper policies and procedures, to define specific defensive information warfare tasks to be carried out, and to specify the manner in which they are to be performed.
• NII components must train information workers in how to properly carry out their duties under stress, so that they are able to efficiently carry them out as required under battle conditions.
• NII components must hold readiness drills and regular exercises so that the skills developed and honed in training do not decay with time.
• NII components should hold war games in order to determine weaknesses in strategies and improve them over time.

In the long term, education and training for defensive information warfare must rest upon a well conceived, articulated, implemented, and tested body of strategy, doctrine, tactics, techniques, and procedures. In turn, this body of knowledge must be based, in large measure, on a fairly detailed knowledge of the offensive capabilities available to potential adversaries and the nature of possible attacks on the information infrastructure. In the short term, however, there are several actions that should be undertaken to mitigate disruptions of the information infrastructure.

As a first priority, everyone associated with the operation, management, and maintenance of the NII should become familiar with the concept of information assurance and the nature of likely disruptions, and should undergo regular training and awareness drills to reinforce this training. Primary emphasis should be given to proper prevention, detection, differentiation, warning, response, recovery, analysis, and improvement.

The operators of the elements of the NII must be trained to consider, as a matter of course, the possibility that there are hostile disruptions being undertaken and that nobody, other than the attacker, is aware of them. Without awareness, advanced training, and education, the human elements of the NII are unlikely to be able to detect attacks unless and until advanced technology-based warning enhancements are implemented. Even then, awareness, advanced training, and education play a vital role in installing, maintaining, and using the automation.

As a second priority, training and awareness should be given to NII users. While this training may be more narrow in scope, it is essential that the users of the NII be aware of the information assurance issues, how their function can be impacted by NII disruption, what they should do to avoid causing disruption, and what they should do in the event of disruption.

Extensive use of simulation capabilities is called for in training individuals and groups. This training should be reinforced through the conduct of frequent readiness drills and exercises. These drills and exercises may initially be conducted as standalone events, but must eventually be integrated into the day-to-day operations of the NII components.

Information assurance should become part of the curricula of technical and professional courses of instruction offered throughout the nation. Information assurance should be embedded in all courses related to information systems, sciences, and management, and courses concentrating on information assurance should be offered as a part of the required curriculum for students concentrating on computer or information science or engineering.
