Protecting Your Information Assets
Information protection is a multifaceted discipline encompassing a wide range of management and technical specialties, and affecting every aspect of modern life. In the space available, it is impossible to adequately cover even a small portion of the information protection field in great depth. Instead, protection issues will be covered from several different perspectives to present a broad view of the overall problems and solutions.
It may seem strange at first not to address protection by simply detailing how to protect against each attack. This well-known process of elimination seems logically sound at first glance. It is commonly used in many fields, and it is one of the fundamental processes of rational problem-solving.
The problem with the process of elimination in this particular field is that potentially infinite numbers of different attacks exist. [Cohen-Wiley] [Cohen86] [Turing] As the world famous philosopher of science Karl Popper so rightly pointed out, [Popper] you cannot prove the truth of a universal statement (e.g., we are protected) about an infinite set (e.g., against all attacks), with a finite number of examples (e.g., by this list of defenses). It only takes one falsification (e.g., one successful attack) to prove the statement false.
A good example of a reactive protection posture is the way most people defend against computer viruses today. Until they experience a virus, most people will do nothing. During their first experience, they buy products until they find one that detects and removes the computer virus they encountered. They may then use that product sporadically for some time. If they get hit again, they may find that their current product doesn't work on this new virus, and they will purchase a new version that does handle it. If one of these viruses destroys all of the information on their disk, they suffer the consequences, but very few people, even those who have lost a lot of work, have gone to a proactive approach. They wait till they see an obvious symptom, and then they react by looking for a cure.
The technical aspects of this field are deeply interesting to many information technologists, but a common mistake is to forget that technology only exists in a human environment. The quality of protection and its cost and effectiveness are determined by the quality of the judgments made by those who design and implement it.
Technology experts have a tendency to use their knowledge as a lever to get to what they feel is the right solution. In the computing field in general, and in the information protection field in particular, technical people who don't understand people solutions seem to leap to technical solutions and stand behind them as if they were the only solution with any hope of working. To managers, it may seem like they are getting a snow job, but it's usually not intentional. The friction between the manager and the technologist causes frustration which makes for more friction, and on it goes. The net effect is that the technical and management people don't work well together, and protection ends up being ineffective.
The solution I offer when I encounter this problem is that the technologist should learn about the broader issues at stake and the manager should learn enough about the technical field to keep from being snowed. That's one of the main goals of this chapter. For those with a narrow focus and a lot of technical knowledge, I hope to demonstrate that the solutions to protection problems are not purely technical and that no technical solution will ever completely solve the problem. For those with a broad focus but little technical knowledge, I hope to provide the ammunition required to defend against technobabble. For those with both a broad perspective and a strong technical background, I hope to increase both breadth and depth of understanding of the information protection issue.
Information is defined as "symbolic representation in its most general sense". Protection is defined as "keeping from harm". Since the concept of harm does not apply to inanimate objects (you can damage them but not harm them), the field of information protection is preventing harm caused by symbolic representations.
As I have said before, my basic perspective is that information protection is something you do, and not something you buy. The most effective efforts at information protection are proactive in nature, customized to the organization, and at least 50 percent people solutions.
Infrastructures Are Different from Other Systems
There are substantial differences between designing a typical information system and designing a good information infrastructure, and the techniques normally used in information system design are often less than ideal in infrastructure design. One of the most glaring examples of these differences is in the tradeoffs between efficiency and effectiveness.
In designing typical information systems, good designers almost always choose to do things for efficiency, while good infrastructure designers almost always choose to do things for long-term effectiveness.
Unlike the automotive highway systems developed under the WPA, our information infrastructure was and is being developed by commercial vendors for the purpose of making a profit. That's one reason cable television is rarely found in rural areas, while the highway system extends throughout the country.
If commercial vendors were building toll roads, they probably wouldn't put them where they are now. They would probably have a hub and spoke system like the airlines have, and freight would be handled very efficiently by a few large freight companies through their selected hubs. Eventually, high-speed trains would replace much of the highway system because they can carry freight far more cost-effectively and faster than highways.
The tradeoffs between social goals, such as equal access and survivability, and financial goals, such as cost and performance, are being made by corporations today. Without making a value judgment about which is better, it is important to understand that efficiency under normal operating conditions almost never equates to efficiency or effectiveness under exceptional conditions. For this reason, any system designed purely for financial efficiency under normal operating conditions is highly susceptible to intentional disruption.
The infrastructure design issues of the NII are not now being addressed effectively and the implications on long-term information assurance may be significant. Most organizations that build their own information infrastructures face these same issues.
A Piecemeal Approach
With this overall perspective in mind, I will now give examples of piecemeal protective techniques that are effective to some extent against each of the elements of attack described earlier. The list of defenses for each attack is by no means exhaustive or even close to it, just as the list of attacks is not exhaustive, but this should give you some idea of the different ways in which attacks and defenses are approached on a piecemeal basis.
As I have discussed earlier, piecemeal approaches are not cost-effective and leave numerous weaknesses. On the other hand, many people who actually defend systems still use them. This description of piecemeal approaches should give you some idea of why they fail. All you have to do to understand this is to contemplate implementing all of these techniques in your organization in order to provide a reasonable level of protection. Clearly, you will run out of time, patience, and money before you are done.
Attackers can be eliminated from the pool of threats by a wide variety of different techniques and policy decisions. Someone once said that if you are worried about people throwing stones, don't live in a glass house. It is much the same way with information systems attackers. We can eliminate tiger teams by simply not inviting them in. Many hackers can be stopped by placing adequate notice on system entry to convince them that they are not welcome. Outsiders launching over-the-wire attacks can be eliminated by separating the externally networked portion of the information environment from the internal network components. So much for the easy ones.
Terrorists tend to be reasonably well funded, but they aim at disrupting infrastructure, are particularly oriented toward the financial industries, and want to generate high-profile incidents. Terrorists can be reduced as a threat by locating information systems in places they are not likely to attack.
Many studies in the 1960s and 1970s relating to anti-war activists indicated that locating computer installations in particular sorts of buildings decreased the chance of attack. The bottom floors of low-profile buildings seem to be relatively safe. Buildings that are not marked or otherwise promoted to indicate the fact that they house major information centers are also less likely to be attacked. Physical distribution of resources also reduces the effect of any single incident and thus reduces the value of attack for terrorists.
Insiders are commonly cited as a major source of attack. Eliminating the insider threat is a complex issue, but some aspects include good pre-employment employee screening practices, good internal employee grievance practices and procedures, high organizational morale, and good personal support services for employees under stress. For example, in one company I used to run, we had a policy of doing whatever we could to help employees who got in trouble. One employee ended up arrested for driving while intoxicated. We helped get him into a half-way house, drove him to and from work, and helped him get counselling. Over the years that followed, he was one of our most loyal, trustworthy, and hard-working employees. Although the American corporation no longer provides lifelong job security, there is a lot of benefit to maintaining good working relationships.
Private detectives and reporters are pretty hard to stop once they decide to try to get some information about an organization, but there are some things that work well once you are aware that they are trying to attack your information systems. One thing to do is get an expert in surveillance to sweep for outside attackers. Although this is cost-prohibitive for many small businesses, for large organizations, the $20,000 or so that it costs to locate obvious problems may be very effective. Legal means are also quite effective in limiting these sorts of attacks because these attackers are generally anxious to stay out of jail and retain their licenses or jobs.
Effective methods against potentially hostile consultants include strong contractual requirements, very high-profile public prosecution of consultants caught violating protection, and refusal to hire ex-computer-criminals as computer consultants. This last point is one of the most daunting of all to me. As one of the many hard-working legitimate experts in this field, I often find it unbelievable that a computer criminal is hired over a legitimate consultant. Most computer criminals know far less about protecting information systems than legitimate expert consultants. They usually only know a few tricks that the consultants are also aware of.
By paying ex-criminals in this way, you discourage honest experts and encourage crime.
Whistle blowers are best stopped by not doing anything that someone may feel they have to blow a whistle about. Most whistle blowers have at least a semi-legitimate complaint to air. If your organization has a legitimate means to air that complaint and give it a legitimate hearing, you will eliminate the vast majority of these problems. A further step might be to provide the means for the whistle blower to get to the legitimate authority through legitimate means without getting the press or other parties involved. It is far better to provide the information through the legal process than to have it show up on the front page of the New York Times when you don't know it's coming.
Club initiates are a very minor threat to anyone with a strong program of information protection. As a rule, they take the path of least resistance, and as long as you don't provide that, they will get frustrated with trying to attack your systems and attack someone else.
Crackers tend to be far more determined than hackers or club initiates. They also tend to have a much better set of tools, are more willing to take risks, and are less afraid of the implications of being caught. In other words, they are fairly serious attackers. But they don't have the large financial resources or motivation to launch the extremely high-profile attacks that more serious criminals have the capability to launch. Except for producing high-profile cases against crackers, it is very hard to prevent cracker attacks by discouraging them.
A broad range of defenses are likely to prevent most crackers, but these defenses don't necessarily need to be very deep in order to be effective.
Competitors may be very hard to stop. This problem is significantly worsened by the fact that competitors probably know more about how your operation works than noncompetitors, they probably know how to do subtle damage that is hard to detect, and they may well have several of your ex-employees available to intentionally or accidentally help them in gaining intelligence. The thing that usually stops competitors is the fear of being caught, and this is commonly increased by projecting the impression of a good comprehensive protection program. Please note that impressions are not always reflective of reality.
Maintenance people are hard to defend against by piecemeal methods. By their nature, they have access to a lot of facilities, they are in those facilities when other people tend not to be, and they are relatively low paid. This makes them high risks. The normal defense is a combination of good personnel practices, procedural techniques such as sign-in, sign-out, and good inventory controls, and physical security such as locked doors and alarms for information system sites. Unfortunately, in today's distributed computing environment, the computing power and information is widely distributed throughout the organization, and only special pieces of equipment like file servers and PBX systems can really be protected against maintenance people by physical means for reasonable cost.
Professional thieves tend to look more deeply into attacks than some of the other attackers we have considered here. This means that it is that much harder to keep them out. They also tend to be more willing to learn about technology than other attackers. They will often pick locks or enter through windows in order to get high-valued items. Part of the saving grace here is that they also tend to steal things of high value and small size and they work for money so they tend not to cause denial of services attacks unless it facilitates a theft. The best defense is a strong overall information protection program, since they might use any combination of attacks in order to achieve their goal. Piecemeal approaches will not be of any real value here.
Unlike professional thieves, hoods tend to use brute force over stealth, which makes them susceptible to physical security precautions. Strong doors, well-trained guards, good alarm systems, and similar things tend to be most effective here.
Vandals are a major problem in a lot of inner cities, especially during riots or periods of social unrest. South Central Los Angeles is one of the worst areas in the United States for vandalism, and yet the University of Southern California is right in the middle of that community and suffers relatively little in the way of vandalism. This is because of a combination of a reasonably good system of gates and walls, a university police force oriented toward reducing vandalism, and a very good relationship with the surrounding community.
A good public image may be the most important part of their strategy because by keeping the surrounding community supportive of the University, they greatly reduce the pool of attackers and bring social pressures to bear against would-be vandals.
Activists tend to be most interested in denial of services and leaking information that demonstrates the validity of their cause. The best defenses against activists tend to be strong public relations, legal proceedings, portraying them as criminals, and of course not having a high-profile or widespread reputation for doing things they tend to protest against. During the Vietnam war, protests tended to concentrate against schools with reserve officer training corps (ROTC) programs, while the environmental movement leads people to protest against oil companies, fisheries, and other similar targets.
Crackers for hire are genuinely dangerous, although they tend to be less expert and less willing to do research than professional thieves. The most effective defenses will be the same for the cracker for hire as the cracker not for hire, except that the motivation may be higher and thus more assurance is necessary in order to provide effective defense.
Mentally-ill people seem fairly random by their nature and might even be treated in the same way as a natural disaster. They may do anything depending on their particular illness and they tend to be hard to predict. Today, they are using information systems to find out personal information about people as a part of stalking or attempts to locate and seduce victims, but they may turn to other things as time passes. In this case, keeping information confidential is a vital component to protection, and this relies on a wide variety of techniques ranging from controlling information flow in the organization to personnel policies and procedures, to awareness, and on and on.
Organized crime has money, attacks with a strong financial motive, and is willing to do great bodily harm in order to steal money. They are willing to bribe people, extort money, and place employees in organizations, which makes them very difficult to defend against without a comprehensive protection program. No piecemeal defense will work against this sort of attacker.
Drug cartels tend to have money and use violence to get what they want, but they appear to be less involved in attacking corporations than in attacking law enforcement agencies. Their strongest desire seems to be finding the identities of drug enforcement agents, undercover police, and other people they are at odds with. They are willing to spend a lot of money, bribe people, place employees in key organizations, extort cooperation, and otherwise do whatever they feel necessary in order to protect their interests. Protection against these attacks requires a strong overall information protection program. No piecemeal defense will work against this sort of attacker.
Spies tend to be well funded, are provided with very good technical information, have a long time to penetrate an organization before carrying out attacks, are well trained, are cautious, and tend to behave as a malicious insider. The most effective methods for detecting spies historically have been auditing and analysis of audit trails, particularly of personal financial data.
Police armed with search warrants are almost impossible to stop legally. The only real defense against the state is the proper application of lawyers. The same is basically true for government agencies.
Infrastructure warriors are very hard to stop in a free society. The problem is that in order to prevent attacks against infrastructure, you have to either prevent the free movement of people, or use some form of detection to locate attackers before they carry out their attacks. This often calls for an aggressive strategy in which you actively seek out potential attackers and try to track and observe them so as to catch them before they carry out disruption. It is helpful if you can use borders to keep them out, but in the United States, borders are, for all intents and purposes, open to any serious attacker wishing to enter. For example, the border between the United States and Mexico is crossed by hundreds of illegal entrants each day. Crossing the Canadian border usually requires only a driver's license, which is not well verified at the border.
Nation states and economic rivals can only be successfully defended against by a nationwide effort at every level of government. More details of this sort of defense are described in a case study. Military organizations and information warriors are also covered to a large extent by case studies later in the text.
Finally, geographic location has a great deal to do with attacks on information systems. By placing data centers in the Midwest, for example, organizations can eliminate many of the serious attackers from the list.
Many things motivate people to attack information systems, and it is important to remain flexible in considering protection alternatives. For example, during one study, I heard a very brief description of the new set-top boxes proposed for the next generation of cable television. I began to ask about colors and shapes, and was loudly rebuffed by a manager on my team who thought it was a ridiculous question to ask and could not possibly be related to the protection issue we were examining. I explained that we had all seen hundreds of intentionally damaged set-top boxes in the repair room earlier that day. Since the set-top boxes used in this cable system were black metallic boxes and since there are many studies that have shown that people react differently to different colors and shapes, I thought it might be worth looking into letting people choose from different colors and shapes.
It turns out that this seemingly unimportant change can save as much as a few thousand dollars per day in reduced damage to information systems in each of several hundred cable systems which are part of this cable corporation. The total savings could exceed $100 million per year.
There are two important points here. One is that seemingly small and unimportant changes may have dramatic financial implications. Limiting the scope of investigation limits its value. The second one is the reason this example is provided under the heading of eliminating motives. This change is designed to change the average person's perception of the set-top box. It uses color and shape to make people feel better about the information system in their environment. This reduces their desire to damage systems and provides protection against many attackers who could not be stopped by many other more expensive techniques.
Naturally, color and shape are only some of the things that can be used to reduce people's motives to launch attacks. The motives that relate to any particular organization differ with the sort of things they do. For example, very few political motives exist for extracting confidential information from cardboard box manufacturers in rural America. In order to address motive, it is important to get a handle on who might be motivated to launch attacks against your organization. Armed with this list, you can try to find ways to reduce or eliminate these motives.
One of the most important motives has historically been revenge. Revenge typically comes from disgruntled employees and ex-employees, customers, business contacts, and others that may have a reason to believe that they have been harmed by an organization or one or more of its employees. Reducing this motive is strongly tied to the way people believe they are treated. Public relations is a very important aspect of controlling motives because it addresses general perceptions that people have of the organization. An abusive employment screening and interview process is commonly cited as a cause of disgruntled people. The way customers and business contacts are treated is another very important aspect of eliminating the motives for possible future attackers.
Another important motive for attack is money. One of the ways this motive can be reduced or eliminated is by creating the perception that there is no financial gain in attacking your information systems. Just as it is risky to announce that you will have a record amount of cash in your bank vault this weekend, there is a risk in publicizing the fact that very valuable information has been placed in your information systems.
Fun, challenge, and acceptance motives can be easily countered in one of two ways. One way is to put up enough of a barrier to entry that it isn't very fun or affordable to disrupt information systems. The other tactic is to make it so easy to disrupt systems that it is no fun or challenge to do so. The latter strategy was taken by MIT when students were using creative methods to deny services on computers during the school year. They installed a crash program that would deny services to all users by simply typing a command. The system would go down safely and automatically return to service an hour or so later. At the beginning of every semester, there were crash commands entered for a few days, but once the novelty wore off, nobody bothered to launch an attack.
Self-defense is a powerful motive. The only way to stop people from defending themselves is to not back them into a corner. Similarly, religious or political beliefs are typically unchangeable, however, you can often avoid their consequences by proper use of public relations.
Coercion is hard to eliminate as a motive since, under the right circumstances, almost anyone can be coerced into almost anything.
Military or economic advantage will likely always exist as a motive as long as people live in a competitive society with geopolitical inequities. Gathering or destruction of evidence is another motive that is unlikely to be eliminated without eliminating all crime.
One thing seems clear. An organization can often reduce motivation by controlling the perceptions that people have and the way they treat and interact with people. This involves ergonomics, psychology, public relations, personnel, marketing, and many other elements.
For any specific attack, there are a host of technical defenses. Technical defenses against specific attacks are often classified in terms of their ability to prevent, detect, and correct attacks and their side effects. Defenders can try to eliminate attack techniques by addressing them one-at-a-time, but by the time all of the attacks we have discussed are eliminated in this way, the cost of implementing and operating defenses will be so high that the solution will be impractical.
Most Trojan horses can be prevented and detected by a strong program of change control in which every change is independently examined before being put into use. Sound change control increases programming costs by about a factor of 2. Detection of the damage caused by a Trojan horse would require a comprehensive and generic detection capability, since a Trojan horse can cause arbitrary damage. The cure once a Trojan horse is identified is to remove it, try to find all of the damage it has done, and correct that damage. Time bombs and use or condition bombs are similar to Trojan horses in terms of defenses.
Dumpster diving can be avoided by using a high quality data destruction process on a regular basis. This should include paper shredding and electrical disruption of data on electronic media for all information being abandoned. Detecting dumpster diving can be done by relatively inexpensive surveillance (a camera or two and a few motion sensors), at least until the point where the trash is picked up by the disposal company. When dumpster diving is detected, it is important to determine what information could have been attained and how it could be used, and then to make changes so that attempts to use the information will be useless.
Fictitious identities can be detected to a large extent by thorough background checks, but even the United States government cannot prevent all fictitious people from getting into highly sensitive positions. Once a fictitious person has been detected, effective countermeasures include getting law enforcement involved, determining what sort of damage might have been done by that person to date, and preventing further damage by changing responsibilities or having the person arrested. In an intelligence operation, it may also be worth providing the person with fictitious information so that they feed the enemy false or misleading data which will waste enemy resources.
Protection limit poking can only be prevented effectively by a mandatory access control policy. Systems using such a policy are available today, but they are rarely used because most organizations are not willing to take the time and effort to understand them and decide how to use them effectively. Simplistic detection of anomalous protection settings can be done by using automated checking programs that detect changes or incorrect protection settings, but these are far less effective and more resource-consumptive than a mandatory access control policy. Naturally, they are far more widely used.
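A minimal sketch of such an automated checking program might walk a file tree and flag world-writable files. The world-writable test here is just one illustrative rule standing in for whatever settings a real protection policy would specify:

```python
import os
import stat

def find_world_writable(root):
    """Walk a directory tree and report files whose permission bits
    allow modification by any user -- a common misconfiguration that
    protection limit poking exploits."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IWOTH:  # "write by others" bit is set
                findings.append(path)
    return findings
```

Running such a scan periodically detects changed or incorrect settings only after the fact, which is exactly why it is weaker than mandatory access controls that prevent the bad setting in the first place.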
E-mail overflow can be countered by designing an electronic mail system that works properly. Specifically, it should detect space restrictions and react by automatically replying that the mail should be resent later. A priority system should be used to allow lower priority mail to be blocked sooner than higher priority mail. Most organizations don't have a desire to redesign the software they use as a part of their protection program. Other alternatives include using mail gateways that limit the impact of overflow, administering systems so that mail overflow doesn't adversely affect other operations, and detecting potential overflow before it affects systems, then using human intervention to counter the situation.
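The priority idea above can be sketched in a few lines. This is an illustration of the design, not the behavior of any real mail system; the field names and the 80 percent threshold are assumptions chosen for the example:

```python
def accept_message(queue, quota_bytes, message):
    """Decide whether to accept an incoming message against a mailbox
    quota. Lower-priority mail is refused sooner than higher-priority
    mail by reserving headroom for urgent traffic, and a refused sender
    is told to resend later rather than having mail silently dropped.
    `message` is a dict with 'size' in bytes and 'priority'
    (0 = urgent, 9 = bulk)."""
    used = sum(m["size"] for m in queue)
    # Bulk mail may fill only 80% of the quota; urgent mail may use it all.
    limit = quota_bytes if message["priority"] == 0 else int(quota_bytes * 0.8)
    if used + message["size"] > limit:
        return False, "451 mailbox near quota; please resend later"
    queue.append(message)
    return True, "250 accepted"
```

The key property is that an overflow attack composed of low-priority mail exhausts its allowance before it can crowd out urgent traffic.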
Infrastructure interference is impossible to prevent without good physical security throughout the infrastructure and sufficient redundancy and automated usage of that redundancy in the infrastructure to force disruption of more infrastructure elements than the attacker is capable of destroying in a given time period. Detection of subtle interference involves integrity checking systems, while less subtle disruption (such as bombs) often reveals itself too late.
Infrastructure observation is impossible to prevent in today's environment; however, the use of strong cryptographic systems can make the observed information very expensive to extract value from and thus useless to all but the most serious attackers.
Sympathetic vibration exists in any underdamped feedback system. The defense is to make the network protocols overdamped so that at any frequency at which it is possible to launch attacks, the feedback mechanisms prevent this problem. Each node in a network should also have provisions for temporarily ignoring neighboring nodes that are acting out of the normal operating range and rechecking them at later times for restored service.
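The "temporarily ignore and recheck" behavior can be sketched as follows. The thresholds and backoff constants are illustrative assumptions; the point is that doubling the quiet period on each repeated failure overdamps the feedback loop so that misbehavior cannot resonate through the network:

```python
import time

class NeighborMonitor:
    """Track a neighboring node's behavior, temporarily ignore it when
    it acts outside its normal operating range, and recheck it later.
    Each repeat offense doubles the quiet period, damping any feedback
    oscillation between nodes."""

    def __init__(self, base_quiet=1.0, max_quiet=300.0):
        self.base_quiet = base_quiet      # initial ignore period, seconds
        self.max_quiet = max_quiet        # cap on the backoff
        self.quiet_until = 0.0            # time until which we ignore the node
        self.current_quiet = base_quiet

    def report(self, msgs_per_sec, now=None, normal_max=100):
        """Feed in an observation of the neighbor's message rate."""
        now = time.monotonic() if now is None else now
        if msgs_per_sec > normal_max:
            # Out of range: ignore the neighbor, and back off harder each time.
            self.quiet_until = now + self.current_quiet
            self.current_quiet = min(self.current_quiet * 2, self.max_quiet)
        elif now >= self.quiet_until:
            self.current_quiet = self.base_quiet  # recovered; reset damping

    def should_listen(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.quiet_until
```

A node using this scheme rechecks a quarantined neighbor automatically once the quiet period expires, restoring service without operator intervention.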
Human engineering can only be effectively prevented by training and awareness programs that keep employees from being taken in by potential attackers. This is most effective in combination with policies and procedures that reroute potential attacks to people specially trained in dealing with these problems. For example, some telephone companies hand off calls to a specialist with the right facilities and skills. If something illegal is done, the caller can be traced and law enforcement authorities can be brought into the case. False information can also be provided to give access to a system designed to track attacks, gather information on attack techniques, and get enough evidence for a conviction.
Bribery is prevented by raising children with strong moral values, providing adequate compensation to employees, keeping them aware of the potential penalties of breaking the rules, keeping them aware of moral responsibilities, instilling a good sense of right and wrong, good background checks for financial difficulties, and other personnel techniques.
Getting a job and using that position to attack information systems can only be prevented by good hiring practices.
Proper protection will also limit the effect that an individual can have on the overall information system, so that in a large organization, no individual can cause more than a limited amount of harm. This involves access controls, separation of function, redundancy, and other similar techniques.
Password guessing can be prevented by requiring hard-to-guess passwords, by using additional authentication devices or techniques, by restricting access once authentication has succeeded, by restricting the locations from which access can be attained to each account, and by using usage times, patterns, or other indicators of abuse to restrict usage.
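The last of these techniques, restricting usage based on patterns of abuse, can be sketched in a few lines of Python. This is an illustration only; the threshold of three failures and the 60-second cooling-off period are arbitrary assumptions, not recommendations.

```python
import time

class LoginThrottle:
    """Sketch of a lockout policy: after too many failed guesses,
    further attempts on that account are refused for a cooling-off period."""

    def __init__(self, max_failures=5, lockout_seconds=300):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # account -> (failure count, time of last failure)

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(account, (0, 0.0))
        return count >= self.max_failures and now - last < self.lockout_seconds

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(account, (0, 0.0))
        self.failures[account] = (count + 1, now)

    def record_success(self, account):
        self.failures.pop(account, None)

throttle = LoginThrottle(max_failures=3, lockout_seconds=60)
for _ in range(3):
    throttle.record_failure("alice", now=1000.0)
print(throttle.is_locked("alice", now=1010.0))  # True: guessing is cut off
print(throttle.is_locked("alice", now=1100.0))  # False: the lockout has expired
```

A real system would combine a throttle like this with the other measures listed above, since a lockout alone invites denial of service against legitimate accounts.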
Invalid values on calls can be prevented by properly designing hardware and software systems. Unfortunately, there is often little choice about this, so detection through auditing and analysis of audit trails, redundancy, and other techniques are commonly used to augment protection. A good testing program can also be used to detect these inadequacies before they are exploited by attackers, so that other protective measures can be used to cover the inadequacies of these system components.
Computer virus defenses tend to be very poor today. They are expensive to use, ineffective over time, and ineffective against serious attackers. The virus scanning technology in widespread use is only good against viruses that have been reported to virus defense companies, which means that if someone writes a virus to attack a specific organization, it will not be detected by these techniques.
The only strong techniques are a comprehensive change control program, integrity checking with cryptographic checksums in integrity shells, and isolation to limit the spread of viruses. This, in turn, implies proper management of changes in information systems.
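The cryptographic checksum technique an integrity shell relies on can be sketched briefly in Python. For illustration, file contents are modeled as an in-memory mapping of names to bytes; a real integrity shell would walk the disk and would itself protect the baseline from modification.

```python
import hashlib

def snapshot(files):
    """Record a cryptographic checksum of each file's contents.
    `files` maps a file name to its bytes."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files, baseline):
    """Return the names whose current checksum no longer matches the baseline,
    i.e., files added, removed, or modified since the snapshot was taken."""
    current = snapshot(files)
    return sorted(name for name in set(current) | set(baseline)
                  if current.get(name) != baseline.get(name))

baseline = snapshot({"login.exe": b"original code",
                     "report.doc": b"quarterly figures"})
tampered = {"login.exe": b"original code plus a virus",
            "report.doc": b"quarterly figures"}
print(verify(tampered, baseline))  # ['login.exe']
```

Because a virus must change the bytes of whatever it infects, any infection of a checksummed file shows up on the next verification pass, whether or not the virus has ever been reported to a defense company.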
Data diddling is normally prevented by limiting access to data and limiting the methods that can be used to perform modification. This, in turn, requires strong access controls, effective management of changes, and integrity checking for both software and data. Redundancy can often be used to facilitate integrity. Rapid detection is mandatory if data diddling protection is to be effective. Trying to correct data diddling is usually very expensive, while the cost of incorrect data may increase dramatically over time.
Packet insertion is feasible any time control over the hardware level of any system on a network is attained. To prevent insertion, it must be impossible to gain hardware control. Packet insertion can also be done by placing a radio transceiver or other hardware device on the network. This can only be prevented by physical security of the network hardware. An alternative to preventing insertion is to make inserted packets less harmful by encrypting network traffic, but then the encryption hardware and software will almost certainly be available to anyone with physical access. One means of detection includes using hardware identification information to authenticate packet sources, but this fails when traffic passes through routers and gateway machines. Another technique of detecting illegal equipment is the use of a time domain reflectometer that detects every physical connection and provides distance information. A very high degree of configuration control is required in order to use this technique since any change in wiring will set off an alarm.
Packet watching can be done with the same technology as packet insertion. To prevent theft of passwords, they should be encrypted. The same is true for other valuable information sent over networks. If network-based backups are used, cryptographic or physical protection is required to prevent an attacker from getting a complete copy of all information on the network in only a few hours. Prevention and detection are the same for packet watching as packet insertion except that the packet watcher plays no active role, so techniques like associating hardware identities with packets don't work.
Van Eck bugging can only be prevented by reducing emanations, increasing the distance to the perimeter, or the introduction of noise. Noise generators that are able to effectively mask Van Eck bugging tend to be in violation of Federal Communications Commission (FCC) transmitter standards, and may interfere with radio and television signals, cellular telephones, and other electronic equipment. Increasing perimeter distance can be quite expensive since the necessary distance to prevent observation of video display emanations is on the order of 200 meters. Any receiver and retransmitter in that perimeter can be used to extend the distance. Finally, reducing emanations increases costs of displays by about 25 percent. It is fairly common for organizations with serious concerns about emanations to make entire facilities into Faraday boxes because of the increased convenience and reduced overall cost of protection. By doing this well, it is also possible to eliminate some other forms of bugging and electronic interference.
PBX bugging defenses are almost always dependent on getting an expert to examine the configuration of the PBX system and correct known deficiencies. With the addition of voice mail systems and the use of multiple PBX systems, this becomes a far more difficult problem. For example, a common technique is to restrict access to outbound trunk lines from anyone attached to an incoming trunk line. But with a dual PBX configuration, an attacker can go from one PBX to the other. Most current PBX systems will then lose the information about the incoming call source. The attacker then uses the outbound trunk lines of the second PBX to make an outbound call. Thus, all of the problems of networked computers may come into play in a complex telephone system, and protection techniques may involve a great deal of expertise, possible alteration of existing PBX software, audit trail generation and real-time analysis, and other similar techniques. Similarly, it is often necessary to disconnect maintenance lines when they are not being used for maintenance, to enforce strict rules on passwords used in voice mail systems, and to implement training and awareness programs.
Open microphone listening can be prevented by physical switches on telephone lines, but in most digital systems, this is expensive and causes the system to lose function. In one telephone system, some of the more expensive terminal devices have a handset/headset switch which disables the speaker phone and handset features. By combining this with a physical switch in the headset, protection may be afforded. In other systems, a thorough audit can be used to detect some attacks underway at the time of the audit, but even once attacks are detected, it may be hard to track down the attacker. Even with the knowledge of how to launch this attack, it may be hard to prevent its repetition without a comprehensive approach to protection.
Video viewing can often be stopped by adding physical switches on video and audio devices and a pair of lights: one hardware-activated whenever the device is in use, the other hardware-activated whenever it is not. The pair of indicators covers attacks that involve broken lights. Another way to limit this attack is by using a secure operating system environment, but for this particular attack, most current systems do not provide very high assurance.
Repair and maintenance functions, by their nature, may require physical access to equipment. Thus, any physical security is likely to be ineffective against attack by maintenance people.
For a high level of assurance, one or more trusted individuals will have to supervise every action of the repair team, only certified and physically secured components can be used in any installation or repair process, and the inventory and distribution processes must be carefully controlled. To limit exposure, information can be stored in an encrypted form with the encryption keys and hardware physically separated from other components, such as disks and backup devices. Preventing corruption during maintenance requires that maintenance procedures have the same quality of controls as normal operating procedures.
Wire closet attacks are usually preventable by simply locking wire closets. For higher valued installations, alarms can be added to provide detection. Physical security and good personnel policies also help reduce this threat.
Shoulder surfing has been around ever since passwords were first used in computers. A little education and awareness can almost completely eliminate this problem. A proper social environment should also help in eliminating any embarrassment caused by asking someone to step aside while a password is typed. Some of the other mechanisms used to eliminate password guessing also apply to shoulder surfing. Toll fraud networks may be avoided by the same techniques that are used against shoulder surfing. It is also helpful for service providers to employ real-time fraud detection. For example, the same caller cannot realistically make several calls at one time from several different phones in the same airport, nor make calls from San Francisco five minutes after making calls from Los Angeles.
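The "impossible travel" test just mentioned can be sketched as follows. The city coordinates and the 800 km/h feasibility threshold are illustrative assumptions; a real fraud system would work from the carrier's own call records.

```python
from math import radians, sin, cos, asin, sqrt

# Approximate coordinates, for illustration only.
CITIES = {"Los Angeles": (34.05, -118.24), "San Francisco": (37.77, -122.42)}

def distance_km(a, b):
    """Great-circle distance between two cities via the haversine formula."""
    lat1, lon1 = map(radians, CITIES[a])
    lat2, lon2 = map(radians, CITIES[b])
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def suspicious(calls, max_kmh=800):
    """calls: time-ordered list of (minutes since midnight, city) for one
    calling card. Flags any leg whose implied travel speed is infeasible."""
    for (t1, c1), (t2, c2) in zip(calls, calls[1:]):
        hours = max((t2 - t1) / 60.0, 1e-9)  # avoid division by zero
        if distance_km(c1, c2) / hours > max_kmh:
            return True
    return False

# A call from San Francisco five minutes after one from Los Angeles:
print(suspicious([(600, "Los Angeles"), (605, "San Francisco")]))  # True
# The same pair of calls 100 minutes apart is physically plausible:
print(suspicious([(600, "Los Angeles"), (700, "San Francisco")]))  # False
```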
Data aggregation is very hard to stop except by preventing the data from getting to the attacker. Thus, access controls and other techniques are effective against outside attackers, but authorized users performing unauthorized functions require increasingly tight real-time audit trail analysis.
Process bypassing is most often prevented by separation of function. For example, two people who don't know each other are unlikely to cooperate in a fraud, so by splitting the process into two parts and requiring a double authentication before sending a check, this sort of fraud is usually eliminated. The downside is that this makes systems seem less efficient, and in some cases may cost more than the loss from frauds. An alternative is a strong audit policy and random checks to assure that legitimate claims are fulfilled and few illegitimate claims are made. Audit trails also help catch the people responsible for these attacks.
Backup theft really results from organizations treating backup information differently than live information. In effect, the same sorts of precautions must be taken with all copies of information in order to provide effective protection.
Login spoofing is defended against by providing a secure channel between the user and the system. For example, a hardware reset button on a personal computer can be very effective in removing some sorts of spoofing attacks. Login spoofing in networks is far more complex because when a network resource crashes or becomes inaccessible for one reason or another, it is simple to forge the identity of that computer and allow logins to proceed on the forged machine. Cryptographic authentication can increase assurance, but this depends on properly functioning information systems.
Hangup hooking can often be eliminated by purchasing the right hardware and configuring the operating system to enforce disconnection properly. Hardware hangup is detectable on many modems if properly configured. To a large extent, this depends on using the proper hardware handshaking, which in turn relies on using a full modem connection rather than a two or three wire connection as used to be common in external modems. Unfortunately, over many modern networks, this hardware solution does not work. Instead, remote users must explicitly logout or a forger can continue a session after the originating party disconnects or reboots their computer. In this case, encryption or authentication of network traffic may be required in order to effectively address the problem.
Call-forwarding fakery can often be blocked by restricting functions of PBX systems or by requiring that forwarding information be entered from the telephone line being forwarded. This limits attacks to those who can modify the PBX, those who can physically access the telephone wires, and those using nontrivial telephone equipment. An example of how nontrivial telephone equipment can be exploited is an attack based on the dial-out capability of some modern fax machines. For example, the Brother 780-MC fax machine allows faxes to be recorded and forwarded to another telephone number. An attacker who guesses the 3-digit password can use this remote access to dial call-forwarding information into the telephone company, which in turn reroutes the line. To counter this, it is necessary to disable these features either at the telephone company or at the fax machine; either choice restricts legitimate access in order to restrict attacks.
Email spoofing can be prevented by using authentication in electronic mail systems. For example, the freely available software package PGP provides very good authentication and reasonable confidentiality.
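PGP uses public-key digital signatures; as a simpler stand-in, the sketch below uses a keyed hash (HMAC) shared between sender and receiver to show the underlying principle: a forged or altered message fails authentication. The key and message here are, of course, invented for the example.

```python
import hmac
import hashlib

# Assumption for this sketch: sender and receiver share this secret key.
SHARED_KEY = b"example shared secret"

def sign(message: bytes) -> str:
    """Compute an authentication tag the sender attaches to the mail."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def authentic(message: bytes, tag: str) -> bool:
    """Recompute the tag on receipt; constant-time compare resists timing attacks."""
    return hmac.compare_digest(sign(message), tag)

msg = b"From: cfo@example.com\n\nPlease wire the funds."
tag = sign(msg)
print(authentic(msg, tag))                      # True: genuine mail verifies
print(authentic(b"From: attacker\n" + msg, tag))  # False: spoofed mail fails
```

Public-key signatures as used by PGP achieve the same effect without requiring a shared secret, which is what makes them practical between strangers.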
Input overflow attacks can be avoided by proper program design, but unfortunately, most programmers are not properly trained in this aspect of program design and most programming aids don't provide adequate automation to prevent this sort of disruption. In this environment, the best we can do is prevent these errors from adversely affecting the rest of the environment by providing integrity checking and using access controls and other similar techniques to limit the effects of these design flaws.
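Python is memory-safe, so the sketch below cannot itself overflow a buffer; it illustrates the design discipline involved: check the length of input against an explicit limit before using it, and reject oversized input outright rather than truncating it silently. The 64-character limit is an arbitrary assumption.

```python
MAX_NAME = 64  # assumed field limit for this example

def read_field(raw: str, limit: int = MAX_NAME) -> str:
    """Accept an input field only if it fits the declared limit.
    Silently accepting or truncating oversized input is how many
    overflow-based disruptions begin."""
    if len(raw) > limit:
        raise ValueError(f"input of {len(raw)} characters exceeds limit of {limit}")
    return raw

print(read_field("Alice"))  # accepted
try:
    read_field("A" * 1000)
except ValueError as err:
    print("rejected:", err)
```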
Illegal value insertion can easily be kept from causing harm through the use of integrity checking on all data entry. Unfortunately, many programs do a poor job of this, but fortunately, many of the newer program development systems provide stronger input validation, and this helps reduce the problem in many cases.
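Integrity checking on data entry amounts to validating type and range before a value is accepted. A minimal sketch, with an assumed age field and an assumed plausible range:

```python
def validate_age(raw: str) -> int:
    """Integrity check on a data-entry field: the value must parse as an
    integer and fall inside a plausible range before it reaches storage."""
    value = int(raw)  # raises ValueError for non-numeric input
    if not 0 <= value <= 120:
        raise ValueError(f"age {value} out of range 0-120")
    return value

print(validate_age("42"))  # 42: accepted
for bad in ("-5", "999", "forty"):
    try:
        validate_age(bad)
    except ValueError:
        print("rejected:", bad)
```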
Induced stress failures exist in many cases because systems are inadequately tested and debugged before being fielded and because the designs don't use conservative estimates or strong verification mechanisms. One of the contributing factors to this problem is the rush to market for new products and the rapid product replacement cycle, particularly in the PC environment. This may be very good for the companies that sell these products, but it is not good for their customers. The solution is a more conservative approach to purchasing and a thorough testing and evaluation process.
The damage from false update disks can be prevented by having strong policies, procedures, training, and education. In order to be effective, this must be backed up by a good testing and configuration control program that is uniformly and universally enforced. Other techniques for defending against this threat are strong on-line integrity facilities and comprehensive access controls.
Network services attacks are usually countered by removing network services, concealing them within other services so that additional authentication is required, creating secure versions of these services, and creating traps so that attackers who try to use services can be traced. This is essentially the issue addressed by the network firewalls now becoming widely popular.
Combined attacks raise the bar far higher. For example, suppose sample software is provided for your testing program by transmission over a large network such as the Internet. The program must first be decompressed before it can be placed on a floppy disk for transfer to the testing system because it uses a decompression program for Unix that doesn't run on the DOS system used to test this product. In the decompression process, it creates a network control file that is rarely used. This control file causes the computer on your network to get instructions from the attacker about what to try on your system and returns results via electronic mail to the attacker.
The firewall on current network gateways to the Internet won't stop this program from looking for its instructions at a remote site, because the firewall is designed to prevent outsiders from getting in, not insiders from getting out. It won't stop electronic mail either, since that is what these firewalls are usually designed to allow to pass.
Your change control system probably won't detect the introduction of a new and legitimate program into your file space on your Unix account, and the files transferred to the test site won't include the file used for the attack.
So the attack works even though each of the elements individually might have been covered by a defense. For example, you may have used strong change control to protect the on-line Unix system from getting in-house software without approval, strong testing to prevent that software from being used before test, a trusted decompression program to decompress the incoming files, a secure operating environment to prevent the decompression program from placing files in another user's area, a firewall to prevent over-the-wire attacks, reduced network services to prevent attackers from entering from over the network, and awareness to prevent this attack from being launched by sending a floppy disk directly to the user. Yet, despite all of these precautions, this attack can succeed because it uses synergistic effects and piecemeal defenses don't provide synergistic defense. In order for piecemeal defenses to be effective against synergistic attacks like this, each combination of attacks must be explicitly covered.
Eliminating Accidental Events
Unlike intentional events, accidental events can almost always be characterized by stochastic processes and thus yield to statistical analysis. There are three major ways people reduce risk from accidental events. They avoid them, they use redundancy to reduce the impact of the event, and/or they buy insurance against the event. Avoidance typically involves designing systems to survive events and placing systems where events don't tend to occur. Redundancy involves determining how much to spend to reduce likelihoods of events in a cost-effective way. Insurance is a way of letting the insurance company determine the odds and paying them to assume the risks.
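Because accidental events yield to statistical analysis, the three options can be compared numerically as expected annual costs. The probabilities and dollar figures below are invented purely for illustration; an insurer or risk manager would substitute actuarial data.

```python
def annualized_loss(probability_per_year: float, loss_if_it_happens: float) -> float:
    """Expected annual loss from an accidental event."""
    return probability_per_year * loss_if_it_happens

# Illustrative numbers only: a 1-in-50-year flood causing $2M of damage.
ale = annualized_loss(1 / 50, 2_000_000)
print(ale)  # 40000.0 expected loss per year if nothing is done

relocation_cost = 30_000  # avoidance: higher rent away from the flood plain
redundancy_cost = 25_000  # a mirrored site cuts the loss by 90%
premium = 45_000          # an insurance premium covering the full loss

options = {
    "do nothing": ale,
    "avoid": relocation_cost,
    "redundancy": redundancy_cost + annualized_loss(1 / 50, 200_000),
    "insure": premium,
}
print(min(options, key=options.get))  # redundancy: cheapest expected annual cost
```

With these made-up numbers redundancy wins, but the same arithmetic can just as easily favor avoidance or insurance; the point is that the comparison can be made explicit.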
Errors and omissions happen in all systems and processes. For example, the Internal Revenue Service (IRS) reports about 20 percent data entry error rates, and they enter more data than almost any other organization on Earth. Better results come from the U.S. Postal Service (USPS) which enters zip codes from hundreds of millions of letters every day. Protection against errors and omissions requires redundancy of one form or another. If the redundancy costs more than the damage from the errors and omissions, it is not cost-effective to protect against them.
Power failure occurs quite often. Even disruptions that don't turn off the lights, such as voltage spikes and low voltage periods, may interfere with information systems. Some power problems may not cause an information system to stop operating, but may induce transient errors that are not detected by most modern systems. Protection against power disruption includes surge protection, uninterruptible power supplies, motor generators, and redundant power sources. Power failures of substantial duration are more likely on the East coast and the West coast of the United States than in the central states.
Cable cuts can be avoided by increased use of the cable location system. This system provides a telephone number to call before digging holes. When you call and identify where you are planning to dig, they provide you with information on the cables in that area so you can avoid hitting them. Organizations can try to protect themselves by using redundant service connections, but it is vital to be certain that these connections are actually redundant. At an infrastructure level, we currently have no way to track cable redundancy, and this is a necessary change if we are to provide high availability against this sort of incident.
Fire has many causes, ranging from electrical wiring failures to smoking in airliner bathrooms. Fire also tends to disrupt electrical power, lighting, and other elements often required for information system operation. Automatic fire suppression equipment is commonly used in large data centers, while smaller offices use hand-held fire extinguishers or alarm systems. Fire resistant safes are vital to assuring the integrity of backup tapes and other magnetic and paper media during a fire. Small fire-rated safes can be purchased for less than a few hundred dollars. Off-site backups are also commonly used for protection of vital information against loss in a fire.
Floods almost always occur in low-lying areas or next to creeks, rivers, or other bodies of water. The best protection against floods is proper location of facilities. Computer centers should not be placed in basements, except in rare cases. The use of raised floors helps reduce the threat of water damage.
Earth movement tends to happen near fault lines and where mine subsidence causes sinkholes and other similar phenomena. Again, location is a vital consideration in reducing this risk.
Solar flares increase background radiation which in turn introduces noise into communications and data storage devices. Effective redundancy in communications and storage includes the use of cyclic redundancy check (CRC) codes, parity check codes, and other similar protection. Satellite communication is most affected, followed by radio, overground wiring, and underground wiring.
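The effect of a CRC is easy to demonstrate with Python's standard library: flipping a single bit in a stored or transmitted frame changes its checksum, so the corruption is detected on receipt.

```python
import binascii

message = b"telemetry frame 0123456789"
crc = binascii.crc32(message)  # checksum computed before transmission

# A single bit flipped in transit, e.g., by solar-flare-induced noise:
corrupted = bytes([message[0] ^ 0x01]) + message[1:]

print(binascii.crc32(message) == crc)    # True: an intact frame passes
print(binascii.crc32(corrupted) == crc)  # False: the CRC catches the bit flip
```

A CRC detects accidental corruption; against deliberate tampering a cryptographic checksum of the kind discussed earlier is required, since an attacker can recompute a CRC to match altered data.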
Volcanoes are almost always at known locations. The further away you are, the safer you are. They also damage infrastructure when they erupt, and infrastructure dependencies should be examined for enough redundancy to cover such eruptions. There is no effective physical protection against a volcano at this time.
Severe weather, static electricity, air conditioning loss, and that whole class of scenarios cannot be completely avoided by geographic location; however, some places encounter more severe weather than others, and some tend to have infrastructure that withstands these incidents better. Analysis of infrastructure reliability is appropriate to finding proper locations.
Moving computers should include a pre-move backup and a post-move verification of system and file integrity.
This implies a method for integrity checking. Most PC-based systems don't have substantial storage integrity mechanisms; however, systems like Unix and VMS do have built-in facilities for this purpose. Computers are relatively fragile, and as they age, wires become brittle, connections of boards to backplanes become corroded, and component connections become less flexible. For this reason, mechanical movement should be followed by a thorough diagnostic check of all system elements.
System maintenance sometimes causes downtime, but as a rule, the downtime is worthwhile because the result is, more often than not, beneficial. In cases where downtime is unacceptable, systems that use redundancy to continue to operate during maintenance are appropriate. Inadequate maintenance is generally far worse than downtime caused during maintenance because flaws that might be prevented, detected, or corrected during maintenance occur during normal operation instead.
Testing often uses rarely used and poorly tested features of an information system, and as a result, it is more likely to induce failures on a step-by-step basis than normal operations. Many systems now have built-in self-test capabilities, and these sorts of systems are usually designed to be tested during operation or under well-defined conditions. Testing should be carried out under well-defined conditions, both because this makes the results of the test more meaningful in relation to other comparable tests, and because it provides procedural mechanisms for assuring that tests don't cause harm.
Humidity, smoke, gasses, fumes, dust, heat, and cleaning chemicals can best be prevented from affecting information systems by environmental controls such as air filters, temperature controls, humidifiers and dehumidifiers, and other similar technologies. In circumstances where these factors are present, controls should be used. It is also helpful to avoid this sort of circumstance by locating information facilities as far as possible from the sources of these factors.
Temperature cycling normally results from turning systems off and on and from differences in day and night temperatures. The solution is a temperature-controlled environment and continuous operation 24 hours a day, 7 days a week. In most systems, this extends the life.
Electronic interference can be avoided by not locating equipment in areas with interfering signals, by using power conditioners and other similar equipment, and by enclosing equipment in a Faraday box.
Vibration is primarily a problem in mobile systems such as cars, aircraft, and space vehicles. When conditions warrant, there are systems specifically designed to withstand vibrations.
Corrosion is normally a problem in unprotected environments. The best protection against corrosion is to provide special environments for computer equipment. It is also prudent to locate information systems away from potentially corrosive materials.
Geographic location has a great deal to do with natural disasters. In the United States, the West coast has volcanoes and earthquakes, the East coast has hurricanes and winter storms, and the central states have flooding in low-lying areas. By placing data centers in the Midwest, organizations can avoid most natural disasters.
Corruption is most commonly prevented by the use of redundancy. In order to be effective against intentional attack, the redundancy must be designed so that it cannot be forged, removed, or otherwise bypassed without high cost.
Denial of services is also most commonly prevented by the use of redundancy, and it has the same restrictions when dealing with intentional attack as corruption.
Leakage is most commonly prevented by making information inaccessible, or hard to understand once accessed illicitly. This involves access controls and cryptography.
An Organizational Perspective
It should now be apparent that if organizations try to provide information protection on a piecemeal basis, they will spend a great deal of money and get very little in the way of overall long-term protection.
This is what the vast majority of organizations do today and, as a result, they get fairly poor protection, pay too much for what they get, and encounter an ongoing series of difficulties that they have to deal with on a case-by-case basis.
There is of course a far better way of dealing with the protection challenge.
The basic concept is that organizations must deal with the information protection issue from many organizational perspectives. By combining these perspectives, the net effect is a comprehensive protection program with built-in assurance. The overall cost is lower because optimization is done at an organizational level instead of a piecemeal level.
To appreciate this, it might be helpful to think about the many types of protection described in the last section. I think you will find that every aspect is covered from the organizational perspective, but even more importantly, many scenarios that have not been called out in this book are also covered.
There is one other point to be made before going on. Any organization that takes the organizational approach to protection will almost certainly have to go through a cultural change in order for the approach to work. This approach is designed to create a culture in which information protection is effective.
Protection management is the management part of the process by which protection is implemented and maintained. In order for protection to work, adequate resources must be applied in an appropriate fashion. Protection management works toward optimizing the use of resources to make protection as effective as possible at or below budgeted cost levels. In most large commercial organizations today, the highest-level information protection management personnel report directly to corporate officers. Protection management typically takes place at all levels of the organization. It is fairly common for lower-level technical information protection management to be carried out by relatively untrained systems and network administrators who also have nonprotection roles in maintaining the operation of information systems. Top-level information protection managers are usually dedicated to the information protection task.
Proper protection management affects all elements of the protection process and improper management makes effective protection almost impossible to attain.
Two of the most critical functions of protection management are in budgeting and leadership.
Without adequate and properly allocated funding, protection becomes very difficult to attain. For example, it is common to spend a lot of money on people who are partially involved in the protection task while spending far too little on those who are fully dedicated to the task and on the support systems to facilitate protection. This is in large part because the hidden costs of information protection are amortized across the entire organization. For example, every systems administrator spends time in protection functions but this time is not commonly differentiated from other time. The effect is that from a budget standpoint it appears to be more cost-effective to have the administrators spend their time on protection than to have more protection specialists. In fact, a protection specialist is usually far more effective in resolving issues quickly, has much greater knowledge and far better tools, and is much more likely to come up with better and more cost-effective long-term solutions.
The leadership issue usually comes to a head when the top-level managers in an organization are subjected to the same protection standards as everyone else. It is commonplace for these top-level managers to simply refuse to be subjected to these standards. The net effect is that a privileged class comes into being. The rest of the organization then enters into a seemingly eternal power struggle to get the same privileges as the top-level executive and, over time, more and more people choose to ignore protection as a symbol of their status. It doesn't have to be this way. Perhaps one of the best examples of leadership is what happened some years ago when AT&T changed its policies.
Basically, AT&T implemented a system where internal auditors would go from area to area identifying protection issues to be addressed. Line managers would be given a form which they could either sign on one side, agreeing to pay for the protection improvements out of their operating budgets, or sign on the other side, refusing to pay for the improvements and taking responsibility for the results. Failure to sign the form resulted in its being sent to the next higher-level manager. The first such form was not signed; it was sent up the ladder and, in a short period of time, reached the CEO. The CEO could have done many things, but his leadership changed the corporate culture in short order. He called the next manager down the line who had refused to make the decision and told him to have one side or the other signed by the end of the day, or start looking for a new job. By the end of the day, the effect had rippled all the way down the ladder, and the lowest-level line manager had signed one side or the other. That was the last time the decision was ever passed up the ladder. The result is an organization where people at every level take responsibility for information protection very seriously, and the effect has been a high degree of information assurance.
Protection policy forms the basis upon which protection decisions are made. Typical policy areas are numerous, touching every part of the organization's operation.
Protection policy is normally the responsibility of the board of directors and operating officers of a company and has the same stature as any other official statement of corporate policy.
I keep talking about a protection policy as the basis for considering integrity, availability, and privacy. The reason policy is so important is that it is impossible to achieve a goal unless you know what the goal is.
The first and most vital step in achieving protection in any environment is to define the protection policy. This is something that should be done very carefully. For example, suppose I just use my earlier goal (i.e., get the right information to the right place at the right time) as the policy. If this is the only guiding force behind decisions, the policy may put me out of business very quickly. I will never achieve my real goal, which should be a balance between cost and protection.
Policies can range from the very simple (such as the AT&T policy of having auditors indicate appropriate requirements and managers make decisions about implementation) to the very complicated (such as the DoD policy which consumes several encyclopedic volumes on each sort of information system). There are many books filled with protection policies.
Protection policy defines the set of circumstances that protection is supposed to cover and how the tradeoffs work between cost and assurance. To the extent that policy doesn't explain these things, it leaves holes that will be filled or left open at the discretion of those who try to implement policy. The result of poor policy is lack of control, and lack of control in information protection has the potential to lead to disaster. At the same time, policy should not over-specify things that require lower-level decisions, or it will tie the hands of those who implement protection to the point where they may be forced to act foolishly.
One way to think of policy is in terms of what it does and does not say to those who have to implement protection. It should specify that all of the components of protection need to be addressed in an ongoing fashion and that everyone in the organization should participate in the effort at some level. It should not specify standards and procedures, specific documents, audit techniques, safeguards, or other elements of protection in detail. It should state goals and specify that certain tradeoffs need to be addressed, but it should not specify what to do or how to do it at a detailed level. It should specify top-level elements of personnel structure and minimum levels of involvement of top people, but it should not be tied to personalities or special capabilities of people in particular positions. It should be designed for the long term and thus change rarely, but it must be able to adapt without dramatic change in order to retain stability in the organization.
Standards and Procedures
Standards and procedures are used to implement policy at an operational level. It is not unusual for employees to be unaware of standards or to ignore them, and procedural compliance is rarely adequate to implement the desired protection effectively. In assessing standards and procedures, it is important to determine that they are adequate to meet operational requirements, and that they are adequate in light of the level of compliance, training, and expertise of the people implementing them. Standards and procedures apply at all levels of the personnel structure, from the janitor to the board members.
Standards provide goals to be achieved, while procedures provide approved means of reaching those goals.
We may have a standard that dictates that all job and duty changes be reflected in changes in authorization to information systems within 48 hours of the change of status, except in the case of termination or demotion, in which case changes to authorization must be made by the time the employee is notified of the change. The standard may seem reasonable, and a lot of companies have a standard of this sort, but many of the companies with this standard lack the procedures to carry it out. The procedures that go with this standard must include, but not be limited to: the mechanisms by which employee job and duty changes get reported and to whom; who is responsible for making changes; how the current status of employee authorization and the authorizations associated with each job are specified and maintained; what to do when the person normally responsible for this activity is on vacation; how to protect the information on employee authorizations; how to change authorization information reliably; and what to do with information created and/or maintained by the employee when job changes take place.
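As an illustration of how such a standard might be checked mechanically, the following sketch flags status changes whose authorization updates missed the deadline. The record layout, field names, and data are hypothetical, not a real HR schema.

```python
from datetime import datetime, timedelta

# Hypothetical records of job/duty changes and the corresponding
# authorization updates; illustrative data only.
status_changes = [
    {"employee": "alice", "kind": "transfer",
     "reported": datetime(2024, 3, 1, 9, 0),
     "authorization_updated": datetime(2024, 3, 2, 10, 0)},
    {"employee": "bob", "kind": "termination",
     "reported": datetime(2024, 3, 1, 9, 0),   # time the employee was notified
     "authorization_updated": datetime(2024, 3, 1, 11, 0)},
]

def violations(changes, limit=timedelta(hours=48)):
    """Flag changes whose authorization update missed the standard's deadline:
    immediately on notification for termination or demotion, otherwise
    within the 48-hour window."""
    flagged = []
    for c in changes:
        if c["kind"] in ("termination", "demotion"):
            deadline = c["reported"]
        else:
            deadline = c["reported"] + limit
        if c["authorization_updated"] > deadline:
            flagged.append(c["employee"])
    return flagged

print(violations(status_changes))  # ['bob']
```

A check of this sort is only one of the procedures the standard requires, but it shows how a standard's goal can be turned into a routinely verifiable condition.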
To a large extent, standards and procedures dictate the degree of assurance attained in the protection effort. If the standards miss something or the procedures that implement them are poor, the level of assurance will almost certainly be low. If the standards are well done and comprehensive and the procedures implement those standards efficiently and reliably, assurance will be high and protection will work well within the rest of the organization.
Documentation is the expression of policy, standards, and procedures in a usable form. The utility of documentation is a key component of the success of protection. Protection documentation is intended to be used by specific people performing specific job functions, and as such should be appropriately tailored. Documentation should be periodically updated to reflect the changing information environment. These changes should be reviewed as they occur by all involved parties. Documentation is normally in place at every level of a corporation, from the document that expresses the corporate protection policy to the help cards that tell information workers how to respond to protection-related situations.
Documentation essentially increases assurance by providing a way to access the procedures in written form when necessary. Many modern organizations are moving toward on-line documentation, but they often ignore the requirement for documentation in a form that is usable when their computers are not operational. If the documentation required to make the computer usable is not available when the computer breaks, it is not of any value.
On-line documentation is not likely to be very useful in recovering from an attack that deleted all of the on-line information or in recovering from a widespread power failure.
Documentation cannot realistically cover every situation that can come up in full detail, but good documentation provides a range of information including immediate responses to a select number of critical problems, more in-depth material that explains how and why things are done the way they are, and even the detailed technical information used to create the information technology in the first place.
In order to be effective, documentation also has to be properly located and considered part and parcel of the information systems themselves. Detailed design documents are rarely helpful to the end user of today's information technology, while most help cards must be instantly accessible if they are to have utility. Documentation also has to be tracked and updated to prevent having the wrong information used when problems occur. In today's environment, there are commonly many versions of a particular software package, and incompatibilities from version to version are common. Having an improperly matched manual can waste a lot of time and effort.
Most modern computer hardware and software manuals do not give protection prominent treatment and, in some cases, don't mention protection issues at all, even though those issues greatly affect product operation. For example, manuals almost never tell users what protection settings should be used for different files under different conditions. Products rarely provide protection options in their user interfaces, and when they do, there is almost never a way to automate the proper protection decision. This means that in order for these programs to operate properly, the user has to become aware of the protection implications on their own, make good protection decisions without adequate supporting material, and manually set protection to the proper values. Protection documentation should address these concerns explicitly so that the user can do the right thing without having to be a protection expert.
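As a small illustration of automating a protection decision rather than leaving it to the user, the sketch below creates a file that only its owner can read or write. The 0o600 mode is an assumption about what "proper" protection means for a private file on a Unix-like system.

```python
import os
import stat
import tempfile

def write_private(path, data):
    """Write data so that only the owning user can read or write the file."""
    # Create the file with restrictive permissions from the start, rather
    # than tightening them afterward, which would leave a window of exposure.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)

path = os.path.join(tempfile.mkdtemp(), "private-notes.txt")
write_private(path, "confidential")
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 on Unix-like systems
```

The point is not the particular mode bits but that the program, not the user, makes the protection decision consistently every time.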
Documentation should also be created by information workers as they work. A simple credo may help: The work isn't done 'till the documentation is finished.
Protection audit is vital to assuring that protection is properly in place and to detecting incidents not detected by other operating protection techniques. Audit is also important to fulfilling the fiduciary duty of corporate officers for due diligence, detecting unauthorized behavior by authorized individuals, and assuring that other protective measures are properly operating. Audit is normally carried out by internal auditors and verified by independent outside personnel with special knowledge in the fields of inquiry; both work for and report to corporate officers, and both have unlimited access to examine, but not modify, information.
Protection audit is used to test and verify coverage. In the process, it increases assurance and identifies vulnerabilities. Audit is normally a redundant function used to provide increased integrity, but in some organizations, it is used as a primary means of detection and response. For example, in accounting, most cases of fraud and embezzlement are detected during the audit process.
Automated audit generation and analysis techniques have also been implemented in order to provide near real-time detection of events. For example, in most timesharing computer systems, real-time log files are generated to record protection-relevant events. These audit trails can then be analyzed by real-time analysis programs to detect known attack patterns and deviations from normal behavior.
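A minimal sketch of this kind of automated analysis, assuming a simple in-memory audit trail and using a crude failed-login threshold as the "known attack pattern":

```python
from collections import Counter

# Hypothetical audit-trail records; a real system would read these from
# the log files its operating system generates.
events = [
    ("09:01", "alice", "login_failed"),
    ("09:01", "alice", "login_failed"),
    ("09:02", "alice", "login_failed"),
    ("09:02", "alice", "login_failed"),
    ("09:03", "alice", "login_ok"),
    ("10:15", "bob",   "login_ok"),
]

def flag_suspects(events, threshold=3):
    """Flag users whose failure counts deviate from normal behavior.

    A crude stand-in for real pattern matching: repeated authentication
    failures followed by a success is a classic password-guessing signature.
    """
    failures = Counter(user for _, user, event in events if event == "login_failed")
    return [user for user, count in failures.items() if count >= threshold]

print(flag_suspects(events))  # ['alice']
```

Real analysis programs use far richer patterns and statistical baselines, but the structure is the same: record protection-relevant events, then scan them for known signatures and anomalies.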
Technical safeguards are commonly used to provide a high degree of compliance in addressing specific, known classes of vulnerabilities. For example, cryptography, access controls, and password systems are all technical safeguards.
Technical safeguards must not only meet the requirements of addressing the vulnerabilities they are intended to mitigate, but must also be properly implemented, installed, operated, and maintained. They are also subject to abuse in cases where they are inadequate to the actual threats or where they create undue burdens on users. Technical safeguards are typically implemented by systems administrators based on guidance from protection managers.
Technical safeguards are designed to cover attacks. They cover such a broad range and have been the subject of so much work that many books and thousands of scientific articles on the subject have been published. Technical safeguards range from special-purpose techniques that detect a specific attack to generic techniques that cover large classes of accidental and intentional disruptions. Technical safeguards also range from well thought out, well designed, carefully implemented customized solutions to poorly conceived, poorly designed, and poorly implemented off-the-shelf products.
Unfortunately, most technical safeguards have not been designed to be easily and effectively managed by an average person. In fact, most of them cannot be effectively managed even by a well-trained protection specialist without additional specialized training and custom-made tools.
The lack of adequate protection management tools in support of technical safeguards is slowly being addressed, and over time, operational management of technical safeguards may reach a level where average users can protect themselves. But for now, this is a major shortcoming of technical safeguards that limits their effectiveness.
Incident response is required whenever a protection-related incident is detected. The process of response is predicated on detection of a protection-related event, and thus detection is a key element in any response plan. The response plan should include all necessary people, procedures, and tools required in order to effectively limit the damage and mitigate any harm done to as large an extent as is possible and appropriate to the incident. In many situations, time is of the essence in incident response, and therefore all of the elements of the response should be in place and operating properly before an incident occurs. This makes planning and testing very important. Incident response is normally managed by specially trained central response teams with a local presence and cooperation of all affected users.
Incident response is, by definition, reactive in nature. That is, it covers incidents that are not covered proactively. Incident response also implies incident detection, which opens a whole new set of issues. Specifically, if an incident can be detected, why can't it be corrected or prevented automatically? And if it cannot be detected, how can incident response help, since no incident can be responded to unless it is detected? The ultimate goal of incident response is to identify and implement protection enhancements.
Then there are the issues of what to respond to at what level. If the incident response team overreacts every time an incident is detected, it may cost a lot and make true emergencies less likely to get the desired level of attention. If the incident response team underreacts, it could be very costly. This means that incident response must be measured according to the seriousness of the incident, but this then places still more emphasis on the detection system. Response teams must be able to properly differentiate between incidents of different levels of severity in order to provide the proper measured response. The more we know about differentiating incidents, the better we are able to defend against them in a proactive way.
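One way to make the measured response concrete is a simple severity-to-response table. The tiers and actions below are illustrative assumptions, not a standard taxonomy.

```python
# Illustrative severity tiers and responses; names and actions are assumptions.
RESPONSES = {
    "low":      "log and review at the next audit",
    "medium":   "notify the systems administrator within one business day",
    "high":     "activate the response team immediately",
    "critical": "activate the response team and notify corporate officers",
}

def measured_response(severity):
    """Match the response to the incident's severity, so the team neither
    overreacts to noise nor underreacts to a true emergency."""
    return RESPONSES.get(severity, "escalate for classification")

print(measured_response("medium"))
```

The hard part, of course, is not the table but the detection system that classifies incidents into the right tier in the first place.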
Next, we have the issue of reflexive control as applied to the incident response team.
A serious attacker may stress the incident response capability to detect weaknesses or may hide a serious incident in the guise of an incident that has low response priority.
The resources available for incident response tend to be limited. A clever attacker may stress the response team by forcing them to use more and more resources. This then forces the incident response system to increase the threshold of detection in order to continue to meet cost constraints. By raising the level high enough, an attacker can force the incident response team to ignore the real attack in favor of diversions.
The solution is a well-qualified response team. Incident response almost invariably involves people. To be effective, these people must be well-trained and experienced in the field of information protection. A good incident response system also involves all members of the organization to the extent that they often must act in conjunction with the response team in order for response to operate effectively.
Any system that is expected to work properly must be adequately tested. The testing requirement applies to human as well as automated systems, and to the protection plan itself. It is common to implement new or modified computer hardware or software without adequately testing the interaction of new systems with existing systems. This often leads to downtime and corruption. Similarly, disaster recovery plans are often untested until an actual disaster, at which point it is too late to improve the plan. Testing is normally carried out by those who have operational responsibility for functional areas.
Testing increases assurance by providing verification that systems operate as they are supposed to. It is commonplace for inadequately tested systems to fail and affect other connected systems. This commonly results in denial and corruption. The response to denial and corruption, in turn, often leads to leakage. Testing extends far beyond any special system components provided for protection. It also includes all components of information systems including, but not limited to, people, procedures, documentation, hardware, software, and systems.
Many large organizations have configuration control that involves testing each new component in a special test environment to increase the assurance that the new component will operate properly within the existing environment. This commonly delays the introduction of new software into a PC-based environment by several months. Many mainframe environments stress change control, which uses testing as a key component of assuring that changes operate properly on test cases as well as on samples extracted from real data. Most other environments seem to be poorly controlled from a testing standpoint today, and the effect on availability and integrity is sometimes severe.
Testing is almost never perfect. Performing a test of every possible interaction between two fairly small programs operating in a PC-based environment would probably take more time than the expected lifetime of the universe.
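The claim can be sanity-checked with a back-of-the-envelope calculation. The figures below are illustrative assumptions: two programs whose interaction depends on a combined 128 bits of state, exercised at a billion test cases per second.

```python
# Back-of-the-envelope check: exhaustive interaction testing of two small
# programs with a combined 128 bits of relevant state.
combinations = 2 ** 128
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365

years_needed = combinations / (tests_per_second * seconds_per_year)
age_of_universe_years = 1.4e10  # roughly

print(f"{years_needed:.1e} years to test exhaustively")
print(years_needed > age_of_universe_years)  # True
```

Even with wildly optimistic hardware assumptions, the exponent dominates: every additional bit of interacting state doubles the testing time.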
On the other hand, almost no system of any substantial complexity works perfectly the first time it is tested. As a rule, the cost-effectiveness of testing goes down as more testing is done on the particular configuration. For this reason, most testing efforts concentrate on the interaction between a recent change and its environment. A good testing program will be designed so that it tests for high probability events first and gets high coverage quickly. Testing should be terminated when the likelihood of finding new errors becomes so small that the cost of testing exceeds the likely loss from remaining errors. Most companies stop testing well before this point.
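The stopping rule just described can be sketched as a simple break-even comparison. The dollar figures and probabilities below are illustrative assumptions, not measured values.

```python
# Stylized stopping rule: keep testing while the expected loss prevented
# by the next round exceeds the cost of running it.
cost_per_round = 2_000            # dollars to run one more test round
expected_loss_per_error = 50_000  # dollars if an error escapes undetected

def worth_another_round(p_find_error):
    """True while the expected prevented loss exceeds the testing cost."""
    return p_find_error * expected_loss_per_error > cost_per_round

# Early on, a round has a good chance of catching something...
print(worth_another_round(0.30))  # True
# ...but once the chance of finding a new error drops below the
# break-even point of 2,000 / 50,000 = 4%, testing stops paying.
print(worth_another_round(0.02))  # False
```

In practice the difficulty lies in estimating the remaining-error probability and the loss per escaped error, which is one reason most companies stop testing well before the break-even point is reached.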
There is no effective information protection without physical protection of information assets and the systems that store and handle those assets. Physical protection is expensive, and thus must be applied judiciously to remain cost-effective. There is a strong interaction between the design of systems and their physical protection requirements. Physical protection is typically implemented and maintained in conjunction with operations personnel who are responsible for other aspects of the physical plant. It sometimes involves an internal security force of one form or another. Physical protection covers disruptions and increases assurance provided by other protective measures by making it more difficult to modify the way systems operate.
Almost no hacker is willing to deal with guards holding rifles. Even some pretty serious criminals are unwilling to risk assault and battery charges to enter a facility.
Some of the key decisions in physical security lie in deciding who the protection system is designed to keep away from what and what natural events are to be defended against. Natural events are normally dictated by physical location. For example, flood protection on the top of a mountain is rarely appropriate, while any vital site in California had better have adequate earthquake protection. Mischievous attacks from disgruntled ex-employees should probably be defended against by almost any organization, but terrorist protection is rarely necessary for small firms unless they are located very close to major targets, because terrorists don't get as much press coverage or cause as much damage by blowing up a small company as a big one.
Information systems are tools used by people. At the heart of effective protection is a team of trustworthy, diligent, well-trained people. Although there are no sure indicators of what people will do, individuals who have responsibilities involving unusually high exposures are commonly checked for known criminal behavior, financial stability, susceptibility to bribery, and other factors that may tend to lead to inappropriate behavior. Although most people provide references when submitting a resume, many companies don't thoroughly check references or consider the effects of job changes over time. Personnel security considers these issues and tries to address them to reduce the likelihood of incidents involving intentional abuse.
In most corporations, there is a personnel department that is supposed to handle personnel security issues, but it is common to have communications breakdowns between personnel and technical protection management, which result in poor procedural safeguards and unnecessary exposures.
In order for protection to be effective, the linkage between the personnel department and the information technology department must work well and be supported by standards and procedures.
Personnel department protection efforts concentrate on enhancing assurance, eliminating insider attacks, and reducing motives for attack. Motives are reduced by properly selecting employees who are less likely to be motivated to launch attacks and more likely to participate willingly and actively in the protection effort. Insider attacks are reduced by proper compensation, hiring, firing, and change of position procedures. Proper investigative procedures also greatly decrease the chances of insider attack, but it is important to differentiate between the things that make people unique and indicators of malice. Different isn't always bad. Many companies have hiring policies that tend to exclude the very people who might best serve them.
Legal requirements for information protection vary from state to state and country to country. For example, British law is very explicit in stating the responsibility to report computer crimes, while U.S. laws do not punish executives who fail to report incidents. Software piracy laws, privacy laws, Federal Trade Commission regulations, recent federal statutes, contracts with other businesses, health and safety regulations, worker monitoring laws, intellectual property laws, and many other factors affect the proper implementation of information protection.
Legal matters are normally handled in conjunction with corporate legal staff and involve all levels of the organization.
Protection awareness is often cited as the best indicator of how effective an information protection program is. Despite the many technological breakthroughs in information protection over the last several years, it is still alert and aware employees who first detect most protection problems. This is especially important for systems administrators and people in key positions.
The main role of protection awareness is to increase assurance and, as such, awareness programs should be directed toward the specific goals of the overall protection effort. For example, if terrorists are not of particular concern, the awareness program shouldn't emphasize them, but it is generally beneficial for people to be aware of their existence, what they tend to do, and why they are or are not of great concern in this particular organization.
Awareness programs normally extend to all employees and consume anywhere from a few hours to a few days per year. This time is commonly divided into regularly scheduled events such as a computer security day, a quarterly training session, or a monthly newsletter. Posters are commonly used to increase awareness, and discussions of current events provide a way to bring the issues home.
Training and Education
Training has consistently been demonstrated to have a positive effect on performance, especially under emergency conditions. For that reason, training programs are a vital component in the overall protection posture of an organization.
It is common for people who once helped the systems administrator by doing backups to become the new systems administrators through attrition, and to end up in charge of information protection by accident. The net effect is an environment that relies on undertrained and ill-prepared people who are often unable to cope adequately with situations as they arise. The need for training increases with the amount of responsibility for protection.
Training and education increase assurance in much the same way as protection awareness does, except that in an awareness program, the goal is to remain alert, whereas in an education and training program, the goal is to impart the deep knowledge required to enhance the entire protection program and to create a responsive environment in which actions reflect policy.
Every employee with protection as a substantial concern should have at least a few hours per quarter of training and education, while those who are responsible for protection should attend regular and rigorous professional education programs.
The lack of information protection education in universities places a special burden on other organizations to provide the core knowledge that is lacking, while the rapidly changing nature of information technology makes it necessary to keep up-to-date on protection issues on a regular basis.
Education in information protection commonly takes one of two forms. For organizations with a relatively small number of people with protection responsibilities in any given geographical area, training courses are offered by short-course companies in cities around the world. These courses tend to use well-qualified specialists to address a wide variety of subjects at appropriate levels of depth. For organizations having at least 10 employees with protection responsibility located in physical proximity, in-house short courses are often preferred.
The main advantages of in-house courses are that they are specialized to the organization's requirements, they allow confidential questions to be asked in an appropriate venue, and the cost is far lower than the cost of sending the same number of employees to an outside short course. Good short courses in this field typically cost about $500 per person per day, not including transportation or housing. For less than $5,000, a very well-qualified expert can fly in from anywhere in the United States, stay overnight, provide a full day of semi-customized education, include a great deal of supporting material, and return home.
Many of the best experts in the information protection field do this sort of educational program for companies on a regular basis as a part of their consulting practice. The long-term relationship built up by this practice is very beneficial to both parties.
Information protection spans the organizational community. It crosses departmental and hierarchical levels, and affects every aspect of operations. This implies that the information protection manager must be able to communicate and work well with people throughout the organization.
To be effective, the mandate for information protection must come from the board of directors and operating officers.
The organizational culture is very important to the success of information protection. A proper culture reinforces cost-effective, high-assurance coverage of the protection issue. A culture that punishes those who report protection problems, that gives no rewards for good protection or punishment for poor protection, and that doesn't provide the power base from which protection gets priority over less important issues, makes effective protection almost impossible to attain.
There are clearly other ways of organizing protection issues, and this one is not particularly better than any other, except that it seems to provide broad coverage from a perspective that has proven useful to many organizations. I sometimes get comments like "it doesn't include x," where x is disaster recovery or access control or some such thing. Like the pasta sauce commercial, my response is: It's in there. Here are some examples:
Some Sample Scenarios
The two examples that follow demonstrate the difference between the organizational and the piecemeal approaches to protection.
HERF Gun Attack

In the near future, your organization is attacked by a deranged individual who uses a high energy radio frequency (HERF) gun to destroy information systems from a range of about 20 meters. This individual has decided that your organization is the devil incarnate and will continue to do this until caught.
Suppose you have decided on a piecemeal defense based on the list of attacks and defenses described earlier. Now this new attack comes up. You are not prepared and your organizational response is unpredictable. Typically, the failure of several information system components in close physical proximity will be viewed by the local administrator as a result of a power fluctuation, a lightning strike, or some such thing. Replacements will be purchased, and nothing else will be said. This may happen once a week in a large organization and, since each administrator is acting on their own, the overall picture may never become clear. The problems may simply persist indefinitely. I have seen this happen in many large organizations (although none of them have been due to a HERF gun yet).
The cost will be shared across the organization, and it will reflect in inefficiency and poor overall financial performance. It may never be traced back to the attack.
Perhaps a few administrators chatting in the hall will eventually describe a common incident and the alarm will go out, but even then, the response will be piecemeal.
Now suppose we have an overall organizational protection system set up. The first attack will probably work in much the same way, except that it will be reported to the incident response team, and they will assist in resolving the problem. When the second attack takes place, the response team will again be notified, and now they will begin to suspect a correlation. As the symptoms become clear, they will probably contact a real expert in the field to tell them what could have caused such a thing and how to defend against it. Within a few days, they will know that this attack was caused by a HERF gun, and they will probably contact the FBI (for interstate) or local law enforcement to help them solve the problem. The resources that will be applied will probably allow rapid detection of this attack as it is underway, and the perpetrator will be hotly pursued. Everyone in the organization will be made aware of the situation, and special precautions will be put in place to limit the lost time and information until the attacker is found.
Learned Shared Attack
Over the last several weeks, a network-based attacker has probed your technical defenses against over-the-wire attack and has finally found an attack that works. The attacker has shared this attack with thousands of other attackers who are now running rampant through your information systems during the evening hours. The attackers may be denying services, leaking confidential information, or corrupting information throughout the organization's information systems. One of the first things they do is modify the security systems to allow reentry once the shared attack is defended against.
With a piecemeal defense, someone may eventually notice that people are logged in at unusual hours, or that performance is slow at night, or some such thing. Then, if this is a systems administrator, they may try to track the source down using whatever tools they have. If they are really good at what they do, they will eventually figure out that a serious attack is underway and then call in help from the rest of the organization, or describe it to the systems administrators' group at the next regular meeting, if there is such a thing. Eventually, this problem will be tracked down to the specific attack being used and this hole will be filled, but by that time the attackers will have introduced several other holes through which they can regularly enter, and the battle may rage indefinitely. Something similar happened at AT&T in the 1980s, where the intrusion was detected only several months after attackers had begun regularly entering systems all around the United States used for routing telephone calls. The AT&T incident continued for more than a year after it was first detected.
As I write this book, I know of another similar attack pattern against U.S. military systems that, by my accounting, has been underway for more than a year. This attack was not detected until at least six months after it began.
The military attack just described cannot be effectively countered at this time because no funds are allocated for calling in external experts or implementing more effective defenses, and many of the organizations under attack don't want to take responsibility for the defense because of political issues. A small team of specialists has been trying to keep up with the attacks for the last year, but frankly, they are overwhelmed.
With an overall organizational protection system in place, detection is far more rapid because people throughout the organization know to notice suspicious things and report them. The awareness combined with training and education of the incident response team should dramatically reduce the time before the attack is detected. Once detected, experts can be brought in to analyze and counter the specific attack, and then organizational resources can be applied to provide improved protection against this entire class of attacks for the future.
Strategy and Tactics
Two critical planning perspectives for information protection are usually not addressed or even differentiated: strategic planning and tactical planning. An important reason to explicitly look at these two perspectives is that they reveal a lot about how money is spent and how planning interacts with operations.
The difference between strategy and tactics is commonly described in terms of time frames. Strategic planning is planning for the long run, while tactical planning is planning for the short run. Strategic planning concentrates on determining what resources to have available and what goals an organization should try to achieve under different circumstances, while tactical planning concentrates on how to apply the available resources to achieving those goals in a particular circumstance.
In the planning process, so many things should be considered that I cannot even list them all here. They tend to vary from organization to organization and person to person, and they involve too many variables to draw general conclusions without sufficient facts. I have collected what I consider to be the major issues in planning and done some initial analysis to help in your planning, but these efforts cannot possibly substitute for expert analysis based on technical and organizational knowledge of your requirements.
First and foremost, planning a defense is a study in tradeoffs. No single defense is safest for all situations, and no combination of defenses is cost effective in all environments. This underscores the basic protection principle.
Protection is something you do, not something you buy.
General Strategic Needs
At the core of any strategy is the team of people that develop it and carry it out, both in the long run and in tactical situations. The first and most important thing you should do is gather a good team of people to help develop and implement strategy.
General Tactical Needs
Some Widely Applicable Results
Even though there is a vast array of different environments, some strategies and tactics seem to be almost universally beneficial.
The Cost of Protection
Organizations don't like to spend money on information protection if they can avoid it, because it is not profitable in the normal sense of the word. If you have better information protection, you don't lower costs and you don't increase sales or profit margins. All you do is prevent loss. In that sense, information protection is like insurance, a necessary cost of doing business, but not one that most organizations are anxious to emphasize.
Another major problem in analyzing the cost of protection is that, without good statistics, it is impossible to quantify the actual benefit associated with the cost. Good statistics aren't widely available because reporting is inadequate and mechanisms for compiling reported statistics don't exist.
I am not going to solve either of these problems here. But I do want to address another perspective on the cost of protection. Rather than address how much should be spent, I want to address the question of when to spend in order to be cost-effective.
Cost Increases Dramatically with Lifecycle Phase
There is a great deal of historical data that strongly supports the contention that organizations should spend money on information protection now rather than waiting until the NII is widely implemented and operational. Many experts in information protection indicate that after-the-fact protection is much less effective, much more expensive, rarely adequate, and hard to manage. The data from several significant studies indicates that the costs associated with addressing information assurance now may be as much as several orders of magnitude less than addressing it once the integrated NII is widely operating. According to numerous studies on the cost of making changes to information systems as a function of when the change is introduced in the lifecycle, cost increases exponentially with lifecycle phase.
According to one study, compared to finding and correcting problems in the analysis phase, the average cost of a change (i.e., correcting a software fault), is increased by a factor of 2.5 in design, 5 in testing, and 36 in system integration. [Wolverton] In another study of large high-assurance software designs with high quality specifications and extensive testing, the cost of a change after a system is in operation is calculated to be 100 times the cost of a change during the specification phase. [Boehm] The same study showed that the larger the system, the more cost advantage there was to making changes earlier.
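To make these multipliers concrete, here is a minimal sketch (the $10 base cost is hypothetical, chosen only for scale) that applies the Wolverton and Boehm figures to a single fault:

```python
# Illustrative sketch: relative cost of correcting the same fault in
# later lifecycle phases, using the multipliers reported in [Wolverton]
# and [Boehm]. The $10 base cost is a hypothetical figure for scale.

WOLVERTON_MULTIPLIERS = {
    "analysis": 1.0,
    "design": 2.5,
    "testing": 5.0,
    "integration": 36.0,
}
BOEHM_OPERATION_MULTIPLIER = 100.0  # post-deployment vs. specification phase

def fix_cost(base_cost, phase):
    """Estimated cost of correcting a fault first found in `phase`."""
    return base_cost * WOLVERTON_MULTIPLIERS[phase]

base = 10.0  # hypothetical cost of a fix during analysis
for phase, mult in WOLVERTON_MULTIPLIERS.items():
    print(f"{phase:12s} ${base * mult:8.2f}")
print(f"{'operation':12s} ${base * BOEHM_OPERATION_MULTIPLIER:8.2f}")
```

The same fault that costs $10 to fix during analysis costs $1,000 to fix after the system is in operation, which is the several-orders-of-magnitude effect the studies describe.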
According to one software engineering text (that may be less reliable than the previous two extensive studies), the cost of fixing an error rises as more work is built upon that error before it is found and fixed. ``The cost to catch a mistake and make a change at the time of writing the requirements specifications may be $10, and during the design $300. While the product is being built, the error may cost $3000; after the product has been delivered, the mistake could cost as much as $15,000 to fix, and possibly much more in losses to the client because the product didn't work.'' [Steward] The costs of extensive testing alone can double the overall system costs [Wolverton] while producing little advantage against malicious attacks. The following figure illustrates the cost of changes.
Covering intentional disruption is a more stringent requirement than covering random events, but the costs of added coverage are not always substantial. The study of node destruction in a uniformly connected network demonstrated that 10 times more links were required in some circumstances to withstand intentional attack than random destruction. [Minoli] On the other end of the spectrum, cost-analysis of fairly strong proactive integrity protection techniques proved 50 times more cost effective over the lifecycle of a system than defenses based on a reactive approach to attacks. [Cohen-costs]
It appears from the historical data that several orders of magnitude in reduced cost may be attained by making proper information assurance decisions early in the design process. Perhaps more realistically, organizations cannot afford adequate protection unless it is designed into systems from the start.
Procurement Is Only Part of Lifecycle Cost
Another issue in cost analysis that must be considered is the difference between lifecycle costs and procurement costs. Perhaps no area demonstrates the lack of attention to this difference more clearly today than the area of computer virus defenses. Many organizations have purchased virus scanning programs as a defense against computer viruses on the basis that the cost per system is only about a dollar. Unfortunately, this is only the purchase cost and not the usage cost. The factor of 50 cost increase previously described represents the difference between using a virus scanner every day and using a more cost-effective protection technique. [Cohen-costs] The cost of the scanner may be only $1, but the two or more minutes per day consumed by performing scans at system startup brings the lost time to more than 600 minutes (10 hours) per system per year. Even at only $10 per hour of downtime, the costs of using the scanner are 100 times more than the cost of purchase in this example. Other factors in virus scanners make them far more expensive to use than alternative technologies, and more recent analytical results show that using imperfect scanners (which all current scanners are) may lead to the spread of harder to detect viruses, just as the use of antibiotics has led to the so-called ``superbugs'' which resist antibiotics. [Cohen-Wiley]
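The scanner arithmetic above can be sketched as follows; the figure of roughly 300 startup scans per year is an assumption introduced here to reproduce the 600-minute total:

```python
# A sketch of the scanner lifecycle-cost arithmetic from the text.
# Assumption (not from the source): about 300 system startups per year,
# one scan each. Purchase cost and downtime rate are from the text.

purchase_cost = 1.00        # dollars per system
scan_minutes_per_day = 2    # time consumed per startup scan
startups_per_year = 300     # assumed number of scans per year
downtime_rate = 10.00       # dollars per hour of lost time

minutes_lost = scan_minutes_per_day * startups_per_year   # 600 minutes
usage_cost = (minutes_lost / 60) * downtime_rate          # dollars per year

print(f"minutes lost per year: {minutes_lost}")                      # 600
print(f"annual usage cost: ${usage_cost:.2f}")                       # $100.00
print(f"usage / purchase ratio: {usage_cost / purchase_cost:.0f}x")  # 100x
```

Even with these deliberately conservative figures, the recurring usage cost dwarfs the one-time purchase cost, which is the point of the example.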
According to industry sources, about 20 percent employee overhead is required for systems administration of integrity protection in a typical banking operation. [Cohen-Wiley] In some industries, the overhead is substantially lower, but any organization with serious intentions to implement information protection should budget at least 5 to 10 percent of the information technology budget for information protection. This budget normally consists of at least 90 percent ongoing costs and at most 10 percent acquisition costs. Because of the wide range of requirements involved in implementing a comprehensive information protection program, lifecycle costs of technical safeguards had better be a very small budget item.
Cost Increases with Immediacy
To perform an emergency upgrade of all current systems to implement all of the protection requirements described in this book would require an enormous amount of money. This would be impossible to do by the end of today no matter how much money was available for the task. The cost of implementing protection is far lower if it is well planned and carried out over a reasonable amount of time. This is shown in the following figure.
For most modern information systems, the replacement cycle is on the order of 2 to 4 years. This is driven largely by PCs, which constitute more than 80 percent of the computers in the world and are replaced on a very short cycle.
The situation is quite different for infrastructure elements such as cables and satellites, where investments are intended to pay for themselves over time frames of 10, 20, or even more years. In these cases, the time frames have to be adjusted appropriately. The same is true for physical protection of buildings, where replacement cycles are on the order of 40 to 80 years.
Cost of Incidents Increases with Time
The longer you wait before providing protection, the more losses you suffer from incidents. The cost of these incidents will eventually exceed the cost of providing protection. In addition, reported incidents have increased rather dramatically over the last several years, and the number of potential attackers increases dramatically as computers are interconnected. It is a reasonable expectation that, all other things being equal, the number of network-based attacks increases with the number of computers to which your computers are attached. If this is true, then the number of network-based attacks on the Internet should double every 8 to 10 months over the next 5 to 10 years and then level off as market saturation is reached in the United States. Worldwide, this increase could realistically continue for another 15 years, but not much longer. This is shown in the following figure.
Given the AT&T and DEC figures of more than one network-based entry attempt per day as of early 1994, we would expect that by the end of a 10-year period, there would be about 1,000 entry attempts per day. This is a little less than one entry attempt every minute.
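Under the doubling assumption, the projection is simple compound growth. A sketch, noting that a doubling period of about 12 months reproduces the figure of roughly 1,000 attempts per day after 10 years, while the faster 8-to-10-month rates project higher:

```python
# Speculative sketch of the compound-growth projection. The starting
# point (about one entry attempt per day in early 1994) is from the
# text; the doubling period is the text's rough estimate.

def projected_attempts(initial_per_day, doubling_months, months_elapsed):
    """Entry attempts per day after steady doubling every `doubling_months`."""
    return initial_per_day * 2 ** (months_elapsed / doubling_months)

# Ten years (120 months) of growth from one attempt per day:
print(projected_attempts(1, 12, 120))  # 1024.0 -- about 1,000 per day
print(projected_attempts(1, 10, 120))  # 4096.0 at the faster 10-month rate
```

At about 1,000 attempts per day against 1,440 minutes in a day, the rate is indeed a little under one attempt per minute.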
This exponential growth in attacks provides substantial support to implementing protection sooner rather than later. Of course, exponential growth in attacks by a finite population of people can only go on for so long. Eventually, the curve will level off. On the other hand, automated attacks are becoming more common, and as computer performance increases, so does the rate and sophistication level of these attacks.
Cost Increases With Coverage and Assurance
Spending more on protection generally attains increased coverage and assurance, but people commonly use this fact as an excuse to limit coverage and follow fads in spending money on protection. In fact, the cost of protection can be greatly reduced by making well thought out decisions. One analysis described in a case study in Chapter 6 produced savings of almost 50 percent over a previous design.
The most common analysis of protection is based on statistical analysis and assumes random events with given probabilities. This risk analysis technique asserts that you can assess an expected loss from each of a set of attacks by multiplying the loss for each attack by the number of times that attack will be experienced in a given time interval. Risk can then be reduced by implementing protection that has known cost and reduces the probability of loss or the amount of loss. If the reduction in expected loss exceeds the cost of the defense, then the defense is cost-effective. The standard advice is then to implement the most cost-effective protective technique first and continue implementing cost-effective techniques until none are left to implement.
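To make the standard computation concrete before turning to its flaws, here is a minimal sketch with entirely hypothetical loss and frequency figures:

```python
# A minimal sketch of the standard risk-analysis computation described
# above, with entirely hypothetical numbers. Annualized expected loss =
# loss per incident x expected incidents per year; a defense is deemed
# cost-effective if the reduction in expected loss exceeds its cost.

attacks = {
    # name: (loss per incident in dollars, expected incidents per year)
    "virus outbreak": (5_000, 2.0),
    "insider theft": (50_000, 0.1),
}

def expected_loss(loss, frequency):
    return loss * frequency

def cost_effective(defense_cost, loss_before, loss_after):
    """The standard rule: adopt the defense if it saves more than it costs."""
    return (loss_before - loss_after) > defense_cost

total = sum(expected_loss(l, f) for l, f in attacks.values())
print(f"total expected annual loss: ${total:,.0f}")  # $15,000

# A $3,000/year defense assumed to halve the expected loss from viruses:
before = expected_loss(*attacks["virus outbreak"])    # $10,000
print(cost_effective(3_000, before, before / 2))      # True: saves $5,000
```

Every number fed into this computation is an estimate, which is exactly where the critique that follows takes aim.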
This standard form of risk analysis, in my opinion, is utterly ridiculous. I say this with some trepidation because if you ask 100 experts in this field about it, probably 98 will agree with the standard risk analysis, at least until they read my explanation. So here it is. The problem with this sort of risk analysis is that it makes assumptions that do not reflect reality. Specifically, it assumes that attacks are statistically random events, that they are independent of each other, that available statistics are valid, that protective measures are independent of each other, that expected loss reduction can be accurately assessed, and that the only costs involved in protection are the costs of the protective techniques. All of these are untrue and each is a very significant defect that independently could invalidate the analysis. In combination, they completely invalidate standard risk analysis:
If experts can't accurately determine loss from a known incident after it takes place, how can they be expected to accurately predict expected loss from large classes of incidents before the fact? They cannot and they do not. Instead, they use an estimate without any real basis such as the worst case they could think of, or a wild guess. And that is only for the expected loss. When it comes to reduction in expected loss, the hand-waving really begins.
Combined Cost Factors
When we combine all of these cost factors together, we get an interesting picture of the overall cost of protection. Without using any specific numbers for any specific situation, the picture generally looks like the following figure.
In words, the most cost-effective way to implement protection is to spend a reasonable amount of money over a reasonable period of time.
Don't be in such a rush that you waste money, but don't wait so long that you lose more from disruption than prudent protection would have cost.
The problem in analysis is to determine the region of the curve with minimum cost. Naturally, nobody has ever performed enough analysis to be able to provide a closed-form solution to this problem, so everyone who tries to do this analysis is left with highly speculative estimates. My speculative estimate is that the most cost-effective time frame for protection is closely related to the replacement cycle of equipment when that time frame is less than three years. In every other case I have seen, budgetary constraints limit the time frame for effective protection.
Without careful analysis, it would be easy to bankrupt any organization in an attempt to ``armor-plate'' information systems with ad hoc, after-the-fact enhancements. Organizations should undertake a careful analysis to determine the cost-effectiveness of information protection techniques on a class-by-class basis. This effort should be undertaken at the earliest possible time in order to afford the greatest cost savings.
For obsolescent systems, the cost of injecting protection may be astronomical, so a different approach should be considered. A time frame should be established for replacement or enhancement of these systems, and planners should plan on requiring appropriate information protection features in replacement systems over that time frame. Based on normal replacement cycles, this process should be completed over 3 to 7 years for most organizations, and 7 to 12 years for high capital investment systems such as national infrastructure elements.
History shows that the cost of incremental improvement increases as perfection is approached. Rather than strive for perfect protection, risks should be managed in a reasonable way that balances cost with the protection it provides.
Based on these factors, the most cost-effective overall approach to providing protection is to immediately incorporate protection requirements into design standards, and to provide network-based tools and techniques to detect and respond to disruptions of current systems. Information assurance features are phased in over time based on normal system replacement cycles. Substantial immediate improvement is attained by implementing network-based protection features and training information workers. Over the long term, protection will reach desired levels at reasonable cost. This time lag in technical enhancement will also give the organization time to adapt to these changes.
An Incremental Protection Process
Some people think that the piecemeal approach to protection has a major advantage in that it involves only incremental investment to meet current needs. Fortunately, you can have both a sound overall organizational approach and an incremental investment strategy. The result is lower cost and better protection. The incremental approach to organizational protection described here has been effective both in reducing loss and in controlling protection costs for many organizations.
To give a frame of reference, when protection is implemented in this way, the overall cost is typically under one half of one percent of annual operating budget. Piecemeal protection commonly costs on the order of one percent or more of the overall operating budget and is far less effective. Unfortunately, piecemeal protection cost is often hidden in reduced productivity and expenses taken from operating budgets of small organizational units, so it is rarely detected by top-level management.
Protection Posture Assessment
An information protection posture assessment is normally the first step in addressing the protection issues in an organization. The purpose of this sort of assessment is to get an overall view of how the organization works, how information technology is applied, and what protective measures are in place.
This assessment should be comprehensive in its scope, cover all aspects of the information technology in use in the organization, and address the core issues of integrity, availability, and privacy. The reason a comprehensive approach is important is that there are interactions between components of the protection process. For solutions to work, they have to work well together.
Such an assessment is usually qualitative in nature, and the quality of the result is tied to the quality of the people used to perform the assessment. For that reason, it is important to apply the best available experts to this sort of assessment.
Such an assessment should be done over a fairly short time frame. I have done assessments like this for very large organizations, and time frames of only a few months are usually sufficient. For a small organization with only a few hundred employees, such an assessment can often be completed in only a week, and I have offered a one-day assessment service for small businesses with under 25 employees on a regular basis. The reason for the short time frame is that top-flight people know their field well enough to ask the right questions, determine the implications of the answers, and provide a well-supported result fairly quickly. It is also important to resolve the top-level issues quickly so that the rest of the process can proceed.
Assessments are normally provided in a form that can be easily read and understood by top management. This is vital in order for the issues to be meaningfully addressed from an organizational point of view. At the same time, the assessment has to withstand the scrutiny of the top expert available in the organization, and it may be passed through another outside expert if the implications are important enough. So it has to be technically solid and well-supported by facts.
If properly done, such an assessment yields a review of the information gathered in the process, a set of findings that describe the identified shortcomings in a systematic way, a plan of action that can be used immediately to carry out the rest of the protection process, and an estimate of the costs associated with the proposed plan of action. The result should be designed to provide metrics for evaluating vendors, considering time frames, and considering the financial rationality of the measures.
Emergency Measures and Long-term Planning
It is common for a protection posture assessment to call for two immediate steps: emergency measures and long-term planning.
Emergency measures are normally associated with exceptionally high exposures (i.e., areas with a high maximum potential for losses) that are not adequately covered by existing protective measures. In other words, situations where you could lose your shirt and the closet is left open with a ``take me'' sign posted in front. One example from a recent assessment included the following emergency measure:
``Secure the fungibles transfer capability. The current exposure is so great that a single incident could severely damage the corporation and there is currently no meaningful protection in place to prevent such an attack.''
Long-term planning is usually required in the areas where inadequate plans are already in place, where previous planning didn't adequately consider alternatives, or where weaknesses were identified in the posture assessment. For example, in one study, a long-term plan included a quantitative assessment of the costs of alternative architectures and a list of protective measures for meeting the needs of a particular sort of installation. By implementing a plan over a multi-year period and properly selecting the mix of techniques, the overall cost of protection could be reduced by almost a factor of 2 from the previous plan and the effectiveness of protection increased along the way.
The time frame for completing emergency measures and long-term planning phases typically ranges from three to nine months, and a combination of people with different skills and experience levels are usually involved. In this activity, top experts are used to supervise the process and verify that things are done properly, while second tier experts are used for doing analysis, optimization studies, market surveys, implementing emergency measures, and making other plans.
Implementation and Testing
Once a detailed plan is in place and emergency situations are taken care of, the plan is implemented over an appropriate period of time.
It is quite common for a full-scale implementation to be put in place over a period of 1 to 3 years. In some exceptional cases, especially in large organizations where major investments in information technology are underway, the plan can call for action over a 3 to 7 year period. In infrastructure cases, the plan may last as long as 10 years.
Implementation of an information protection plan typically involves many organizational components and, for large organizations, this may even involve substantial changes in organizational momentum. It typically involves a number of vendors, a substantial management component, cooperation throughout the organization, retraining of information workers, and a whole slew of other things that must be handled. If properly done, this is a relatively painless process.
In this phase of the process, second tier experts typically supervise at a hands-on level, while top experts periodically spot check to verify that things are going well and review work underway with the second tier experts to verify the propriety of any decisions made during the implementation.
Operation and Maintenance
Operation and maintenance of protection systems requires special attention, since protection, like any other process, doesn't just happen on its own.
The operational aspects of protection should be part and parcel of the overall protection plan, and will involve ongoing investments. Whenever possible, operation and maintenance should be handled internally. Some exceptions will likely include disaster recovery, which may use a leased site, and external audit, which by definition requires outside experts.
During normal operation, there will be incidents requiring special expertise that would be too expensive to maintain in-house, and the ongoing education process should include some outside experts or the use of widely available external short courses.
Maintenance operations offer special challenges in the area of information protection because during maintenance, the normal safeguards may not be active or used in the same manner as during normal operating periods. Similarly, maintenance, by its very nature, involves testing under stress conditions. Under these conditions, the likelihood of failure is greater, and thus special precautions must be taken to provide the necessary assurance even under these circumstances.
Decommissioning, Disposal, and Conversion
The decommissioning process typically begins by identifying any services provided by the system to be decommissioned, information residing in that system that has value other than in that system, and all resources related to that system.
Once these have been identified, replacement systems are normally put in place, information of value is converted to the new system, the new system is operated in parallel with the system about to be decommissioned, and finally, the system being decommissioned is shut down.
After shutdown, the system being decommissioned is normally cleaned of any residual information that might be left on disks, video terminals, or other obvious or subtle storage media. Any components with value are then stripped from the system and used or resold, and finally, the residual is recycled or disposed of.
After the system being decommissioned is shut down, elements of supporting infrastructure are normally made available for other purposes or decommissioned along with the rest of the system. For example, any wiring, cooling systems, power supplies, special lighting, fire suppression equipment, computer floors, security doors, video display terminals, printers, networking hardware, or other similar elements of the computing environment that are freed up may be used for other purposes.
fred at all.net