Throughout the history of war, deception has been a cornerstone of successful offense and defense. [Dunnigan95] Indeed, the history of information protection includes many examples of the use of deception for defense, including the use of honey pots to gain insight into attacker behavior, [Bellovin92] [Cheswick91] [Drill-Down] the use of lightning rods to draw fire, [Cohen96] [Drill-Down] and the use of program evolution as a technique for defending against automated attacks on operating systems. [Cohen92] [Drill-Down] Even long before computers existed, information protection through deception was widely demonstrated. For example, in World War II, information about the upcoming D-day landings was protected by a series of deceptions ranging from Patton's false army in Britain to the use of a deceased worker's body, made to look like a diplomatic courier and given false and planted information, to convince Hitler that the invasion was taking place elsewhere. The entire field of steganography is concentrated around the art of concealment. As an example from classical literature, in Charles Dickens' A Tale of Two Cities the identities of those to be executed in the revolution were concealed in knitting. In fact, if we look closely enough, it may be hard to find examples of effective information protection methods that do not use some element of deception. Cryptography is based on the concealment of information by transformation into an unusable form. Much of computer security relies on the concealment of information, and to the extent that we conceal presence, we are deceiving.
But the history of information protection also demonstrates that the use of deception by attackers far outstrips its use by defenders in this field. Examples are rampant - to the point where large classes of deceptions for attack have been categorized [Cohen97] [Drill-Down] and include such techniques as audio/video viewing, audit suppression, backup corruption, below-threshold attacks, call forwarding fakery, content-based attacks, covert channel exploitation, select techniques for data aggregation, data diddling, desynchronization and time-based attacks, distributed coordinated attacks, electronic interference, emergency procedure exploitation, environment corruption, error insertion and analysis, error-induced misoperation, false updates, fictitious people, get a job, hang-up hooking, illegal value insertion, implied trust exploitation, induced stress failures, insertion in transit, invalid values on calls, kiting, man-in-the-middle attacks, modification in transit, password guessing, PBX bugging, peer relationship exploitation, perception management a.k.a. human engineering, piggy-backing, process bypassing, reflexive control, repair-replace-remove information, replay attacks, repudiation, resource availability manipulation, salami attacks, shoulder surfing, spoofing and masquerading, system maintenance, Trojan horses, undocumented or unknown function exploitation, viruses, and wire closet attacks.
There are many examples of each of these techniques and most of them are classes containing potentially infinite numbers of examples. Indeed, the history of information system attack is almost entirely a history of deception in which attackers deceive, and defenders are open and honest.
While defensive uses of deception are somewhat more limited, there are a fairly substantial number of such techniques that have been used. As in the case of deceptive attacks, deceptive defenses have also been categorized. [Cohen97-2] [Drill-Down] Examples include concealed services, encryption, feeding false information, hard-to-guess passwords, isolated sub-file-system areas, low building profile, noise injection, path diversity, perception management, rerouting attacks, retaining confidentiality of security status information, spread spectrum, and traps.
Perhaps one of the most important points to be brought out in this regard is that out of 140 defensive techniques, only one in ten could be considered deceptive in nature, while about half of the attack techniques involve deception. It is also important to understand that most of the defensive deception is only peripherally deceptive. For example, while some areas of cryptography are based on concealment, the main objective of most cryptosystems is not to trick an attacker into thinking that a message means something it does not mean, but rather to increase the cost of gaining sufficient understanding of content or making unauthorized modifications.
This paper concentrates on the role of deception in information protection, and as such, its main focus is on addressing different perspectives on deception. We begin by examining the historical use of deception for information protection in more depth, consider the moral issues associated with the use of deception for protection, and examine techniques for deceptive defense and complexities in their use. Next we describe theoretical issues behind the use of deception in the Deception ToolKit (DTK), practical results on the use of DTK in the Internet and in an experimental environment, and notions about the widespread use of DTK and similar tools. Finally, we summarize results, draw conclusions, and discuss further work.
It would be impossible to cover the history of deception in any depth in the limited space of this paper; however, we will try to cover some of the basic examples of deception for protection to give a sense of how it has been applied. We will begin by examining the listed deceptive protection techniques from the introduction in light of Dunnigan and Nofi's classification scheme (i.e., concealment, camouflage, false and planted information, ruses, displays, demonstrations, feints, lies, and insight). [Dunnigan95]
Concealed services: Concealment is used to provide services only to those who know how to access them. Examples include secret hallways with trick doors, menu items that don't appear on the menu, and programs that don't appear in listings but operate when properly called.
Hard-to-guess passwords: Hard-to-guess passwords (something you know) are used to make password guessing difficult. Examples include automatically generated passwords, systems that check passwords when entered, and systems that try to guess passwords in an effort to detect poor selections. This form of concealment is the hiding of a valid item (the password) in a large space of possibilities (the space of all possible passwords).
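The scale of this form of concealment is easy to quantify. The following back-of-the-envelope sketch (the alphabet sizes and lengths are illustrative assumptions, not figures from any particular system) shows how the space of possible passwords grows with alphabet size and length:

```python
import math

def guess_space(alphabet_size: int, length: int) -> int:
    """Number of equally likely passwords of a given length."""
    return alphabet_size ** length

# Illustrative comparison: lowercase-only vs. full printable ASCII.
for alphabet, name in [(26, "lowercase"), (95, "printable ASCII")]:
    for length in (6, 8, 12):
        space = guess_space(alphabet, length)
        print(f"{name:16} length {length:2}: {space:.3e} possibilities "
              f"(~{math.log2(space):.0f} bits)")
```

The exponential growth in possibilities is precisely what makes the valid password hard to find by guessing, provided the password is chosen uniformly from the whole space rather than from a small dictionary.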
Low building profile: Facilities containing vital systems or information are kept in low profile buildings and locations to avoid undue interest or attention. Examples include computer centers located in remote areas, indistinctive and unmarked buildings used for backup storage, and block houses used for storing high-valued information. This form of concealment is intended to make it more difficult to identify targets and gather the intelligence required to make a successful attack.
Path diversity: Multiple paths are used to reduce the dependency on a single route of communications or transportation. Examples include the use of different motor routes selected at random near the time of use to avoid hijacking of shipments, the use of multiple network paths used to assure reliability and increase the cost of jamming or bugging, and the use of multiple suppliers and multiple apparent delivery names and addresses to reduce the effect of Trojan horses placed in hardware. This form of concealment makes it harder to find information vital to the success of an attack, again by hiding the actual path in a large space of possible paths.
Retaining confidentiality of security status information: Information on methods used for protection can be protected to make successful and undetected attack more difficult. Examples include not revealing specific weaknesses in specific systems, keeping information on where items are purchased confidential, and not detailing internal procedures to outsiders. Again, the use of concealment is based on making it harder for an attacker to find what they seek in order to make a successful attack.
Spread spectrum: The use of frequency-hopping radios or devices to reduce the effect of jamming and complicate listening in on communications. Examples include spread-spectrum portable telephones now available for homes and spread spectrum radios used in the military. Again, this is a case of concealing actual signals in the space of all signals to deny the attacker access to information required to attack.
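The core idea of frequency hopping can be sketched in a few lines: sender and receiver derive the same pseudo-random hopping schedule from a shared key, while a listener without the key cannot predict which channel will be used next. This is an illustrative sketch only; Python's random module is not cryptographically secure, and the key values and channel count are invented for the example:

```python
import random

def hop_sequence(shared_key: int, n_hops: int, n_channels: int):
    """Derive a deterministic, key-seeded channel-hopping schedule.
    Only parties holding the key can reproduce the schedule."""
    rng = random.Random(shared_key)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

# Sender and receiver derive identical schedules from the shared key...
sender = hop_sequence(shared_key=0xC0FFEE, n_hops=8, n_channels=79)
receiver = hop_sequence(shared_key=0xC0FFEE, n_hops=8, n_channels=79)
assert sender == receiver
# ...while a listener seeding with a wrong key gets an unrelated schedule.
listener = hop_sequence(shared_key=1234, n_hops=8, n_channels=79)
```

The defender's signal is thus concealed in the space of all possible hopping schedules, which is exactly the concealment-in-a-large-space pattern described above.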
Noise injection: Noise is injected in order to reduce the signal-to-noise ratio and make compromise more difficult. Examples include the creation of deceptions involving false or misleading information, the induction of electrical, sonic, and other forms of noise to reduce the usability of emanations, and the creation of active noise-reducing barriers to eliminate external noises which might be used to cause speech input devices to take external commands or inputs. This form of camouflage has been very successful at making it more difficult for attackers to get a clear picture of the items being sought.
Perception management: Causing people to believe things that forward the goal. Examples include the appearance of tight security which tends to reduce the number of attacks, creating the perception that people who attack a particular system will be caught in order to deter attack, and making people believe that a particularly valuable target is not valuable in order to reduce the number of attempts to attack it. In this case, false information is planted through the provision of indicators that would tend to lead attackers to incorrect conclusions.
Rerouting attacks: Attacks are shunted away from the most critical systems. Examples include honey pots used to lure attackers away from real targets and toward false and planted information, lightning rods used to create attractive targets so that attackers direct their energies away from real targets, shunts used to selectively route attacks around potential targets, and jails used to encapsulate attackers during attacks and gather information about their methods for subsequent analysis and exploitation. These sorts of ruses are commonly used as a way to deflect attacks.
Ruses have not been widely carried off in information systems, perhaps because their success in the physical domain is based on the inability to accurately differentiate friend and foe. In the information domain, while friend and foe may be hard to identify, the use of unique addresses and the tendency for low-grade attackers to come in small groups makes such deceptions more difficult to carry off.
Encryption: Information is transformed into a form which obscures the content so that the attacker has difficulty understanding it. Examples include the use of DES encryption for protecting transmitted information, the use of the RSA cryptosystem for exchanging session keys over an open channel, and the one-time-pad which provides perfect secrecy if properly implemented. In a sense, encryption is a display that indicates to the attacker that it is not worth trying to attain this information. Even though some of today's encryption systems are relatively hard to defeat from a technical standpoint, their effect on the attacker's will is that of a display of strength and intent.
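Of the systems mentioned above, the one-time-pad is simple enough to sketch directly. The following illustrative fragment (the message is invented for the example) conceals content by XORing each byte of the plaintext with a truly random pad byte that is never reused; without the pad, every plaintext of the same length is equally consistent with the ciphertext:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """Encrypt with a freshly generated, never-reused random pad."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    """XOR is its own inverse, so decryption reapplies the pad."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

message = b"attack at dawn"  # invented example plaintext
ciphertext, pad = otp_encrypt(message)
assert otp_decrypt(ciphertext, pad) == message
```

The "if properly implemented" caveat in the text is essential: perfect secrecy holds only if the pad is truly random, as long as the message, kept secret, and never reused.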
While few widely touted displays have been used in information protection, there are some peripherally related examples that are worthy of note. One is high profile prosecutions. In this case, the high press attention creates a public perception that the odds of getting caught and the punishment associated with computer crime are high, even though in reality, the chances of being caught are very low, the amount stolen tends to be quite high, and the punishment tends to be far less than for comparable non-computer crimes.
Perhaps more than anything else, demonstrations are ways of proving to the enemy that you are serious and that you have the ability to do things that they cannot or are not willing to defend against. Three examples of demonstrations were the capture of Captain Midnight in 1986, [Pessin86] the capture of Robert Morris, Jr in 1988, and the tracking down of Sameer in 1996. [Cohen96-04] [Cohen96-DCA] In each case, large-scale investigations involving thousands of leads were carried out in order to track down attackers, and in each case, similar further attacks have not been widely reported.
Again, no general categories of feints have been identified in the published literature on computer security other than the use of a zero-tolerance policy intended to create the perception that all attempted entries would be dealt with harshly. [Cohen96-03] [Drill-Down] In this case, a polite message is sent for an initial break-in attempt; after subsequent attempts, the impression is given that action will be taken unless the attack is stopped and that the attacker is being tracked down, but of course there is no enforcement behind such an impression because no substantial enforcement mechanism currently exists for Internet-based attacks.
Feeding false information: False information is fed to attackers in order to inhibit the success of their attacks. Examples include: providing misleading information to cause foreign governments to spend money on useless lines of research, providing false information that will be easily detected by a potential purchaser of information so that the attacker will lose face, and the creation of honey pots, lightning rods, or similar target systems designed to be attractive targets and redirect attacks away from more sensitive systems.
Isolated sub-file-system areas: Portions of a file system are temporarily isolated for the purpose of running a set of processes so as to attempt to limit access by those processes to that subset of the file system. Examples include: the Unix Chroot environment, mandatory access controls in POset configurations, and VM's virtual disks.
Traps: Traps are devices used to capture an intruder or retain their interest while additional information about them is attained. Examples include man-traps used to capture people trying to illicitly enter a facility, computer traps used to entice attackers to remain while they are traced, and special physical barriers designed to entrap particular sorts of devices used to bypass controls.
The ability to deceive an opponent by out-thinking them is demonstrated in all manner of games, including chess, where sacrifices are made for improved position and poker, where people create false tells to convince their opponent that they are bluffing when they are not or vice versa. In information protection, the only demonstration of a system intended to demonstrate this capability is highly experimental and only one paper has been submitted for publication on this technique to date. [Cohen98]
Perhaps the most noticeable result of this very brief historical examination is that concealment is the main application of deception and demonstrations, feints, ruses, and insight are essentially unused. It is a fundamental thesis of this paper that deception is underutilized in information protection, and this discussion would seem to lend credence to that view.
Another point worth noting is that deception has been more widely used in the physical arena of information protection than in the informational domain. This would seem to reflect on the historical role of deception in military applications which have historically been dominantly physical in nature.
Perhaps one of the reasons that defenders have hesitated to use deception for defense of information systems is the perception that it is somehow immoral to lie, even if you are lying to someone who is lying to you. This stoic view of protection asserts that a good defender can absorb an arbitrary amount of abuse while successfully defending their systems. The return of fire is viewed as some sort of weakness.
But while defenders sometimes hesitate to use deception in defense, attackers are not the least bit hesitant to deceive. In fact, it is one of the major bases of attacking information systems, without which most attackers would be unable to succeed at all. Furthermore, attackers have responded to deception in defense by creating large quantities of adverse publicity in an attempt to manage the perception of the legitimate community so as to prevent the use of deception for defense. For example, when deceptive defense in the form of feints was used at one site, attackers created a long series of falsehoods, attempted to generate defensive responses in order to claim they were somehow wronged, used misdirection and distributed coordinated attacks in order to cause responses to be sent toward intermediaries, and carried out a broad set of other deceptions in order to try to deceive the legitimate community into believing that this method was somehow wrong. [Cohen96-04] [Drill-Down] [Cohen96-DCA] [Drill-Down]
An example may also be helpful here. One of the more common methods for gaining access to an information system is to call an employee and lie to them by creating the impression that the attacker is in fact a systems administrator testing out some critical system. They will, for example, ask the employee for a password. This is a deception by the attacker. One moral question we might ask is whether it is acceptable for the employee to lie to the attacker and indicate that their password is different than it actually is. Certainly telling the proper password is inappropriate, but at the same time, telling the caller that you know they are not supposed to get that information and refusing to reveal it may not be the best defensive move. It may tell the attacker that they have been detected, which may cause them to use other techniques you cannot defend against as well. It may also cause the attacker to try another person who is less well trained and thus grant them access anyway. If a false account is provided on a deception computer intended for the purpose of consuming attacker time and tracking their activities, it may be a far more successful protective measure. The question is whether lying in this case is morally right.
To students of philosophy, this may quickly degenerate into a discussion of means and ends and the classical question of whether the ends justify the means or the means justify the ends. Mahatma Gandhi has been quoted as saying, "Take care of the means and the ends will take care of themselves." But it is also very common to hear positive feedback in response to stories of sting operations in which the police create a ruse to catch people they are seeking. Most people who think and write about this subject come to the conclusion that the lines of morality are not black and white. Lying in order to save a life is generally accepted as making the best of a bad situation, while little white lies are less likely to be viewed as moral, and lying in order to take advantage of someone ranges, in some cases, to a criminal activity called fraud. Perhaps the moral issue to be addressed is not whether deception is or is not acceptable, but rather, what sort of deception is morally acceptable in what circumstances.
Another view of morality that is often taken has to do with cultural norms. For example, in the process of negotiations, different societies take different views on the requirement to be factually accurate in claims and assertions. Moral standards vary with location, situation, and other factors. Western cultures typically frown on lying unless there is a good purpose behind it and the lie only affects those who would otherwise do a greater harm. In politics in the United States today, deceptions are taken for granted. In many societies, caveat emptor (let the buyer beware) reigns supreme. In historical times, and in some societies, morality was far different than it is in today's information environment.
A very different point of view that is commonly brought out in popular culture is the notion of moral superiority. The question of an eye for an eye and a tooth for a tooth comes up in dramatic presentations where a murderer is caught by the parent or wife of a previous victim and the moral decision about whether the murderer should be killed or not is made. In most popular movies, the decision is taken to have the murderer tried by a court of law, and the notion of fair play and not reducing yourself to their level holds significant emotional leverage in some societies.
Having taken account of some moral issues does not mean that we believe we have a solution to the moral dilemmas surrounding the use of deception in information protection. We do believe that it is important to understand that these issues exist, to address them directly, and to take them into account when making decisions about information protection.
We also do not take the position that research should proceed for the sake of knowledge regardless of the consequences that might result from the effort. Research into swords produced many deaths throughout the ages, and deception is indeed a two-edged sword. The fact that this paper is intended for publication means that the author has taken the decision that deception for information protection is morally right in some, perhaps even most, circumstances. But at the same time, it is the author's belief that the extent to which we may wish to use deception and the circumstances of deceptions should be moderated by moral judgments.
Though often attributed to Shakespeare, it was Sir Walter Scott who wrote in Marmion, "Oh what a tangled web we weave when first we practice to deceive." Indeed, as we look at the technical issues behind deception, we will find that deceiving with impunity is not a simple matter if those we are deceiving are more than a little bit sophisticated. In order to understand the complexity of the use of deception against adversaries with differing skills, we will look at each of the classes of deception identified earlier and try to gain some clear understanding of how they might be used and the difficulties we may encounter in keeping our deceptions consistent.
In a sense, concealment implies that there is a natural environment that is expected by an attacker and that whatever is concealed is hidden somehow by that natural environment. A menu selection that is not indicated by the menu system that provides the service is an example.
When we hide something in plain view there are several problems we may encounter. One of them is that exhaustive search may reveal what is hidden. For example, in the process of protection testing, it is common to try large subsets of the set of possible input sequences to detect the presence of sequences that are concealed. Since the complexity of finding a concealed sequence by exhaustion increases exponentially with the size of the input symbol space and length of the sequence required to reveal the service, there is a requirement to make the length of the sequence long enough to prevent exhaustion. This in turn makes it that much harder for the legitimate user of the concealed service to enter the required sequence, thus reducing its utility. As we look more deeply into this issue, we see that the same issues that have arisen over the last 20 years or more in the use of passwords and password guessing recur in the case of concealment. [Cohen85] Just as in the case of passwords, we encounter the problem that hard-to-guess calls are hard to use, that observation of traffic in transit reveals concealed items, that people have a tendency to document concealed items, and that covert channels often exist in such systems that turn the guessing process into a linear rather than exponential search. Concealment depends on keeping secrets that are often exposed to plain view, and this is not an easy matter from a practical standpoint. The term security through obscurity is often used to indicate the weakness of such methods.
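The exponential cost described above is easy to quantify. The following sketch (the alphabet size and sequence lengths are illustrative assumptions) estimates the average number of probes an exhaustive search needs before it stumbles on a single concealed sequence:

```python
def expected_probes(symbols: int, length: int) -> float:
    """Average probes for exhaustive search to find one concealed
    sequence: half of the symbols**length possible sequences."""
    return symbols ** length / 2

# Growth of the search cost with sequence length, 26-symbol alphabet.
for length in (4, 8, 12):
    print(f"length {length:2}: ~{expected_probes(26, length):.1e} probes")
```

The same numbers expose the usability trade-off noted above: every extra symbol that makes exhaustion harder for the attacker is an extra symbol the legitimate user must correctly remember and enter.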
Camouflage is based on the creation of an artificial cover that makes it appear as if one thing is in fact another for the purpose of making it harder to find or identify.
As an example, a server that looks like a pornography server may be used as a gateway to details about the U.S. government. In this particular case, since many government workers in the United States are barred from accessing such sites, they will be unable or unwilling to look behind the camouflage to see what's really going on. Another example of camouflage would be the use of prompts that look like DOS command line prompts on a system running the Unix operating system or the use of an interface that indicates you are running a particular email server when you are actually running a very different one. The notion is that attackers looking for vulnerabilities use initial indicators such as this to determine where to look more deeply and which systems to ignore.
Similarly, corporate users might want to create fictitious .edu domain names in the Internet in order to gain access to sites with relative anonymity to prevent knowledge of what company they are actually part of from being exploited to gain intelligence about their usage patterns and interests. Individuals in the Internet use fictitious identities all the time in order to camouflage where they are from, what they do, and so forth. This is a protection for personal privacy that is widely accepted and is in fact part of the assumed culture of the Internet.
Camouflage has several serious difficulties in today's networked information environment, perhaps the most serious of which is the widespread use of automation to gather information about all of the available information on the Internet. In essence, these search engines, agents, and crawlers provide such a thorough search of the environment that camouflage is nearly impossible to maintain if your information is being actively sought. Even the links between different identities may be derived from analysis of linguistic characteristics and other patterns in postings. Analysis can produce lists of individuals who have expressed similar views, make similar spelling errors, have similar grammars, and so forth. As more resources are applied, the legal system or technical methods can be exploited to destroy the anonymity provided by purely technical means, and traceability is far harder today than it was in the past.
Analysis of the effectiveness of camouflage and the complexity of circumventing it is hard to find in the current literature, but the examples in which seemingly strong technical methods of camouflage have been bypassed suggest that it is difficult to use camouflage effectively today against serious attackers. Furthermore, no current techniques of camouflage appear to provide the ability to detect attempts to circumvent it as a defense. Thus, while this class of techniques may be effective at stopping attackers with limited skills and resources, it has yet to prove effective against skilled or well-resourced attackers.
False and planted information is based on the notion that attacks depend for their success on information about their victims and that by providing inaccurate information, successful and undetected attack becomes far more difficult.
Two key parts of creating successful false and planted information are determining what fictions are desired from a standpoint of the consequences they produce in the actions of potential attackers, and the creation and sustenance of those fictions in such a manner as to successfully produce those actions over the desired time-frames of the deception.
Determining what fictions are desired would seem to be most effective if they are part of a strategic plan to cause attackers to act differently than they would act if those fictions were not in place. In order to be effective in this, it has proven helpful to have accurate models of the behavior of the attackers of interest, and this in turn implies the creation of threat profiles and in-depth understanding of attacker decision processes. A useful example may be the deception used by the allies in World War II which successfully delayed the support of German troops defending against the D-day invasions by several days.
This rather elaborate scheme involved understanding what kinds of information had to be provided in what sorts of ways in order to have a combined effect of convincing the German command structure, and in particular Hitler, that the Normandy invasions were feints rather than the actual attacks. In this case, the consequences of a failed deception were potentially catastrophic, and as a result, a great deal of effort was put into creating several consistent and seemingly independent indicators and finding convincing ways of providing these indicators indirectly to the German command structure in such a way that their decision cycle and reaction times would be hampered. In the end, a combination of a false army in England headed by Patton, a planted fictitious intelligence officer with a diplomatic briefcase and convincing accoutrements, the creation of ruses in the form of miniature soldier-dolls with fire-crackers to simulate guns being fired, and the destruction of select communications lines was effective enough to delay German reaction to the invasion forces by several days - long enough to secure the beachhead and put enough forces on the mainland of Europe to turn the tide of the war.
It would be naive to believe that this was dumb luck or a few simple tricks generated independently that just happened to work together. In fact, the effort in making the fiction effective involved a great deal of work by skilled experts with many specialized skills. This brings us to the issue of how effective deceptions are created.
Creating effective fictions involves understanding the intelligence capacities of the attacker and finding ways to cause their intelligence operations to go awry in desired ways. For opponents with sophisticated intelligence efforts, a great deal of understanding is required and a set of redundant and seemingly independent sources of information that are trusted and verifiable by the attacker are exploited in order to create a total picture that deceives on a broad scale.
Returning again to World War II, the Battle of Britain was strongly affected by the ability of the British to deceive the Germans into moving their V2 rocket targeting points away from the center of London. This was possible in large part because they had gained control of all the German agents in Britain and controlled radio and other media. All of the diverse, trusted, and seemingly independent information available to the Germans indicated that the center of London was being hit on a daily basis when in fact, the rockets had been redirected to the outskirts.
The lessons of military deceptions should not go ignored if modern deceptions are to be effective against attackers of modern information systems. Simple deceptions can be effective against certain classes of attackers, while far more elaborate means will be required for countering the efforts of highly sophisticated intelligence organizations. While no mathematical results are available to provide technical guidance on the subject at this time, it seems clear that effective deceptions of this sort cannot be completely resolved for high-grade intelligent threats based on mathematics alone.
Having said that high-grade threats require high-grade fictions, it is worthy of note that simplistic attackers using off-the-shelf tools are, by comparison, easily fooled by simple false and planted information. While this subject will be covered in substantial detail later in this paper, the Deception ToolKit has demonstrated the ability to fool automated attack systems to the extent of providing substantial and measurable improvements in the effectiveness of protection.
The only research we are aware of that looks at the mathematical complexity of generating and detecting false and planted information comes from work on the difficulty of producing forged audit trails in an environment where redundant audit information is used to detect forgeries. [Cohen95-3] [Drill-Down] While that work provides no formal complexity results, it appears that it is easier to detect inconsistencies than to generate them, especially when generation must happen in near-real-time and detection may come from extensive analysis.
Ruses are normally used to cause attackers to believe that they are observing friendly forces when in fact, they are not. In small, closely knit attack group situations, this may be difficult to do without the use of long-term planted spies. In larger scale attack and defense situations, ruses should be relatively easy to carry out. For example, by choosing an IP address in the Internet that appears to be from the same nation or agency as an attacker, it may be relatively easy to convince the other party that you are friendly. This is what attackers exploit when they issue false orders through forged email, and the same techniques can be used on the defensive side with substantial effect.
In order to have effective operations of this sort in the information domain, the attacker's inability to authenticate and differentiate friend from foe must be exploited. For small groups of attackers, this typically involves the development of long-term relationships. This is greatly aided by the use of false identities and other similar information. As an example, several of the widely-known and highly trusted members of the Internet attack community are in fact defenders who have built deep covers over a long time and exploited the relative anonymity of the Internet to their advantage. In some cases, multiple false identities have been used to hand off from one person to the next.
Identifying and differentiating friend and foe is far harder in larger organizations because the relationships between people tend to be far more tenuous. For example, in a large-scale military or corporate situation, it is rare to personally know all of the people you deal with on a daily basis. It is common to get calls or email messages from total strangers who assert some relationship, and common to believe everything they say unless there is a reason for suspicion.
One of the most effective defenses against ruses is cryptographic authentication, but this introduces substantial key management issues, depends on a technology that is not widely deployed for many of the types of systems in day-to-day use, and perhaps more importantly, is ineffective against attackers who are able to gain entry into systems.
Perhaps the most limiting factor in the use of ruses as a defensive technique is that they inherently seem to require attack of enemy systems in order to be effective, and this is beyond the moral scope of what most organizations are willing to sustain. It is therefore limited, in most cases, to the use of intelligence agents who seek general intelligence from attackers through the use of false identities. There is no great complexity in creating fictitious identities in information systems, and all that is required is a willingness for the individual to deceive.
The objective of displays is to make the attacker see what isn't there.
One of the most common examples of a display is the creation of a fictional computer security organization, with a set of policies, procedures, and other similar protective techniques which appear to be in place but in fact are not. As another example, many buildings have numerous camera positions clearly indicating that the attacker is being watched, and intrusion detection systems are sometimes advertised in entry banners. These sorts of displays are intended to make attackers believe that they will be detected, but in information protection, they more often lull the defenders into a false belief that they are secure than deter attack.
Creating effective displays depends to some extent on having attackers who can tell what they are looking at and are afraid of being detected or defeated. For example, camera housings in ceilings may be effective against professional thieves who will notice them, believe that they could be real cameras, and don't want to get caught. On the other hand, the average low-grade attacker is unlikely to even see the camera housing or know what it is if they do see it. Even if they believed they might be caught on camera, they are likely to also believe that they would never be found from those pictures.
Unfortunately, it is hard to make effective displays against sophisticated attackers because they tend to do more intelligence gathering over a longer period of time than amateurs. Furthermore, the non-physical and long-term nature of most information system attacks makes the use of displays far less effective than in military situations, where split-second decisions are often the difference between life and death. It is simply too easy to back off of an information system attack and disappear into the global networks while you think about what just happened.
This having been said, if the objective of a defense is to get serious attackers to temporarily back off, displays may be an effective way to go about it. For example, suppose we were to display a message indicating that an access was unauthorized and that a traceback was underway to determine proper handling procedures. While insiders may have been told that such messages were generated at random by the system as a secret deterrent to outsiders, the attacker who does not know this may be hard pressed to ignore such a message.
The objective of demonstrations is to convince the attacker that adequate force can be applied to defeat them, and to actually apply that force against them to prevent further attacks.
In some sense, a demonstration is more of a warning shot than a falsehood. It says in no uncertain terms that you are serious about defense and threatens, by implication, the use of more force in the future. It is intended to show just how far you are willing to go to defend and just how capable you are of defending yourself.
The three examples of demonstrations given earlier may be instructive because they were largely effective at accomplishing their goals and, in the view of the authors, are acting even today as a deterrent to further similar attacks. We will describe these incidents further here.
In 1986, an attacker by the name of Captain Midnight broke into the HBO presentation of "The Falcon and the Snowman" with a message protesting the monthly charges for satellite access on Showtime and the Movie Channel by uplinking a stronger signal to the distribution satellite than the default HBO signal. [Pessin86] The FBI initiated a nationwide hunt for the perpetrator and, after sorting through a large volume of information, captured the perpetrator in a matter of a few days.
In 1987, a user sent an email virus to a small number of users of IBM mainframe computers couched within a Christmas card. The Christmas Card virus spread throughout the global mainframe network in a matter of a few days and caused wide-scale global outages. The perpetrator was tracked down by analyzing global audit trails in a matter of a few days.
In 1988, Robert Morris, Jr. created the Internet Virus which entered a large portion of all the computers on the Internet at that time. A very rapid nationwide hunt was commenced involving searches of thousands of computers and hundreds of individuals, resulting in the capture of Mr. Morris in a matter of days.
The tracking down of Sameer and others in 1996 involved a case in which several perpetrators jointly attacked an Internet-based computer system using intermediate computers to conceal their true identities and locations. The attack involved more than 2,000 attempted entries passing through more than 800 computers from all over the world. The perpetrators were identified in a matter of hours with the use of automation and the cooperation of systems administrators from more than ten countries and hundreds of sites. [Cohen96-04] [Cohen96-DCA] [Drill-Down] Another attempt was made using related techniques against the same site within the next few months, and this time it resulted in more than 1,000 purveyors and collectors of illegal software being identified in less than a day, while the individual primarily responsible for the activities was tracked down in a matter of hours. Again, hundreds of sites and several countries were involved.
These cases have a few important things in common: (1) they involved large numbers of individuals or systems from a wide geographical area, (2) they were implemented by the perpetrator(s) in such a way as to attempt to conceal their identity, (3) the attackers likely believed that they could not be caught, (4) they were the first attacks of their kinds on a large scale, (5) they all received substantial public attention among the attacker community, (6) the perpetrators of each were identified quickly by large-scale efforts, and (7) none of these techniques were widely used for attacks again for a long time.
Perhaps the most important point to be made is that the rapid and public identification of the perpetrators and the demonstration that defenders could catch them so quickly and on such a large scale provided an object lesson in the form of a demonstration that likely prevented many others from attempting similar attacks later on. In short, the demonstration of strong defenses appears to have a strong suppressive impact on attacks.
The objective of feints is to make the attacker believe that they are being pursued in force in order to cause them to react to the attack against them. Reactions can vary from counterattack to abandonment of positions, but the key to the deception is that in a feint, the full power available to the defender is not actually used.
There are few large-scale examples of feints in the information protection literature, but small-scale feints are done on a regular basis through the use of the legal system. In essence, most lawsuits or threats of lawsuits related to information protection are feints. The most common goal is to get the attacker to settle for some amount in order to recover some losses and in order to make the attacker think a little bit harder about doing the same thing again.
In the case of attacks against corporations, when the amount of the loss is not extreme, the options for the defenders are quite limited. It would prove more expensive to pursue legal action than it is worth, while letting the perpetrator get away with the crime is an invitation to more crime. Defenders therefore resort to feints in the form of dishonorable dismissals or similar agreements. In essence, they are threatening strong action and acting as if they might use strong action, but the real intent is to get the attacker to stop what they are doing and move on.
While the use of feints for low-level attacks is a fairly unsophisticated affair, feints against high-grade attacks engaged in large-scale activity can be far more serious and require far more resources. No large scale feints in the information arena come immediately to mind, and this would seem to indicate that the subject either has not been widely studied, that it is ineffective for some reason that is not apparent to the author at this time, or that its use on a large scale has not been widely revealed.
The objective of a lie is to either convince the enemy that something that is not true is true or to convince them that they will not get reliable information by asking for it.
Convincing an unsophisticated enemy that something untrue is true may be a simple matter. Some people believe anything. But eventually, if the lie is important, it has to be backed up with something, and this is where false and planted information comes in. It is also quite difficult to tell a set of convincing lies without an overall strategic plan. Eventually, if the attacker is looking closely enough, lies will be revealed by their inconsistency with each other or with the observable facts.
Convincing the attacker that they will not get reliable information is far easier to do, but may generate a range of reactions from intensifying their attacks to getting them to try elsewhere. The simplest lies are most effective. For example, when a person meets someone else at a bar and lies to the other person about their telephone number, it doesn't take much thought and it tends to be effective. On the other hand, such a lie can come back to haunt you if you encounter the same person again. Lying is easier at a distance, so lies via email or telephone tend to be more easily done by more people. The lack of face-to-face contact and non-verbal information makes it far harder to tell a lie from a truth without testing it.
We are unaware of any formal studies of the complexity of lying in general; however, we suspect that generating consistent lies is closely related to the results on false and planted information given earlier.
Insight involves the psychological ability to understand what deceptions will work and to out-think the opponent. It is this battle of the minds, the ability to out-think the opponent, that determines whether deceptions succeed or fail.
While there are no general results about the ability of one person to out-think another in terms of deception, there are certainly numerous examples from day-to-day life. But perhaps the place where we can make the most progress in a short time frame is in the ability to devise deceptions that outwit automated attack systems.
Automated attack and defense systems have long been a part of military culture, and mixed man-machine systems have been used in the form of strategic and tactical war games since at least 1790. [Wilson68] One of the key issues in devising such systems is in preventing exploitation of reflexive control behaviors. In the case of using deception as a tool against automated attack systems, insight into their operation is key to defending against them. Again, the notion of automated deception in the form of Deception ToolKit is an example of an implementation based on this strategy. The automated defense, in this case, makes many forms of automated attack obsolete and changes the defense effort into one of out-thinking the opponent by providing automated deceptions good enough to fool their automated attacks.
The Deception ToolKit (DTK) [Drill-Down] is the first publicly available off-the-shelf deception system intended for widespread use. It is designed primarily to provide the average Internet user with a way to turn on a set of deceptions in a few minutes that will be effective in substantially increasing attacker workloads while reducing defender workloads. For the purposes of this paper, it serves as an example of how deception can be used, its effectiveness, and its limitations.
In its off-the-shelf form, DTK is designed to provide fictions that are adequate to fool current off-the-shelf automated attack tools into believing that defenses are different than they actually are. The net effect is that attack tools that automatically scan for known vulnerabilities find what appear to be large volumes of vulnerabilities. When the attacker tries to interpret the results of automated scans, there is not enough information to tell which of the detected vulnerabilities are real, and the number of detected vulnerabilities is very high and dominated by deceptions. The attacker is then faced with spending inordinate amounts of time trying to figure out which of the indicated attacks really work, and at the same time, all of the attack attempts against deceptions are indicated to the defender. This has a few interesting side effects:
From a standpoint of insight, DTK substantially reduces predictability for attackers. [Cohen98-04] [Drill-Down] This means that attackers gain less insight per unit effort, while defenders gain more.
DTK's deception is programmable, but it is typically limited to producing output in response to attacker input in such a way as to simulate the behavior of a system which is vulnerable to the attacker's method. As a programmable deception capability, DTK provides a low-cost method for a defender to create custom deceptions of arbitrary complexity. For example, it is a fairly simple matter to create a series of convincing electronic mail messages that indicate a false intent to an attacker. If the attacker is clever enough to break into the pop3 email server deception using known attack techniques, they are provided with false and planted information.
While this is a simple first step to a larger deception, we return to the underlying issue that convincing deceptions of this sort are complicated things to generate and rely on an overall strategic vision for their utility against strong adversaries. DTK is not intended to be the end-all to deceptions in information systems. It is only a simple tool for creating deceptions that fool simplistic attacks, defeat automatic attack systems, and change the balance of workload in favor of the defender. One of the most important theoretical issues to be addressed is the extent to which this workload shift takes place.
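While DTK implements its deceptions as Perl finite state machines, a short sketch in Python may clarify how little machinery such a deception needs: a state table, a plausible banner, and logging of every input. The state table, replies, and class names here are our hypothetical illustration of the idea, not DTK's actual code.

```python
import socketserver

# Hypothetical state table, (state, command) -> (reply, next state);
# state -1 ends the session. Illustrative only, not DTK's real tables.
STATES = {
    (0, "USER"): ("+OK please send PASS", 1),
    (1, "PASS"): ("-ERR invalid password", 0),  # always refuse, but log the guess
    (0, "QUIT"): ("+OK goodbye", -1),
    (1, "QUIT"): ("+OK goodbye", -1),
}

def step(state, line):
    """Advance the deception one input line; returns (reply, next_state)."""
    verb = line.split(" ", 1)[0].upper()
    return STATES.get((state, verb), ("-ERR unknown command", state))

class Pop3Deception(socketserver.StreamRequestHandler):
    timeout = 30  # drop clients that stall; many real daemons lack this

    def handle(self):
        self.wfile.write(b"+OK POP3 server ready\r\n")  # plausible banner
        state = 0
        while state >= 0:
            raw = self.rfile.readline(128)  # hard cap on input length
            if not raw:
                break
            line = raw.decode("ascii", "replace").strip()
            # every input received is a detection for the defender
            print(f"DETECT {self.client_address[0]} state={state} input={line!r}")
            reply, state = step(state, line)
            self.wfile.write(reply.encode("ascii") + b"\r\n")
```

Note that the deception never grants access; its job is to look vulnerable, consume attacker effort, and report every interaction.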
A simplistic view of deception for the purposes of analysis is that out of a few thousand widely known vulnerabilities in modern information systems, most current systems are only vulnerable to a small percentage of them, if only because most modern systems do not use most of the capabilities available to them for useful applications. For example, out of the 1024 well-known ports in the IP protocol suite, few systems actually use more than the most common 5 or 6 services, and about half of the ports are not yet allocated for a particular function. This translates to about 1 in 100 of the well-known IP ports being required for legitimate use in a typical system. Suppose that we use DTK to emulate vulnerable services on all of the unused IP ports.
If an attacker does not observe traffic before trying to test the defenses, and assuming that access attempts are not concentrated more on actual services provided than on deceptive services, every attempted access that has a possibility of success has a high probability of triggering a deception rather than an actual service. If we assume random guessing is taking place, then the odds are 99 out of 100 on the first attempt that a deception will be encountered rather than an actual service. While the normal statistics of random selection without replacement do not really apply here, using them as an assumption, the odds will change to 98 out of 99 on the next try unless the first try was successful, and so on until the attacker eventually finds a service that is legitimate and vulnerable. But the attacker's job is far more difficult than this. For each detected service, the attacker must make additional efforts to try to exploit the vulnerability. If this is fully automated, the attacker's workload is only increased by the amount of additional computer time required to press the attack further, but if not, the attacker's human resources are consumed at a rapid rate, and this typically leads to exhaustion of resources in a fairly short time frame.
This analysis appears to indicate an increase in the attacker's workload by two orders of magnitude, but it also has a pleasant effect for the defender. Since it is impossible to perform an experiment without affecting the environment being experimented on, every miss by the attacker represents a detection by the defender. This means that defenders, instead of ignoring unused services, which is the normal default situation, are alerted to all of the failed attempts. In the normal course of events, only failed attempts against legitimate services would be detectable, and those would have to be sought out by some explicit mechanism in order to produce detections. With deception in place, every use of a deceptive service constitutes the detection of an attempted attack. This means that 99 out of 100 attempted attacks are detected on the first attempt through deception, and this represents at least two orders of magnitude improvement in detection over the default detection capability.
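The workload and detection arithmetic above can be checked with a short simulation, a sketch under the paragraph's simplifying assumptions: uniform random probing without replacement, one real vulnerable service among 100 ports, and every probe of a deceptive port counted as a defender detection. The function and variable names are ours.

```python
import random

def probe_until_real(n_ports=100, n_real=1, rng=random):
    """One attack run: probe distinct ports in random order until a real
    service is hit; every deceptive port touched is a defender detection."""
    ports = ["real"] * n_real + ["deception"] * (n_ports - n_real)
    rng.shuffle(ports)
    for probes, kind in enumerate(ports, start=1):
        if kind == "real":
            return probes, probes - 1  # (attempts used, detections raised)

random.seed(1)
trials = [probe_until_real() for _ in range(10_000)]
p_first_deception = sum(1 for probes, _ in trials if probes > 1) / len(trials)
mean_probes = sum(probes for probes, _ in trials) / len(trials)
print(f"P(first probe hits a deception) ~ {p_first_deception:.2f}")
print(f"mean probes before finding the real service ~ {mean_probes:.1f}")
```

The simulation reproduces the 99-in-100 first-probe detection figure and shows the attacker averaging roughly fifty probes, and therefore roughly fifty detections, before finding the single real service.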
In general, there are two problems for the designer of automated attacks against deceptive defenses such as those demonstrated in DTK. The first problem is generating automation that differentiates between deceptions and real services. The second problem is finding a way to succeed in the attack before the defender is able to react.
While the problem of differentiating deception from reality is, in general, a very complex problem equivalent to the general problem of finite state machine differentiation, the realities of DTK today limit its deceptions to relatively simple state machines. Since the attacker has access to the most widely used deceptions in the same manner as the defenders have access to them, writing differentiation routines for complex services should be simple, while writing differentiation routines for simple services may be impossible. This may seem counterintuitive at first, but it is really quite straightforward. In essence, the more complex the deception requirement, the harder it is to make a good deception, and thus the easier it is to differentiate real systems from limited deceptions. On the other hand, simple systems such as pop3 servers are so simple to build that a deception can be built easily that completely and precisely mimics the real service.
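The differentiation problem can be pictured as driving two black-box state machines with the same probe sequence and watching for the first diverging reply. The response tables below are hypothetical stand-ins for a real service and a limited deception; real differentiation must also cope with timing, banners, and full protocol detail.

```python
# Toy response tables: the deception answers the common commands exactly
# like the real service, but was never given a reply for the rarer CAPA probe.
real_service = {"USER": "+OK", "PASS": "-ERR", "CAPA": "+OK capability list follows"}
deception    = {"USER": "+OK", "PASS": "-ERR"}

def distinguish(machine_a, machine_b, probes):
    """Return the first probe on which the two machines' outputs diverge,
    or None if they are indistinguishable over these probes."""
    for probe in probes:
        if machine_a.get(probe, "-ERR unknown") != machine_b.get(probe, "-ERR unknown"):
            return probe
    return None

print(distinguish(real_service, deception, ["USER", "PASS", "CAPA"]))
```

The sketch illustrates both directions of the argument: over the simple probes the machines are indistinguishable, and only a probe exercising behavior the deception never implemented separates them, which is why simple services can be mimicked perfectly while complex ones cannot.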
The much harder problem is defeating deception by moving so quickly that the decision cycle of the defender cannot block the attack. In normal systems without deception, the attacker has a long time before detection takes place. It is not unusual for even obvious failed attacks to go undetected indefinitely. But with deception in place, every attempt to use a deception service generates an immediate detection. If automated notification and response procedures are in place for each detected attack attempt, the attacker may have to succeed in a matter of milliseconds in order to prevent the defender from acting. Since this is normally infeasible, avoiding detection in such circumstances requires more complex attack methods involving multiple actions, such as audit suppression or reflexive control. This substantially increases the complexity of attack, but we have no mathematical description to date of the precise effect.
While our coverage of theoretical issues has hardly been comprehensive, it seems clear that there may be significant theoretical advantages to the use of deception in the form of DTK and that further theoretical analysis is required in order to make these results clearer and more definitive.
From a purely practical standpoint, DTK has been used in experiments and in real-world systems since its public release early in 1998. We thus have two sets of empirical results to report. One set of results comes from the use of DTK on a small number of systems on the Internet. The other comes from its use in experiments on automated attack and defense of computer networks.
One of the most interesting results we have come across relates to previous papers on attack rates against Internet computers. In a recent Ph.D. thesis based on reports to the CERT team at Carnegie-Mellon University, John Howard [Howard97] reported that incident rates for attempted intrusions on the Internet were on the order of one incident per Internet host per 45 years, far lower than previous results of about 1 incident per host per day. [Cohen96-03] [Drill-Down] To date, no independent experimental study had been undertaken to determine attack rates, and rates were largely determined by reports. With the introduction of DTK into an otherwise unannounced Internet computer, this situation has changed. Since the IP address of the test site was not announced, has no useful services for unauthorized users (and few for authorized users), and is not otherwise identified as a place to attack, its single IP address should be a valid data point for measuring overall Internet attack rates independent of these other factors.
The log file produced by DTK on this random and otherwise unidentified computer over the period from 1998/05/20 through 1998/06/17 is provided below. Fields in this listing are, from left to right, IP address, port number, service port number, date, time (PST), process ID of the handler, process ID of the invocation of the handler and instance number, name of the handling program, state number of the finite state machine used for handling the service, and inputs - sometimes including indicators by DTK of actions it has taken or situations it has encountered. Identifiers such as Init, WeClose, and TheyQuit indicate such situations as the reinitialization of a deception handling routine, the closing of a connection by DTK based on a timeout, use restriction, or termination of the finite state machine handler, and the disconnection of the remote host. Some services include a remote system user ID as indicated by TCP wrappers (e.g., davem), others may include the name of the service (e.g., gopherd), and others include detailed input sequences (e.g., GET / HTTP/1.0^).
188.8.131.52 25 25 1998/05/22 07:57:08 4700 4700:1 Generic.pl S0 davem in.smtpd unknown
184.108.40.206 25 25 1998/05/22 07:57:08 4700 4700:1 Generic.pl S0 TheyQuit
220.127.116.11 80 80 1998/05/22 07:57:13 4701 4701:1 Generic.pl S0 davem thttpd testing
18.104.22.168 80 80 1998/05/22 07:57:18 4701 4701:1 Generic.pl S0 GET / HTTP/1.0
22.214.171.124 80 80 1998/05/22 07:57:18 4701 4701:1 Generic.pl S0 WeClose
126.96.36.199 80 80 1998/05/22 07:57:25 4702 4702:1 Generic.pl S0 davem thttpd testing
188.8.131.52 80 80 1998/05/22 07:57:32 4702 4702:1 Generic.pl S0 GET /cgi-bin/test-cgi?* HTTP/1.0
184.108.40.206 80 80 1998/05/22 07:57:32 4702 4702:1 Generic.pl S0 WeClose
220.127.116.11 110 110 1998/05/22 07:57:42 4703 4703:1 Generic.pl S0 davem in.pop3d unknown
18.104.22.168 110 110 1998/05/22 07:57:42 4703 4703:1 Generic.pl S0 TheyQuit
22.214.171.124 10350 79 1998/05/22 07:58:00 4704 4630:1 listen.pl S0 Init
126.96.36.199 10350 79 1998/05/22 07:58:00 4704 4630:1 listen.pl S0 WeClose
188.8.131.52 12374 11 1998/05/22 08:31:06 5023 4633:1 listen.pl S0 Init
184.108.40.206 12395 79 1998/05/22 08:31:06 5027 4630:2 listen.pl S0 Init
220.127.116.11 12395 79 1998/05/22 08:31:06 5027 4630:2 listen.pl S0 WeClose
18.104.22.168 12374 11 1998/05/22 08:31:06 5023 4633:1 listen.pl S0 WeClose
22.214.171.124 12820 8000 1998/05/22 08:31:18 5040 4638:1 listen.pl S0 Init
126.96.36.199 12820 8000 1998/05/22 08:31:18 5040 4638:1 listen.pl S1 NoInput
188.8.131.52 25 25 1998/05/22 08:31:21 5025 5025:1 Generic.pl S0 unknown in.smtpd unknown
184.108.40.206 21 21 1998/05/22 08:31:21 5028 5028:1 Generic.pl S0 unknown wu.ftpd unknown
...
220.127.116.11 0 0 1998/05/22 08:35:16 5080 5080:1 coredump.pl S1 TheyQuit
18.104.22.168 13973 11 1998/05/22 08:42:06 5171 4633:3 listen.pl S0 Init
22.214.171.124 13973 11 1998/05/22 08:42:06 5171 4633:3 listen.pl S0 WeClose
126.96.36.199 21 21 1998/05/24 12:59:39 13258 13258:1 Generic.pl S0 unknown wu.ftpd unknown
188.8.131.52 21 21 1998/05/24 12:59:39 13258 13258:1 Generic.pl S0 TheyQuit
184.108.40.206 21 21 1998/05/24 13:31:01 13311 13311:1 Generic.pl S0 unknown wu.ftpd unknown
220.127.116.11 110 110 1998/05/24 13:31:01 13312 13312:1 Generic.pl S0 unknown in.pop3d unknown
18.104.22.168 80 80 1998/05/24 13:31:01 13315 13315:1 Generic.pl S0 unknown thttpd testing
22.214.171.124 25 25 1998/05/24 13:31:01 13313 13313:1 Generic.pl S0 unknown in.smtpd unknown
126.96.36.199 110 110 1998/05/24 13:31:01 13312 13312:1 Generic.pl S0 TheyQuit
188.8.131.52 80 80 1998/05/24 13:31:01 13315 13315:1 Generic.pl S0 TheyQuit
184.108.40.206 25 25 1998/05/24 13:31:01 13313 13313:1 Generic.pl S0 TheyQuit
220.127.116.11 21 21 1998/05/24 13:31:01 13311 13311:1 Generic.pl S0 TheyQuit
18.104.22.168 21 21 1998/05/24 20:38:00 14017 14017:1 Generic.pl S0 unknown wu.ftpd unknown
22.214.171.124 21 21 1998/05/24 20:38:00 14017 14017:1 Generic.pl S0 TheyQuit
126.96.36.199 80 80 1998/06/14 21:15:25 6326 6326:1 Generic.pl S0 sammy thttpd testing
188.8.131.52 80 80 1998/06/14 21:15:25 6326 6326:1 Generic.pl S0 TheyQuit
184.108.40.206 80 80 1998/06/14 21:17:55 6328 6328:1 Generic.pl S0 sammy thttpd testing
220.127.116.11 80 80 1998/06/14 21:17:55 6328 6328:1 Generic.pl S0 GET http://home.netscape.com/IISSamples/Default/welcome.htm HTTP/1.0
18.104.22.168 80 80 1998/06/14 21:17:55 6328 6328:1 Generic.pl S0 WeClose
22.214.171.124 21 21 1998/06/17 17:20:16 14024 14024:1 Generic.pl S0 Choc wu.ftpd unknown
126.96.36.199 21 21 1998/06/17 17:20:16 14024 14024:1 Generic.pl S0 TheyQuit
This log indicates that 6 different sources attempted illicit access to services on a single Internet computer over about 28 days, an average of about 1 remote location every five days. Of these, several could be considered mistakes of one form or another, but also included in this list are attempted exploitation of known vulnerabilities such as the GET /cgi-bin/test-cgi?* entry and the port scan used by 188.8.131.52. Indeed, over a period of 2 months, we also observed attempted exploits of a known pop3 defect intended to grant root access, password guessing both for telnet and pop3 access, attempted exploits of Trojan horse user IDs and passwords, attempted exploits of FTP holes, attempted remote access to rsh and rlogin daemons, and attempted exploits of known /cgi-bin/handler and /cgi-bin/phf vulnerabilities.
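Records in the format described above can be split mechanically into their fields for this kind of analysis. The sketch below is ours, not DTK's reporting code; the field names follow the paper's description, with the free-form input taken as the remainder of the record.

```python
# Minimal parser sketch for DTK-style log records (field names assumed
# from the description in the text; layout assumed whitespace-separated).
FIELDS = ("ip", "port", "service_port", "date", "time",
          "handler_pid", "invocation", "handler", "state")

def parse_record(line):
    """Split one log record into named fields plus the trailing input."""
    parts = line.split(None, len(FIELDS))
    rec = dict(zip(FIELDS, parts))
    rec["input"] = parts[len(FIELDS)] if len(parts) > len(FIELDS) else ""
    return rec

rec = parse_record("184.108.40.206 25 25 1998/05/22 07:57:08 "
                   "4700 4700:1 Generic.pl S0 TheyQuit")
print(rec["service_port"], rec["handler"], rec["input"])
```

Because the input field is kept as the raw remainder, multi-word inputs such as GET requests survive intact, which is what makes tallies of sources, ports, and exploit strings like the test-cgi probe straightforward.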
While such a small statistical sample is a poor indicator, this result would seem at first glance to indicate that the typical Internet host is attacked far more often than indicated by CERT data but somewhat less than previously indicated by our data. It is our belief that these results indicate, as previously indicated by our earlier results and those of Bellovin and Cheswick [Cheswick94] that detection is poor and that enhanced detection increases detection rates by two orders of magnitude. A two order of magnitude increase in the CERT results from 1995 (the last year studied in Howard's dissertation) would lead us to one attempt every 0.45 years. Those results also indicated that attack rates were roughly doubling annually, so a factor of 8 is needed to get to 1998 rates, which would bring the attack rates up to about one every 20 days. This is reasonably close to the results presented here of one attempt every 5-10 days.
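The arithmetic in the paragraph above can be stated explicitly as a small worked check; the figures are those quoted in the text.

```python
# Worked check of the attack-rate extrapolation described in the text.
cert_years_per_incident = 45.0   # Howard's CERT-based 1995 estimate
detection_boost = 100            # two orders of magnitude better detection
doublings = 1998 - 1995          # attack rates roughly doubling annually

adjusted_years = cert_years_per_incident / detection_boost   # 0.45 years
days_between_attempts_1998 = adjusted_years * 365 / 2 ** doublings
print(f"~{days_between_attempts_1998:.0f} days between attempts in 1998")
```

The result, on the order of 20 days between attempts, is what the text compares against the observed one attempt every 5-10 days.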
It seemed apparent from the log files produced by DTK that most of the probes we encountered were relatively mild in nature, and seemed to be largely dedicated to probing for services or making simplistic attempts at exploiting inappropriately set defaults. A few cases attempted to exploit known vulnerabilities in Web services, some tried to see if the site was in use for Warez (i.e., illegal copies of software typically stored on and retrieved from unsuspecting sites), others apparently tried to see if the headers on services indicated known vulnerabilities, and others provided attackers who guessed simple passwords with password files for them to crack. While it is hard to tell for certain what the effect on attacker workload was, there were no obvious indicators of attackers spending prolonged periods of time on the site. On the other hand, this site offered essentially no attractive features that would make it worth bothering to attack, it was not advertised, and it did not have the most common services.
One of the unanticipated side effects of the use of DTK was its effect on reducing the vulnerabilities of systems to denial of service attacks. While DTK is currently implemented as a Perl script, the use of common (i.e., shared) code segments by all invocations of Perl on a given system combined with the small size and quantity of state information required in order to implement DTK's finite state response services produces a relatively small performance and memory impact on systems, and essentially no effect when attacks are not underway. In experiments with common denial of service attacks, we found that DTK was able to sustain operations when the normal service daemons that are attacked in typical denial of service attacks were susceptible. In other words, the simplicity of the deceptions when compared to normal services yielded increased availability.
DTK was designed with the intention of being resilient to resource exhaustion attacks by including timeouts and input length limitations that most normal service programs do not have - to their detriment. DTK uses the same methods as the secure Get-only Web server [Cohen95] [Drill-Down] to provide additional protection in a secure daemon; however, its current implementation in Perl is far from provably secure.
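The timeout and input-length limits described above can be sketched as follows. DTK itself is written in Perl; this Python fragment only illustrates the principle, and the limit values and function name are invented for illustration.

```python
import socket

MAX_LINE = 256   # cap on input length (assumed value, not DTK's actual limit)
TIMEOUT = 30.0   # seconds before an idle connection is given up on (assumed)

def read_limited(conn, max_bytes=MAX_LINE, timeout=TIMEOUT):
    """Read at most max_bytes from a connection, giving up after timeout.

    A service daemon without these two limits can be held open or fed
    unbounded input by a resource-exhaustion attack; a deception service
    built this way avoids that by construction.
    """
    conn.settimeout(timeout)
    try:
        data = conn.recv(max_bytes)
    except socket.timeout:
        return None        # peer stalled; drop the connection
    return data

# Demonstration with a local socket pair standing in for a network peer.
a, b = socket.socketpair()
a.sendall(b"user warez\r\n" + b"X" * 1000)     # over-long attacker input
chunk = read_limited(b, max_bytes=64, timeout=1.0)
print(len(chunk) <= 64)    # True: input is truncated, not buffered forever
a.close(); b.close()
```

The point is simply that bounding both time and bytes per read keeps each deception's resource cost small and fixed regardless of attacker behavior.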
In its most basic form, the ability to deceive insiders is limited by the extent to which they have access to information systems and understand how those systems work. Deception is unlikely to be effective against someone who can differentiate deceptions from realities without performing any experiments or making any mistakes. On the other hand, experimental results tend to indicate that deceptions are adequate to cause errors and omissions even by their designers, who cannot tell at first glance that a deception is not the real service. This, however, introduces the notion of false positives.
One of the servers used in our work is configured so that telnet access is only permitted from a particular other server. The decision is made based on IP address. It turns out that the server with remote access is connected to a switch box that allows four computers to share a single keyboard, display, and mouse. It also turns out that two of those computers are configured to look nearly identical. Several times per month, the wrong machine is switched to the display, keyboard and mouse when remote access is attempted. The result is a deception that fools the user into incorrectly logging in from the wrong computer.
This is a typical user mistake, and a significant portion of the detections generated by DTK, while accurately reflecting unauthorized access attempts, may be simply errors and omissions rather than intentional attacks. Analysis of the severity of the attacker's attempt is also available in DTK in the form of simplistic analysis of the states of the attack.
Some examples may be useful here. The listing below shows an attacker (email@example.com) repeatedly requesting access to the pop3 daemon with interspersed telnet attempts, but the attack never proceeds past state S0, which is the initial state of the deception server's state machine. It appears to be a somewhat concerted effort: it involves rapidly repeated attempts during three different time slots (18:30:57, 03:39:49, and 13:42:41) with 9- and 10-hour separations, and there is a repeated pattern of three pop3 attempts followed by a telnet attempt. It seems likely that some automated attack uses this sequence in some manner, but it never got far enough to become worthy of real attention in this environment.
184.108.40.206 110 110 1998/08/02 18:30:57 28052 28052:1 Generic.pl S0 root in.pop3d unknown
220.127.116.11 110 110 1998/08/02 18:30:57 28054 28054:1 Generic.pl S0 root in.pop3d unknown
18.104.22.168 110 110 1998/08/02 18:30:57 28055 28055:1 Generic.pl S0 root in.pop3d unknown
22.214.171.124 110 110 1998/08/02 18:30:57 28052 28052:1 Generic.pl S0 TheyQuit
126.96.36.199 110 110 1998/08/02 18:30:57 28055 28055:1 Generic.pl S0 TheyQuit
188.8.131.52 110 110 1998/08/02 18:30:57 28054 28054:1 Generic.pl S0 TheyQuit
184.108.40.206 23 23 1998/08/02 18:30:58 28057 28057:1 Telnet.pl S0
220.127.116.11 110 110 1998/08/03 03:39:49 29066 29066:1 Generic.pl S0 root in.pop3d unknown
18.104.22.168 110 110 1998/08/03 03:39:49 29068 29068:1 Generic.pl S0 root in.pop3d unknown
22.214.171.124 110 110 1998/08/03 03:39:49 29066 29066:1 Generic.pl S0 TheyQuit
126.96.36.199 110 110 1998/08/03 03:39:49 29068 29068:1 Generic.pl S0 TheyQuit
188.8.131.52 110 110 1998/08/03 03:39:49 29069 29069:1 Generic.pl S0 root in.pop3d unknown
184.108.40.206 110 110 1998/08/03 03:39:49 29069 29069:1 Generic.pl S0 TheyQuit
220.127.116.11 23 23 1998/08/03 03:39:50 29071 29071:1 Telnet.pl S0
18.104.22.168 110 110 1998/08/03 13:42:41 30530 30530:1 Generic.pl S0 root in.pop3d unknown
22.214.171.124 110 110 1998/08/03 13:42:41 30533 30533:1 Generic.pl S0 root in.pop3d unknown
126.96.36.199 110 110 1998/08/03 13:42:41 30532 30532:1 Generic.pl S0 root in.pop3d unknown
188.8.131.52 110 110 1998/08/03 13:42:41 30530 30530:1 Generic.pl S0 TheyQuit
184.108.40.206 110 110 1998/08/03 13:42:41 30533 30533:1 Generic.pl S0 TheyQuit
220.127.116.11 110 110 1998/08/03 13:42:41 30532 30532:1 Generic.pl S0 TheyQuit
18.104.22.168 23 23 1998/08/03 13:42:42 30535 30535:1 Telnet.pl S0
Here is someone trying to log in, thinking that this is a Warez site. Warez is illegally copied software, typically placed on Internet servers without the knowledge or consent of the owner. People who use the illegal software come to Warez sites to retrieve the stolen goods. In this compressed listing, the - indicates a repetition of previous entries and +0 and +1 indicate a time differential. Note that in this listing, the user ID and password commonly used for warez sites are clearly visible. Again, no significant progress is made.
22.214.171.124 21 21 1998/04/01 06:46:02 5426 5426:1 listen.pl S0
126.96.36.199 21 21 1998/04/01 07:06:14 5458 5458:1 listen.pl S0
- - - +0 - - 5458:1 - S0 user warez
- - - +1 - - 5458:1 - S0 pass warez
In this demonstration example, a user from the local machine (127.0.0.1) successfully logs in as root using the simple password provided in the telnet deception. Note that the states of the attack proceed from S0 at initiation, to S1 where a valid user ID has been entered, to S2 where a valid password has been entered, to S3 where commands are being tried, and finally to S4 where a fake password file has been provided to the attacker on request. In this case the progress of the attack is clearly modeled by increasing state numbers, and ultimately the attacker has triggered action by DTK in the form of a notice to the systems administrator via email.
127.0.0.1 23 23 1998/04/02 05:34:23 8041 8041:1 listen.pl S0
- - - +3 - - 8041:1 - S1 root
- - - +1 - - 8041:1 - S2 toor
- - - +2 - - 8041:1 - S3 ls
- - - +2 - - 8041:1 - S3 df
- - - +4 - - 8041:1 - S3 cat /etc/passwd
- - - +0 - - 8041:1 - S4 NOTICE //dtk/notify.pl 23 4 Email fred at all.net Just sent a password file to an attacker - t!
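The state machine behind this listing can be sketched as follows. DTK's deceptions are Perl state machines; this Python generator only mirrors the S0-S4 progression shown above (the root/toor credentials come from the listing, while the notification function and its message are invented for illustration).

```python
# Minimal finite-state telnet deception mirroring the S0-S4 listing above.
def notify_admin():
    # Stand-in for DTK's NOTICE action (e.g. emailing the administrator).
    print("NOTICE: just sent a password file to an attacker")

def telnet_deception():
    state = "S0"                        # initial state
    while True:
        line = (yield state)            # caller feeds one input line
        if state == "S0" and line == "root":
            state = "S1"                # valid user ID entered
        elif state == "S1" and line == "toor":
            state = "S2"                # valid password entered
        elif state in ("S2", "S3"):
            state = "S3"                # commands being tried
            if line == "cat /etc/passwd":
                state = "S4"            # fake password file handed over
                notify_admin()

fsm = telnet_deception()
next(fsm)                               # prime the generator at S0
for cmd in ("root", "toor", "ls", "cat /etc/passwd"):
    state = fsm.send(cmd)
print(state)   # S4
```

The increasing state numbers are what later make severity analysis straightforward: how far an attacker got is just the highest state the session reached.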
Here is another example, demonstrating the FTP bounce scan attack as detected by DTK. A normal ftp login as anonymous generates states S0 through S2, while the repeated entries in S2 are indicative of the bounce port scan attack. In this case the DTK state machine is not configured to differentiate further, but it would be an easy matter to generate special states for this attack if differentiation were desired.
127.0.0.1 21 21 1998/03/18 18:53:46 27166 27166:1 listen.pl S0
- - - +8 - - 27166:1 - S0 USER anonymous^M
- - - +0 - - 27166:1 - S1 PASS -wwwuser@^M
- - - +2 - - 27166:1 - S2 PORT 1,2,3,4,0,1^M
- - - +0 - - 27166:1 - S2 LIST^M
- - - +0 - - 27166:1 - S2 PORT 1,2,3,4,0,2^M
...
- - - +0 - - 27166:1 - S2 PORT 1,2,3,4,0,1023^M
- - - +0 - - 27166:1 - S2 LIST^M
- - - +0 - - 27166:1 - S2 PORT 1,2,3,4,0,1024^M
- - - +0 - - 27166:1 - S2 LIST^M
...
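The bounce-scan signature in this listing is simply many PORT/LIST pairs on a single FTP session. A sketch of flagging it mechanically follows; the threshold and the field positions are assumptions read off the listing's layout, not part of DTK.

```python
# Flag a session whose log lines contain repeated PORT commands,
# the signature of an FTP bounce port scan.
def looks_like_bounce_scan(lines, threshold=3):
    # In the assumed layout, the attacker's command is field 10 (index 9).
    port_cmds = sum(1 for line in lines if line.split()[9:10] == ["PORT"])
    return port_cmds >= threshold

session = [
    "127.0.0.1 21 21 1998/03/18 18:53:46 27166 27166:1 listen.pl S0",
    "- - - +8 - - 27166:1 - S0 USER anonymous",
    "- - - +0 - - 27166:1 - S1 PASS -wwwuser@",
    "- - - +2 - - 27166:1 - S2 PORT 1,2,3,4,0,1",
    "- - - +0 - - 27166:1 - S2 LIST",
    "- - - +0 - - 27166:1 - S2 PORT 1,2,3,4,0,2",
    "- - - +0 - - 27166:1 - S2 LIST",
    "- - - +0 - - 27166:1 - S2 PORT 1,2,3,4,0,3",
]
print(looks_like_bounce_scan(session))   # True
```

Adding a dedicated state for this pattern, as the text suggests, would amount to folding this count into the state machine itself rather than into after-the-fact log analysis.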
From these listings, it seems clear that the state machines used in generating deceptions can be designed to easily reveal the severity and malicious intent of the attacker, while automatically suppressing false positives by giving them differentiable state numbers. In fact, the use of state numbers as metrics is feasible and is closely related to the thresholding schemes that some intrusion detection systems use to try to eliminate false positives. [Cohen97-3] [Drill-Down] DTK has a tremendous advantage in that it closely controls events on deception services, while the typical intrusion detection system can merely watch an essentially unrestricted flow of information. DTK thus significantly improves the signal-to-noise ratio for detection.
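As a sketch of using state numbers as a severity metric: the highest state a session reaches orders sessions by how far the attack progressed. The field positions (session ID in field 7, state in field 9) are assumed from the listings above; "-" marks repeated fields in the compressed format.

```python
# Rank DTK log sessions by the highest state number they reach.
def max_state_per_session(lines):
    worst = {}
    for line in lines:
        fields = line.split()
        session, state = fields[6], fields[8]
        number = int(state.lstrip("S"))        # "S4" -> 4
        worst[session] = max(worst.get(session, 0), number)
    return worst

log = [
    "184.108.40.206 110 110 1998/08/02 18:30:57 28052 28052:1 Generic.pl S0 root",
    "127.0.0.1 23 23 1998/04/02 05:34:23 8041 8041:1 listen.pl S0",
    "- - - +3 - - 8041:1 - S1 root",
    "- - - +1 - - 8041:1 - S2 toor",
    "- - - +0 - - 8041:1 - S4 NOTICE",
]
print(max_state_per_session(log))
# {'28052:1': 0, '8041:1': 4}
```

A probe that never leaves S0 can be filtered out as noise or a likely user error, while a session reaching S4 is exactly the kind worth a notice to the administrator.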
While the simplistic deceptions distributed with DTK are adequate for exploring properties of deception against naive threats, more complex deceptions are required to have great effect against sophisticated attackers of the kind posed by high-grade threats. To prove effective against high-grade threats, deceptions must be of sufficient quality to create misperceptions over a long enough period to harm the attacker's effectiveness. In addition, high-grade threats tend to use their own deceptions to increase their effectiveness and decrease their risks. The question of how effective deception is against deception is clearly open to analysis.
DTK is clearly limited in the richness of the deceptions it can provide. For example, it is simple to differentiate between a real computing environment and the limited capabilities demonstrated by a finite state machine with a small number of states. Against most modern automated attack tools this is adequate, but against a serious attacker, differentiation, even by an automated tool, would be a simple matter. For example, the attacker could pseudo-randomly select a series of commands from a normal environment, run them on a local machine, and compare the results to those of the same commands run against the machine under attack. Differences would indicate possible deceptions. A sophisticated attacker could break into an intermediate site, test the site under attack for deception, differentiate deception services from legitimate services, and then exploit the legitimate services from a different location. This type of distributed coordinated attack (DCA) [Cohen96-DCA] [Drill-Down] makes the sort of limited deceptions provided by DTK ineffective, and it is certainly within the means of any well-resourced attacker. In essence, this example applies deception to defeat deception.
For sophisticated deceptions, there is no limit to the amount of effort that can be applied on either attack or defense. The goal of a deception must therefore be clearly defined in order to make prudent decisions about the level of effort and methods to be applied. For example, customized deceptions are far more effective against insiders or attackers with specific goals and intelligence operations than off-the-shelf deceptions.
DTK provides a customization capability that is at once unlimited and limited. It is unlimited in that, in theory, a finite state machine with unlimited memory can simulate anything a Turing machine can do; in practice, it is limited by the ability of customization to adequately deceive. A good example is the simulated mail server. While it can easily be programmed to provide access to forged email, generating a sequence of meaningful emails to create a deception is not such a simple task.
Deception, in general, and widespread simplistic deception, in particular, may provide a valuable tool for the global information protection community, particularly in the Internet and other similar networked environments. The results described above indicate the value for an individual system using deception, but in today's networked environment, it is important to consider wide scale implications of such a technology.
If used on a wide scale, deception sours the milk - so to speak. If one person uses deception, they may see attacks coming well ahead of time. If a few others start using it, we may be able to exhaust all but well-funded expert attackers, and the rest will go somewhere else to run their attacks. If a lot of people use deception, attackers will find that they need to spend orders of magnitude more effort to break into systems and that they run a high risk of detection well before large numbers of attacks succeed.
If enough people adopt deception and work together to keep deceptions up to date with attacks, we will eliminate all but the most sophisticated attackers, and all the copy-cat attacks will be detected soon after they are released to the Internet community. This will not only sour the milk, it will also increase the cost for would-be copy-cat attackers and, as a side effect, reduce the "noise" level of attacks to allow us to more clearly see more serious attackers and track them down.
If deception becomes very widespread, one of the key deceptions included with DTK will become very effective. This deception is TCP/UDP port 365 - which we have staked a claim to as the deception port. Port 365 indicates whether the machine you are attempting to connect to is running a deception defense. Attackers who wish to avoid deceptive defenses may check there first, and if a deception is indicated, they may go elsewhere. Eventually, simply running the deceptive defense notifier will be adequate to eliminate many of the less serious attackers - much as warning notices are supposed to do. Of course, some defenders may not turn on the deception announcement message so that they can track new attack attempts by those who avoid deceptive defenses; thus the attacker's level of uncertainty rises, and the information world becomes a safer place to work.
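A sketch of the port 365 announcement check follows. The banner text, the helper name, and the protocol details are assumptions for illustration; DTK's actual announcement may differ, and the demonstration uses an ephemeral local port since binding port 365 requires privileges.

```python
import socket
import threading

DECEPTION_PORT = 365   # the port staked out as the deception port

def check_deception_port(host, port=DECEPTION_PORT, timeout=2.0):
    """Return the defender's announcement banner, or None if nothing answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(128)
    except OSError:
        return None

# Local demonstration: a thread plays an announcing defender.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port stands in for 365
srv.listen(1)

def defender():
    conn, _ = srv.accept()
    conn.sendall(b"DECEPTION DEFENSE ACTIVE\n")
    conn.close()

threading.Thread(target=defender, daemon=True).start()
banner = check_deception_port("127.0.0.1", port=srv.getsockname()[1])
print(banner.decode().strip())   # DECEPTION DEFENSE ACTIVE
```

An attacker running such a check sees only what the defender chooses to announce, which is exactly why leaving the announcement off keeps the attacker uncertain.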
The deception port has also been used experimentally to facilitate two kinds of enhancements. One form of enhancement is to provide a means for remotely accessing log information in order to centralize intrusion management. In this way, fully decentralized deceptions can be implemented in a large network with remote reporting and control. Of course, this remote control must have proper protections to prevent it from being exploited by attackers, but the details of these protocols are beyond the scope of this paper and are well within the classes of existing cryptographic protocols used for intrusion detection and remote network management systems. [Cohen97-3] [Drill-Down] [Cohen98] [Drill-Down]
Another enhanced use of the deception port now being experimented with is communication between deception systems for the purpose of coordinating defensive efforts in a fully distributed fashion. In this experimental system, deception systems communicate with other deception systems engaged in related business functions. They independently schedule deceptions to increase and replace non-critical functions as detected attacks increase, decrease deceptions and enhance non-critical functions as attacks decrease, and pseudo-randomly insert and alter deceptions during times of low-level activity so as to make it impossible for even the expert insider who set up the deception to be certain of going undetected. All of this is done in a fully distributed, automatic, and hard-to-predict fashion while still being forced by mathematical methods to meet operational constraints, consuming very small amounts of time and space, and operating on heterogeneous networks. We believe this technique may hold great promise for large-scale deceptive defense.
One other area of deception has been experimented with, but we have too few results to date to provide meaningful information beyond the fact that these techniques operate. These are internal deceptions wherein programs that could be used to gain unauthorized access once inside a system are instrumented and augmented to include deceptions. For example, if an unauthorized Unix user attempts to use the Unix su command, a deception is used to allow the root password to be easily guessed. The user is then placed in a jail-like enclosure [Cheswick91] [Drill-Down] to allow any attempts at further access or exploitation of illicit access to be observed, analyzed, and recorded.
This type of deception is significantly complicated by the large volume of information that is legitimately provided to most internal users. For example, in most current systems, normal users are granted access to examine, checksum, verify dates and sizes of, and perform other functions on files, processes, and other system attributes that may be helpful in uncovering the use of deception. Nevertheless, the approach appears to be scalable and easily configured to interoperate with the sorts of deceptions released in DTK.
In this paper, we have started the process of discussing deception issues for information protection on scales ranging from individual systems to the global information infrastructure, and from simplistic deceptions intended to defend against off-the-shelf attacks to complex deceptions intended to make attack by sophisticated adversaries more risky. We very briefly discussed historical issues in deception and listed substantial numbers of deception techniques used for attack and defense. Moral issues in the use of deception for protection were discussed briefly and the major lines of discussion were outlined. We briefly discussed initial results on complexity and related matters based on Dunnigan and Nofi's deception classification scheme and gave examples of techniques in use today and their limitations. Finally, we discussed the issues surrounding the deception toolkit, gave examples of its use, demonstrated new results on the rate of incidents based on experience with deception, discussed the implications of deception on denial of services, considered the issue of false positives and how detection statistics are affected by deception, and summarized some notions regarding the widespread application of deception.
Our major conclusions based on the results available to date are that:
Clearly, further work in this area is called for, and if nothing else, this paper can be viewed as a call for such work. The key issues that we believe need to be resolved are the introduction of rigorous mathematical analysis, the exploration of more advanced deceptions, the analysis of fully distributed large-scale deceptions, a more in-depth debate over the moral issues, technical enhancements of deceptions to make them viable against the majority of widely known attack techniques, the creation of and experimentation with internal deceptions, and real-world results on the widespread use of deception for defense.
[Dunnigan95] Jim (James F.) Dunnigan and Albert A. Nofi, Victory and Deceit - Dirty Tricks at War, William Morrow and Co., 1995. [In this book, examples of the historical use of deception are categorized into concealment, camouflage, false and planted information, ruses, displays, demonstrations, feints, lies, and insight.]
[Bellovin92] S. M. Bellovin, There Be Dragons, Proceedings of the Third Usenix UNIX Security Symposium, Baltimore (September 1992). [In this paper, numerous foiled attacks from the Internet against AT&T are described and the author details how some of these were traced and what was done about them.]
[Cohen96] F. Cohen, Internet Holes - Internet Lightning Rods, Network Security Magazine, July, 1996. [This paper describes the use of a system as an intentional target over a period of several years to draw fire from more critical systems and to learn about attack and defense behavior.] [Drill-Down]
[Cheswick91] Bill Cheswick, Steve Bellovin, Diana D'Angelo, and Paul Glick, An Evening with Berferd [In this paper, the details of an attack rerouted to a Honey Pot are demonstrated. The defenders observed and analyzed attacks with a jury-rigged fake system that they called the 'Jail'.] [Drill-Down]
[Cohen92] F. Cohen, Operating System Protection Through Program Evolution, Computers and Security, 1992. [In this paper, techniques for automatically modifying programs without changing their operation are given as a method of camouflage to conceal points of attack.] [Drill-Down]
[Cohen97] F. Cohen, Information System Attacks - A Preliminary Classification Scheme, Computers and Security, 1997. [This paper describes almost 100 different classes of attack methods gathered from many different sources.] [Drill-Down]
[Cohen97-2] F. Cohen, Information System Defenses - A Preliminary Classification Scheme, Computers and Security, 1997. [This paper describes almost 140 different classes of protective methods gathered from many different sources.] [Drill-Down]
[Cohen98] F. Cohen et al., Model-Based Situation Anticipation and Constraint
[Cohen96-04] F. Cohen, Internet Holes - Incident at All.Net [This paper describes an Internet-based distributed coordinated attack against the all.net Internet site and gives examples of deception used by attackers to create the impression that deception for defense is unfair and inappropriate.] [Drill-Down]
[Cohen96-DCA] F. Cohen, A Note On Distributed Coordinated Attacks , [This paper describes a new class of highly distributed coordinated attacks and methods used for tracking down their sources.] [Drill-Down]
[Cohen85] F. Cohen, Algorithmic Authentication of Identification, Information Age, V7#1 (Jan. 1985), pp 35-41.
[Pessin86] Esther Pessin. Pirate, New York (UPI). April 29, 1986. [HBO on January 15 became the first major cable channel to scramble its signals to prevent satellite dish owners from picking up HBO programming for free and the interruption which appeared during a movie early Sunday apparently was a protest of the policy. The hacker dubbed himself "Captain Midnight" and broke into the film "The Falcon and the Snowman" with a message that flickered on television screens across the nation for anywhere from 10 seconds to more than a minute. The cable raider broke into HBO with a multicolor test pattern and a five-line message printed in white letters: ``Good evening HBO From Captain Midnight $12.95/month? No way! Showtime/Movie Channel beware.'' ]
[Cohen95-3] F. Cohen, A Note on Detecting Tampering with Audit Trails, IFIP-TC11, `Computers and Security', 1996 [Drill-Down]
[Wilson68] Andrew Wilson, The Bomb and the Computer Delacorte Press, 1968. [This excellent book describes much of the history of strategic and tactical war gaming from its inception through the introduction of computers to the art.]
[Cohen98-04] F. Cohen Managing Network Security - The Unpredictability Defense [Donn Parker asserts that in interviewing hundreds of computer criminals who had been caught, a few things stood out in common. One is that they depend on predictability of defenses as a cornerstone of their attacks. Many of them stated that unless they were certain of how and when things would happen, they would not commit their crimes. Furthermore, the way many of them were detected and caught was by unanticipated changes in the way the defenses worked. If Donn is right...] [Drill-Down]
[Howard97] J. Howard, An Analysis Of Security Incidents On The Internet, Dissertation at Carnegie-Mellon University. [This research analyzed trends in Internet security through an investigation of 4,299 security-related incidents on the Internet reported to the CERT Coordination Center (CERT/CC) from 1989 to 1995. Prior to this research, our knowledge of security problems on the Internet was limited and primarily anecdotal. This information could not be effectively used to determine what government policies and programs should be, or to determine the effectiveness of current policies and programs. This research accomplished the following: 1) development of a taxonomy for the classification of Internet attacks and incidents, 2) organization, classification, and analysis of incident records available at the CERT/CC, and 3) development of recommendations to improve Internet security, and to gather and distribute information about Internet security. ... "Estimates based on this research indicated that a typical Internet domain was involved in no more than around one incident per year, and a typical Internet host in around one incident every 45 years."]
[Cohen96-03] F. Cohen, Internet Holes - The Human Element, Network Security Magazine, March, 1996. ["I've mentioned our Internet site before, and I've probably told you that we detect more than one suspicious activity per day on average."] [Drill-Down]
[Cheswick94] W. Cheswick and S. Bellovin, Firewalls and Internet Security - Repelling the Wily Hacker, Addison-Wesley, 1994. [This book is one of the most authoritative early sources of information on network firewalls. It includes many details of attack and defense techniques as well as case studies of attacks against AT&T.]
[Cohen95] F. Cohen, Why is thttpd Secure? Published in slightly altered form in Computers and Security, 1996 [ A "secure" server daemon was written by Management Analytics in the week of June 5-9, 1995. We believe this daemon to be secure in the sense that it does exactly what it is supposed to do - nothing more and nothing less. This paper describes the inner workings of this very small program, why we think it is trustworthy, and where our assumptions may fail. This server was subsequently mathematically proven to meet its security requirements.] [Drill-Down]
[Cohen97-3] National Info-Sec Technical Baseline - Intrusion Detection and Response [This paper covers the state of the art in intrusion detection and includes an extensive review of the literature while identifying key limitations of current intrusion detection technology.] [Drill-Down]
[Cohen-98] National InfoSec Technical Baseline - At the Intersection of Security, Networking, and Management [This paper covers the state of the art in network security management and secure network management and includes an extensive review of the current state of the art and identifies key limitations of current technology.] [Drill-Down]