In this paper, we examine the structure of intrusion and intrusion detection and discuss the issues surrounding effective intrusion detection and its limits.
The concepts and, perhaps, even the particulars in this paper are not new, and should not be surprising to the knowledgeable reader. Rather, they represent a roll-up and organization of the way in which we do and think about intrusion detection and the limitations that exist now and likely will continue to exist over time with intrusion detection as a protection technology. If there is a contribution in this paper it is in the structuring of the intrusion detection problem and its placement in context.
The basic outline of the paper follows the basic structure of intrusion detection. We will use a typical Internet environment as the canvas, but I believe the notions are generic. We begin at the hardware level and move up through protocols, operating environments, libraries and shared resources, applications, recursively defined sub-languages, and the difference between reality and what is provided by the user. We also examine interactions at and between levels, and other interactions. In a sense, this gives the whole story away, but details will be provided throughout the rest of this paper nonetheless.
If the hardware of a system or network is altered, it may behave arbitrarily differently than expected. While there is a long history of tamper-detection mechanisms for physical systems, no such mechanism is, or likely ever will be, perfect. So be it. Intrusion detection for improper modifications to hardware today consists primarily of built-in self-test mechanisms such as the power-on self test (POST) routine in a typical personal computer (PC).
Clearly, if the hardware is altered by a clever intruder, this sort of test will not be revealing. Motion sensors, physical seals of different sorts, and even devices that examine the physical characteristics of other devices are all examples of intrusion detection techniques that may work at this level. In software, we may detect alterations in external behavior due to hardware modification, but this is effective only for large-scale alterations such as the implanting of additional infrastructure. This is also likely to be ignored in most modern systems because intervening infrastructure is rarely known or characterized as part of intrusion detection. Perhaps it should be.
Intrusions can also be the result of the interaction of hardware of different sorts rather than the specific use of a particular type of hardware. This type of intrusion mechanism appears to be well beyond the capability of current technology to detect or analyze.
Numerous protocol intrusions have been demonstrated, ranging from exploitations of flaws in the IP protocol suite to flaws in cryptographic protocols. Unfortunately, except for a small list of known flaws that are part of active exploitations, current intrusion detection systems do not cover (read: detect, in this context) such vulnerabilities. In order to fully cover such attacks, it would likely be necessary for such a system to examine and model the entire network state and the effects of all packets, and to be able to differentiate between acceptable and unacceptable packets.
Although this might be feasible in some circumstances, the more common approach is to differentiate between protocols that are allowed and those that are not, applying increasing granularity to differentiate based on location, time, protocol type, packet size and makeup, and other protocol-level information. This can be done today at the level of single packets or, in some circumstances, limited sequences of packets, but it is not feasible for the combinations of packets that come from different sources and might interact within the end systems. Large-scale effects can sometimes be detected, such as aggregate bandwidth utilization, but without a good model of what is supposed to happen, there will always be malicious protocol sequences that should be detected and are not. There are also interactions of hardware with protocols. For example, there may be an exploitation of a particular hardware device that is susceptible to a particular protocol state transition, resulting in a subtle alteration to normal timing behaviors. This might then be used to exfiltrate information based on packet content going over different routes.
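The per-packet filtering described above can be sketched as follows. This is a minimal illustration, not any particular product's mechanism; the policy constants (allowed protocols, ports, hours, maximum size) are invented for the example. Note how the check examines each packet in isolation, which is exactly why the malicious combinations of packets discussed in the text escape it.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Packet:
    protocol: str    # e.g. "tcp", "udp", "icmp"
    src: str
    dst_port: int
    size: int
    arrival: time

# Hypothetical per-site policy: which protocols/ports are allowed, and when.
ALLOWED_PROTOCOLS = {"tcp", "udp"}
ALLOWED_PORTS = {25, 80, 443}
BUSINESS_HOURS = (time(8, 0), time(18, 0))
MAX_SIZE = 1500

def packet_allowed(pkt: Packet) -> bool:
    """Single-packet filtering by protocol, port, size, and time of day.
    This sees packets in isolation -- it cannot detect malicious
    *sequences* of individually acceptable packets."""
    if pkt.protocol not in ALLOWED_PROTOCOLS:
        return False
    if pkt.dst_port not in ALLOWED_PORTS:
        return False
    if pkt.size > MAX_SIZE:
        return False
    start, end = BUSINESS_HOURS
    return start <= pkt.arrival <= end
```

Each predicate adds granularity of the kind the text describes, and each can be tightened further (per-source rules, per-time-of-day rates), but the blind spot to cross-packet interactions remains structural.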
But even if all of this were detectable, we would still have the challenge of detecting the interaction of packets with the operating system environment of the sending, receiving, and intervening hosts.
At the operating system (OS) level, a very large number of intrusions are possible, and not all of them come from packets that arrive over networks. Users can circumvent operating system protection in a wide variety of ways. For an intrusion detection system to succeed, it must detect such activity before the intruder gains the access necessary to disable the intrusion detection mechanisms (the sensors, fusion, analysis, or response elements, or the links between them, can be defeated to avoid successful detection). In the late 1980s a great deal of work was done on the limitations of the ability of systems to protect themselves, and integrity-based self-defense mechanisms were implemented that could do a reasonable job of detecting alterations to operating systems that cross the bootstrap process. However, these systems cannot detect attacks that invade the operating system without altering files and reenter the operating system from another level after the system is functioning. Process-based intrusion detection has also been implemented with limited success.
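The integrity-based change detection mentioned above amounts to hashing a baseline of critical files and later comparing against it. The following is a minimal sketch of that idea (function names invented for illustration); exactly as the text notes, it catches file alterations but is blind to attacks that never touch the file system.

```python
import hashlib
import os

def hash_file(path):
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record trusted digests for a set of critical files."""
    return {p: hash_file(p) for p in paths}

def check_baseline(baseline):
    """Return the files that were altered or removed since the baseline.
    Memory-resident attacks that alter no files are invisible here."""
    altered = []
    for path, digest in baseline.items():
        if not os.path.exists(path) or hash_file(path) != digest:
            altered.append(path)
    return altered
```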
Any host-based IDS, and the analytical part of any network-based IDS, involves some sort of operating environment that may be defeatable. But even if defeat is not directly attainable, denial of service against the components of the IDS can defeat many IDS mechanisms, replay attacks may defeat keep-alive protocols used to counter these denial-of-service attacks, selective denial of service against only the desired detections is often possible, and the list goes on and on. If the operating systems are not secure, the IDS has to win a battle of time in order to be effective at detecting the things it is designed to detect.
Operating systems can, of course, have complex interactions with other operating systems in the environment as well as between the different programs operating within the OS environment. For example, variations in the timing of two processes might cause race conditions that are extremely rare but which can be induced through the timing of otherwise valid external factors. Heavy usage periods may increase the likelihood of such subtle interactions, and thus attacks that would not work under test conditions may be inducible in live systems during periods of high load. An IDS would have to detect this condition, and, of course, because of the high load, the IDS would be contributing to the load as well as being susceptible to the effects of the attack. A specific example is the loading of a system to the point where there are no available file handles in the system tables. At this point, the IDS may not be able to open the necessary communications channels to detect, record, analyze, or respond to an intrusion.
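The file-handle example can be modeled in a few lines. The sketch below is a toy model of a shared handle table (the class and capacity are invented), showing that once a greedy or hostile process exhausts the shared resource, the IDS's own attempt to open a reporting channel fails along with everything else.

```python
class HandleTable:
    """Toy model of a shared OS file-handle table with fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0

    def open(self):
        if self.in_use >= self.capacity:
            raise OSError("no free file handles")
        self.in_use += 1

table = HandleTable(capacity=64)

# A hostile (or merely greedy) process consumes every available handle...
for _ in range(64):
    table.open()

# ...and now the IDS cannot open its own alert/logging channel:
# the detector is subject to the very resource exhaustion it should report.
try:
    table.open()
    ids_can_report = True
except OSError:
    ids_can_report = False
```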
Operating systems may also have complex interactions with protocols and hardware conditions, and these interactions are extremely complex to analyze. To date, nobody has produced an analysis of such interactions as far as I am aware. An effective IDS would have to take this into consideration.
Of course an IDS cannot detect all of the possible OS attacks. There are systems that can detect known attacks, detect anomalous behavior by select programs, and so forth, but again, a follow-up investigation is required in order for these methods to be effective, and a potentially infinite number of attacks exist that do not trip anomaly detection methods. If the environment can be characterized closely enough, it may be feasible to detect the vast majority of these attacks, but even if this could be done perfectly, there remain the library- and support-function-level intrusions that must be addressed.
Libraries and support functions are often embedded throughout a system and are largely hidden from the programmer, so their role is not as apparent as either operating system calls or application-level programs. A good example is in languages like C, wherein the language has embedded sets of functions that are provided to automate many of the operations that would otherwise have to be written by programmers. For example, the C string library includes a wide range of widely used functions. Unfortunately, the implementations of these functions are not standardized and often contain errors that become embedded in every program in the environment that uses them. Library-level intrusion detection has not been demonstrated at this time other than by the change detection methodology supported by the integrity-based systems of the late 1980s.
An excellent recent example is the handling of leading zeros in numerical values in some Unix systems. In one system call, the string -08 produces an error, while in another it is translated into the integer -8. This was traced to a library function that is very widely used. It was tested on a wide range of systems, with different results on different versions of libraries in different operating environments. These libraries are so deeply embedded in operating environments and so transparent to most programmers that minor changes may have disastrous effects on system integrity and produce enormous opportunities for exploitation. Libraries are almost universally delivered in loadable form only, so that source code is available only through considerable effort. Trojan horses, simple errors, or system-to-system differences in libraries can make even the most well-written and secure applications an opportunity for exploitation. This includes system applications, commonly considered part of the operating system; service applications such as web servers, accounting systems, and databases; and user-level applications, including custom programs and host-based intrusion detection systems.
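The leading-zero discrepancy is easy to reproduce in miniature. The sketch below is an analogy, not the Unix library in question: Python's `int(s, 0)` follows C-style literal conventions, in which a leading zero selects octal so that "-08" is rejected ('8' is not an octal digit), while plain base-10 parsing happily returns -8. Two routines, one string, two different answers.

```python
def parse_c_style(s):
    """Mimic C-style strtol(s, NULL, 0) conventions: a leading 0 selects
    octal, so '-08' is rejected because '8' is not an octal digit."""
    return int(s, 0)

def parse_decimal(s):
    """Plain base-10 parsing: '-08' is simply -8."""
    return int(s, 10)
```

An application tested against one of these behaviors and deployed on a system with the other inherits a silent discrepancy in every program that uses the routine, which is precisely the exposure the text describes.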
The high level of interaction of libraries is a symptom of the general intrusion detection problem. Libraries sometimes interact directly with hardware, such as the libraries that are commonly used in special device functions like writing CD-rewritable disks. In many modern operating systems, libraries can be loaded as parts of device drivers that become embedded in the operating system itself at the hardware control level. A hardware device with a subtle interaction with a library function can be exploited in an intrusion, and the notion that any modern IDS would be able to detect this in any way is highly suspect. While some IDS systems might detect some of the effects of this sort of attack, the underlying loss of trust in the operating environments resulting from such an embedded corruption is plainly outside of the structure of intrusion detection used today.
Applications provide many new opportunities for intrusions. The apparent user interface languages offer syntax and semantics that may be exploited while the actual user interface languages may differ from the apparent languages because of programming errors, back doors, and unanticipated interactions. Internal semantics may be in error, may fail to take all possible situations into account, or there may be interactions with other programs in the environment or with state information held by the operating environment. Known attack detection tools and anomaly detection have been applied at the application level with limited success. Network detection mechanisms also tend to operate at the application level for select known application vulnerabilities.
As in every other level, there may be interactions across levels. The interaction of an application program with a library may allow a remote user to generate a complex set of interactions causing unexpected values to appear in inter-program calls, within programs, or within the operating system itself. It is most common for programmers to assume that system calls and library calls will not produce errors, and with the exception of languages like Lisp, most programming environments are poor at handling all possible errors. If the programmer misses a single exception - even one that is not documented because it results from an undiscovered error in an interaction that was not anticipated - the application program may halt unexpectedly, produce incorrect results, pass incorrect information to another application, or go into an inconsistent internal state. This may be under the control of a remote attacker who has analyzed or planned such an interaction. Modern intrusion detection systems are ill prepared to detect this sort of interaction.
In many cases, application programs embed Turing-capable languages, such as a language interpreter. If these languages can interpret user-level programs, there is an unlimited set of possible embedded languages that can be devised by the user or anybody the user trusts. Clearly an intrusion detection system cannot anticipate all possible errors and interactions in this recursive set of languages. This is an undecidable problem that no IDS will ever likely be able to address. Current IDS systems address this only to the extent that anomaly detection may detect changes in the behavior of the underlying application, but this is unlikely to be effective.
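To make the point concrete, here is a minimal sketch of the kind of embedded submachine the text describes: a tiny stack-machine interpreter with invented opcodes. Once an application interprets user-supplied programs like these, its effective behavior is whatever the submachine can compute, a set no IDS can enumerate in advance.

```python
def run(program, stack=None):
    """A tiny embedded stack-machine interpreter. The *effective* language
    of any application that hosts this is whatever the machine can compute,
    defined by the user's program rather than by the application's author."""
    stack = stack if stack is not None else []
    for op in program:
        if isinstance(op, int):
            stack.append(op)          # push a literal
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)       # fold two values into one
        elif op == "dup":
            stack.append(stack[-1])   # duplicate top of stack
    return stack
```

Even this toy can be nested (a program that emits programs), which is the recursion that makes anticipating all errors and interactions undecidable.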
These recursive languages have the potential to create subtle interactions with all other levels of the environment. For example, such a language could consume excessive resources, use a graphical interface to make it appear as if it were no longer operating while actually interpreting all user input and mediating all user output, it could test out a wide range of known language and library interactions until it found an exploitable error, and on and on. The possibilities are literally endless. All attempts to use language constructs to defeat such attacks have failed to date, and even if they were to succeed to a limited extent, any success in this area would not be due to intrusion detection capabilities.
It seems that no intrusion detection system will ever have a serious hope of detecting errors induced at these recursive language levels as long as we continue to have user-defined languages that we trust to make decisions affecting substantial value. Unless the IDS is able to 'understand' the semantics of every level of the implementation and make determinations that differentiate desirable intent from malicious intent, the IDS cannot hope to mediate decisions that have implications on resulting values. This is clearly impossible.
Content is generally associated with meaning in any meaningful application. Unfortunately, the correspondence between content and the realities of the world - also known as correspondence to reality, one of the properties of general integrity - cannot reasonably be tracked by an intrusion detection system. Intrusions often take the form of generating human misperceptions or causing people to do the wrong thing. In the end, if this wrong thing corresponds to making a different decision than the one that is supposed to be made, but still a decision that is feasible and reasonable in a slightly different context, only somebody capable of making the judgment independently has any hope of detecting the error. Only certain sorts of input redundancy are known to be capable of detecting this sort of intrusion, and this becomes cost prohibitive in any large-scale operation. This sort of detection is used in critical applications.
Some may cry foul at this point, asserting that intrusion detection systems were never intended to detect human behavioral changes at this level, but of course a large portion of all intrusions involve exactly this sort of behavioral change. The attackers commonly use what they call 'social engineering' (a.k.a., perception management) to cause the human operator to do the wrong thing.
Of course such behavioral changes can ripple through the system as well, ranging from entering wrong data to changing application level parameters to providing system passwords to loading new software updates from a web site to changing a hardware setting. All of the other levels are potentially affected by this sort of interaction.
Intrusion detection without prevention or response is of little meaning. Without prevention, the IDS itself is likely to be easily defeated, while without response, the detection does nothing of any discernible value. If a tree falls in the forest and nobody hears it, does it make a sound? Like the tree in the forest, it's a matter of definition. If sound is the motion of the air, the tree makes a sound and the IDS detects the intrusion. If sound is somebody hearing the tree fall, the tree makes no sound, and the IDS does not detect the intrusion.
Breaking the link between detection and response is equivalent to defeating the IDS, while gaining control over the response process through the ability to generate intrusion detections of the types desired by the attacker similarly defeats the IDS. The so-called reflexive control attack, wherein the IDS is used to trigger repeated responses until the response capability becomes depressed, is only one of many examples of this.
Intrusion detection systems are also themselves systems and subject to all of the frailties of other systems. They tend to reside in the same sort of fragile environment as the systems they are designed to help protect. They are generally limited in their ability to protect themselves and the interconnections between them and the environment they serve are tenuous and subject to exploitation as well. The less closely linked to the environment, the less able such systems are to detect intrusions that are based on complex interactions.
As if this weren't enough of an issue already, there are several complicating factors in the overall intrusion detection problem that relate to the nature of intrusion in larger contexts. For example, there may be cross-application interactions that can be exploited for intrusions, combined cross-level and cross-application interactions, end-to-end intrusions affecting a whole application that resides in a distributed network, and interactions between the set of all applications in the global situational context. A few examples should help to demonstrate these issues:
Cross-level interactions: Cross-level interactions have been discussed to a reasonable extent throughout this paper and are included here just to remind the reader that even though we tend to partition systems into layers for analytical purposes, they are not in fact partitioned in reality. Events are not independent across these boundaries and their interdependence and interactions may be exploited for intrusions.
Cross-application interactions: When more than one application exists in a given environment, interactions between the applications may be exploited for intrusion and may have to be examined for intrusion detection. For example, when disk space is exhausted by one application, it may interact with another application in a wide range of ways. An attacker may be able to exploit a fault in one application that permits resource exhaustion to the audit mechanism of another application, resulting in the ability to attack the other application at will without intrusion detection and response being effective. Thus detection of the intrusion may only be accomplished by the correlation of the activities across the applications. This is similar to the general covert channel problem.
Cross-level and application interactions: Of course the combination of cross-level and cross-application interactions may be exploited as well. For example, the use of one application to consume network resources required by another application may result in deadlock across the entire system even though neither application sustained a detected intrusion and no single level sustained a direct intrusion. These subtle interactions result from the fact that resources are rarely adequate to handle worst case scenarios because of the high cost of such a configuration and the waste of resources when unused resources are sitting idle.
End-to-end for the whole application: Intrusion detection that is to operate across a distributed environment, such as those in common use today, has the further complication that, in order to be effective against complex intrusions, it needs to operate across the entire set of entities involved in the applications it covers. Even if a web server is perfectly protected, an intrusion into a browser, a back-end inventory system, a credit-card processing system, intermediary firewalls, or any of the infrastructures involved in a given transaction could effectively intrude on the application. Indeed, partial intrusions in multiple elements of this complex interacting system could combine to cause an intrusion that might only be detectable by correlating across all of these systems. Suppose a DNS server is modified to cause a user to access a different application server than was originally anticipated. The false server might then collect the user's password, perform a man-in-the-middle operation to allow legitimate access, and insert data in the transaction sequence to add new user IDs. Then, with the new user IDs, the attacker might send messages to other users with Trojan horse programs that cause their machines to make numerous small requests and report back results. In the aggregate, these requests might leak information that is not supposed to be made available to the attacker or any other individual. In this case, every action taken by each individual (other than the attacker, who modifies a DNS server outside of the control of the system) is - on its own - 'legitimate'. Only the aggregation of actions, when taken together, could be considered inappropriate, and that aggregation is also done outside of the system.
All applications in the global situational context: By extension, the interaction of arbitrary systems and applications with each other can cause intrusions. For example, as a side effect of a distributed coordinated attack that brings down one part of an infrastructure, rerouted traffic could interfere with performance required for a real-time application. In this case, an unrelated intrusion resulted in an effect on service that may or may not be detected by an intrusion detection system and is unlikely to be properly associated with the real cause by an intrusion detection system that fails to take into account the totality of interrelated events involved in an apparent intrusion against the function being covered.
At this level of understanding, the best human defenders are often able to track down root causes when it is important enough, but there appears to be no hope of an automated system in today's technology being able to approach this level of performance, understanding, or awareness.
Intrusions are generally the result of sequences of events that are interlinked. The result is an attack graph wherein an attacker starts with the world in one state and ends with the world in another state, having induced a sequence of state transitions of a desired type. While intrusion detection systems are able to capture and analyze some set of sequences of events and correlate them either with known bad behaviors or against known good behaviors, the total space is so enormous and the understanding so limited that intrusion detection systems can realistically only cover a very small subset of these attack sequences. Furthermore, a large portion of the space of attack sequences is so closely interwoven with the legitimate behaviors of systems that it cannot possibly be discerned without a clear understanding of the semantic meaning of information in its application.
Modern intrusion detection systems are fairly good at detecting known linear (no parallelism) sequences that are closely packed in time, involve only one level of system function, and do not involve any subtle interaction that crosses session, user, or other similar boundaries. Anomaly detection systems have some ability to detect intrusions that produce behaviors that are statistically discernible from normal behavioral patterns at any single level of system function.
Substantial improvements could be made to current systems by removing these limitations, and some of these limitations may be removed with relative ease. For example, anomaly detection might be fairly easily extended to correlate information across sessions and across system levels. The question is whether such an extension could produce meaningful alarms without producing enormous numbers of false positives, and the answer is not yet known. Perhaps a more important question is how the complexity of intrusion detection is likely to grow as these limitations are removed and how many false positives and false negatives are likely to result from such an expansion of function.
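As a hedged illustration of the kind of extension discussed above, the sketch below flags an observation that deviates from its recorded history by more than k standard deviations. The function name and the choice of a simple k-sigma rule are illustrative, not drawn from any particular IDS; k is precisely the knob that trades false positives against false negatives as the text asks.

```python
import statistics

def anomalous(history, value, k=3.0):
    """Flag `value` if it lies more than k standard deviations from the
    mean of past observations. Raising k cuts false positives at the
    cost of false negatives; lowering it does the reverse."""
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        # Perfectly constant history: any change at all is anomalous.
        return value != mean
    return abs(value - mean) > k * sd
```

Correlating across sessions or levels amounts to applying such a test to joint features (e.g., per-session counts paired with per-level counts), which is where the combinatorial growth in complexity, and in false alarms, comes from.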
It is also important to note that even if a complex intrusion were reported, it seems unlikely that the average human analyst would have a hope of understanding its full implications. Suppose the detection system indicated that an abnormal correlation between user keystrokes, disk accesses, and library routines occurred while the user was browsing a web site. Suppose further that this was really the result of a perception management attack from six weeks earlier, wherein the user was induced into loading a library upgrade from a site that had been corrupted but was the normal place for obtaining such upgrades.
I cannot see any reasonable scenario under which even this simple attack sequence would be tracked down or the implication understood based on this detection. Even if a thorough investigation would lead to the eventual detection of the attack (which is highly unlikely considering the difficulty that people have in detecting a Trojan horse even when presented with source code and told where in the source code to look for the Trojan horse) the likelihood of such an investigation in such a case seems extremely low. If this pattern appeared repeatedly over time, it is likely that the intrusion detection system would be adapted to accommodate it.
Time is of the essence in intrusion detection. In some cases, response time must be very fast in order to mitigate the consequences of an attack, while in other cases, the intrusion detection mechanism will be defeated if detection and response do not outpace the attacker. This has to do with the OODA (Observe, Orient, Decide, Act) loop. Intrusion detection involves observation and some orientation, while response is supposed to include the rest of the OODA loop. In order for intrusion detection to be worthwhile, it has to be integrated into the OODA loop in a timely enough fashion to have an effect on consequences, or it simply isn't worth doing. Time requirements therefore must be related not only to the time between an attack and its consequences, but must take the rest of the OODA loop into consideration and be timely enough for the rest of the OODA loop to be effective with the time remaining. Since most current response systems are very slow relative to computer time (response within seconds is rare, and for any sort of complex intrusion, even response within minutes to hours is rare), doing real-time intrusion detection against high-speed attacks would seem of little value.
Clearly, the overall problem of intrusion detection is very complex and well beyond the meager beginnings we have seen in the field. But as Alan Perlis wrote on the walls of Carnegie-Mellon University when it was still Carnegie Institute of Technology: "Problems worthy of attack, prove their worth by fighting back". The intrusion detection problem is rich with possibilities. Here are some of them:
Hardware level: At the hardware level, we can observe electromagnetic behavior looking for anomalies or known abuse patterns, create fictitious hardware paths and mechanisms designed to be attacked, add covering sheaths that can help indicate attempts at intrusion, build tamper-detection mechanisms such as those in the ABYSS processor and more recent hardware-level systems, detect remote partner devices at the hardware level, deploy traffic interaction devices that detect interference patterns, use tamper-detecting positive- or negative-pressure tubes and similar invasion detectors, motion detectors, and removal or manipulation detection sensors and disablers, and apply a wide range of other similar technical detection technologies.
Protocol level: At the protocol level, detection of any datagram that does not strictly meet the standards for datagrams is a must. Loop detection is critical for eliminating obvious livelock situations and better livelock detection may be useful as well. Correlation of protocol elements across infrastructures is also very useful for detecting traffic patterns and correlating traffic between locations - as well as detecting what passes through firewalls (for example) and what does not. The appearance or disappearance of protocols or protocol elements or changes in the statistics of protocol use are likely indicators that something interesting is happening in a network. Changes in the paths of protocols, the patterns of communicating parties, the information content of messages between parties, and detection of encryption, compression, specific data, a lack of specific data, syntax violations, and patterns of packet size and content are all strong protocol level indicators.
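A strict syntactic datagram check of the kind called for above might look like the following sketch, which validates a minimal IPv4-style header (version, header length, and total-length consistency per the RFC 791 layout). A real validator would check many more fields (checksum, options, fragmentation); this is only the shape of the idea.

```python
import struct

def ipv4_header_valid(packet: bytes) -> bool:
    """Reject any datagram that does not strictly meet a minimal subset
    of the IPv4 standard: correct version, plausible header length, and
    a total-length field consistent with the bytes actually received."""
    if len(packet) < 20:                       # minimum IPv4 header size
        return False
    version = packet[0] >> 4
    ihl = packet[0] & 0x0F                     # header length in 32-bit words
    if version != 4 or ihl < 5:
        return False
    total_length = struct.unpack("!H", packet[2:4])[0]
    if total_length != len(packet):            # length field must match reality
        return False
    return 4 * ihl <= total_length             # header must fit in the datagram
```

Strictness matters here: many protocol-level attacks live precisely in the datagrams that "mostly" conform, so anything short of rejecting every nonconforming packet leaves the gap the text describes.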
OS level: At the operating system level, call frequencies associated with users or programs have been used to detect anomalies, and things like access controls and audit records generated by access failures are often used for detecting intrusions. These mechanisms generally depend on the operating system being secure, because attackers commonly use tools to reduce trace evidence of their presence once operating-system-level privileges are attained. It will be impossible to really discuss this issue without discussing the OODA loops in an IDS, and this will be done below.
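The call-frequency approach mentioned above can be sketched as a comparison of relative syscall frequencies against a recorded baseline. The function names and the L1 distance metric below are illustrative choices, not a standard; real systems use richer models, but the shape is the same.

```python
from collections import Counter

def call_profile(trace):
    """Relative frequency of each system call in a recorded trace."""
    counts = Counter(trace)
    total = len(trace)
    return {call: n / total for call, n in counts.items()}

def profile_distance(baseline, observed):
    """L1 distance between two call-frequency profiles. Large values
    suggest the program is behaving unlike its recorded past -- but say
    nothing about *why*, and nothing at all if the OS hosting this code
    has already been subverted."""
    calls = set(baseline) | set(observed)
    return sum(abs(baseline.get(c, 0.0) - observed.get(c, 0.0)) for c in calls)
```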
Library and support-function level: At this level, operating system features may be used to detect corruption. Anomaly detection holds promise for detecting statistically significant behavioral changes in library routine function, as well as in their function when called by particular application-level programs. For attacks that are not known ahead of time and do not produce abnormal behavior in the library routines themselves or the programs that call them, anomaly detection will fail. If the attack is simply using an existing library routine in a normal sort of way but producing undesirable side effects, some sort of systemic understanding of the nature of the attack or the undesirability of the operation will be required in order for detection to have a chance at succeeding.
Application level: At the application level, automated mechanisms have been used to detect unauthorized access or use (e.g., an attempted use by an unauthorized user ID), input sequences that are not within valid program input syntax (e.g., non-numerical data included in a numerical input stream), semantic errors (e.g., payment dates before loan dates), time, location, and other similar anomalies in application use (e.g., a transaction before the bank is opened for business), and context-dependent anomalies (e.g., inconsistencies between postal code and state data fields in an address entry). Similarly, known attack patterns against applications are detectable at this level.
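A hedged sketch of such application-level checks follows, with hypothetical field names, user lists, and rules that mirror the parenthetical examples above; any real application would substitute its own semantics.

```python
from datetime import date, time

# Hypothetical per-application policy, invented for illustration.
AUTHORIZED_USERS = {"alice", "bob"}
BANK_OPENS = time(9, 0)

def transaction_anomalies(user, amount_field, loan_date, payment_date, when):
    """Return the application-level anomalies found in one transaction:
    authorization, input syntax, semantics, and time-of-use checks."""
    findings = []
    if user not in AUTHORIZED_USERS:
        findings.append("unauthorized user id")
    if not amount_field.replace(".", "", 1).isdigit():
        findings.append("non-numerical data in numerical field")
    if payment_date < loan_date:
        findings.append("payment date before loan date")
    if when < BANK_OPENS:
        findings.append("transaction before opening hours")
    return findings
```

Each check encodes a fragment of the application's semantics; the limitation discussed throughout the paper is that the attacks worth worrying about are the ones whose semantics nobody thought to encode.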
Recursive submachines: In order to do intrusion detection in these machines, some sort of understanding of how they operate will be required. Anomaly detection at the submachine level may be anticipated, but for recursive submachines (e.g., an interpreter for financial transactions within a Visual Basic program within a Word document), there is no way to know which patterns are anomalous without prior experience with the specific submachine. Intrusion detection in the form of detecting operations that are not thought to be appropriate, or excessive resource utilization, will likely produce many false positives for certain classes of applications, but this is the nature of mobile code if it is allowed.
Accuracy to reality: Accuracy relative to reality (i.e., ground truth) has been achieved through redundant data entry under the assumption that collusion is limited to some maximum number of participants. Intrusion detection on entered data has been done by comparing redundant data copies, by comparing redundant but non-identical factors (e.g., detecting inconsistencies between radiation at different energy levels in a radon detection system), and by detecting implausible values (e.g., temperature figures of millions of degrees and winds of thousands of miles per hour on Earth). Similarly, deviations beyond a chosen number of standard deviations can trigger alarms, with the threshold set to yield any desired alarm rate.
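Three of these techniques combine naturally into one check on an entered value: compare redundant copies, test against plausibility bounds, and apply a k-sigma test against history. The default bounds (illustrative Celsius temperature limits) and the threshold k are assumptions chosen for the sketch.

```python
import statistics

def check_reading(value, redundant_value, history,
                  plausible=(-100.0, 150.0), k=3.0):
    """Flag an entered data value via redundant-copy comparison,
    plausibility bounds, and a k-standard-deviation test against
    historical readings. Bounds and k are illustrative defaults."""
    alarms = []
    # Redundant data entry: independent copies must agree.
    if value != redundant_value:
        alarms.append("redundant copies disagree")
    # Implausible-value detection.
    lo, hi = plausible
    if not (lo <= value <= hi):
        alarms.append("implausible value")
    # Statistical deviation: k sets the alarm rate.
    if len(history) >= 2:
        mean = statistics.fmean(history)
        std = statistics.pstdev(history)
        if std > 0 and abs(value - mean) > k * std:
            alarms.append(f"beyond {k} standard deviations")
    return alarms
```

Raising k trades missed detections for fewer false alarms, which is exactly the adjustable-alarm-rate property the paragraph describes.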
Cross-level interactions: Cross-level interaction detection would require some method of associating actions at one level with actions at other levels (e.g., associating system call frequencies and patterns with applications). In general, this would require matching across all levels, from hardware to recursive languages. Detection of known intrusions might also be feasible for this type of interaction. Finally, a rule-based or other similar mechanism for detecting wild values or unexpected circumstances might be feasible for this class of detection.
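A minimal form of the association just described pairs a lower level (system calls) with a higher one (the issuing application) and flags pairings never seen during training. Application and system-call names here are illustrative.

```python
from collections import defaultdict

class CrossLevelMonitor:
    """Associate system-call activity (lower level) with the application
    that issued it (higher level); flag calls an application has never
    been observed to make. Names used are purely illustrative."""

    def __init__(self):
        self.known = defaultdict(set)

    def learn(self, app, syscall):
        """Record a (application, system call) pairing seen in training."""
        self.known[app].add(syscall)

    def observe(self, app, syscall):
        """Return True if the pairing is anomalous for this application."""
        return syscall not in self.known[app]
```

This is a two-level match only; as the text notes, full cross-level detection would need the same bookkeeping across every level from hardware to recursive languages.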
Cross-application interactions: One way to detect some sorts of interactions would be to assess resource expectations for applications and detect excessive or inadequate use of resources. The notion of checking resource availability for detecting certain classes of interactions is similar to the problem of detecting deadlock, which is known to be NP-complete. Other classes of inappropriate interactions may be detected by characterizing normal interactions and detecting statistically significant changes in interaction characteristics, but since we cannot yet characterize the class of all interactions, this would be limited for the time being. In general, this seems similar to the covert channel problem, in which we try to detect all covert channels between co-existing programs sharing a resource. Apparently we will never be able to do this perfectly if we continue to share resources, and according to information theory, arbitrary amounts of information can be leaked through such channels. [Shannon-48]
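The resource-expectation idea can be sketched as a per-application band check, where usage below the band is reported as well as usage above it, since either may signal an interaction problem. The application name, metrics, and thresholds are invented for illustration.

```python
# Per-application resource expectations (metric -> (low, high) band).
# Names and thresholds are assumed, not drawn from any real system.
EXPECTED = {
    "indexer": {"cpu_pct": (5.0, 60.0), "open_files": (1, 200)},
}

def resource_anomalies(app, usage):
    """Compare observed resource usage against the expected band and
    report both excessive and inadequate use."""
    bounds = EXPECTED.get(app, {})
    flagged = []
    for metric, value in usage.items():
        lo, hi = bounds.get(metric, (float("-inf"), float("inf")))
        if value < lo:
            flagged.append((metric, "inadequate"))
        elif value > hi:
            flagged.append((metric, "excessive"))
    return flagged
```

A band check like this detects only gross interaction effects; the covert-channel argument in the text explains why no refinement of it can be complete.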
Cross-level and application interactions: When we combine level-crossing and application-crossing intrusions, we inherit all of the problems of both and seemingly multiply the detection problems of each by the other in order to do effective detection. There may also be cross-products and reflexive control issues. For example, the intrusion detection mechanism is itself an application that consumes resources in detecting an intrusion. This use of resources may produce changes in a different part of the system that are detectable at a different level in a different application (e.g., the detection of a recursive submachine may involve sending numerous packets that change the behavior of the normal network access of a key exchange for another application, thus producing a reduction in available services). Recursive intrusion detection (i.e., detecting intrusions that involve the intrusion detection system itself) is required to detect this example.
End-to-end for whole applications: The entire application and all of its interdependent infrastructures are involved in this sort of detection problem, and thus we need to consider not only the applications and systems but also the cross-implications of combined intrusions in any of these systems. An example might help here. Suppose we have a secure collaborative realm wherein users log in from different points and exchange data. If somebody broke into any of the machines in the collaborating group, the whole set of communications could be observed or corrupted, even though the secure server facilitating the process was undisturbed. In order for users within the group to be able to respond to the attack, the detection system would have to pass the intrusion information to the rest of the group from a system that is out of their control and may not have been part of the collaboration at the time it was attacked. Any imperfection at one host might be exploited to intrude on other systems in the collaboration through the introduction of Trojan horses, and this too would have to be detected and communicated across the collaboration, even though the Trojan horses might not be made active during the collaboration periods.
All applications in the global situational context: When we extend this to the global context, we clearly must approach the completion of the 'Orient' aspect of the OODA loop, since it is necessary to understand the impact of unrelated infrastructure elements in order to determine whether an interaction with those elements that results in a change in your system's behavior constitutes an intrusion and, if so, whether the intrusion is directed against your systems or is merely an accidental side effect. Now we are getting into the realm of intent.
The structure of attacks and the notion of time: Unless the detection and response OODA loop is fast enough to counter attacks, or the observation process provides its data to an external analysis capability before it is defeated by the attack, an attack cannot be successfully defeated. Because of this class of problems, many IDS mechanisms export intrusion data to remote analysis systems, but this introduces the possibility of a denial of service against the external system as the preceding step in such an attack, which means that precautions are required for this part of the IDS as well. For example, an OODA loop for detecting the inability of the normal IDS OODA loop to function properly can be used to change the OODA loops of all of the systems sharing the impaired function. This leads to the notion of intrusion detection for the intrusion detection system, and this must of course be applied recursively as well. For complex attacks, in order to differentiate intrusions from normal events that may cause very similar outcomes, intrusion detection must expand to include the investigative phase and root cause analysis. Otherwise, intrusion detection simply becomes an identification that 'something happened', which is not very helpful in tightening the OODA loop to the point where a reasonable response can take place on the time scales needed for high-speed automated attacks.
Intrusions and intrusion detection have a structure and that very structure leads to such fundamental limitations that the enormity of the intrusion detection problem would seem to be nearly unfathomable. While we see systems implemented on a daily basis to catch some small portion of some select classes of intrusions, we almost never see the entire picture in context.
While this paper may seem to be more philosophical than technical, it is the enormity of the detailed technical issues that drives this. I hope that this paper serves to put the intrusion detection problem in context and that it dispels any misperceptions people might have about how close we are, or are likely soon to be, to solving it.