Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Some years ago, I did a study for some people who were working for some government trying to figure out where global information technology was going over the next 20 years. So far, my predictions are just about right, but it has been less than ten years, so we will have to wait and see for the final results.
One of the more interesting results of this study was the notion that simulation would become one of the key aspects of information technology as computing cycles and storage became far less expensive, as we ran into the limits of computational complexity, and as we were no longer able to find closed-form solutions and analytical results for many of the major challenges facing us. Suffice it to say that I still believe this, and have increasingly started to act upon it.
In this article, I am going to describe some of the issues that I find interesting surrounding network security simulation. But first some short history of my involvement in security simulation.
I started using security-related simulations back in the 1970s, when I simulated the protocols for what was then the next generation of networked systems. The protocols were to assure integrity, availability, and confidentiality while facilitating efficient network operations. I wrote a number of simple simulators over the following years (all the best ones are simple at their core), and in the early 1980s I wrote a simulator for a parallel pipelined computer called the roving emulator - a system designed to assure that the execution streams of the processors were working correctly by redundantly simulating sequences of instructions and comparing the results to those seen on the system bus.
I was simulating network allocation algorithms for 'antigravitational allocation' in 1983 when the notion of computer viruses came to me, and as part of my virus research I implemented simulators to evaluate how viruses would spread through computer systems and networks. The same simulation engine (written in LISP when I was a graduate student at USC) was used for both purposes.
I simulated less for security than for other issues in the late 1980s, but security simulation picked up again in the mid 1990s when I engaged in a number of simulations - verging on strategic and tactical security games of various sorts. Then in about 1997 I was working on the all.net database and game theoretic ways to use the information when I developed a simulator similar in function to the AI programs that did medical diagnosis. This simulator would take detected attack mechanisms, drive them back to their possible causes and then drive back forward to other mechanisms and consequences. Several other simulators followed until, late in 1998, I created a whole series of games and the 'network security simulator' - many of which can be accessed via the Web at the all.net web site.
The first thing to do in creating a simulation for network security analysis is to figure out the goal of the effort. Some of the network security simulators I am aware of include:
Systems Administrator Trainers: Tony Barteletti at Lawrence Livermore Labs has built a marvelous simulator that mimics Unix systems under attack. The whole simulation runs from a CD-ROM on a PC, so a systems administrator who wishes to practice defending systems can do so on a home computer, much as you might play a video game. In this case, the simulator provides a rich environment for attack and defense and the means for a defender to practice against pre-programmed attacks.
Attack Simulators: I worked with a small team building an attack simulator that allows 'attackers' to select real attacks from menus and run them against real systems where the defenders try to protect themselves. This simulator is regularly used for experiments in automated attack and defense and to train people on how attacks and defenses work.
Defense Simulators: Another simulator that has been used in some of my experiments provides simulation of the attack process without any actual attacks being launched. It simply simulates the detections that an intrusion detection system would generate if it detected particular attacks. It is used to test responses to attacks without having to generate real attacks, and allows us to get reproducible results without large numbers of experiments.
Network Performance Simulators: There are a number of off-the-shelf network traffic simulators that can be customized to provide detailed emulations of systems and networks under attack. I have not personally used any of these, but I believe that others use them to model denial-of-service attacks at the data transport level.
Strategic Games: There have been a number of games that simulate the decision process involved in the attack and defense of computer networks and provide realistic scenarios without going into much detail. These are particularly useful for getting the feel of attack and defense.
Design Aide Simulators: I recently developed a network security design and analysis simulator that provides data on the performance of attacks and defenses in a computer network by exploring a large number of scenarios and giving statistical results on items of interest: the time until an attack succeeds, the likelihood of a successful attack as a function of the attacks and defenses in place, the effect of improving the performance of detection and response, the effect of different security architectures across a range of threat profiles, and so forth.
In the rest of this month's article, I will be discussing some of the issues related to simulation for training, awareness, design, and analysis of networks and giving examples of their uses.
Like the wide variety of training equipment in the military, systems administrator trainers can be designed to emulate the realities of battle to the level appropriate to the learning desired. The system can train people at a simplistic level so they understand the basics, or it can be as advanced as an environment in which incidents are intentionally thrown at the systems administrators and they have to react or face the failure of their missions.
The trainers now under development take training to the level where the systems administrator can see information roughly equivalent to what they will see in front of the console of a real computer they are defending, but without the realities of the surrounding environment and the stresses of day-to-day activities that surround them during real attacks.
As the training requirements become increasingly oriented toward realism, we move into the area of red teaming.
Attack simulators, as the name implies, simulate attacks - or more specifically, they simulate sequences of events that occur in attacks. There are a lot of attack simulators, varying from tools that 'scan' for known attacks to tools that use real attacks to simulate the things real attackers do. I have also used special simulators to simulate the effects attacks might have on intrusion detection systems as a method to analyze response technologies and decisions.
For the most part, attack simulators are used to learn things about systems or people who manage those systems, but they can also be used to test defenses, to demonstrate and practice things that might happen, and to get to a level of awareness that allows decision makers to make better decisions.
Defense simulators are useful primarily for learning about how attacks might be blocked and trying out different possibilities for attack and defense. Normally, you simulate what you are not trying to do, so you would think that a defense simulator would be most useful to attackers who want to practice attacks, but this is not always the case.
There are a wide variety of simulators that are used to look at performance of protocols and networks under load conditions. These range from telephone system simulators that analyze the effects of local outages on other parts of the network to power grid simulators that analyze stability conditions for maintaining the balance of power in a power grid, to network protocol simulators that simulate the behavior of packets and cells in various kinds of networks, and on and on.
Performance simulators tend to have one thing in common - not because it has to be so, but because of how they are usually built: they are typically poor at analyzing or simulating intentional attacks. They also tend to concentrate on large-scale issues of service availability, which means that they will typically advise systems designers to shed load (e.g., your connection) rather than risk a large-scale outage to keep individual customers working.
Strategic and tactical games are simulators as well, and if you've looked at my Web page lately, you will have seen a number of games that are in essence simulators of network attack and defense. These games can be designed for a wide range of functions, from enjoyment and education to awareness and deception. I use these games for the dual purposes of testing the skills of attackers and teaching would-be attackers that there are dangers in launching attacks against sites that protect themselves.
One of the most important aspects of some of the simulations is that they indicate to the potential attacker some of the very real risks associated with attacks on information systems as well as the wide range of skills required to be a strong overall attacker. From the defender's standpoint, they demonstrate that there are many ways into a system - from picking locks to guessing passwords.
Some of our players really get into the spirit of the game. For example, one individual actually wrote a program to try all passwords at a juncture where they didn't know the default PIN for a particular type of subsystem. This is typical of the sort of thing a real attacker does, and it's good to see that the players are really involved in the activity. It also keeps them off the streets - or in this case, the information superhighway.
The last category of simulators I am going to talk about, and the one that will take up the rest of this discussion is the design aide simulator - so named because its purpose is to help make design decisions about real information protection systems. To my knowledge, there are only a very small number of such things in the security field, and only one in information protection.
The goal of design aide simulators is to help designers make better choices about protection decisions. In the best case, they provide accurate data that can be used to make sound decisions about tradeoffs; if poorly done, they merely provide justification for whatever decisions people were going to make anyway.
The tricky part of design aide simulation is that the space of possible event sequences and protection configurations is enormous - far too large to ever hope to cover completely by computation. As a result, the role of design aide simulators today tends to be to explore the implications of a small number of design options and to characterize the particular parts of the design space where key decisions are made.
The Network Security Simulator (NSS) is an example of a design aide simulator. The basic idea is to characterize an organization's network and operations by collecting and entering data reflecting the way the systems and people in the organization work. This defensive characterization is then used as the basis for creating a model and selecting the parameters of the simulation. The simulation engine is configured with these parameters and run thousands of times against a desired set of threat profiles. The result is a picture of how the defense responds to attack. With this as a starting point, we can explore the effect of changes in protection, network configuration, detection and response time, the quality of the personnel running the defenses, and so on. The results are quite helpful, both in providing insight into where to expend additional resources and in identifying previously unnoticed weaknesses in the protection posture and in the characteristics of attack and defense.
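The run-thousands-of-times idea can be sketched in a few lines of code. The toy model below is my own illustration, not the NSS engine: it reduces each run to independent chances of prevention and detection at each node along the way, then reports the kinds of statistics the simulator produces, such as the likelihood of a successful attack and the mean time until an attack succeeds.

```python
import random
import statistics

def one_run(prevent_p, detect_p, steps=5, rng=random):
    """One simulated attack: the attacker must get past `steps` nodes.
    Each attempt is prevented with probability prevent_p; each attempt
    is independently detected with probability detect_p, and a
    detection ends the run in the defender's favor."""
    t = 0           # time measured in attempts
    node = 0        # how far along the path the attacker has gotten
    while node < steps:
        t += 1
        if rng.random() < detect_p:
            return ("defender", t)
        if rng.random() >= prevent_p:
            node += 1
    return ("attacker", t)

def batch(prevent_p, detect_p, runs=5000, seed=1):
    """Run many trials and summarize: attacker success rate and mean
    time-to-win - the kind of statistics a design aide reports."""
    rng = random.Random(seed)
    results = [one_run(prevent_p, detect_p, rng=rng) for _ in range(runs)]
    wins = [t for who, t in results if who == "attacker"]
    rate = len(wins) / runs
    return rate, (statistics.mean(wins) if wins else None)

rate, mean_t = batch(prevent_p=0.4, detect_p=0.05)
print(f"attacker success rate {rate:.2f}, mean time-to-win {mean_t:.1f} attempts")
```

Re-running the batch with a higher detection probability shows the attacker's success rate falling - exactly the sort of what-if comparison a design aide simulator exists to support.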
As an example, a simple network model is given below. It consists of eight nodes - Angel, Baker, Charlie, David, Edward, Frank, George, and Harry - plus the Internet. Each node has a set of defensive measures associated with it and links to other nodes. The picture below shows the network and its interconnections pictorially, with one-directional links between nodes indicated by arrows.
     Internet
        v
    +-Angel-+
    |       |
  Baker--Charlie
    ^       |
    |     David
   +-+      |
   |     Edward
   |        v
   +-Frank-+
     v    v
  George--Harry
Protection in this network is described in terms of the defenses at each node.
When we describe an attack against this network, we are typically discussing a sequence of attempts to gain access, each of which will either succeed or be detected and reacted to so as to prevent its continuation. We call this a run of the simulator. Each run is performed by a threat that has specific attack capabilities, a strength, a starting point, and a goal node. The attack proceeds through a series of attempts to gain entry to each of a series of nodes leading from the starting point to the goal. Thus, an attacker starting at David and trying to get to Baker might go through Edward to Frank to Baker, or through Charlie to Baker. The set of paths from starting point to goal is described as a partially ordered set (POset). At each step, the attacker's objective is to break into the next system along the path selected at the start of the run.
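To make the path machinery concrete, here is a small sketch. It encodes only the links named in the example above (David reaching Baker via Edward and Frank, or via Charlie - the full link structure of the sample network is not reproduced here) and enumerates the loop-free paths a run could choose among.

```python
import random

# Directed links between nodes. Only the example routes named in the
# text are encoded here; the rest of the sample network is omitted.
LINKS = {
    "David":   ["Edward", "Charlie"],
    "Edward":  ["Frank"],
    "Frank":   ["Baker"],
    "Charlie": ["Baker"],
    "Baker":   [],
}

def attack_paths(start, goal, seen=()):
    """Enumerate every loop-free path from start to goal - the set of
    routes the article describes as a partially ordered set."""
    if start == goal:
        return [[goal]]
    paths = []
    for nxt in LINKS.get(start, []):
        if nxt not in seen:
            for tail in attack_paths(nxt, goal, seen + (start,)):
                paths.append([start] + tail)
    return paths

paths = attack_paths("David", "Baker")
print(paths)  # the two routes named in the text

# A run would then pick one path and attack each node along it in turn.
chosen = random.choice(paths)
```

A run selects one of these paths up front and then works node by node along it, which is why the attacker in the traces below keeps hammering the same next node until an attempt gets through.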
If you are totally confused by now, that's probably just about right. An example is worth a lot of words (and writers get paid by the word):
(simulate '(35) "Internet" "Frank" 90)
ATTACK: Internet:@..perception management a.k.a. human engineering->Internet [931 !< 0](9 < 52) =======> Prevention will fail
DETECT: Angel:@ 1m..perception management a.k.a. human engineering->Angel detected [629 < 864] by ((anomaly detection) (testing) (time, location, function, and other similar access limitations)) in 2h
ATTACK: Angel:@ 1m..perception management a.k.a. human engineering->Angel prevented [87 < 898] by ((perception management) (testing) (time, location, function, and other similar access limitations))
DETECT: Angel:@ 2m..multiple error inducement->Angel detected [201 < 540] by ((anomaly detection) (testing)) in 2h
ATTACK: Angel:@ 2m..multiple error inducement->Angel prevented [251 < 855] by ((testing))
ATTACK: Angel:@ 3m..invalid values on calls->Angel [961 !< 855](36 < 52) =======> Prevention will fail
ATTACK: Baker:@ 3m 1s..undocumented or unknown function exploitation->Baker [502 !< 0](87 > 52) -> bad luck
ATTACK: Baker:@ 3m 2s..call forwarding fakery->Baker [575 !< 0](78 > 52) -> bad luck
ATTACK: Baker:@ 3m 32s..breaking key management systems->Baker [173 !< 0](57 > 52) -> bad luck
ATTACK: Baker:@ 4m 32s..infrastructure observation->Baker [745 !< 0](68 > 52) -> bad luck
ATTACK: Baker:@ 4m 33s..simultaneous access exploitations->Baker [492 !< 0](63 > 52) -> bad luck
ATTACK: Baker:@ 4m 43s..false updates->Baker [478 !< 0](34 < 52) =======> Prevention will fail
ATTACK: Charlie:@ 1h 4m 43s..race conditions->Charlie [626 !< 0](1 < 52) =======> Prevention will fail
ATTACK: David:@ 1h 4m 45s..invalid values on calls->David [390 !< 0](39 < 52) =======> Prevention will fail
ATTACK: Edward:@ 1h 4m 46s..strategic or tactical deceptions->Edward [341 !< 0](73 > 52) -> bad luck
ATTACK: Edward:@ 1h 5m 46s..call forwarding fakery->Edward [719 !< 0](1 < 52) =======> Prevention will fail
ATTACK: Frank:@ 1h 6m 16s..error-induced mis-operation->Frank [950 !< 855](34 < 52) =======> Prevention will fail
A WINS:@ 1h 7m 16s.. =======> Defeated Frank
In this 'run' we show a sequence of events in which the attacker uses perception management (they lie) to get an Internet account, exploits an invalid value on a system call to break into a firewall machine (Angel), uses false updates to break into the Baker machine, a race condition to get past Charlie, an invalid value on a system call to get into David, and error-induced mis-operation to get into Frank. This attacker was very skilled, and the whole process (including some other attacks that did not work) took only an hour, seven minutes, and sixteen seconds. Some of the attempts were detected, but the attacker was too fast for those detections to translate into any defensive actions before the attacker won.
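As an aside, the bracketed numbers in the trace suggest a two-roll adjudication for each attempt. The simulator's actual rules are not given in this article, so the sketch below is only my guess at a plausible reading of the format: a prevention roll that fails when it is not less than the node's preventive coverage (the '[931 !< 0]' part), followed by a luck roll against a threshold (the '(9 < 52)' and '(87 > 52) -> bad luck' parts).

```python
import random

def attempt(prevent_cover, luck_threshold=52, rng=random):
    """Hypothetical adjudication of one attack attempt, reverse-read
    from the trace format. The prevention roll is on a 0-999 scale and
    the luck roll on a 0-99 scale. None of this is documented - it is
    an assumed reading of the printed numbers."""
    p_roll = rng.randrange(1000)
    if p_roll < prevent_cover:        # e.g. 'prevented [87 < 898]'
        return "prevented"
    l_roll = rng.randrange(100)
    if l_roll > luck_threshold:       # e.g. '(87 > 52) -> bad luck'
        return "bad luck"
    return "prevention will fail"     # e.g. '(9 < 52)'

# With full coverage every attempt is prevented; with zero coverage
# (like the undefended nodes above) it comes down to the luck roll.
print(attempt(1000))  # always prints 'prevented'
```

Under this reading, a node with no relevant defense (coverage 0) stops nothing itself, and the attacker's progress is gated only by luck - which matches how Baker's '[... !< 0]' lines play out in the trace.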
ATTACK: Internet:@..get a job->Internet [545 !< 0](71 > 52) -> bad luck
ATTACK: Internet:@ 28d..Trojan horses->Internet [456 !< 0](61 > 52) -> bad luck
ATTACK: Internet:@ 28d 10s..inappropriate defaults->Internet [748 !< 0](39 < 52) =======> Prevention will fail
ATTACK: Angel:@ 28d 20s..dumpster diving->Angel prevented [45 < 675] by ((waste data destruction))
DETECT: Angel:@ 28d 1h 20s..below-threshold attacks->Angel detected [585 < 828] by ((sensors) (time, location, function, and other similar access limitations)) in 1h 10s
ATTACK: Angel:@ 28d 1h 20s..below-threshold attacks->Angel prevented [314 < 855] by ((perception management) (time, location, function, and other similar access limitations))
DETECT: Angel:@ 28d 1h 1m 20s..insertion in transit->Angel detected [25 < 540] by ((anomaly detection) (testing)) in 2h
ATTACK: Angel:@ 28d 1h 1m 20s..insertion in transit->Angel prevented [626 < 855] by ((testing))
DETECT: Angel:@ 28d 1h 1m 21s..man-in-the-middle->Angel detected [228 < 828] by ((sensors) (time, location, function, and other similar access limitations)) in 1h 10s
ATTACK: Angel:@ 28d 1h 1m 21s..man-in-the-middle->Angel prevented [559 < 886] by ((path diversity) (time, location, function, and other similar access limitations))
DETECT: Angel:@ 28d 1h 2m 21s..insertion in transit->Angel detected [112 < 540] by ((anomaly detection) (testing)) in 2h
ATTACK: Angel:@ 28d 1h 2m 21s..insertion in transit->Angel prevented [763 < 855] by ((testing))
ATTACK: Angel:@ 28d 1h 2m 22s..input overflow->Angel prevented [727 < 895] by ((testing) (time, location, function, and other similar access limitations))
DETECT: Angel:@ 28d 1h 2m 23s..call forwarding fakery->Angel detected [58 < 877] by ((auditing) (sensors) (testing) (time, location, function, and other similar access limitations)) in 1h 30m 5s
ATTACK: Angel:@ 28d 1h 2m 23s..call forwarding fakery->Angel [919 !< 895](13 < 52) =======> Prevention will fail
ATTACK: Baker:@ 28d 1h 2m 53s..bribes and extortion->Baker prevented [403 < 450] by ((perception management))
REACT-: Angel:@ 28d 2h 30s..below-threshold attacks@ 28d 1h 20s [850 !< 819]=> ((perception management) (time, location, function, and other similar access limitations))
REACT+: Angel:@ 28d 2h 1m 31s..man-in-the-middle@ 28d 1h 1m 21s [190 < 630]=> ((time, location, function, and other similar access limitations)) after 1h 10s======> Reaction will succeed in 1d
ATTACK: Baker:@ 28d 2h 2m 53s..below-threshold attacks->Baker [751 !< 450](17 < 52) =======> Prevention will fail
ATTACK: Charlie:@ 28d 2h 3m 53s..network service and protocol attacks->Charlie [891 !< 855](78 > 52) -> bad luck
ATTACK: Charlie:@ 28d 2h 3m 54s..device access exploitation->Charlie prevented [742 < 898] by ((effective mandatory access control) (trusted applications))
DETECT: Charlie:@ 28d 2h 4m 4s..password guessing->Charlie detected [51 < 810] by ((feeding false information)) in 2h
ATTACK: Charlie:@ 28d 2h 4m 4s..password guessing->Charlie prevented [634 < 895] by ((effective mandatory access control) (feeding false information))
ATTACK: Charlie:@ 28d 2h 4m 14s..replay attacks->Charlie [123 !< 0](83 > 52) -> bad luck
ATTACK: Charlie:@ 28d 2h 4m 24s..invalid values on calls->Charlie [962 !< 0](7 < 52) =======> Prevention will fail
ATTACK: David:@ 28d 2h 4m 25s..dependency analysis and exploitation->David prevented [479 < 810] by ((time, location, function, and other similar access limitations))
DETECT: David:@ 28d 2h 5m 25s..shoulder surfing->David detected [425 < 810] by ((time, location, function, and other similar access limitations)) in 2h
ATTACK: David:@ 28d 2h 5m 25s..shoulder surfing->David prevented [581 < 810] by ((time, location, function, and other similar access limitations))
ATTACK: David:@ 28d 2h 6m 25s..hardware failure - system flaw exploitation->David [376 !< 0](50 < 52) =======> Prevention will fail
REACT-: Angel:@ 28d 2h 32m 28s..call forwarding fakery@ 28d 1h 2m 23s [750 !< 630]=> ((time, location, function, and other similar access limitations))
REACT: Angel:@ 28d 3h 1m 20s.. insertion in transit@ 28d 1h 1m 20s No Reactions Available
REACT: Angel:@ 28d 3h 2m 21s.. insertion in transit@ 28d 1h 2m 21s No Reactions Available
REACT+: Charlie:@ 28d 4h 4m 4s..password guessing@ 28d 2h 4m 4s [447 < 630]=> ((feeding false information)) after 2h======> Reaction will succeed in 6h
REACT-: David:@ 28d 4h 5m 25s..shoulder surfing@ 28d 2h 5m 25s [995 !< 630]=> ((time, location, function, and other similar access limitations))
D WINS: Charlie:@ 28d 10h 4m 4s..Original Attack@ 28d 2h 4m 4s Detected@ 28d 4h 4m 4s Reacted with:((feeding false information)) after 6h
In this second 'run' the defender wins - despite an identical initial situation. The attacker started out with a poor choice by trying to get a job in order to get concealed Internet access. After a few rejections, they moved on and the real attack commenced. After trying various ways to get into Angel, many of which were detected, time, location, function, and other similar access controls generated an effective defensive reaction to a man-in-the-middle attack, but this reaction was not the one that defeated the attacker - because it took too long to be effective. Instead, the attack that was attempted at 28d 2h 4m 4s (password guessing) was detected and defeated by feeding false information back to the attacker. The attacker tried for almost a full month (unless it was February in which case it was more than a full month) and was defeated.
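The timing argument can be checked directly against the timestamps in the traces: a detection only helps if the reaction it triggers completes before the attacker finishes, and the 'D WINS' timestamp in the second run is exactly the detected attack's time plus the 2h detection delay plus the 6h reaction delay.

```python
# Arithmetic on the timestamps from the two runs above.
H, D = 3600, 86400  # seconds per hour and per day

# Run 1: the earliest detections promised action "in 2h", but the
# attacker won at 1h 7m 16s - no reaction could land in time.
attacker_done = 1*H + 7*60 + 16
earliest_reaction = 2*H
assert earliest_reaction > attacker_done

# Run 2: password guessing at 28d 2h 4m 4s, detected 2h later, and the
# reaction effective 6h after that - the 'D WINS' time of 28d 10h 4m 4s.
attack_at = 28*D + 2*H + 4*60 + 4
d_wins_at = 28*D + 10*H + 4*60 + 4
assert attack_at + 2*H + 6*H == d_wins_at
print("reaction timing matches the trace")
```

This is the tradeoff the simulator makes visible: shaving hours off detection and reaction delays can flip a run from an attacker win to a defender win with no other change to the defenses.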
Now there are a lot of things to notice about this simulation, and I'm not going to notice them for you in this month's article - but if you are interested, I will be publishing a more extensive paper on this simulator soon. Suffice it to say that despite the obvious lack of precise correspondence with detailed realities, the results of multiple simulations turn out to be rather useful.
Simulation is a data-driven activity. With good data, any reasonable simulation will produce realistic results. With poor data, the best simulations that can be devised will provide results that are misleading at best. The ability to amplify data is the key to simulation's power, and when garbage is put in, this amplification produces unprecedented quantities of garbage out. The hard part of doing useful simulation - especially useful security simulation - is therefore the collection of proper data with which to drive the effort.
Some people think that security is something you can learn easily, but those who have studied it at length often find that it is a nearly infinite sink of effort. Nowhere is this pointed out more clearly than in the attempt to gather data for an accurate simulation of network security. To put this in more concrete terms, the network security simulator simulation engine requires something like 14,000 data elements to characterize protection in a networked environment, plus as much as a few thousand data elements per node simulated. While it produces interesting and believable results, it is hardly accurate to the level of a physics simulation of a molecule or a traffic simulation of city streets.
If there is one thing we learn when we try to do accurate simulations of security systems, it is how little we really know and how poor our ability to describe that knowledge is.
You can try some of these simulators for yourself over the Internet by pointing your Web browser at /, selecting different items from the 'Do You Want to Play a Game?' menu, and pressing 'go'.
About The Author:
Fred Cohen is a Principal Member of Technical Staff at Sandia National Laboratories and a Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net or visiting /