Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
In the information protection arena, there are at least two important dimensions to consider in terms of the mechanisms in use. One is what the mechanism does to mitigate risks. This is the one we tend to concentrate on in most discussions of protection. The other is the level of assurance we wish to attain in this mitigation process.
To a large extent, time-to-market competitive pressures have induced software manufacturers to coldly and, in my view, appropriately, sell software that is more or less crappy. This extends to many - but not all - security vendors, as well as many - but not all - operating system and application vendors.
The question of assurance comes into play when you are mitigating risks because the level of certainty with which the risk mitigation method functions has to be balanced with the costs of increased assurance. This leads us into the issue of...
Assurance can be achieved in different ways and to different extents, at different costs, by different techniques. There isn't room to list them all here - nor could I necessarily be certain to get them all if I tried. My level of assurance that I can identify all of the techniques for achieving assurance is not that high. I do know some things about it, however, and I have embedded them in my thought processes for use at times when they seem appropriate.
My thought process is not as high assurance as I would like it to be. As a result, like every other person in the world, my cognitive limitations constrain my embedded assurance capabilities. The same is true of every security device we have: each is limited, at a minimum, by its ability to process information. But there is another feature of my brain that limits my assurance even further. It is the lack of perfect programming.
Yes, it's true. My brain was, after all, programmed by me. And even though I pride myself on being a good and careful programmer, unlike my secure web server, I cannot prove that my brain does what it's supposed to and nothing else. In fact, I don't think I want to. I like my brain doing weird things every now and then. It's part of my humanity. But I don't want my computers doing weird things, not now, and not ever. So the question arises of how to make my computer less like my brain in terms of having higher assurance.
OK - a brain is about as embedded as a system can get. And yet, I have just said that brains are not as high assurance as we would sometimes like. This would seem to speak against the idea of embedding for high assurance.
And yet, my brain actually does a pretty good job of keeping my body functioning. It has never yet forgotten to eat or how to go about it, how to breathe or how to go about it, or any of a long list of other things it is supposed to take care of. The reason it does not fail at these tasks apparently has a lot to do with the fact that it is built - has evolved, if you like - to do those things with high assurance. These things are deeply embedded in my brain and yours.
Some of the side functions - like daydreaming and keeping track of lists - don't work as well, but they tend to be less critical. The real reason my brain does some things better than others is that the really critical functions are more deeply embedded in the hardware and less malleable to my reasoning. I can try not to breathe all I want, but my override of the breathing function will fail when I lose consciousness, while the reflex to breathe will continue on even when I lose awareness and control of my override.
Of course those of us in the business have known for a long time that embedded security is fundamental to success. For example, the notion of security as an afterthought has always been scorned, while the notion of 'trusted systems' has been embraced for many years - by the research community. We know, for example, that a 'plug-in' security add-on is almost never as high in assurance as a built-in security mechanism. On the other hand, if we are not going to do a very good job of designing and testing our built-in security mechanisms, plug-ins have the advantage of being flexible enough to be updated as needed. Embedded security is largely about the tradeoff between flexibility and assurance. Just as our brains achieve high assurance of breathing by using embedded systems that cannot be overridden to the point of death by conscious thought alone, embedded security functions in computers are intended to assure that, despite the efforts of the malicious user to the contrary, the system will maintain some aspects of its function.
Another critical aspect of understanding embedded security is the notion of separation. The reason we keep breathing is that the brain structures that do the breathing are separate from those that can temporarily override them in our consciousness. As we fall asleep from lack of oxygen, the part that overrides breathing stops working before the part that assures breathing. In computers, the hardware is generally unmodified at reboot, so if the computer 'falls asleep' and we can wake it up with external signals (reboot), we can get high levels of assurance that it will go through some restart process.
This notion of separation also applies to the use of separate devices for firewalls, logging, intrusion detection, and so forth. The idea is that a firewall might be broken into or bypassed, but if it sends its logs to a logging server, that server isn't so easy to compromise at the same time, so there is increased assurance that the audit trails from the break-in will be available even if the system under attack is completely overrun. There is another advantage to this sort of separation in that a logging server and a firewall, as examples, do not need to share any common services or programs. This, of course, makes breaking into both at the same time a great deal harder, thus increasing assurance against the overrun of both at the same time.
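As a rough illustration of this separation, here is a minimal sketch (in Python; the address, port, and record format are invented for illustration) of a firewall forwarding each audit record to a separate logging server over UDP. The point is only that each record leaves the attacked machine immediately, so it survives on the separate server even if the firewall is later completely overrun.

```python
import socket

# Hypothetical address of the separate logging server; a real deployment
# would use its own host, port, and record format (e.g. syslog).
LOG_SERVER = ("127.0.0.1", 5140)

def send_audit_record(message: str, dest=LOG_SERVER) -> None:
    # Fire-and-forget: the record leaves this machine the moment the
    # event occurs, so a later compromise here cannot erase it there.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(message.encode("utf-8"), dest)

def receive_one_record(bind=LOG_SERVER) -> str:
    # Minimal stand-in for the separate logging server: accept one
    # datagram and return it. A real server would append each record
    # to storage that the sending host cannot reach back into.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(bind)
        data, _ = s.recvfrom(4096)
        return data.decode("utf-8")
```

Note that the two ends share no services or programs beyond the datagram format itself, which is exactly the property that makes compromising both at once harder.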
Suppose you buy into the notion that separation of function and higher quality software are good things for increased assurance. The next question is how to do this and what it costs. The answer seems to lie in embedded security devices - limited use devices that are built to perform specific desirable security functions.
These sorts of devices can have some excellent properties that make them worthwhile. For example, the least expensive hardware device I have seen suitable to this sort of purpose in a regular network costs under $250 per unit, draws little power, is silent, and has a small footprint. It is somewhat complex to configure and get operational, but once it works it seems to work well. It can't handle a substantial disk drive, but it can handle a compact flash card, which could provide up to a gigabyte of storage for a few hundred dollars. More practical units with storage or higher performance requirements can run into the thousands of dollars. So much for the cost.
The 'what it does' is something the market has not yet settled. Such boxes can be used to run reasonably good firewalls, logging servers, intrusion detection systems, and other special-purpose security functions. Over time, if such devices succeed in the market, it is to be expected that they will become so deeply embedded into systems that every computer with an Ethernet interface will have a firewall computer built into that interface, with an independent intrusion detection chip, an independent logging chip, and perhaps other similar devices, all embedded at the board level. Every computer turns into a local area network with more processing power, more capabilities, and higher assurance. I can't wait...
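To make 'reasonably good firewall' a little more concrete: the core of what such a limited-function box embeds is a small first-match rule evaluator with a default-deny policy. The sketch below is hypothetical (the rule set and all names are invented for illustration), but it shows the shape of the function - note how little code there is to get wrong, which is much of the assurance argument.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str   # source address
    dst: str   # destination address
    port: int  # destination port

# First-match rules as (action, predicate) pairs; this rule set is
# invented for illustration only.
RULES = [
    ("allow", lambda p: p.dst == "10.0.0.2" and p.port == 25),          # inbound mail
    ("allow", lambda p: p.src.startswith("10.0.0.") and p.port == 80),  # outbound web
]

def filter_packet(p: Packet) -> str:
    # Walk the rules in order; the first matching rule decides.
    for action, match in RULES:
        if match(p):
            return action
    # Default deny: anything not explicitly allowed is dropped.
    return "deny"
```

For example, a packet from an outside host to port 25 on the mail server is allowed, while the same host probing port 23 is dropped by the default-deny rule.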
Embedded security systems are likely to gain an increasing place in the global computing environment as functional requirements stabilize and the need for high assurance increases. They will offer increasingly lower-cost, limited-function alternatives to the more expensive and less reliable multi-function systems of today.
While there is a limited market in these products today, more and more vendors will enter this space over the coming years and more and more decision makers will select these systems over the alternatives. A relatively limited number of very standard systems of this sort will become large market players in the coming years.
It may never come to pass that we have multiple embedded security processors in typical computers, but such systems are already being tested in the laboratory, and if volumes increase they may become cost-effective enough to appear as standard features on manufacturers' systems.
About The Author:
Fred Cohen is researching information protection as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates, and educating cyber defenders over-the-Internet as a practitioner in residence in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting http://all.net/