Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
The 50 Ways series of articles on vulnerabilities of information security was started as a joke and has become one of my most successful ventures in getting the word out about the limits of security. So when Richard Power of the Computer Security Institute asked me to do a job on Public Key Cryptography, I spent the requisite 60 minutes and pushed out a short piece on the emerging crypto-industry. Take a look at it ( /journal/50/index.html ) if you want the details.
Former NSA researcher Blaine Burnham, who is now a fairly high muckety muck at Georgia Tech spearheading their information security program, has told me for many years of his concerns about the level of trust people place in cryptography, and I guess I share his views. The basic notion is that, like most other aspects of our information infrastructure, it's easier to copy than to think. Whenever someone comes up with a good idea for one application, everybody and their brother tries to leverage it for all applications. The low cost of copying in the information age means that there is a lot of reward for the copycat and less reward for the more detailed thinker. Despite all the talk about being agile, we are in fact still using industrial-age notions of mass production in the way we build our systems.
This means that Microsoft doesn't have a notion about how its software works or even its data formats, because much of its software is just a copy of some other software that was 'close enough' for the purpose. They license it in, put it in their product, and shove it out the door. The lack of quality in much of our software is closely linked, in my view, to this phenomenon. It means that we are building very tall information buildings on inappropriate foundations, and we are starting to see the effect in the form of more down time, higher costs, enormous computing resources required for even simple tasks, and so on. Our graphical interfaces allow pop-up smiley faces with moving smiles to ask us if we want help, but this doesn't really cover up the fact that the computer can't stay up long enough to get the answer.
If you review some of my past articles you will find that I am more than a bit intolerant of this way of thinking. In particular, I wrote a piece called "Change Your Password - Doe See Doe" in September of 1997 in this series that identified the foolish way we propagate policies about changing cryptographic keys into policies about changing passwords without thinking about the consequences or the rationale. This mindless application of rules of thumb where they don't apply is what we are doing today with cryptography on a massive scale.
My regular readers probably knew I would return eventually to the subject at hand, but I wasn't so sure. At any rate, the subject is the foolish ways in which we use cryptography, so let's dig right in with the notion that cryptography can protect secrets for an extended period of time. It cannot.
Any decent scientist must be a student of history. Please note the term MUST as opposed to should or some other word. I choose the word because, by its very nature, science depends on experimental confirmation and refutation to throw out things that don't work. If you don't know the history of the confirmations and refutations of the theories you are working on, you can hardly consider yourself a scientist. The reason the last few Hughes satellite launches failed is that they ignored the experimental basis of science. They thought that they could build and test parts with computer models without doing the necessary real testing of the real parts. In case you missed it, we no longer do nuclear weapons testing - which is intended, I guess, to lead to the elimination of nuclear weapons. I figure that the credibility of deterrence cannot last long without testing, especially when we demonstrate on a regular basis the lack of science in our modern religious cult of computing.
The history of cryptography tells us that every cryptographic system ever built has eventually been defeated. In the 1940s, Shannon's efforts defeated ALL previous systems, and he defined the 'one-time-pad' system, which is theoretically undefeatable - but which is also impractical for the vast majority of current cryptographic applications. Since that time, we have seen several public key systems defeated, the creation of the DES and its defeat, the defeat of the RSA for key sizes thought only 10 years earlier to be secure over the long run, and so on. Even the NSA's cryptographic sculpture has been broken recently - and not by an NSA cryptographer!
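Shannon's one-time pad is worth sketching, because the sketch shows exactly why it is unbreakable and why it is impractical: the key must be truly random, at least as long as the message, and never used twice. A minimal illustration of my own (not any particular product's implementation):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and used exactly once. Those conditions are what make the
    # one-time pad undefeatable - and what make it impractical:
    # you must securely distribute as much key as you have traffic.
    assert len(key) >= len(plaintext), "pad must cover the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

key = secrets.token_bytes(32)
ct = otp_encrypt(b"move the tank at dawn", key)
assert otp_decrypt(ct, key) == b"move the tank at dawn"
```

Reuse the pad even once and simple XOR arithmetic on two ciphertexts starts leaking plaintext, which is why "almost a one-time pad" is not a one-time pad.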
The notion that we can build a practical cryptographic system that cannot be defeated over time is refuted by the historical evidence, and the notion that some new kid on the block will change this situation is equally well refuted by a long list of new-age cryptographers who have repeated the mistakes of history by ignoring them. It is an easy pit to fall into - I have fallen into it myself and been pulled out by those wiser than me - but it is a pit nonetheless. Do not trust cryptography for long-term secrecy of information. It just won't work.
The military has a long history of using cryptography for tactical operations. The idea is to encrypt things like where a tank should go next. If the bad guys break it in a few minutes, you might lose a tank, but if they break it in a few days, the tank will be long gone. They understand that, in many situations, moving the tank is more important than keeping the movement secret. As a result, if the cryptography has a technical problem that keeps the tanks from moving, the commander can make a command decision to turn off the cryptography and get the tanks moving without it. This is called risk management. It is real-time, and it is done in the military all of the time. The commander understands the risks associated with the move because he or she has driven a tank, and has perhaps been shot at while driving a tank that was ordered to move without cryptography turned on in a hostile - if only simulated - environment. The risks are very real to the commander - both the risks of getting killed by the enemy knowing where the tanks are going and the risks of not moving the tanks.
In business, such decisions are typically made by a different sort of commander, and tactical decisions to turn off cryptography are rarely considered. Let's talk, for example, about my credit card processing company. Like most such companies that accept (or more often reject) credit cards over the Internet, there is no risk management decision made today about items like the address used for the card. This presents enormous problems in that the bank's version of my address is not quite identical to the versions on the other computers in the credit processing system. With nominal judgment, they could decide that MS9011 is the same as M.S. 9011 and Mail Stop 9011 - but this sort of judgment is not made by the automation. Instead, we generate errors by having inconsistent data entry and data type information on different systems. So when one person tries to enter M. S. 9011 they may find that it is not valid on their system because of format restrictions. The net effect is that, to mitigate the risk of sending my order to the wrong place, they can't tell that I am sending it to the right place, and they cost me time and money, which reduces the number of things I am able to buy from them.
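The kind of nominal judgment I am describing is not hard to automate. Here is a sketch - the normalization rules are mine, invented for illustration, not anything any card processor actually runs:

```python
import re

def normalize_mailstop(addr: str) -> str:
    # Collapse the common spellings of a mail stop - "MS9011",
    # "M.S. 9011", "Mail Stop 9011" - into one canonical form
    # before comparing addresses. Illustrative rules only.
    s = addr.upper()
    s = re.sub(r'\bMAIL\s*STOP\b', 'MS', s)  # "MAIL STOP 9011" -> "MS 9011"
    s = re.sub(r'\bM\.?\s*S\.?\b', 'MS', s)  # "M.S. 9011" -> "MS. 9011"
    s = re.sub(r'\bMS[.\s]*', 'MS', s)       # drop leftover dots and spaces
    return s

# All three spellings now compare equal:
assert (normalize_mailstop("MS9011")
        == normalize_mailstop("M.S. 9011")
        == normalize_mailstop("Mail Stop 9011"))
```

A real system would need a longer rule list and a human fallback for the cases the rules miss, but even this much judgment is more than the automation I am complaining about applies today.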
To me, this represents the inadequacy of risk management in industry, and it reflects the lack of a clear tie between risks to the organization and risks to the individual. When the military commander issues an order and decides to do it in the clear, there is a very personal risk involved, both from the standpoint of the military review process and from the standpoint of seeing your friends and comrades, whose lives you are responsible for, die if you are wrong. In today's business environment, the tie between risks and consequences to management is weak at best. If your credit card information is stolen, the manager at the ISP who was supposedly holding it securely just doesn't care. If Amazon.Com sells books to someone in Africa or the Pacific rim and takes the money from my bank account to do it, the manager at Amazon probably makes more money because more books were sold. How is this related to cryptography? Simple - these are examples of places that use cryptography to protect my information - or claim to. Of course, the cryptography doesn't protect you, but it may sell you on their system as being secure - which brings me to a definition.
If you have ever looked up the word 'secure' in the dictionary you are likely to find out that it means the feeling of safety. It has nothing to do with the reality of safety. So in a very real sense, cryptography does provide security to those who do not understand it - because it can be marketed to make you feel safe. I tend to use the term protection - which means 'keep from harm'. A very different word with a very different meaning. Of course this series is called 'managing network security' - NOT managing information protection. That's because it's written for "Network Security Magazine" and I had to find a title that would convince the editors and the casual readers to read it - which is to say - I am using the term security to market this article to you - but I am actually telling you about information protection. That's another form of cryptography - the art of secret writing (by definition). I am using an encrypted message to improve protection. It's a public key system, since I am publishing the translation in this article. Don't you feel safer because of it? But don't feel too safe...
Now normally, when I write these articles a month or two before the publication date, I am relying on historical information as my guide, but for some reason I have an uncanny knack for coming up with things that become relevant just after I go to press. This month the force seems to be with me... and allies come in the strangest forms. Here's the start and some extracts from the press release:
Research Triangle Park, NC - 31 August 1999 - Between Hotmail hacks and browser bugs, Microsoft has a dismal track record in computer security. Most of us accept these minor security flaws and go on with life. But how is an IT manager to feel when they learn that in every copy of Windows sold, Microsoft has installed a 'back door' for the National Security Agency (NSA - the USA's spy agency) making it orders of magnitude easier for the US government to access their computers? [...]
Then came WindowsNT4's Service Pack 5. In this service release of software from Microsoft, the company crucially forgot to remove the symbolic information identifying the security components. It turns out that there are really two keys used by Windows; the first belongs to Microsoft, and it allows them to securely load CryptoAPI services; the second belongs to the NSA. That means that the NSA can also securely load CryptoAPI services... on your machine, and without your authorization.
The result is that it is tremendously easier for the NSA to load unauthorized security services on all copies of Microsoft Windows, and once these security services are loaded, they can effectively compromise your entire operating system. For non-American IT managers relying on WinNT to operate highly secure data centers, this find is worrying. The US government is currently making it as difficult as possible for "strong" crypto to be used outside of the US; that they have also installed a cryptographic back-door in the world's most abundant operating system should send a strong message to foreign IT managers. [...]
The key is actually called "_NSAKEY", but there is - so far - no proof that it belongs to the National Security Agency or that, if it did, it was their idea to put it there. Perhaps it was simply an NSA key that Microsoft adopted as their own. Who knows. But the point I am trying to make is not really related to the massive potential for abuse by the NSA - or Microsoft - or any of the employees who have access to the underlying mechanisms - although these are certainly valid points to be made - and they don't just apply to the cryptography but to all of the other Trojan horses in Microsoft software and that of other companies they and we rely on.
The point I am trying to make is that the notion of secretly using cryptography is fundamentally flawed, the notion of a single crypto-key that is used for millions of systems is fundamentally flawed, and the notion that cryptography will protect you from attacks against your insecure operating system is fundamentally flawed. It is an ill-conceived idea based on poor assumptions and excessive trust in public key cryptosystems. I will elaborate just a bit more on my three points.
The notion of secretly using cryptography is fundamentally flawed: How hard was it for the one or two people involved in this discovery to figure out that there was a secret cryptographic key? It was not very hard - or they would not have been able to do it. Chances are, they did something like looking at all of the 'strings' in the binary files - a method that many people used in the 1980s to find obvious malicious code by the fact that it included messages like "Joe Rules" or "Deleting all your files - ha ha I gotcha". Hey - look - there's something called '_NSAKEY' here - let's investigate. Not exactly rocket science. The fact is, we can almost always detect the use of cryptography without much in the way of sophisticated tools, and once we find it, we can almost always figure out when it is used, by whom, and for what purpose if we are willing to put forth the effort. Stupid mistakes led to the release of this information, and we certainly cannot count on people to never make stupid mistakes. Perhaps the people at Microsoft were not even told to not include the symbol tables. Perhaps it's not the NSA but a Trojan horse planted by someone else to point the finger at the NSA. There are certainly those who would do this as well. And if we wanted to find out the answer to this question, we could do that too - pretty definitively - and without any of the cryptography getting in the way.
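The 'strings' technique really is about as simple as analysis gets: scan a binary for runs of printable characters. A rough Python equivalent of the Unix strings utility, applied to a made-up blob of binary data:

```python
import re

def strings(data: bytes, min_len: int = 4):
    # Find runs of at least min_len printable ASCII characters -
    # roughly what the Unix 'strings' utility does to a binary.
    pattern = rb'[\x20-\x7e]{%d,}' % min_len
    return [m.group().decode('ascii') for m in re.finditer(pattern, data)]

# A symbol like _NSAKEY left in a binary stands out immediately
# (the blob here is invented for illustration, not Windows code):
blob = b'\x00\x01\x7f_NSAKEY\x00\x02junk\xffCryptoAPI\x00'
found = strings(blob)
assert '_NSAKEY' in found
assert 'CryptoAPI' in found
```

No cryptanalysis, no special tools - which is the point: the presence of a cryptographic mechanism is rarely a secret, even when its key is.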
The notion of a single crypto-key that is used for millions of systems is fundamentally flawed: The level of trust we place in a number is just incredible today - and I mean 'in-' (as in not) credible. The way public key cryptography works, you generate a pair of numbers and throw away the things that made those numbers. Then you give one number out and keep the other one secret. In private key cryptography, you have a shared key (or a set of different but related keys) that you keep secret. Ignoring the problems associated with key generation would be a big mistake - as the people who implemented Kerberos found out a few years back when it was discovered that poor key generation made the keys guessable. But even if that part of the process went perfectly, we still have the problem that the secret key in the system has to be kept secret or else the whole system falls over. But of course the secret key has to be used in order for the system to work, so as the value of breaking the system goes up, the risks associated with defending it skyrocket along with the rewards of breaking it. The way you keep this risk from exceeding acceptable levels is by limiting the trust for any given key, any given cryptosystem, and so forth. This used to be called not putting all of your eggs in one basket.
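Poor key generation is not an abstract worry. If a weak random number generator ever hands the same prime to two different users' RSA keys, a simple greatest-common-divisor of their public moduli factors both keys - no secret material needed. A toy illustration with small primes (real moduli are hundreds of digits long, but the arithmetic is identical; the numbers here are mine, chosen for illustration):

```python
from math import gcd

# Three primes; p is the one a weak generator handed out twice.
p  = 2_147_483_647  # 2**31 - 1, a Mersenne prime
q1 = 2_147_483_629
q2 = 2_147_483_587

# Two different users' public moduli that accidentally share p.
n1 = p * q1
n2 = p * q2

# An outsider holding only the two PUBLIC moduli recovers the
# shared prime - and with it, both private keys - from a GCD.
shared = gcd(n1, n2)
assert shared == p
assert n1 // shared == q1 and n2 // shared == q2
```

This is why a single key - or a single sloppy key generator - serving millions of systems concentrates all the eggs in one very fragile basket.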
The notion that cryptography will protect you from attacks against your insecure operating system is fundamentally flawed: Of course the way we keep most of our digital secrets secret is by a combination of physical and 'logical' protection methods. Don't ask me why they are called logical - many of them are in fact quite illogical - but that's another issue. The point is, if you have a car with no windows and a plastic bag for a roof, putting the best locks in the world on it won't make it burglar-proof. Sun had 45 new security patches in one month recently. Not one of them had anything to do with cryptography. If the cryptography were perfect, Microsoft's Hotmail service would still have been penetrated with millions of users affected - because the cryptography had nothing to do with the attack. Cryptography would not help mitigate the numerous sites affected by the recent word viruses. The advertising people can manage many peoples' perception by claiming that cryptography will somehow make you safe, but it is a bunch of baloney painted up to look like prime rib. Don't be fooled by the seemingly logical argument that because a product has a security feature it is somehow secure. My car has a security feature in that you can't turn on the engine without your foot on the brake. Should I therefore believe that the things I put in the trunk are safe in accidents?
Cryptography is a very useful tool, and I use it myself. It reduces risks associated with observation and modification of information, particularly in real-time transactions where lasting secrecy is not important but momentary integrity is. Secure shell and similar methods to prevent session takeover are good examples of reasonable uses of cryptography - given that proper precautions are taken against trusting it too much.
In truth, I don't try to encrypt things with the intent of keeping them secret for a long time. I also don't trust cryptography as my only protection mechanism for limiting access to my systems. I use many other techniques in combination with cryptography to lower the risks to a level I am willing - but not anxious - to accept.
This article is about cryptography and the excessive trust we are unnecessarily and unwisely placing in it. But, as such, it is also about risk management and the poor job we are doing of applying it to cryptography. There are really only a few reasons that we make poor risk management decisions.
The biggest reason we make poor decisions is that we don't know enough to make better ones. I hope this article will start some people thinking about things they weren't thinking about before and asking questions they didn't ask before.
The second biggest reason that people make poor risk management decisions is that their stake in the outcome is not personal. I hope that some top level managers read this and decide to personalize the effect of poor decisions on their upper level management decision-makers. Of course, since responsibility rolls downhill, the top level managers had better make sure the buck stops where it will do the most good - firing one person closer to the top usually saves you more and has a larger effect. I have a little list...
About The Author:
Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, Managing Director of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection, and a practitioner in residence in the University of New Haven's Forensic Science Program, where he educates cybercops on digital forensics. He can be reached by sending email to fred at all.net or visiting /