Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
PGP has been one of the most trusted software packages in the history of computing, and yet a major security flaw was introduced into it a few years ago, and apparently nobody found or published the flaw in the open literature until late August of 2000. The flaw in PGP - and in GPG (the GNU version of PGP) - was, in my view, typical of what we have come to expect in the software arena. A strong package was, once again, made weak by adding excessive functionality and not taking adequate care.
In the case of PGP it wasn't a particularly subtle bug. It certainly would have been detected if anybody had done the analysis of the design change that we would expect from - for example - a bridge refit. In the case of any cryptosystem where we are changing the key distribution and management system (which is what happened with PGP) we would expect that a reasonably thorough cryptographic protocol analysis would have been done. Apparently none was done - at least by those who seem to support the notion that PGP - and cryptography - are good security methods. It may have been done by those who suggested or designed the changes to the protocol, and if so, it was intentional.
At any rate, the next time somebody tells you that "open source" makes it safe, tell them that PGP had a major security hole in it for more than a year and the "open source" community only figured it out once experimentalists demonstrated the vulnerability. Tell them that this isn't the first time. For example, there was a period of about a year when the IRC daemon for Unix had a vulnerability in the form of an intentional Trojan horse allowing remote system access to anyone using the client. In the case of PGP, probably more than a million public keys were susceptible to the attack. In the case of the IRC program, about 100,000 systems were vulnerable. How many actual attacks there were and what they were used for we will likely never know.
Recently, a variety of attacks against the Domain Name System (DNS) of the Internet have been found and exploited to great effect. The DNS vulnerability in question had been around for several years. Hundreds of thousands to millions of systems ran the faulty DNS server software, and certainly thousands were exploited for remote root access and for extending attacks to other systems. In one case alone, more than 250 DNS systems in as many Internet domains were successfully taken over by the attacker.
When you look at why this DNS software was exploited, you find another rather obvious vulnerability - an input overflow that allowed a remote user to overwrite program memory with their own program code. This class of flaw is common today, the techniques have been widely published for many years, and there are even programs that test for the presence of many input overrun conditions and identify the potential for abuse. Any prudent designer of so security-critical an application - the software that controls every translation between domain names and IP addresses, and that is at the core of the Internet as normal users experience it today - would surely have made such a simple, obvious, and easy check of their software. Certainly we would expect a competent designer to have found a flaw as obvious and as widely understood as this one.
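The missing check is trivial to make. Here is a minimal sketch (in Python for brevity - the vulnerable servers were written in C, and the names and buffer size here are hypothetical) of the length check that separates a bounded copy from a memory overwrite:

```python
BUF_SIZE = 256  # fixed-size buffer, like the one overrun in the C server

def copy_into_buffer(data: bytes) -> bytearray:
    """Refuse untrusted input that would overrun a fixed-size buffer."""
    if len(data) > BUF_SIZE:
        # The vulnerable code skipped this check and kept writing,
        # overwriting adjacent program memory with attacker-supplied data.
        raise ValueError("input exceeds buffer size")
    buf = bytearray(BUF_SIZE)
    buf[:len(data)] = data
    return buf
```

One `if` statement. That is the entire cost of the check that was not made.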
As an aside, the DNS systems that were attacked were open source, so hundreds of thousands of people from the open source community had the opportunity, over a period of years, to examine this software and find the flaws at any time. None apparently ever did. It wasn't until these systems were found to be widely exploited that the flaw was found. Is this the last such flaw? Did the DNS fix that was released a few days after the flaw was widely published fix all of those flaws? Who in the open source community verified this?
Then there are the thousands of viruses that exploit the same basic, unnecessary, and almost never legitimately used feature of Microsoft Word - the macros. I have personally never desired to write a document that acted like a program. If I want to write a program, I write a program. If all the macros are really used for is filling out forms and changing default behaviors of documents, why do they have the ability to - for example - delete all of your files and remove functions from the Word menu bars?
I like to call it creeping featurism. Programmers keep adding features because that's what programmers do - never mind that there are no users or particularly legitimate uses for those features. The programmer's philosophy is: build it and they will come. Of course the legitimate users of Word macros have not come in droves - but the virus writers have. It's the same sort of thing as the feature that causes most menu systems to grant unlimited access to users who use special key sequences (like ! followed by a command), and that causes many CGI scripts to grant general purpose access to the system they run on (like the use of back quotes in an argument). It's the use of trivial hacks that run general purpose interpreters on user-supplied data without adequate input checking and filtering, instead of taking the time to write a proper program that does the right and desired function and nothing else.
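The back-quote problem is easy to demonstrate. Here is a sketch (in Python, using a hypothetical hostname-lookup CGI as the example) of the difference between the trivial hack and the proper program: user data handed to a shell gets interpreted, while user data filtered and passed as a plain argument vector stays data:

```python
import re

def build_lookup_command(hostname: str) -> list:
    """Build an argv list for a hostname lookup without invoking a shell.

    A naive CGI script would instead do something like
        os.system("host " + hostname)
    and a hostname containing `some command` would have its back quotes
    expanded by the shell into a command the attacker chooses.
    """
    # Accept only characters that can appear in a legal hostname.
    if not re.fullmatch(r"[A-Za-z0-9.-]+", hostname):
        raise ValueError("illegal characters in hostname")
    # An argv list is handed to the program as data, never to a shell.
    return ["host", hostname]
```

The filtering line and the argv list are the "proper program" the column asks for: the input is checked against what the function actually needs, and no general purpose interpreter ever sees it.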
Word, by the way, isn't open source, but the vulnerability of Word macros to virus exploitation has been published since the 1980s - more than 12 years ago now. The exploits have not changed significantly over that time, nor have the applications that support these massive security holes. Microsoft has known about this for many years, but it has not acted to remedy the situation, nor is it particularly likely to unless doing so becomes a competitive advantage. Of course any competent person who desired to eliminate such a flaw should be able to do so, to a reasonable extent and at a reasonable cost. So why is the same set of flaws still present today as 12+ years ago? The same CGI-script flaws are also present - as are the same excessive-functionality flaws in the menu systems of today.
If I stop there, it is from a lack of space... not a lack of examples. I could write a daily column on these situations and still not have enough space to keep up with the security flaws being found and exploited in current software. This is a completely ridiculous situation in my opinion. It is beyond belief that those making so many billions of dollars and claiming to be building the future of humanity cannot keep themselves from making the same stupid mistakes again and again and again. It's not just a short-term thing - they have been doing it for more than a decade. Now if I had an employee who kept making the same mistake again and again and again for a period of ten years, I would fire myself for retaining such a poor performer for so long. The employee would have to go as well, but it's hardly their fault that their employers keep paying them to do stupid things.
I don't mean to claim that nobody should ever make a mistake. I make plenty of them. I like to tell my students that the way I got to know so much about computers was by making so many mistakes over such a long period of time. But I should also mention that there is another key ingredient to this equation. You need to learn from your mistakes. And that is where the computing community seems to be failing. The companies that sell computer software are not learning from their mistakes - they are simply making more of them.
After all, computer-related companies are making record profits... How bad can they be? The answer is obvious - just not quite as bad as the next company. Whose fault is this? Yours. After all, you buy this software, you don't demand your money back when it fails, you don't sue for negligence when the same problem happens again and again and again, and you probably don't even go to another vendor. There are companies that do a very good job on both sides of this issue - there are companies that make very good software and companies that try to buy only very good software - but there aren't enough of them.
I sometimes rush to the 80% solution like anyone else. It's the right thing to do in many circumstances. But there are two major problems with the way today's business community uses the 80% solution in computing:
(1) Once the 80% point is reached, the product hits the market - but instead of working it further and producing the 99+% solution, the push is to get to the next 80% solution. After all, this product can now sell; it's time to make another one - or the next generation. The unseen improvements in quality just don't sell more product, so they are not worth making - they benefit only the customer, not the seller. Besides, when it breaks, the seller will just tell the customer it's the old version and have them pay for a new and improved version with different flaws.
(2) When it's really important to get it right because the consequences of getting it wrong are high, it is treated just like everything else and the 80% solution is used. The truth is, the 80% solution is probably just right for game software. After all, those subtle bugs in the program only make the game play a little bit differently or crash once every day or two - but it's only a game. If your window manager crashes every few days, you reboot the computer - no big deal. The problem is that if your security system fails - even once over the lifetime of the product - it can be a very big deal. You could lose all of your work, get wrong answers and have the bridge you are designing fall down, have your company confidential information leaked to a competitor who uses it to put you out of business, end up in my next article as an example of how not to run your security, or end up in jail because I decided to make it look like you committed the next big Internet crime.
The computing community seems to suffer from a lack of pride in the quality of workmanship and a lack of ability to tell what's important. I guess I should say that this is because money has simply taken too high a precedence in our society. Money is chosen over doing the job well - after all - you are responsible to the stockholders to make as high a return on investment as you can, regardless of how you do it. As long as it's legal, you can sell crappy software for a big profit - and I have heard this many times - "There's nothing wrong with that."
Pride is one of the seven deadly sins, and yet without self-respect we lose a part of the social fabric that is at the heart of society's ability to bring prosperity to all of us. When we decide to use the ability to accumulate wealth as the basis for self-respect we move away from the thing that has led to success for all successful societies over the course of history - and we move toward the causes of the downfall of most of them. The software industry's lack of pride in workmanship is symptomatic, not of the revolutionary nature of the information age, but of the pending downfall of Western society, unless...
That was pretty provocative... Western society will collapse unless we balance the drive for money with the drive for the survival of humankind. That applies to a lot of things - to the environmental impact of destroying Earth for profits, to the mass extinctions of animal species because we want to eat them or don't care if we destroy them via pollution, to the destruction of rain forests.
But this isn't quite so dire. If we continue to accept low quality in our software development process for security-critical elements, we will simply lose the massive financial wealth we have created through information technology to those who are clever enough to steal it from us using our own tools. Will it be the Chinese? Perhaps the French? The criminal elements from all over the world? Who knows - but they are all trying, and they are starting to succeed.
No big conclusion this time. Just the same conclusions we have been drawing for a long time. Things we thought we could trust or that we once trusted are no longer trusted because they were never trustworthy and we have just now come to realize it. We live in a decaying world and we need to struggle eternally to keep it functioning with some level of assurance.
We can reasonably expect that everything we trust will fail from time to time - as it has always been and will likely always be. These days things are falling apart in our field at a fast and furious pace. But there is a solution...
The solution is diversity, redundancy, good intelligence, rapid detection, and appropriate response. I'll lay it out a bit more:
Diversity: We need to place less trust in each thing we trust. In this way, as things we trust fail, we suffer smaller consequences and are more resilient to their failure. The trade-off is the higher operating cost of a more diverse environment - but the gain is that we are already invested in the technologies that do better, and are thus better prepared for a wider range of futures.
Redundancy: Whenever one thing fails, something else backs it up. Again, we pay, but in exchange we have redundant protection to compensate for individual protection faults, so that the overall system does not fail. Thus we attain a reliable system from unreliable components. (Thank you, John von Neumann.)
Good Intelligence: It is very important to know when things are failing so you can respond to those failures. This means that you need to know what is failing and when and how to fix it before your redundancy also fails. Similarly, you need to know about the changing threat so as to strategically adapt your defenses as needed.
Rapid Detection: Rapid detection is a major key to mitigating consequences. How rapid depends on the situation.
Appropriate Response: Over-reaction can be as bad as or worse than most consequences we encounter. Under-reaction can leave you open for a long time and susceptible to liabilities. The notion of measured and appropriate response is something that develops with practice and practical experience.
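The redundancy point above - von Neumann's reliable systems built from unreliable components - can be sketched as a simple majority vote over replicated components. This is an illustrative toy, not a prescription:

```python
from collections import Counter

def majority_vote(results):
    """Return the answer agreed on by a strict majority of redundant
    components; any minority of faulty components is masked."""
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        # No strict majority: the redundancy is exhausted and the
        # overall system must report failure rather than guess.
        raise RuntimeError("no majority among redundant components")
    return winner
```

With three components, any single failure is masked: majority_vote([42, 42, 17]) still yields 42, even though one component gave a wrong answer.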
Well... I am off to write the millennium article... again...
About The Author:
Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates in Livermore California, and educating defenders over-the-Internet on all aspects of information protection as a practitioner in residence in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting /