The Internet is now the world's most popular network and it is full of potential vulnerabilities. In this series of articles, we explore the vulnerabilities of the Internet and what you can do to mitigate them.
Because money for protection is limited, people who defend information systems have to make choices about what they defend against. In my experience, the choice is often made based on assumptions about the sophistication level required by an attacker to carry out the attack.
As an example, most Internet sites have some way to detect people trying to gain unauthorized access by guessing passwords, assuming they don't guess the real password right away. But the same sites are far less likely to be configured to detect remote sites trying to mount internal network file systems by guessing file handles. Even among sites where administrators are aware of both sorts of attacks and have adequate knowledge and access to tools to defend against both, the dominant defense is against password guessing.
I chose guessing NFS file handles for my example because it is probably a little obscure to most readers, it sounds pretty sophisticated, and the chances are that relatively few people in the world have the technical know-how to create such an attack. I also chose it because failing to detect it is probably a big mistake; in fact, it's probably a much bigger mistake than failing to detect people trying to guess passwords.
Why should this technically sophisticated, hard-to-implement attack be detected with more vigor and at a higher priority than even as well-known an attack as password guessing? Because, as it turns out, this situation is fairly typical of the Internet environment: even the most obscure and highly technical vulnerability may be simple to exploit by an unsophisticated attacker operating from a home PC through an inexpensive Internet Service Provider.
In this month's issue of Internet Holes, I describe and discuss some of the more common tools for automated attack against Internet sites, and discuss techniques you can use to determine whether your site is vulnerable to these sorts of attacks.
There are a great many attack tools widely available in today's environment; so many that, in this venue, we probably couldn't even list them all. Every once in a while, I go out onto the Internet in search of such things, and I am constantly surprised by how many new tools I find. Last week, for example, I found a file of tools for bypassing Novell NetWare security containing about 15 attack tools I was previously unaware of.
The code for these tools is often gleaned from widely read full-disclosure vulnerability mailing lists. For example, the cypherpunks and bugtraq lists often carry source code that exploits a particular vulnerability. These lists are used by the good guys to quickly hunt down and repair vulnerabilities, but the information also reaches the bad guys, who turn it into automated attack tools.
In each of these situations, there is a window of widespread vulnerability between the time the problem is first published and the time the repairs become fully integrated into the global environment. That window is wide open in the first few days after a new hole is published and typically closes over a period of more than a year.
How do I know this? For more than a year now, I have been providing free one-time tests for these sorts of vulnerabilities from my World Wide Web site (/). As new and easily implemented remote vulnerabilities appear, I add them to the remote test suite. When the systems administrators request a test (by making a selection from their Web browser) the tester automatically runs the tests, gathers data on which vulnerabilities are present, and sends a report of test results to the site that requested the test. In 1995, about 2,000 sites did self-tests.
To give you a sense of the types of tools available and their effectiveness, I'm going to go through the automated tests that we perform and give you some loose statistics on how effective these tools are at bypassing protection. Keep in mind that all of these tests are performed on a voluntary basis by systems administrators who are probably more aware of and interested in addressing protection concerns than the average Internet host administrator.
The first test we perform is a scan of all the UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) ports on the remote host.
Typically, TCP/IP systems allow port numbers in the range of 0 to 65,535 (a 16-bit port number). A port scan does a brute force test of all port numbers to determine which ports have services operating on them. A table lookup then matches each port number against a standard list of services associated with port numbers, yielding a list of the services offered over the Internet. Here is a typical partial listing.
Port 7 ("echo" service) opened.
Port 9 ("discard" service) opened.
Port 13 ("daytime" service) opened.
Port 19 ("chargen" service) opened.
Port 21 ("ftp" service) opened.
Port 23 ("telnet" service) opened.
Port 25 ("smtp" service) opened.
Port 37 ("time" service) opened.
Port 79 ("finger" service) opened.
Port 111 ("sunrpc" service) opened.
...
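A port scanner of this sort is only a few lines of code. The following sketch (in Python, as an illustration; the original tools were typically written in C) brute-forces a list of TCP ports with connect() attempts and labels each open port with its standard service name:

```python
# Minimal TCP connect() port scanner, in the spirit of the scanners
# described above.  Host and port list are up to the caller.
import socket

def scan_tcp(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
        finally:
            s.close()
    return open_ports

def report(host, ports):
    # Map each open port to its standard service name, as the listing above does.
    for port in scan_tcp(host, ports):
        try:
            name = socket.getservbyport(port, "tcp")
        except OSError:
            name = "unknown"
        print('Port %d ("%s" service) opened.' % (port, name))
```

A UDP scan works on the same brute-force principle but must infer "open" from the absence of an ICMP port-unreachable response, which makes it slower and less reliable.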
This indicates that, unless further precautions are taken within the operating environment, we could remotely log in to the computer (given a valid user ID and password), transfer files to and from it, and send mail to it. We could check for users on the system, getting a list of valid user IDs with which to start guessing passwords, and quite often other important information used in break-ins. Remote entry without a password may even be possible via remote procedure calls. We haven't broken in yet, but we have a much better idea of what to try. Some services are far easier to break into than others, so by examining this list and selecting the path of least resistance, attackers who know what they are doing dramatically reduce their chances of getting caught and increase their chances of breaking in.
There are at least five freely available remote port scanners on anonymous ftp sites throughout the Internet.
Many sites are configured to allow anonymous users from over the Internet to use the File Transfer Protocol (FTP) as a means to get and store files. For example, many archive sites use this protocol to grant remote access to freely available software or digests of mailing lists. Depending on the version of FTP and how it is configured, many attacks are available. A simple tool can be built in a few minutes to try anonymous login, type a series of commands, and store the results for later analysis. For example, our tester tries the pwd, mkdir, and rmdir commands. If these work, chances are the remote attacker can break into the site by planting a Trojan horse in the FTP area to, for example, allow remote entry without a password.
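The "few minutes" claim is no exaggeration. Here is a sketch of such a probe (the host name and the probe logic around pwd, mkdir, and rmdir follow the description above; the function accepts any object with those methods so the logic can be exercised without a live server):

```python
# Sketch of the anonymous-FTP probe described above: log in anonymously,
# then see whether pwd/mkdir/rmdir succeed.
from ftplib import FTP, error_perm

def probe_anonymous_ftp(ftp):
    """Given a connected FTP-like object, return which probes succeeded."""
    results = {"login": False, "pwd": False, "writable": False}
    try:
        ftp.login("anonymous", "guest@example.com")
        results["login"] = True
    except error_perm:
        return results
    try:
        ftp.pwd()
        results["pwd"] = True
    except error_perm:
        pass
    try:
        ftp.mkd("probe_test_dir")   # harmless marker directory
        ftp.rmd("probe_test_dir")   # clean up immediately
        results["writable"] = True  # a Trojan horse could be planted here
    except error_perm:
        pass
    return results

# Usage against a live server (only one you are authorized to test):
#   results = probe_anonymous_ftp(FTP("ftp.example.com"))
```

A writable anonymous FTP area is the condition that permits the Trojan-horse planting described above.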
There are at least twenty well-documented Frequently Asked Questions (FAQ) archives that give step-by-step instructions on how to break in this way. It took only a few minutes to write a command script to do this test, and the same is true for the well-known attacks based on the Trivial File Transfer Protocol (tftp).
Something like 1 in 200 sites in our tests was configured to hand an attacker the system password file through well-known tftp scripts.
Just as there are well known FTP holes, there are well known sendmail holes requiring little or no expertise to detect. Here is a listing from a site with these holes:
SMTP: 220 site.com Sendmail 4.1/SMI-4.1 ready at Tue, 7 Mar 95 22:07:52 EST
250
250
250
250
250
250
In this case, each of the potential holes is left open. If there were no such holes, you would probably get a message more like this:
500 Command unrecognized
This test is about as hard to program as the FTP holes listed above.
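As a sketch of what such a test does (command names and host are illustrative; the reply-code interpretation follows the listings above, where "250" means a hole is open and "500 Command unrecognized" means it is closed):

```python
# Sketch of a sendmail hole probe: send commands such as VRFY or EXPN
# and classify each reply code.
import socket

def classify_reply(line):
    """Classify one SMTP reply line from a probe command."""
    code = line.strip()[:3]
    if code == "250":
        return "open"          # the server honored the probe command
    if code in ("500", "502"):
        return "closed"        # command unrecognized or disabled
    return "unknown"

def probe_smtp(host, commands=("VRFY root", "EXPN root"), port=25, timeout=5):
    """Connect to an SMTP server and classify replies to probe commands."""
    results = {}
    s = socket.create_connection((host, port), timeout=timeout)
    f = s.makefile("rw", newline="\r\n")
    f.readline()                      # consume the 220 greeting banner
    for cmd in commands:
        f.write(cmd + "\r\n")
        f.flush()
        results[cmd] = classify_reply(f.readline())
    f.write("QUIT\r\n")
    f.flush()
    s.close()
    return results
```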
Fake electronic mail is easily created, easily stopped, highly valuable as a tool for abuse, and commonly overlooked. Writing a program to fake mail can be done by simply looking at the specifications of how electronic mail works, which takes a few minutes. There are widely available pre-made scripts for faking mail, and they have been around for at least ten years. We even use a fakemail script written in perl as part of our on-line strategic planning simulation system to allow us to alter incoming mail so that the return address routes through our server.
When our automated testing service script fakes mail, it tells the recipient that this is fake mail and to check the header information to determine whether their mailer allowed this forgery to take place without the ability to trace it back to our site. Since we don't get copies of the received mail, we can't directly tell you how many of the people who run tests allow fake mail, but from other reports it appears to be a widespread phenomenon.
Fake mail is often indistinguishable from non-fake mail, so if you want to see an example, just look at your last electronic mail, and there it is - or might be at least. The only effective countermeasure is for both parties to the electronic mail to use PGP or a similar authentication package in conjunction with electronic mail, and even this is only effective from a secure server.
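To show why "looking at the specifications" is all it takes, here is a sketch of the SMTP dialogue a fakemail script generates (the dialogue itself is the standard one; the addresses and host names are placeholders, and a real script would read the server's reply after each command):

```python
# Sketch of a fakemail script: a plain SMTP dialogue in which the
# envelope sender and From: header are whatever the sender claims.
import socket

def fakemail_dialogue(fake_from, to, subject, body):
    """Build the sequence of SMTP commands a fakemail script would send."""
    return [
        "HELO forger.example.com",
        "MAIL FROM:<%s>" % fake_from,  # server normally accepts any sender
        "RCPT TO:<%s>" % to,
        "DATA",
        "From: %s" % fake_from,        # header need not match anything real
        "Subject: %s" % subject,
        "",
        body,
        ".",                           # lone dot ends the message body
        "QUIT",
    ]

def send_fakemail(host, fake_from, to, subject, body, port=25):
    """Deliver the dialogue to a live SMTP server (authorized tests only);
    a production script would check each reply before proceeding."""
    s = socket.create_connection((host, port))
    f = s.makefile("rw", newline="\r\n")
    for line in fakemail_dialogue(fake_from, to, subject, body):
        f.write(line + "\r\n")
        f.flush()
    s.close()
```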
Example exploit code for each of the sendmail bugs found over the past several years has been posted to widely read mailing lists. In our tests, we simply copied these posted programs and added some comments to help the person doing the test understand the results.
A good example of a fairly complex attack that was easily implemented is the case where sendmail buffers could be overflowed by status messages sent to the identification daemon, resulting in attacker commands being executed on the machine under attack. This attack is far too complex for non-technical attackers to implement on their own; however, an example of the attack was made available over the Internet, and by simply copying the results of the example into a print routine, another routine was easily generated to accomplish the same task. In this case, it was harder to make a version that works only on systems under test than to make the attack work against all systems.
Some weeks after the first examples were posted to the Internet, source code for an attack was posted in another forum and the attack became accessible to anyone. When this attack was first announced and tests were developed, about 60 percent of all computers running the test were penetrated by the test. Within a week or two vendors supplied updated versions of sendmail that eliminated this vulnerability, and over the following months, the percentage of systems susceptible to the attack went down to well below ten percent.
Remote procedure call vulnerabilities are almost universally caused by improper configurations on hosts. These vulnerabilities are designed into systems as features to increase the convenience of remote access. If improperly set, they may allow remote access without passwords from anywhere on the Internet. They have been known for many years, they are easy to exploit, and exploit scripts for this sort of attack are so well known that they appear widely in FAQs and are on almost every auditor's checklist.
Our tests perform a typical attack of this kind: they attempt to execute remote commands as user "nobody" on the system under test, the result of which is to send a copy of the system password file to the test system. This test was put in as a bit of a lark, but we were surprised to find that it worked on a few percent of all computers tested.
Similarly, many systems still have guest accounts. Once into a system on a guest account, it is almost always possible to get additional access. One common technique to expand capabilities once logged in is to run one of the widely available audit programs such as TAMU. This program will often identify numerous vulnerabilities. Another technique is to run the Crack program on the system password file. For any system with a substantial number of users, the chances are very good that accounts with weak passwords will be found.
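The core of what Crack does is simple to sketch. Real Unix password files use crypt(3) with a two-character salt; in this illustration a generic hash function stands in for it, and the function and names are mine, not Crack's:

```python
# Sketch of dictionary-based password cracking: hash each candidate
# word and compare against the stored hash from the password file.
import hashlib

def crack_entry(stored_hash, candidates, hash_fn=None):
    """Return the first candidate password whose hash matches, else None."""
    if hash_fn is None:
        # Stand-in for crypt(3); any one-way hash illustrates the idea.
        hash_fn = lambda pw: hashlib.md5(pw.encode()).hexdigest()
    for pw in candidates:
        if hash_fn(pw) == stored_hash:
            return pw
    return None
```

With a candidate list built from account names, common words, and simple variations, a system with a substantial number of users almost always yields at least one match, which is exactly the point made above.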
There are a lot of remote login methods available via the most common Internet services. For example, remote users may be able to send messages to users currently logged into a system. This has been used to, for example, get users to create vulnerable configurations. Exploit scripts for these sorts of attacks are simple one-line commands. Remote users can get other information such as uptime, lists of active users, and lists of remote users and where they are logged in from. All of these are trivially requested and lead to various social engineering attacks.
The Internet Security Scanner (ISS) software package was made available in source form for free over the Internet in 1994. In this case, the author designed a very useful program to test for many of these vulnerabilities in a range of network addresses. ISS was made available before SATAN (which got a lot of publicity but was hardly a breakthrough) and thousands of copies of ISS were downloaded over the Internet and reposted elsewhere.
While ISS is designed to demonstrate vulnerabilities and not to actually attack systems, once the source code to ISS is in hand, it's a simple matter to extend the program to exploit the vulnerabilities it finds.
Yellow pages (now known as Network Information Services - NIS) provides the ability to maintain a single centralized database of user IDs and passwords so that users throughout a network can login to any workstation to do their work. This is, of course, inherently risky because of the inability to use physical security to differentiate users, the problem of moving authentication information over networks, and other similar issues.
Some time ago, somebody came up with a scheme for getting a copy of the NIS password file. In order to make NIS relatively secure, either passwords must be encrypted during transmission to the NIS server, the NIS server must pass encrypted passwords back to the workstation for authentication, or another method of authentication must be used. Each of these schemes involves substantial technical challenges, and the solution chosen was to transmit the Unix encrypted version of a password file entry over the network to the system being used by the user. The basic process is for the end-user's system to ask for the password associated with a particular user identification (UID) number. The NIS server then returns the line from the password file associated with that UID.
One attack against this technique is to simulate a series of NIS requests asking for the password entry of each UID from 0 to the maximum allowable UID. The result is a complete listing of the password file which can then be analyzed with Crack. This might be a difficult task for the average attacker if it weren't for a program called ypx. Ypx is run by simply typing ypx [hostname]. If the host named in the command is susceptible to this attack, the result is a listing of the NIS password entries in a format usable by Crack.
In our testing, we found more than one in ten systems were vulnerable to this attack. We tested about 20 of these password files by running Crack and found that about 80 percent of them quickly yielded a valid UID and password pair that could be exploited. Here is an example of a few lines of output from Ypx (only the password field has remained unchanged):
xyzabc:Pd7J7JO2pA1WA:998:80:Abdul Ababi:/home/Accounts/ababi:/bin/sh
xyzabd:PqXKNi97oBo.g:843:85:John Smith:/home/Accounts/smith:/bin/sh
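Entries in this standard passwd format are trivial to pick apart, which is all Crack needs to begin guessing. A sketch of the parsing step:

```python
# Sketch of parsing ypx/NIS password entries, such as the ones in the
# listing above, into their seven colon-separated fields.
def parse_passwd_line(line):
    """Split a passwd-format line into a dict of its seven fields."""
    user, pwhash, uid, gid, gecos, home, shell = line.strip().split(":")
    return {"user": user, "hash": pwhash, "uid": int(uid),
            "gid": int(gid), "gecos": gecos, "home": home, "shell": shell}
```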
Network file systems using NFS permit users on one computer to remotely mount a disk on another system over a network. This is a very convenient way to access remote files, but it also creates a substantial number of potential problems. The main challenge is that all of the information in the file system is trusted to, and transmitted over, the network. Some people even have a strong desire to run NFS over the Internet between sites (I once tried this between the U.S. and Australia). The risks here should be obvious.
Trying to mount remote NFS file systems would be quite a chore for the average attacker if it weren't for a small program called nfsbug that automates the process. In essence, nfsbug tries a variety of techniques for remotely mounting an NFS file system at the site under attack. If it succeeds, it gives read or read/write (unrestricted) access to the remote file system. Here is a sample listing:
Connected to NFS mount daemon at test.com using TCP/IP
Connected to NFS server at test.com using UDP/IP
MOUNTABLE FILE SYSTEM test.com:/img1 (no export restrictions)
UID .. BUG: test.com:/img1
MOUNTABLE FILE SYSTEM test.com:/img2 (no export restrictions)
UID .. BUG: test.com:/img2
MOUNTABLE FILE SYSTEM test.com:/usr (no export restrictions)
UID .. BUG: test.com:/usr
MOUNTABLE FILE SYSTEM test.com:/var (no export restrictions)
MOUNTABLE FILE SYSTEM test.com:/home (no export restrictions)
UID .. BUG: test.com:/home
MOUNTABLE FILE SYSTEM test.com:/img1 (via portmapper)
MOUNTABLE FILE SYSTEM test.com:/img2 (via portmapper)
UID .. BUG: test.com:/img2
MKNOD .. BUG: test.com:/img2
MOUNTABLE FILE SYSTEM test.com:/usr (via portmapper)
MOUNTABLE FILE SYSTEM test.com:/var (via portmapper)
UID .. BUG: test.com:/var
MKNOD .. BUG: test.com:/var
MOUNTABLE FILE SYSTEM test.com:/home (via portmapper)
UID .. BUG: test.com:/home
This is a pretty bad case. Anyone on the Internet can read or write files in the user areas, the system log areas, and the login directories of users, perhaps including the root user. About 6 percent of all systems tested allowed remote mounting of NFS file systems, and most of those allowed remote mounts to write into user file systems.
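Digesting nfsbug-style output is itself easy to automate. A sketch (the report format follows the listing above; the function name is mine):

```python
# Sketch of summarizing nfsbug-style output: pull out the file systems
# that are mountable with no export restrictions at all.
def unrestricted_mounts(report_text):
    """Return host:/path strings flagged '(no export restrictions)'."""
    found = []
    for line in report_text.splitlines():
        line = line.strip()
        if line.startswith("MOUNTABLE FILE SYSTEM") and \
           "(no export restrictions)" in line:
            target = line.split()[3]   # e.g. test.com:/img1
            if target not in found:
                found.append(target)
    return found
```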
In addition to these automated tests, when the SATAN tester came out, we added those tests to our testing service and added some hacker emulation tests of our own. The SATAN tests are basically the same as the other tests we have described with a few minor additions. Our hacker emulation tests do things like guessing common user IDs and passwords. In combination, these tests gain entry to about five percent of systems.
To a first approximation, this is what we found. About 60 percent of the systems we tested had vulnerabilities that could be exploited using commonly available Internet attack tools. Of the sites that tested more than once (we reset the list of sites that have already tested every few months) less than ten percent showed vulnerabilities on the second test. This would seem to indicate that testing is very beneficial to the process of removing vulnerabilities.
Our tests are carefully designed to do no harm. If we were willing to risk the integrity of the systems under test, we believe the percentage of systems compromised could be significantly increased. This is supported by tests done by the U.S. DoD that showed over 80 percent of systems tested could be taken over via remote attack from the Internet.
First and foremost, we must recognize that these vulnerabilities are well known and, in most cases, have been well known for a long time. The vast majority of these vulnerabilities can be removed by proper system configuration, which would seem to imply that systems administrators charged with managing these systems are not getting the job done.
To address this part of the challenge, we should ask why it is that these systems administrators are failing. In my experience, there are several reasons:
Automated attack tools appear to present a substantial threat to the Internet environment. If more than half of all Internet-based computers are vulnerable to attack by a few widely available automated tools, how can we help but be concerned about the existence of these tools?
Many may try to blame the weaknesses of systems on these tools and the people who write them, but for the most part, the people who write these tools do so as the only means at hand to test the vulnerability and the effectiveness of countermeasures. Without testing using these tools, we would be in far worse shape.
Ultimately, the solution to automated attack tools is proper defenses. There are many effective defenses available today, but as a community, the people using the Internet have simply failed to use them. If we use the defenses available to us, we can be safe from all but the newest and most sophisticated threats, but as long as we refuse to take appropriate protective action, we will be vulnerable.