The Internet is now the world's most popular network and it is full of potential vulnerabilities. In this series of articles, we explore the vulnerabilities of the Internet and what you can do to mitigate them.
If you've read The Cuckoo's Egg - or if you've even heard of it - you know how Cliff Stoll helped track down attacks against computers in the United States by investigating a seemingly trivial accounting error in his system. The good news is that detecting the sorts of attacks described in Cliff's book is far easier now than it was then. The bad news is that once you've detected an attack, tracking down the source has become even harder.
I've mentioned our Internet site before, and I've probably told you that we detect more than one suspicious activity per day on average. What I haven't told you about is what happens when we try to track down the sources of these activities. That's what this month's article is about.
I have chosen three incidents from 1995 that give a flavor of what happens when we try to track down the source of an attack and do something to prevent the attacker from trying again.
A U.S. Department of Defense (DoD) site tried using our free one-time automated testing service to detect vulnerabilities in their configuration. Upon seeing the incoming test, someone at the site apparently misinterpreted it as hostile and returned the favor by trying to test our facility without the courtesy of asking us first. The inbound activity was detected and generated an automatic response in the form of email to the site administrator.
After 3 days of email non-delivery, it would normally fall upon our fearless systems administrator (yours truly) to do a manual follow-up. This particular attack was of special concern because of the apparent determination of the attacker to test our defenses. In the face of our refusal to accept inbound telnets (nobody from their site has a legitimate reason to telnet into our network, so we don't allow them to even try), the person on the other end apparently decided that it might be a problem with that particular computer and proceeded to try to log in from several other computers at the site. Real-time detection on our system generates on-screen audit trails as well as email to the systems administrator. Upon seeing the series of attempted entries on the screen and reading the various emailed notices of inbound activity, I decided to forgo the normal 3-day waiting period and take a more active approach to defense.
We started by calling the Computer Emergency Response Team (CERT) at Carnegie Mellon University. After some rather hostile words on my part, they decided the case was worthy of response, so they gave me the telephone number of the DoD response team and sent me on my way. I then called the DoD emergency response team and identified what was going on. They asked for audit trails, so I sent them a page or two worth of audit trails via email (it arrived safely within a few minutes). We also discussed the fact that this was probably a response to the automated test they had run, and pending decisions by higher authority, decided it would be prudent to prevent a recurrence by restricting our testing service to disallow tests from their particular branch of the military. This was done during the telephone conversation (it takes less than a minute to restrict tests - we simply add the domain to our list of sites that don't want to allow tests).
The people at the military site assured me that they would try to track down the source of the attack. Indeed, within a very short period of time, the attacks did stop. As to tracking down the source of the attacks, well, ...
Since the military site didn't run the ident daemon, we couldn't track each attempt down to a particular user ID, but you would expect that normal system logs would indicate which user was logged into which of those systems at the right times to account for all of the attempted entries. Apparently that was a misimpression on my part. The response team later reported back to me that they had narrowed the search down to four people, none of whom would admit any involvement in the process. Rather than proceed further, it was decided that the no-harm no-foul principle would be applied, and I was assured that all of the individuals were quite certain that they would never do such a thing in the future, even though they were all quite certain they had not done such a thing in the past.
It's fairly common for attackers to break into accounts at one site and use those accounts to break into other sites. They feel that the indirection provides limited protection against traceback - and they are right. It's not that they can't be easily traced - it's that the more systems administrators become involved in the process, the more likely it becomes that one of them will be unable or unwilling to take the next step. Of course the indirection also increases the time for a traceback in many cases.
In this particular case, the attempted entry was detected when a user at a University tried to telnet into our site. Our automatic response to such attempts is to send a mail message to the systems administrator at the source of the attempt. Here's what one of our automated responses looks like:
A user at your site has just attempted to telnet into our site without proper authorization. We consider this inappropriate behavior and would like an explanation of this action as soon as possible.
This message is generated automatically at the time of the attempted entry and is sent to our administrators and the postmaster at the machine making the attempt. We have included any information provided by your ident daemon (if in use) on the subject line of this message. We also do a reverse finger for future reference.
Fred Cohen - fred at all.net - tel:US+330-686-0090
The systems administrator at the university responded in less than two hours, stating that they appreciated the heads-up and that they would investigate and let me know how it came out. Within two days, the story came back that the account attempting the entry was a stolen account and that it was being used from another university. The stolen account had been disabled and the student who owned it was given a new account. The next university down the line had been contacted and they were going to trace things from there. The administrator thanked me, and I haven't heard anything about the incident since then.
It might be nice to do more comprehensive follow-ups on these incidents, but remember, we get one a day on average, and if we followed each one up to find out what finally happened, we would never get any other work done. We would probably feel differently if the attack had succeeded, but for a failed attack, detecting it, getting the source shut down, and tracking it two steps down the line is about as far as we can normally go. By the way, if the administrator at the second site reads this, please send me a note telling me what happened next.
If you're in the military and you try attacks, you'll probably get shut down quickly. If you try to exploit a university system and get detected, they will probably shut down your stolen account and trace back further to try to catch you. But if you want to attack with relative impunity, the best place today seems to be small Internet Service Providers (ISPs).
Most small ISPs lease a 56K, 64K, or T1 (or equivalent) telephone line to connect them to the nearest Internet backbone site. At the ISP's end, you will likely find a few Sun file servers, a router, and a modem bank. The modems are used by their customers to dial in for service, and the ISP provides domain name system (DNS) services and an IP class C address range for their use.
Most small ISPs have their hands full keeping their systems operating. When it comes to protecting outsiders against attacks by their customers, they are not eager participants. Most systems administrators don't know how to track down attacks, and when it comes to reading audit trails, they often don't even know where the trails are located or whether they are operational. But perhaps the most important reason small ISPs don't like playing detective against their customers is that their customers are paying them for Internet service, while the person being attacked by one of their customers is not paying them. If they find an attacker among their customers, they reduce their income and send the customer to the competition.
Perhaps this explains the response I recently got from an ISP in the northeastern United States. Our system detected an attempt to scan our Transmission Control Protocol (TCP) ports looking for services. This is a typical start of an attack wherein the attacker identifies available services by trying every possible service (as opposed to asking the service called portmapper which is supposed to list available network services). After available services have been identified, services with well known vulnerabilities are attacked one-at-a-time in an attempt to gain entry. Our detection system automatically responded to some of these attempts by sending email to the systems administrator at the ISP's site.
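The scan described above has a simple signature: one source address touching many distinct TCP ports in a short time, which is exactly the opposite of a legitimate client that asks the portmapper once. A detector along the lines of ours can be sketched as follows; the window and threshold values, and all the names, are illustrative assumptions:

```python
from collections import defaultdict

# Sketch of a port-scan heuristic of the kind our detector uses: a
# source that probes many distinct TCP ports within a short window
# is flagged. The window and threshold here are illustrative only.

class ScanDetector:
    def __init__(self, window=60, threshold=10):
        self.window = window          # seconds of history to keep
        self.threshold = threshold    # distinct ports before we call it a scan
        self.events = defaultdict(list)   # src -> [(time, port), ...]

    def connection_attempt(self, src, port, now):
        """Record an attempt; return True if src now looks like a scanner."""
        hits = self.events[src]
        hits.append((now, port))
        # Discard attempts older than the window.
        hits = [(t, p) for (t, p) in hits if now - t <= self.window]
        self.events[src] = hits
        return len({p for (_, p) in hits}) >= self.threshold
```

A real detector would also fold in which ports were probed (services with well-known vulnerabilities being the interesting ones), but counting distinct ports is enough to separate a scan from a user retrying one connection.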
The ISP's response was less than supportive. In essence, the systems administrator responded that these few attempts to enter our site didn't constitute a threat worth following up on. I responded by stating that, in my opinion, it was a violation of federal law to attempt to enter another Internet site without authorization to do so. The administrator responded that it was just too much effort to try to track down such an attack. After all, they have scores of users at that time of day. I was basically told that this was the end of it as far as he was concerned.
My next step was to get details on the site from the InterNIC (the Network Information Center for the Internet - rs.internic.net). By using the whois command, the InterNIC provides contact information for Internet sites. Once I found the site details, I tried to contact higher authority at the ISP's site by telephone. In sizable ISPs, there is usually someone (perhaps the owner) who is in authority over the systems administrator and can ask the administrator to follow-up on this. Unfortunately, this is a small ISP, and as a result, there is no higher authority. The systems administrator owns and operates the whole show, so there is no appeals process.
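The whois lookup itself is about as simple as Internet protocols get: open a TCP connection to port 43 on the registry, send the query followed by a carriage return and line feed, and read until the server closes the connection (the protocol was later written down as RFC 3912). A sketch, with rs.internic.net as the registry of the day and the helper name my own:

```python
import socket

# Sketch of a whois client. The protocol is just "query CRLF" over
# TCP port 43; rs.internic.net was the registry at the time this
# incident took place. Function names are illustrative assumptions.

def whois_query_bytes(query):
    """A whois request on the wire is the query followed by CRLF."""
    return (query + "\r\n").encode("ascii")

def whois(query, server="rs.internic.net", timeout=30):
    """Send a whois query and return the server's full reply as text."""
    with socket.create_connection((server, 43), timeout=timeout) as s:
        s.sendall(whois_query_bytes(query))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:           # server closes when the reply is done
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1", "replace")
```

In practice the `whois` command shipped with most Unix systems does exactly this, which is why getting contact information for a site takes seconds; getting the site to act on it, as the rest of this story shows, is the hard part.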
My next step, both more distasteful and less likely to succeed, was to contact the CERT at C-MU. I figured that the ISP might be embarrassed or bullied into investigating further if contacted by the CERT. The CERT explained that there is nothing they can do to force an ISP to do anything, but that if I wanted them to, they would contact the ISP and ask for their side of the story. I said that it would be worthwhile to do this and that I would like the CERT to keep a file on the incident so that any further incidents could be correlated as to source.
The CERT said they would proceed along these lines if and only if I filled out their incident report form. As I filled in the form, I noticed several questions about my internal configuration that I felt were inappropriate. I certainly didn't want to identify details of my security setup to the CERT for storage in a publicly funded database. I answered these questions with n/a or something similar and sent the form off to the CERT.
The next day, I got a two-part response. The first part was, in essence, that they had contacted the ISP and that the ISP would do nothing further to investigate (no big surprise). The second part advised me to run some hokey computer security software that the CERT thinks will help to secure my system. In my opinion, this second response was completely inappropriate (and I told them so).
At this point, my legal options were essentially exhausted. Since this was an interstate incident, local and state police could be of no assistance. They probably wouldn't understand what I was talking about, but even if they did, they can't help with interstate crimes. The FBI can only investigate if there is damage in excess of $5,000. Since I defended against the attack successfully, there isn't enough damage to bother tracking down the attacker, and they won't even start a file on such a minor incident. There's no higher authority to go to, it's not worthwhile to file a civil lawsuit over such a thing, and anything I might do to find the information out on my own would probably involve breaking into someone else's system, which I will not do.
Unfortunately, very little can be done by individuals about a global situation. But fortunately, every systems administrator can do their part. Here are a few suggestions that will help to improve the situation for all of us:
Perhaps the biggest Internet hole of all is the fact that when an incident is detected, there's no uniform or enforceable way to track down attackers and punish them.
Our system detects and prevents over 350 attacks per year, tracks each of them down to a specific computer (and in some cases even a specific user), and reports many of them to the administrator of the affected site. In 1995, our efforts resulted in action against only about five individuals, and in no case was the action of any noticeable significance. One person was told to stop it, one person was told to be more polite, one person was given a copy of the site policy and told not to do it again, ... you get the idea.
There's an old saying about computer crime that goes something like this:
If this is true, the risk is 1 in 1.5 million of being punished. We've done our part by cutting this down to 1 in 15,000. Now if we could only get everyone else to do their part, we could win this thing.