Computing operates in an almost universally networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
I got a call earlier this evening from a cybercop on the East coast. He was asking on behalf of lots of folks who were recently attacked by Distributed Coordinated Attacks (DCAs) if there was anything they could do right now to defend themselves. My answer was not as hopeful as it might have been, but after I hung up, I thought about it, and I think there are some things we can do here and there to fare better than many folks have been doing to date.
With that in mind, and realizing that it's getting late on a Friday night, I figured that the only way to get the word out would be to write this article and tell a lot of folks about it. Now I realize that this is supposed to be a management article and all, and I also understand that, having written all of these management articles over the last few years, I may no longer be able to do technospeak. In fact, I was appalled recently at two events - one in which a woman of about my age told me that I represented one of those outdated old white males - and another in which a youth in his early twenties accused me of being a know-nothing old guy who couldn't possibly understand what he was telling me, since he had spoken to nearly 20 real hackers in his career. However, I am a brave soul by nature, and I figure I have a few bytes left in these weary brain cells of mine, so here we go.
This article is about defending against DCAs here and now. It's not about perfect defenses and it's not about what we might be able to do some day or in an ideal world. It's about the options you have here and now, while we're under attack, in the trenches.
Now I don't want you to get the wrong impression about me. I didn't think up these things I am about to tell you with the luxury of hindsight or in my pensive moments. Every one of the techniques I will discuss was developed in haste while under or about to be under attack. Yes - I generally know ahead of time that I will be attacked in a particular way.
After having posted this article (Feb 4, 2000), I was indeed attacked. The first one came only about 20 minutes after the article was posted, but it was a mere illicit entry attempt that hit one of my deceptions. Over the weekend, the @home network which supplies one of my connections to the Internet was down in this area. It came back up and down and up and down for almost 8 hours and finally returned Sunday morning. They didn't say why, but then they tend not to. By Feb. 9, I was hit with some of the large fragmented ICMP packet attacks (probably just a holdover from the Papa virus strains) from an account at Earthlink - it was cut off within a few minutes of sending the notice to them. Nothing of import. Then, on Wednesday evening (2000/2/9), we started getting encapsulated protocol attacks, using another protocol over IP. No effect again. Feb 10, we get the occasional attempts at malformed UDP packets, no effect. More will no doubt follow. For examples of previous attacks, you might want to look at this article (recent) and this article (1996). - FC
For example, I knew that I was likely to be attacked in a big way earlier in 1999 when I 'offended' some of the malicious virus-writing community by telling everybody I could about their virus and FTP site for stealing RSA keys. It was a few months before they launched the papa virus aimed at denying service to my site by packet flooding. It failed because I knew something like that was coming and prepared for it ahead of time. It did bring down the @home network for a while, but my important sites were both reachable and reaching out to the rest of the world the whole time.
In the same way as I knew that then, I know now that within the next few days or weeks or months, somebody will be hitting my site with yet another DCA - perhaps one of the newer high-tech ones. I know that despite their best efforts, law enforcement will be unable to do much about it, and I know that bothering to trace down the source will be fruitless because the level of damage will not exceed the current threshold for law enforcement caring about it. But I am not worried, and frankly, neither should you be. As the song once went... "What's the use of worrying, it never was worthwhile, so... pack up your troubles in your old kit bag and smile, smile, smile." (I'm not really that old, you know...)
These strategies are long-term things that you can do to mitigate the risks from large-scale DCAs. They are directed primarily at preventing attacks from being launched and being prepared for attacks if they come.
1) No matter what, don't bait the bad guys like this article is likely to do. Face it, if they come after my small business, there is only so much real harm they can do with cyber attacks. Even if they put me completely out of business, the economic boom would not stop. Your business, however, may be more important to you than mine is to me. Trust me. Don't ask for trouble.
2) Stress test your systems. My systems are all designed and tested with the specific requirement that they can handle more traffic than can ever appear on the incoming Internet connection. If 100% of the total available bandwidth to my firewall was continuously run at it - to the point where the wire could not handle any more bits, the total impact on my operations would be nearly zero. That would be true for at least a few weeks - until my audit logs exceeded available disk space - and then I would have to spend about two minutes unplugging the current audit disk and putting in the new audit disk - both of which operate in a removable disk bay.
3) Have proper redundancy in place for EVERYTHING you depend on for operation. This means, for example, that you should have DNS servers planted across the Internet so that when one gets blitzed you can switch to another. It means that from different request locations, different IP addresses should be provided for your domain name resolution. It means that you should have multiple ISPs hooked up to your site, some of which are not normally used for traffic. This used to be called 'held in reserve' but in the new era of efficiency over effectiveness, we seem to have forgotten about it. It means spare servers, spare locations, and so forth. Then, since you have kept these details confidential, when the DCA comes, you will only be partially affected (only the locations the attacker noticed will be under attack) and you will be able to rapidly shift load to redundant assets for the legitimate traffic - while not shifting the load for the attack traffic. More on this will come later.
If you are small, like my company, you can't afford many of these things, but on the other hand, my company can't be very badly hurt by losing the inbound Internet connection for... ever? E-commerce isn't that important to a consulting business. It's more a matter of pride. If you are a big bad e-commerce site, then you naturally planned for this sort of thing - didn't you? If not, you may not survive in the real-world information age. You had better toss that other consultant and call in somebody who can get the job done right. This will cost you several million dollars if you are very big or at least a few hundred thousand if you are pretty big. In my case, it must cost me at least $200 per month to have my 5+ redundant sites - but then I share some elements of my redundancy with other sites so we all benefit and the overall cost is dramatically reduced.
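As a sketch of the DNS piece of this redundancy (the zone name, addresses, and TTL below are invented placeholders, not a recipe), a zone can publish several A records so resolvers rotate among the servers you are willing to advertise, with a short TTL so a blitzed address can be pulled quickly:

```
; hypothetical zone fragment -- the addresses are documentation placeholders
$TTL 300                     ; short TTL so a change propagates fast
www    IN  A  192.0.2.10     ; primary site
www    IN  A  198.51.100.10  ; second ISP's address
; 203.0.113.10 -- the reserve site -- stays out of the zone (and out of
; the attacker's notice) until the day you need to switch to it
```

Round-robin answers spread the legitimate load; editing the zone and waiting out the TTL shifts it, and the reserve address the attacker never saw stays clean.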
4) Have good contacts with your ISPs, other ISPs, your local, regional, national, and global cybercops, and a few folks that can get the job done if it's really important regardless of circumstances. Are you a big company? If so, when you call your ISP for help in a critical circumstance and you get the same response I get when I call, you had better get another ISP! Sometimes we need to get human beings fast. Thanks to automated attendants, this only comes from knowing them and having their real phone numbers.
5) Plan to call the cybercops early on. You can trust me on this one - DCAs cannot be handled alone. You will need multiple search warrants in multiple jurisdictions, and these will only lead to intermediary sites. From there you will have to trace back still further. And when you get to the real source - which you will if you do this right, you will want to have a cybercop there to arrest them.
6) In some cases, you may feel you have to strike back. Resist the temptation. Striking back at an intermediate site during a DCA may cause you liability, may well be illegal, and may harm a partially innocent third party (not completely innocent - after all - their systems were too weak to prevent being exploited to attack yours). But if you do feel the need to strike back, you had better be fully prepared. In theory, let us suppose that I have about 10 gigabits per second available to me in aggregate bandwidth, more if I really need it, and it is diversified around the globe. So I can pretty much smash significant parts of the Internet to save other parts if I so desire. Of course, so can the bad guys if they put up the necessary resources. Escalation is rarely a good idea in this environment, and it should only be engaged in if you have enough power to win. Another variation on this theme is the use of a UDP virus or similar attack against the site attacking you. Since they were so poorly configured that they got attacked by the person exploiting them, they are probably not configured to stop a UDP virus, and in a few packets, you can shut them down.
Well, with this much strategic preparation, you can likely weather any storm out there today unless it is very well resourced. That is - if you know how to use these resources.
The basic tactical plan for the defender is to fend off the attack rapidly by moving to redundant infrastructure and leaving the attack on the infrastructure it started with. If the attack follows you through the infrastructure changes, and assuming the details of the redundancy have been kept properly confidential, this requires active control on the part of the attacker. In order to exert that control, (1) the attacker has to have a lot of resources - which each in turn get destroyed as they are used because the defender is able to filter them out - and (2) the attacker has to issue commands to redirect the attack over time. These commands are traceable to their source - if law enforcement is involved. The goal is to be able to move at a faster pace than the attacker, exhausting their resources, and tracing them further and further toward their source. You stay operating, they chase you with little effect, and they get tracked down and arrested. In the military parlance, this is called maintaining a faster operational tempo than the enemy.
Now that I have given up the plan, all you have to do is execute it. With proper planning, technical skills, and training, any reasonably good network security and IT department should be able to carry this off in real time and to great effect.
I'm sorry. I thought my work was done. So you don't have all of this set up and worked out yet, and you are under attack right now! OK. Here are some cheap tricks that might get you out of it this time, but you can only use these if you promise to do the right thing for the future.
1) Try filtering out the sessions. If the attacker is using a known protocol to exhaust resources, you may be able to find the pattern of their abuse and filter it out. For example, if they are using a university computer as one of their intermediaries, you can refuse connects from that source IP address (and use well-known syn-flood defenses that have been in place for years now). If they are using a particular sort of UDP packet, filter out that UDP packet type. Bandwidth may still be consumed, but at least the computer under attack doesn't exhaust its resources handling these requests.
Example TCP wrappers filter (in /etc/hosts.allow):

    all: (bad IP address list goes here) : deny
    all: all: deny

Example with ipfwadm:

    DR="deny"
    ANY="0.0.0.0/0"
    BADFOLKS="(bad IP address list goes here)"
    echo; echo -n "...bad IPs..."
    for ipadd in $BADFOLKS
    do
        echo -n "$ipadd "
        /sbin/ipfwadm -I -a $DR -S $ipadd -D $ANY -W eth0 -o
        /sbin/ipfwadm -O -a $DR -S $ipadd -D $ANY -W eth0 -o
        /sbin/ipfwadm -I -a $DR -S $ANY -D $ipadd -W eth0 -o
        /sbin/ipfwadm -O -a $DR -S $ANY -D $ipadd -W eth0 -o
        /sbin/ipfwadm -I -P icmp -a $DR -S $ipadd -D $ANY -W eth0 -o
        /sbin/ipfwadm -O -P icmp -a $DR -S $ipadd -D $ANY -W eth0 -o
    done
2) Try different responses to malicious requests. In many cases, ignoring packets will cause the attacking host to stop sending them. In other cases, a negative response refusing the service will cut off the packet stream. In still other cases, various ICMP responses, such as redirect, host not available, source quench, and so forth may reduce or eliminate traffic. For example, in the script above, try setting DR to "reject" if "deny" is not effective. Of course this is not effective if the attacker is forging IP addresses and using only UDP or ICMP or similar packets to flood your network.
3) Get your ISP to push the filter back toward the source of the packet stream. This is a form of shunning. If all the ISPs start to shun all of the packet stream sources, the stream will rapidly slow to a trickle. Similarly, if service is refused by ISPs to places that are emitting these streams, the intermediaries will act quickly to mitigate the problem and prevent its recurrence.
4) Contact the intermediaries and have them fix their systems and improve their security. No legitimate site wants to be exploited to bring your site down, and if they can get help in fixing the problem, they will take it. They will also help track down the source of attack and likely even help interface with the police.
5) Shut down non-essential services and redirect essential services. For example, at your firewall, shunt all web traffic to a new web server. This can be done by changing the DNS entry for the web server or by placing a 'redirect' page that transparently and automatically sends users to another IP address. This is relatively clean for the user and can be done to different IP addresses in a minute or two, assuming you have the IP addresses available. This is particularly useful since things like SYN floods will not get redirected while only a very few packets have to be exchanged with the system under attack and the legitimate user in order to transfer them to an open channel.
Once the web traffic is redirected, and assuming the web server itself is under your control, you still need to route the legitimate traffic back to the real web server. Do this by changing its real IP address (but don't tell anybody about it) and redirecting the current (and ever-changing) IP address specified in the web page redirect through multiple address translation (patent pending) so as to redirect these alternative IP addresses back to the real address of the web server.
I have also used 'netcat' as a tool to do this redirection process on computers assigned up to 4,000 different IP addresses on one interface. I have done this at a rate of more than one address change (with associated translations) per second. For most web traffic and similar e-commerce functions this will be very effective and cost very little. As the attacker adapts, so must you. You will need some resources to do this, but in a pinch it can be set up in as little as 15 minutes for a relatively complex site. In TCP Wrappers:
www: all: twist /usr/local/bin/nc -w 3 Real-ServerIP-Address 80
In this example, we redirect web traffic to different IP addresses every 3 seconds. This is only one element of the total picture, and not a plug-and-play solution (you need to put in your own IP addresses and put the lines ending with \ on one line all together - I did it this way so you could read it more easily), but it should give you an idea of just how easy it is to do this sort of thing. Note that this code was pulled out of the all.net 80-line secure get-only web server published in 1995 and widely ignored by industry experts.
    while true
    do
      for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
      do
        echo "www: all: twist /usr/bin/echo \"HTTP/1.0 302 Found\n \
    Server: ManAl/1.0\n \
    MIME-version: 1.0\n \
    Location: http://1.2.3.$i/index.html\"" > /etc/hosts.allow
        sleep 3
      done
    done
6) Use other deceptions. Most of this rapid address translation thing and flexible redundant infrastructure are examples of deceptions. One of my favorite deceptions is to try to correlate the attacks with the location of the user getting feedback from those attacks. After all, if they are open loop, so there is no feedback, the first address change will be effective indefinitely. If they keep chasing you, it means they are testing to see if you have changed something. If you can identify how they are getting their feedback, you can (1) provide false feedback so they don't properly follow your changes, (2) provide them with feedback designed to help track them down, or (3) make it look to them like the system is crashed even though it is working for all the other users.
7) If you have large key accounts, contact them and have them work with you to access your infrastructure through a different route. For example, if a big bank does large volume business with an insurance company, and the bank is under attack, the DNS at the insurance company can be rigged to use a special server in a different part of the bank's infrastructure for its transactions. As a rule, this sort of thing should be done for major customers anyway. For a few hundred dollars up front and $40 a month, I can provide a DSL line for a special customer that is independent of the rest of my Internet connectivity. I can provide special services through this route, restrict access to only come from that customer, and even put up an encrypted link between us for very little cost.
8) Use the opportunity to get management to buy into the notion that effective infrastructures are not always minimal. Redundancy is needed for the real environment we live in.
Since I first wrote this article, several fine discussions of the topics related to IP address forgery have taken place, and I thought it might be worthwhile to include the details here. Please feel free to skip over the rest of this section if you don't want to hear the technical stuff.
Since the SYN floods of a few years ago, the mechanisms in OSs have changed to mitigate the effects of SYN flood attacks. There are several versions of the change; however, the basic limitation is the size of a table in the kernel that keeps track of pending TCP connections. While increasing the table size is helpful (typical size was 128 for Unix boxes, so some have gone to 2048 or so) it also takes up more and more kernel space - which is not swappable and thus reduces available memory. The other simplistic move is to reduce the time associated with waiting for a connection to complete from 120 seconds (which it used to be) to - say - 15 seconds. This has a fairly small effect on reliability but, if used with 2048 table size, it means that you can handle up to about 150 SYNs per second. You can adjust these a bit more, but then you are done. That is why three different strategies were adopted:
1) Instead of waiting for the channel to complete, the destination computer sends - in the sequence number field - a coded number that gets checked against the IP address and port number of the normal reply. If the reply comes back so that - for example - the checksum of the IP address and port number = the sequence number, then it is assumed to be a valid completion of a 3-way handshake and the session is on. In this way no storage is needed and the SYN flood is completely defeated.
2) A FIFO queue is used with older SYNs being dropped. The net effect is that faster computers can still get past the SYN flood. An adaptation of this for high speed SYN floods is to track this outside of kernel space with allocated memory so that the table can grow larger to maintain a constant time window (say 15 seconds). Since each entry takes up something like 10 bytes, a million SYNs per second for 15 seconds takes up about 150 Mbytes. The net effect is that with a pretty decent PC, this is sustainable indefinitely and allows all legitimate connections completed within 15 seconds to be handled with some reduction in performance. A million SYNs per second at 40 bytes per SYN completely exhausts a 400Mbit/second link.
3) A random replacement strategy is used on the table of pending SYN requests so that whenever the table is full, a random pending request is dropped and replaced with the new one. This has problems similar to number 2 at high volumes. The random drop is pretty good at mid-level volumes, but if you are getting hundreds of thousands of SYNs per second and the queue size is - say - 10,000 the odds get pretty high that most of your legitimate connections will get overrun by the SYN flood. Most connections take on the order of a second to complete - with high speed infrastructure on all sides - under normal loads. Under high loads, average connections can take several seconds to complete.
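Strategy number 1 is what has come to be known as a SYN cookie. Here is a toy sketch of the check (using md5sum as a stand-in for whatever coded function a real kernel would use; the secret, address, and port are made up for illustration):

```shell
#!/bin/sh
# The server stores nothing: the sequence number it sends in the SYN-ACK
# is a coded function of the client's IP address and port plus a secret.
# When an ACK arrives, recompute the code and compare.
SECRET="per-boot-secret"      # assumption: any value the attacker can't know
cookie() { printf '%s:%s:%s' "$1" "$2" "$SECRET" | md5sum | cut -c1-8; }

SENT=$(cookie 192.0.2.7 40001)   # coded sequence number sent in our SYN-ACK
# ...later, an ACK arrives claiming to complete that handshake:
if [ "$(cookie 192.0.2.7 40001)" = "$SENT" ]
then echo "valid 3-way handshake"
else echo "forged or stale ACK"
fi
```

An ACK from a forged or different source produces a different code, so it never completes the handshake and never consumes a table entry.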
So as we see, with the available technology, we can always defeat SYN floods, but it does require that we properly configure our systems and do the calculations necessary to assure that we can meet the peak demand. We'll do the calculation for the random drop method as an example. The trick is to know the incoming bandwidth and calculate the number of SYNs per second
    SYNs/second = bits/second / (40 bytes * 10 bits/byte) = BW/400

where BW is the bandwidth of the wire in bits/second; the 10 bits/byte figure includes some overhead for Ethernet headers, etc.
Then take the average number of seconds delay under high load (say 5) and provide a queue that is large enough. For a 50% chance of not getting a legitimate SYN hit by a flooded SYN:
Queue-size= ((BW/400)*5)*2 = BW/40
For a 10Mbit/second wire, you need a table with 250,000 SYN entries for a legitimate connection to have a 50% chance of surviving at super high loads. At 10 bytes per table entry, this comes to 2.5Mbytes of kernel memory for the SYN table.
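The numbers in this section can be checked with a few lines of shell arithmetic (a toy back-of-the-envelope check, nothing more):

```shell
#!/bin/sh
# Check the SYN-flood figures used in this section.

# A 2048-entry kernel table with a 15-second timeout sustains:
echo "$((2048 / 15)) SYNs/second"                # about 136 - the "about 150"

# Out-of-kernel tracking: 10 bytes/entry at a million SYNs/second
# over a 15-second window:
echo "$((10 * 1000000 * 15 / 1000000)) Mbytes"   # 150 Mbytes

# SYNs/second = BW / (40 bytes * 10 bits/byte) = BW/400, and
# Queue-size = ((BW/400)*5)*2 = BW/40:
BW=10000000                                      # a 10 Mbit/second wire
echo "$((BW / 40)) SYN table entries"            # 250,000
echo "$((BW / 40 * 10)) bytes of kernel memory"  # 2,500,000 = 2.5 Mbytes
```

Plug in your own wire speed for BW and size the table before the flood shows up, not after.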
As an aside, there is another defense method that was discussed but never widely implemented in the attacks in the 1996 time frame. The idea is to use the properties of the pseudo random number generator (PRNG) used by the attacker to defeat the attack. In this technique, you predict future source addresses from past source addresses and rapidly reconfigure your defense to reject the coming datagrams before they show up. As they show up, you update the datagrams you reject so that a normal user coming in on that IP address is not hindered. If you do this well, you can stay within a very tight window and only block a few IP addresses from each remote attacker for a period of a few seconds. This works very well for published off-the-shelf attacks as well as attacks written by people using easily predicted PRNGs.
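To make the prediction idea concrete, here is a toy sketch using the classic ANSI C rand() constants (a real tool's generator and seed would have to be identified from captured traffic first; the observed value here is invented):

```shell
#!/bin/sh
# If forged source addresses come from a linear congruential generator,
#   next = (A * current + C) mod M,
# then each observed value reveals the one that will follow it.
A=1103515245; C=12345; M=2147483648   # classic ANSI C rand() constants
observed=123456789                    # value recovered from a forged datagram
predicted=$(( (A * observed + C) % M ))
echo "next forged value: $predicted"
# Filter the address derived from $predicted before its datagram arrives,
# then drop that filter a few seconds later so real users are not hindered.
```

The same trick works for any attack tool whose generator you can identify; cryptographically strong generators defeat it, but the off-the-shelf scripts rarely use one.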
OK. I've given away most of my best secrets for defeating DCAs, and wasted another perfectly good Friday night working instead of relaxing like I am supposed to do on Friday nights. At the same time, I have invited attack by every DCA writer who will be upset at my providing an out against their latest scripted attack and because they can't rip off a big company for $100K. Too bad those big companies aren't paying me for the service.
Above all, I am a realist. You can count on this or similar advice soon being posted on the CERT or some such place - likely without giving this article credit for the idea and without citation. Then the big CPA firms and major manufacturers will start to sell it, first in a consulting role, then as products and related services. They will all forget me just like they did a few years back when I wrote an article on how to counter IP address forgery. As an aside, if all the ISPs would follow the advice in that article, the current spate of attacks would be easily tracked and countered. Next year, the US will give out several multi-million dollar government grants in this area, and none of them will even mention me, much less seek my advice or counsel on the matter.
Thank goodness for Network Security Management Magazine. At least they will pay me a few hundred dollars for this article. And you never know, next year I may even make the SANS security roadmap. Large corporate donations of gratitude (we take visa/MC) are accepted...
About The Author:
Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, Managing Director of Fred Cohen and Associates in Livermore California, an executive consulting and education group specializing in information protection, and a practitioner in residence in the University of New Haven's Forensic Sciences Program, where he educates cybercops on digital forensics. He can be reached by sending email to fred at all.net or visiting /