Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
I have heard many decision makers and executives say things that went unchallenged even though they were dead wrong. The reasons they went unchallenged varied with the situation, but I think there are three basic rationales: (1) the listener perceived themselves as less powerful and did not wish to offend, (2) the listener did not know the facts and simply bought into the misimpression of the more senior person without questioning it, or (3) the listener was afraid of offending the executive because they wanted something from the executive and figured you go along to get along.
Well, I don't perceive myself as less powerful than anyone, I know some of the facts, and the chances of my getting any money from anyone like that are so poor that I have nothing to lose. So I am going on a brief crusade this month fighting some of the stupid things I have heard high-level people say about security issues, particularly those who were believed by others and whose expressions found their way into widespread belief. Of course this could easily become a full-time occupation for a small army, but let's not rub it in.
Of course to really do this well, I need a list of stupid things people have said so I can trash them. Rather than come up with my own list, I decided to ask others to list the ones they have heard, and I just sprinkled in one or two of my favorites along the way.
Here's a partial list of the security myths I was sent - with a few of my favorites thrown in:
That sounds like an 'academic' view
That's a Socialistic View
We have a firewall/intrusion detection system/virus scanner that will take care of it.
Nobody would be interested in hacking us.
"IT is responsible for security" - statement by the CEO.
"I use my children's names as the passwords for all my accounts as I know the IT people make it safe".
We don't have to worry about viruses because [fill in the blank] - typically "we only use Macs", "we only use Linux", "we don't use shareware".
"We don't want to establish security policies, since that would upset the employees."
You consider *availability* to be a part of security?
Why waste money on intrusion detection? We've never seen a compromise or even an attack.
We use SSL, so our web site is secure.
We were certified by [PLACE NAME HERE] so we must be secure.
Let the trashing begin.
It is said in a derogatory manner with the emphasis on 'academic' and, generally, with the nose extended into the air slightly when the word is used. Now I would be the last one to say that academics cannot be obnoxious. If you want a good example, call up any professor at MIT, C-MU, or Stanford. Actually, I know of a few folks at Stanford that aren't this way... damned few. The implication is that all things academic are nice to talk about and good in theory, but they really don't apply to the real world.
Of course I am willing to go along with this. It's a deal. Don't use anything from academia in security. Let's see, that means that we cannot use any of the most popular cryptographic systems - RSA and public key cryptography are out. No SSL, no encrypted email with private keys, you get the idea. But that's only the beginning. The Internet was born of academia - funded by the government no less. So we will need to abandon such networks if we are to say that academia is not practical. The same goes for process separation and most of the things we do for trusted operating systems, for intrusion detection over the first 15 years of its history, and for every sort of virus defense in use today, all of which were first published in academic research papers - and the list goes on and on. The truth is that, of all the technologies we use today for information protection, well over 90% came from academia.
But that's hardly fair - right? These are all past achievements. What has academia done for us lately? Well, you sure are right about that. If you look at academic research, you will find that almost none of it meets the three-month time-to-market deadline of the software industry. Perhaps that's why it tends to last longer than three months and be important in the long run. My point is made, so I will not rub it in. If you don't want to support academia, that's fine, don't claim your degree when you apply for your next job, and don't apply what you learned in school, and don't use the techniques and mechanisms that form the basis of all of the technology we have for information technology and protection.
I heard this one at lunch just the other day. It is one of the favorite responses to any attempt to talk about building lasting value through quality. I should apologize for the severity of my retort at the time. I think I said something like "I never said not to make a profit. Did I say not to make a profit? I never said not to make a profit!" I then went on to rant and rave for a few minutes about how building quality products that last brings a reputation to the name, and how the application of those products to more and more domains and in clever partnering relationships can build the brand up and create more lasting value than selling crap.
It seems to me that in security, many people seek the easy way out. It's like medicine - these days we tend to emphasize cure over prevention. I think it relates to the success of antibiotics in the middle of the 20th century. All of a sudden, instead of most of the people with major infections dying, most of the people with major infections survived. So people became less risk averse, because those who took risks with infections tended to survive. When AIDS showed up, the risks associated with certain behaviors changed and so behaviors changed - to the extent that those who practiced what we now call 'unsafe sex' in large volume have tended to die off.
The point I am trying to make is that when we behave more safely we help others, and when they behave more safely they help us. You haven't yet seen the criminal negligence and other legal cases in this arena that you would expect, but I think that lawyers will wake up to the notion that when your failure to meet nominal standards of safe computing causes your systems to harm mine, you have liability. I for one will happily testify on the stand about standard and prudent business practices, including preventing computer viruses, and indicate that you should indeed be liable for other systems' infections if you haven't taken prudent precautions. It's not just viruses either. When you build or use a product with buffer overruns, you are negligent, and I will support those lawsuits. If this sounds threatening, good. It's not socialism for me to want you to do your job to protect me. And it's not socialism to want quality from others. It's survival - your survival if you don't start doing your job.
The first mistake is the notion that technology takes care of anything. People take care of technology, at least these technologies. I have been quoted as having said that with a really good defender, any technology can be made reasonably effective and with the best technology in the world, a lousy defender will be ineffective. I stand by this quote. It's your people who make your protection effective, not your technology. Furthermore, good people will help you get technologies that are best suited to your needs because they don't necessarily want to do everything the hard way.
There are also several major misconceptions surrounding these technologies. The first major misconception is that they will find or block 'all' of something. If used properly, these technologies are all somewhat effective at finding or blocking portions of the totality of the possible attacks. But none of them are very effective at stopping a knowledgeable and skilled attacker for very long. That last part is particularly important - the part about "very long". Like a bank vault, information protection technologies can only hold off an attacker with a given level of resources for so long. If you can't react so as to find and arrest the perpetrators, and if they are good enough, they will eventually break the bank.
On average, if you place a computer on the Internet today, it will be less than 15 minutes before someone or some other computer tries to attack it. Some will claim figures as low as a few minutes, but whether it's 30 seconds or 15 minutes, clearly there are people out there with a desire to attack every system on the Internet. And it's not just the Internet. People with war dialers go after telephone numbers of companies seeking modems at a fairly high rate as well.
Suppose you have nothing of any interest to anyone but yourself. In that case people might attack your system to destroy all of your files just for the malicious fun of it. They might want to take over your computer as part of attacking other computers. They might want to use your computer as spare storage of pornography or as a virtual meeting place for their criminal activities. They might want your computer to act as a zombie for their distributed coordinated attacks. They might just be looking for children who they can rape or bank accounts with money they can steal. They might be in search of a file drop for sharing illegally obtained software.
In case you think these are far-fetched, each of them is an example of something done to at least thousands - and perhaps tens of thousands - of systems every year. It is commonplace. If you are connected to others, others are connected to you, and that includes the bad actors of the world.
The last time I looked, top management was responsible for limiting corporate liability, for preventing corporate assets from being used for the commission of crimes, for due diligence with respect to business hazards, for adequate insurance, for adequate controls, and so forth. It is the CEO's (or sometimes the COO's) responsibility to issue policies for internal controls and get feedback on the effectiveness of those internal controls. Making a profit would also fall under the general responsibility of the CEO.
It turns out that all of the things listed above are critically intertwined with information technology in most modern corporations. If you do not have a good handle on information protection issues at the top level, the policies will be out of kilter with realities, the budget for information protection will be inadequate, training, auditing, and awareness will all be inadequate or inappropriate, technical safeguards will be inadequate or inappropriate, and so forth.
It's not just me who's telling you this either. The "Generally Accepted System Security Principles" and "British Standard 7799" say so as well. And in the many studies I have done for top management at major corporations, I have indicated such lapses and top management has agreed that they needed to become more personally involved in these areas. IT may be responsible for information protection, but top management is responsible to see to it that IT is doing that job.
Now this one is a direct quote. There are so many foolish things indicated here that it's hard to know where to begin. Using names for passwords is a very bad idea because it makes them very easy to guess. Using your own children's names makes guessing your passwords child's play... you go straight into the clueless category. Using the same password on all of your accounts is also a problem. This means that anyone getting the password to one account has the password to all of them. I know it is practically impossible for real people to memorize large numbers of random strings, but you need to do better than this by far. Don't get me wrong - there are situations where even this might be appropriate, but not in any business context I am aware of.
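To see just how child's-play this is, here is a minimal sketch of the dictionary attack a casual intruder would run first. Everything in it is hypothetical for illustration - the names, the sample password, and the SHA-256 storage scheme are not anyone's real setup:

```python
import hashlib

def sha256_hex(s):
    """Hash a candidate password the way a (hypothetical) system stores it."""
    return hashlib.sha256(s.encode()).hexdigest()

# Hypothetical stored hash of a name-based password ("emily2").
stored_hash = sha256_hex("emily2")

# A guessing dictionary built from a handful of common first names plus
# trivial variants -- exactly what an attacker who knows the victim's
# family would try first.
names = ["michael", "emily", "jacob", "hannah", "matthew"]
candidates = []
for name in names:
    candidates.append(name)
    candidates.append(name.capitalize())
    for suffix in ("1", "2", "123"):
        candidates.append(name + suffix)

def guess(stored):
    """Return the first candidate whose hash matches, or None."""
    for c in candidates:
        if sha256_hex(c) == stored:
            return c
    return None

print(guess(stored_hash))  # the name-based password falls in a few dozen tries
```

With five names and four variants each, the whole search space is 25 guesses - no cracking hardware required, just knowing the family.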
Then comes the really hard to swallow part. This is justified because of the knowledge that IT people make it safe. Problem: How could someone who clearly knows almost nothing about the most basic issues in information protection be able to tell if the IT people made anything safe? Is this the expression of belief based on a religious doctrine? Is it because you are using Microsoft and that makes it safe? This is the type of foolishness that generates shareholder lawsuits when your company goes down the tubes. Get serious here. What sort of specific evaluation have you had done by independent external experts that indicates that your IT people make it safe to act in this manner? I would like the name of the firm so I can advise all of my clients to never go to them or anyone who has ever worked for them. Did you get this advice from a supposed ex-hacker? Or was it your brother-in-law who used to work in computer repair?
Then there is part 3. What gives you the idea that information technology can ever be made 'safe' in any generic sort of sense? Safety in computing is a detailed and very interesting research topic and has been for many years. It is indeed possible to make computers in specific applications far safer than they would otherwise be, but you may rest assured that it is highly unlikely that this is the situation in your corporate computers. For the most part, systems that meet specific safety requirements have far higher costs than those used in most businesses. They have things like strong change control and special purpose hardware and software. They are designed following ISO 9000 or similar standards. If you as a CEO are not aware of these in some detail, then you do not have a compliant organization.
You can fill in any blank you like. You don't have to worry about anything - eventually we all die, so enjoy life as it comes. Fair enough. But the notion that the use of certain types of widely used operating systems or lack of use of 'shareware' makes you safe is patently ridiculous.
Problem: Linux is shareware. Clearly both using Linux and not using shareware cannot make you safe. In fact neither can. More people have been infected by commercial products than by shareware. But of course if you use commercial software there is someone to sue - right? Wrong. There has never been a successful lawsuit against a commercial company for delivering viruses to its customers. You do, however, pay them more for products that are in many cases not as good. OK - but at least if you use RedHat Linux you can sue RedHat, right? Wrong again. They assume no liability whatsoever for the product they deliver. Just like Microsoft - you can get another disk if the disk they delivered to you fails to operate when you use it the first few times.
It does turn out that Macs are relatively safe from viruses today. There are only a few thousand viruses that work against Macs. Of course if you use Word for Mac or any other outside software like that, you have ten thousand or so more viruses that will work against you. But indeed - the relatively low popularity of Macs makes them less susceptible because most virus writers don't want to waste their time on 5% of the market.
I cannot tell you how many times I have heard this one. It is the most bizarre thing, and it tends to happen in the most high-tech firms. I guess nobody is worried about upsetting factory workers - but when it comes to Ph.D.s (and I am one of them) we are all worried that they might - what - walk out because we told them that information is valuable and must be protected properly? Or is it the recently-graduated-from-high-school programmers who will be offended by such policies and leave for another dot-com? Get real! If they don't want to protect the corporate assets, you don't want them working in the corporation!
Let's get real. The people who don't want policies are, more often than not, executives and high-level managers who are looking at child pornography or day trading or doing queer deals. They just don't want to get caught. If you don't want appropriate policies in place, you are probably either a criminal, a fool, or perhaps both. If you are a fool, I apologize for being so blunt in telling you so. But it is better to know it so you can change than to be told that everything is OK as they laugh at you behind your back. If you are a criminal, you have just been squealed on, and they will be looking at your books in more detail now. I look forward to your public humiliation, trial, and long jail sentence.
If you are neither a fool nor a criminal and you still oppose having reasonable and prudent policies for information protection, please let me know. Of course I may end up telling you you were mistaken about the fool part, but...
You had better believe it. I have heard lots of people try to separate availability of services from security telling me it is something else, but the attackers missed that memo. They didn't know that they weren't allowed to try to deny services, so they just went right on ahead and did it anyway. But it's OK with me if you don't want me to cover this problem in my assessment of your company. Just make sure you pay me in advance because you will be out of business pretty soon in today's environment if you don't include availability as a security issue.
One of the big problems that people in information protection face is the lack of understanding of how broad a subject area it really is. Just last week I talked to a high muckity muck at a university who thought that, since they had researchers in cryptography and qualified as a center of excellence according to the NSA, they must actually know what they are doing. Big mistake. Information protection must necessarily go where the attackers go. Otherwise we will be 'secure' even though our information systems keep getting successfully attacked.
The breadth of knowledge does not mean that information protection people are in charge of all aspects of computing. They should be thought of more as sage advisors to all aspects of the corporate environment, but with a particular focus on protection of the corporate information assets. While the CFO is responsible for all aspects of financial issues within the corporation, that doesn't mean that they always decide on each purchase. But they set policies and try to assure that the finances and financial systems work right. Substitute information protection for finance in this description, and you start to understand the role of the information protection folks in your organization.
Why waste money on microscopes? You've never seen any of these bacteria, or even gotten sick from one. The problem is that you don't know what's making you sick until you get a microscope out, and you don't always know you have been attacked via computer if you aren't looking for the 'rat droppings' associated with computer attack.
You may assert that if you cannot tell without looking if you have been attacked, the effects of the attack are probably not worth looking for. It seems to make sense - but it really does not. Most frauds go undetected until one day when a whole lot of money is missing and nobody knows why. That is typically detected in an audit. But if you never did the audits, you would just figure you weren't all that profitable. Ignoring the fact of fraud doesn't make it go away.
An example is worth a lot. In one recent case I was contacted after a company found out they had been attacked - their customers told them that the credit card numbers they had provided were being used to charge things fraudulently. The company didn't see the compromise because they weren't looking for it. How many major customers do you think they lost in this one? What sort of damage to reputation can you sustain just because you don't bother to look for such things?
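To make the 'rat droppings' point concrete, here is a hedged sketch of the kind of log audit that surfaces attacks you would otherwise never see. The log lines and the signature list are made up for illustration - a real intrusion detection system uses far larger signature sets and anomaly detection besides:

```python
import re

# A few illustrative (made-up) web server log lines; the last two carry
# classic attack "rat droppings": a path-traversal probe and a
# SQL-injection attempt.
log_lines = [
    '10.0.0.5 - - "GET /index.html HTTP/1.0" 200',
    '10.0.0.9 - - "GET /images/logo.gif HTTP/1.0" 200',
    '203.0.113.7 - - "GET /../../etc/passwd HTTP/1.0" 404',
    "198.51.100.2 - - \"GET /order?id=1'%20OR%20'1'='1 HTTP/1.0\" 200",
]

# A toy signature list for a handful of well-known probe patterns.
signatures = [
    (re.compile(r"\.\./"), "path traversal"),
    (re.compile(r"%20OR%20|'\s*OR\s*'", re.I), "SQL injection"),
    (re.compile(r"/etc/passwd"), "password file probe"),
]

def audit(lines):
    """Return (label, line) pairs for every signature match in the log."""
    hits = []
    for line in lines:
        for pattern, label in signatures:
            if pattern.search(line):
                hits.append((label, line))
    return hits

for label, line in audit(log_lines):
    print(label, "->", line)
```

Nothing in those last two log lines announces itself - the server returned ordinary status codes - which is exactly the point: without the audit, the attack leaves no visible trace.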
Operator... operator... this manager seems to have been disconnected! SSL does NOT make your web site secure. In fact, it has almost nothing to do with making your web site secure. The ONLY thing SSL does is to encrypt communications between a browser and a server for the specific communications associated with those secure transactions. And there is NO case in which this does anything to make the server more secure.
I can hear the 'huh?' already... 'But you just said it encrypts the communications - doesn't that make it secure?' Well, it might make some of it secure, but it depends on which 'it' and what 'secure' you are talking about. If the 'it' is the communications between the browser and server and the 'secure' is prevention of unauthorized observation of content, then it makes it a bit more secure. But if the 'it' is the web server and the 'secure' is prevention of break-ins, unauthorized access, corruption, or denial of services, it is likely that the encryption of select communications has nothing to do with it.
SSL does NOT:
Authenticate the person sitting at the browser.
Prevent malicious content from passing through the encrypted channel.
Protect the web server from attacks.
Protect the system with the browser from attacks.
Protect the non-SSL encrypted traffic.
Protect against denial of services.
Protect against corruption of information.
Protect against leakage of information.
Control use of server or browser.
Protect servers in any other way.
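One way to see the gap is a toy model of the channel. The sketch below uses a throwaway SHA-256 counter-mode keystream as a stand-in for the TLS record layer - an illustration only, not real TLS and not safe for any real use. The eavesdropper sees gibberish, yet the attack payload arrives at the server byte-for-byte intact:

```python
import hashlib
from itertools import count

def keystream(key):
    # Toy keystream (SHA-256 in counter mode) standing in for TLS's
    # real record encryption -- for illustration only.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(data, key):
    """Encrypt or decrypt (the operation is its own inverse)."""
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

key = b"session-key"

# The "browser" sends a SQL-injection payload over the encrypted channel.
payload = b"GET /order?id=1' OR '1'='1 HTTP/1.1"
ciphertext = xor(payload, key)

# An eavesdropper on the wire sees only gibberish...
assert ciphertext != payload

# ...but the server decrypts the attack byte-for-byte intact.
assert xor(ciphertext, key) == payload
```

Encryption guarantees that what went in comes out unchanged - that is its job. Whether what went in was a purchase or an attack is a question the channel cannot answer.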
I hope this is a start down the path of reconnecting the people who believe this with reality. I also hope it derails all of the vendors out there who are perpetrating this fraud on the public.
I know of a madhouse that will certify you are insane. Does that make you secure? I used to laugh at people who said that the NSA certified their encryption mechanism. I asked: "Certified for what?" - If it is certified for export, that probably means that it is breakable. I know a lot of programmers that are certifiable... but that's probably not what you were looking for.
There are meaningful certifications in the world. ISO certification gains a lot of respect from me. Of course I then have to look in detail at the certification level, the tests, the basis of the certification, and so forth. But for the most part, certifications are based on the business model that vendors can sell more in a competitive environment by having certifications. So they want to pay someone for a seal of approval as an indirect endorsement. In the information protection field, reputations are easily generated among the typical buying public - some advertising dollars and PR go a long way toward the aura of legitimacy. You then sell endorsements in the form of certifications.
I don't mean to say that certifiers do nothing. They do something. They take money and have someone who claims to know something try to do something. If they fail, they admit it, which is called a certification. If they succeed they report it back to the vendor, take their fee, and don't admit that they ever looked at it. Now what I would like to see is a situation where (1) the vendor's failed attempts are publicized widely and loudly along with details of how they failed, (2) the testers are themselves certified by a similar system, and (3) if there is ever a real attack on a certified system, there is large personal and financial liability as well as widespread publication of who failed and in what way, attached to the certifiers - both the individuals and their certification companies. But fear not. It wouldn't work as a business.
"The Human Equation", Jeffrey Pfeffer, 1998, ISBN 0-87584-841-9, US$24.95
"... Interestingly, Pfeffer writes something to this effect in chapter four, while pointing out some of the tragically flawed beliefs and practices of modern business. He notes that the formal evaluation process, so beloved of management, requires that experts explain their conclusions to non-experts. However, experts make decisions based on accumulated experience and an almost intuitive level of knowledge. This reasoning generally cannot be explained to novices, who can only rely on common knowledge. The explanation, therefore, must proceed at the novice level. As the old saw has it, if you can tell the difference between good advice and bad advice, you don't need any advice. If an institution has need of expert advice, then the organization obviously does not command the expertise to fully evaluate that advice. The requirement to have the expert explain conclusions means that easy, and therefore unimportant, decisions can be easily explained, while more complicated, and significant, resolutions will be much harder to explain, and thus have less chance of survival. ..."
My crusade of the month is at an end. I think that I have trashed nearly all of the possible sources for future funding, and at the same time offended thousands of mindless but powerful people. Ask not for whom the bell tolls, it tolls for me.
And yet, despite my expressed rage and abuse of those who would say mindless things and ignore the truth for the convenience of their preferred viewpoint, I must, in the end, admit that I too have my biases and my thoughtless viewpoints. We all do.
What I ask of you, my reader, is only what I ask of myself: to try to evaluate based on facts, to seek out the real evidence, and to tell it like you see it, disregarding your short-term personal interests for your long-term interests and those of all of us.
Geez - that was too poetic. The heck with it. Do stupid things, just do fewer than the competition, and you will eke out a living.
About The Author:
Fred Cohen is researching information protection as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates, and educating cyber defenders over-the-Internet as a practitioner in residence in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting http://all.net/