Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection program success depends more on prudent management decisions than on the selection of technical safeguards. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Chipping is a term that describes the use of a hardware-based Trojan Horse - a corruption of hardware that plants an undocumented and potentially hazardous function into a system at the hardware level. I prefer 'Trojan Horse' myself, but I like Winn Schwartau and he uses this term (he was the first to use it that I recall), so I figured I would give credit where it is due (if it is indeed due there).
Chipping is an old tradition in the spy game. Ever since computers became part of the landscape - and before that in telephony - hardware modification to allow remote access has been part and parcel of the trade. But historically, this has been a custom or semi-custom type of job; it has never been used in mass production the way it is today...
According to Bill Caelli, the world-renowned Australian information security expert, an undocumented hardware instruction has been included on Pentium-class Intel processors - and possibly on other 'compatible' processor architectures - that grants user-level processes direct access to the real memory of the central processor, originally in order to allow application programs to access display memory without operating system intervention. This means that a user process can read directly from or write directly to system memory. The net effect is that any user who can run a program of their own devising on one of these computers is guaranteed to be able to take over the system and do whatever they wish - regardless of the operating system, and regardless of any add-on precautions at the software level.
As a side effect, China has refused to buy Pentium processors or to allow their use in any government system attached to the Internet, and China will soon be making its own Intel-compatible high-speed integrated circuit CPUs under its own control. As a side effect, no operating system that allows programming and that runs on a Pentium processor can achieve even C2 class security certification from any fair and honest certifying authority. As a side effect, anybody who runs such a processor and allows any user to run user-defined programs trusts that user, because the user has the ability to become system privileged regardless of other precautions. As a result, any program that is loaded onto such a processor and run has the potential to take over the system, cause arbitrary change, and observe arbitrary content.
If you were not aware of this - as I was not aware of it until relatively recently - you may have been suffering under the false impression that chipping was relatively unlikely to happen - or that your operating system was reasonably secure - or that you could safely load programs onto your computer from the Internet and rely on operating system protection to help you stay safe. Forget about patching your x86-based systems against insider attack - it's a waste of time. How soon will it be before an exploit is published on the Web? I suspect they are being designed as we speak. How will this affect Intel stock? Probably not a bit. How does this compare to other attacks on computer integrity, availability, and confidentiality? It means that we can only trust what we build ourselves - which means most computer people are in big big trouble.
It is clear that Microsoft knows of this added functionality, because it is apparently documented only on a special CD intended for operating system designers. Indeed, Microsoft's C compiler apparently doesn't allow you to generate these instructions (although Microsoft generates them for its own applications). You can generate them in assembler, of course.
If this is all true (perhaps it is all perception management intended to undermine Microsoft and Intel), it appears that this example of chipping was intentional and part of a collusion to provide special functions only to authorized parties. Compare this to the relatively minor implications of Intel adding a processor identity to every chip, and you will see that the identity number was only the mildest form of a pernicious problem.
The question now is, what management tactic can we use to protect our most valuable assets from this and closely related chipping issues? While the general answer is that there is little we can really do, my specific answer lies in an operational policy I have run for several years.
My counter to this and a large related class of attack mechanisms is to decide that all machines fall into one of three classes. (1) Single user machines, (2) Limited function servers and infrastructure machines, and (3) Machines I don't care about and don't trust at all but have around for some utility function.
Single user machines: These machines only ever have one authorized user and we assume that this user is in physical control of the machine - that is - we don't trust it to maintain any level of control within itself other than that provided by the user. We leave it up to the user to protect themselves and may help them do it if they are willing to live with our rules.
Limited function servers and infrastructure machines: These machines are configured to depend as little as possible on anything we don't trust. We don't load extra software, and they never run code we didn't write (or that is our goal - and we sometimes even achieve it). They use hardware that we have a long history with, they rarely get updated and certainly are not up to date with the latest fashion in anything, and they are generally not expected to change very often. We control these with a passion - they make up our firewalls and secure servers and other similar infrastructure items. We trust but verify by having some of these machines watch others of them for suspicious behavior (anything we haven't programmed as all right is treated as suspicious), using both internal and external checks - all of them non-standard and unpublished.
Machines I don't care about and don't trust at all but have around for some utility function: These are things like Windows boxes, which we try to put in untrusted subnets and warn users about. We simply cut them off when we don't like them and rebuild them whenever we feel like it.
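The "trust but verify" watching described above for the limited function servers can be sketched in a few lines. This is a hedged illustration only - the host names, port allowlists, and function name here are hypothetical examples, not the actual (and deliberately unpublished) checks:

```python
# A minimal sketch of a default-deny watcher: anything we haven't
# explicitly programmed as all right is treated as suspicious.
# The hosts and port sets below are made up for illustration.

EXPECTED = {
    "smtp-gw": {25, 22},    # mail relay: SMTP plus our admin SSH
    "web-01": {80, 443},    # public web server
}

def suspicious(host, observed_ports):
    """Return the set of listening ports we never authorized on this host.

    Default deny: a host we have no profile for is entirely suspicious.
    """
    allowed = EXPECTED.get(host)
    if allowed is None:
        return set(observed_ports)
    return set(observed_ports) - allowed

# A scan (however gathered) showing web-01 with an extra listener:
print(sorted(suspicious("web-01", {80, 443, 31337})))  # -> [31337]
```

The design point is that the check is a whitelist, not a blacklist: there is no list of known-bad behavior to keep current, and an unprofiled machine is flagged in full rather than ignored.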
I know this might not work for you, but if you have critical applications, you might consider it. We actually have other rules and conditions for such machines, but I don't want to bore you (or give too much away).
It seems like every month I am revealing more and more of how I do security and every month I hear about another strange item that reduces my already waning trust in anything other than the things I do for myself. Frankly, I am getting tired of this business of having nothing left to trust, so I have decided on another alternative. I will simply trust everything and not worry about it.... just kidding.
In fact, I have taken the opposite posture. I trust almost nothing about information technologies or systems, and I treat it all with suspicion. That doesn't mean that I don't use it and use it usefully, but rather that I increasingly depend on security through obscurity - mixed with some technical know-how here and there - to put off my enemies (not paranoia - they are out to get me).
I am tired of seeing the security world chase after the newest and ugliest breach of ethics and trust, and I am really tired of hearing claim after claim after claim about products. Basically, my perspective is that with good people you can solve your protection problems, and without them you may as well give up. Which raises the question of where I get good people...
I build them. Hard to believe? Maybe it is, and I can hardly believe it myself, but it's the only way I have left. I can never afford to hire real experts because they cost too much and there aren't enough of them. All of the non-experts I know seem to think that this product or that will save them - and they believe in the tooth fairy as well. So I build them by taking students into my educational programs. No - you can't have them - I want them!
The real question for you is how do you get good people? And I'm afraid the answer is rather less than ideal - you have to build them too. In your case, building them involves spending a lot of time and money putting them into school after school, sending them to the best conferences, contracting for their first-born children if they ever leave the job, and paying them well enough that they don't need to take advantage of you in order to put their kids through school.
The Pentium example is likely only the first of many we will see popping up in the next few weeks, months, and years. As folks around the world get more serious about security, they will find more and more of these hardware Trojans waiting to be exploited.
From a management standpoint, trying to gain assurance in a situation like this is a complex and poorly understood task. In essence, you are faced with building a trusted infrastructure out of untrusted components. While my methods of doing this may be reasonably effective today, this is a game where you have to keep moving or get eaten by the sharks.
The notion of security by obscurity is scorned by information protection purists, but it is apparently something we all depend on every day. We seem to live in interesting times.
About The Author:
Fred Cohen is exploring the minimum raise as a Principal Member of Technical Staff at Sandia National Laboratories, helping clients meet their information protection needs as the Managing Director of Fred Cohen and Associates in Livermore, California, and educating defenders over the Internet on all aspects of information protection as a practitioner in residence in the University of New Haven's Forensic Sciences Program. He can be reached by sending email to fred at all.net or visiting /