Over the last several years, computing has changed to an almost purely networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programs has increasingly become a function of our ability to make prudent management decisions about organizational activities. Managing Network Security takes a management view of protection and seeks to reconcile the need for security with the limitations of technology.
Most modern networks include a mix involving some or all of five different classes of computing systems, each with substantially different protection requirements: safety-critical systems, personal workstations, file servers, large systems, and infrastructure components.
The rest of this article looks at special protection properties of these different types of systems in more detail.
Safety-critical systems often have very stringent requirements for availability and integrity. For example, some critical control systems used in dynamically unstable aircraft cannot fail for more than a millisecond without causing a crash. Similarly, many of these systems control elements of manufacturing systems that could cause loss of life if not properly controlled.
The stringent requirements for availability and integrity are normally met through an engineering process. This process carefully considers the range of possible events in the physical world, relates them to control events, and produces a design that fails in as safe a mode as possible under each set of conditions that could cause harm. This is typically done through a fault tree analysis which considers all single faults and, in some cases, substantial numbers of multiple simultaneous or sequential faults.
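The arithmetic behind a fault tree is simple to sketch. In the minimal illustration below, the gate functions are standard fault-tree algebra, but the components and their per-hour fault probabilities are hypothetical, chosen only to show how redundancy drives down the probability of the top event:

```python
# Minimal fault-tree evaluation sketch. The components, gate structure,
# and failure probabilities are hypothetical, for illustration only.

def or_gate(*probs):
    """Fault occurs if ANY input fault occurs (assumes independence)."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probs):
    """Fault occurs only if ALL input faults occur (assumes independence)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical per-hour fault probabilities for one control channel.
sensor_fault = 1e-4
actuator_fault = 5e-5
cpu_fault = 1e-6

# A single channel fails if any of its parts fail; a duplicated
# (redundant) channel fails only if both copies fail at once.
channel_fault = or_gate(sensor_fault, actuator_fault, cpu_fault)
redundant_channel_fault = and_gate(channel_fault, channel_fault)

print(f"single channel:    {channel_fault:.2e}")
print(f"redundant channel: {redundant_channel_fault:.2e}")
```

Even this toy example shows why the engineering process favors duplication: the redundant channel's failure probability is the square of the single channel's, several orders of magnitude smaller.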
Protection of such a system consists primarily of assuring that the system is in proper, and well-defined, states throughout its operation. Because typical software systems used in most current computers have inadequate protection to provide this high degree of assurance, special-purpose systems such as programmable logic controllers are used to mediate between highly complex software systems and hardware capable of causing harm. Redundant interlocks are typically used to assure that regardless of any errors in computation, physical redundancy will prevent unsafe conditions.
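The interlock idea can be sketched in a few lines. This is not how a real programmable logic controller is programmed; it is a hypothetical software model of the policy, in which independent safety conditions are ANDed with the software command so that no software error alone can produce an unsafe actuation:

```python
# Sketch of an interlock policy for a hypothetical press: the press may
# close only when every independent safety condition holds, regardless
# of what the higher-level control software commands.

def interlock_permit(guard_closed: bool, light_curtain_clear: bool,
                     pressure_ok: bool) -> bool:
    """Each condition comes from a separate physical sensor; any single
    sensor reporting an unsafe state blocks the action (fail-safe)."""
    return guard_closed and light_curtain_clear and pressure_ok

def command_press(software_command: bool, **sensors) -> bool:
    # The software command is ANDed with the interlock, so an erroneous
    # command from a complex software system can never override safety.
    return software_command and interlock_permit(**sensors)
```

For example, `command_press(True, guard_closed=True, light_curtain_clear=False, pressure_ok=True)` is denied even though the software asked for the action.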
Higher-level systems are then networked, along with detailed data gleaned from instrumentation, to provide interoperation. Interoperating components are typically redundant as well. For example, in an automated manufacturing plant, inspection stations check the results of manufacturing processes to assure that the resulting component meets specification. Subsequent testing provides yet another level of redundancy in order to assure that the overall system is working properly.
Information protection in these systems then consists of redundancy, well thought-out design and implementation, testing, physical controls, interlocks, failsafes, and other similar methods. The reason this is possible for safety-critical systems is that the loss associated with a failure is substantial and quantifiable, and economies of scale justify the high cost of high assurance based on its positive effect on the bottom line.
Personal workstations tend to represent the opposite end of the protection spectrum from safety-critical systems. Failures in these systems usually affect only the output of a single worker and, with some notable exceptions, have little effect on the overall operation of a substantial organization. For this reason, it is almost never cost effective to spend much money protecting any particular personal workstation. The exceptions, however, are notable.
Cost-effective protection in personal workstations is a very tricky challenge. It is almost never cost justified to prevent simple attacks or failures, but more sophisticated attacks can leverage even a single workstation into staggering harm. For this reason, it is vital to (1) understand which personal workstations must be well-protected and properly protect them, (2) design the work flow of the organization so as to minimize the impacts of individual failures on overall productivity, (3) provide effective organizational-level protection against threats like computer viruses and common-mode failures, and (4) provide adequate internal controls so that break-ins affecting individual workstations can only have a limited overall impact on the organization. Cost-effective mixes of protection often have substantial interaction with the structural design of information networks.
File servers usually support ten or more personal workstations and provide shared services like printing, email forwarding, and database access. They are commonly used to coordinate activities within a department or across a function. For example, one file server might provide central printing services for a small department, while a different file server might provide email forwarding services to a building.
Because file servers serve many users, downtime, corruption of data, or leakage of data may have a greater effect on the organization as a whole. For example, if the payroll department's file server fails at a critical time, it might be impossible to produce paychecks in time for a particular pay period.
In most cases, protection for file servers is given higher priority than for individual workstations. For example, access controls are normally configured on file servers, trained systems administrators often help to manage file servers on a part-time basis, regular maintenance and backups are done in many cases, and audit trails may be kept and periodically examined. The justification for added protection comes from their increased importance in the overall operation, and the cost effectiveness of increased protection comes from an economy of scale.
The high cost of large systems is usually justified by their high-value functions and, as such, the requirement for effective protection tends to be far more stringent than it is for file servers. In an engineering company, for example, a large system might be used to perform complex analysis of stresses on structural components of bridges. If bridges fail because of incorrect computations, the consequences are very serious. Similarly, large systems may be used to control large production facilities. Downtime in this situation may result in extreme financial losses, and errors or omissions in the coordination of parts for just-in-time delivery might result in production line shutdowns or high reject rates.
Large systems tend to have more than one systems administrator, full-time operations staffs, systems programmers, and other similar human support systems that smaller or less critical systems do not have. Because of the size of the operation, protection in large systems can be tuned on a case-by-case basis to the criticality and financial impact of failures in the protection system.
Large systems tend to use user- and group-based access controls, often involve a development environment connected to an operational system through a change-control function, have regularly scheduled and verified backups, produce detailed audit trails that are examined on a regular basis, are periodically audited by internal and external IT auditors, have 24-hour service contracts, and in some cases have continuous on-site service personnel with cold-standby hardware components available for immediate use.
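The user- and group-based access controls mentioned above can be illustrated with a toy model. The users, groups, resources, and permissions here are hypothetical; the point is the structure: permissions are granted to groups rather than directly to users, which keeps administration tractable as the user population grows:

```python
# Toy sketch of user/group-based access control of the kind large
# systems rely on. All names and permissions are hypothetical.

GROUPS = {
    "payroll": {"alice", "bob"},
    "operators": {"carol"},
}

# Permissions attach to groups, not individual users; changing one
# group membership updates a user's rights everywhere at once.
ACL = {
    "/data/payroll.db": {"payroll": {"read", "write"}},
    "/logs/audit.log": {"operators": {"read"}},
}

def allowed(user: str, resource: str, action: str) -> bool:
    """A request succeeds only if the user belongs to some group that
    has been granted the requested action on the resource."""
    for group, perms in ACL.get(resource, {}).items():
        if user in GROUPS.get(group, set()) and action in perms:
            return True
    return False
```

So `allowed("alice", "/data/payroll.db", "write")` succeeds, while `allowed("carol", "/data/payroll.db", "read")` is denied because carol is in no group with rights to that file.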
The cost of protection for such a system is justified by the system's criticality and the high cost of failures. Availability is normally given high priority in this environment because the cost of operating the system is so high that the loss from downtime justifies almost any effort to bring the system back up. For example, some new supercomputers cost on the order of a billion U.S. dollars to acquire and operate over a 5-year period. That comes to more than half a million dollars a day, over $20,000 per hour, or about $6 per second. With direct costs on this scale, not including application-specific losses, it should be relatively easy to justify the continuous presence of an information protection specialist, some hardware engineers, several operators, and a few systems programmers.
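The downtime arithmetic is easy to reproduce. Spreading a billion dollars of acquisition and operation evenly over five calendar years gives:

```python
# Spreading a billion-dollar, five-year supercomputer cost evenly over
# calendar time to estimate the direct cost of each unit of downtime.

total_cost = 1_000_000_000  # USD over the system's life (from the text)
years = 5

per_day = total_cost / (years * 365)
per_hour = per_day / 24
per_second = per_hour / 3600

print(f"${per_day:,.0f}/day, ${per_hour:,.0f}/hour, ${per_second:.2f}/second")
```

Any application-specific losses (missed production runs, late analyses) come on top of this direct operating cost.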
Infrastructure components form the basis for interconnecting information systems together. As such, they are vital to the overall effectiveness of a protection program.
One of the raging debates in the information protection community over the past several years has been over the balance between host-based protection and infrastructure-based protection. Some argue that since infrastructure protection is and will likely always be very limited, host-based protection is the key to a successful protection program. Others take the position that host-based protection for the vast majority of hosts is not cost justifiable and that infrastructure-level protection is the only cost effective option. My personal view is that a balance is needed.
When economies of scale favor infrastructure-based protection and that protection is effective against the threats faced by an organization, it is a good choice. In today's computing environments, there are places where infrastructure-level protection is justified. Examples include, but are not limited to, firewalls between the Internet and most organizations, intrusion detection systems in some networks, PBX-based protection of telephone networks, router-based traffic limitations on internal networks, and centralized incident response.
In many cases, infrastructure-level protection covers some, but not all, of the potential vulnerabilities. In these cases, a mix of infrastructure and host-based protection may be required. For example, most modern firewalls don't provide protection against the content of information they allow to pass. An internal Web browser might bypass the firewall's protection mechanism by acting on a Trojan horse within an Internet-based Web page. In order to address this challenge, we may find that host-level protection is most cost-effective against things that firewalls do poorly. At the same time, it would be a very expensive task to secure all of the hosts in a substantial network against low-level attacks that are easily and effectively prevented by most network firewalls.
The key to effective infrastructure-level protection is finding the right mix between host-based and infrastructure-based techniques and gaining the economies of scale provided at the infrastructure level wherever possible.
It is often hard to see the big picture in a big company, but in the case of information protection, only the big picture leads to cost effective results. Our brief overview of different sorts of networked systems demonstrated a wide range of possibilities, but the devil is in the details. Cost effective protection in today's networked environments requires a balance that can only be achieved through careful and detailed analysis of the systems as they operate within their environment.
In future issues, we hope to explore the balance in more detail, but for now, we simply wish you a happy and healthy new year.
Fred Cohen is a Senior Member of Technical Staff at Sandia National Laboratories and a Senior Partner of Fred Cohen and Associates in Livermore, California, an executive consulting and education group specializing in information protection. He can be reached by sending email to fred at all.net.