Syntax checking, including checks on length, symbol sequence, bounds appropriate to the specific input in its situation, and program state, should be applied to every input to every program, and should be explicitly mandated for every input involving another computer or a human being, including within any program that interacts with a network. Syntax checking in the context of program state is particularly effective at assuring that only valid inputs appear in each situation within a software process, and it eliminates the methods commonly used to break out of the normal operation of software. Syntax checking is sometimes also used in outbound controls (e.g., to detect plaintext Social Security numbers as part of data loss prevention), but it is of limited utility there.
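As a minimal sketch of state-dependent syntax checking (the state names, field patterns, and bounds here are illustrative assumptions, not taken from any particular system):

```python
import re

# Hypothetical per-state input rules for a simple order-entry dialog.
# Each state allows only one syntax; anything else is rejected.
RULES = {
    "awaiting_quantity": {"pattern": r"^\d{1,4}$", "min": 1, "max": 9999},
    "awaiting_zip":      {"pattern": r"^\d{5}$"},
}

def check_input(state: str, text: str) -> bool:
    """Accept input only if it matches the syntax allowed in this program state."""
    rule = RULES.get(state)
    if rule is None:                          # unknown state: reject by default
        return False
    if not re.fullmatch(rule["pattern"], text):   # length and symbol sequence
        return False
    if "min" in rule:                         # bounds for this situation
        value = int(text)
        if not rule["min"] <= value <= rule["max"]:
            return False
    return True
```

Note that the check depends on the state, so an input that is valid at one point in the dialog (a 5-digit ZIP) is rejected at another (a quantity prompt).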
Redundancy for verification should be used to verify inputs, with increasing amounts of redundancy as the consequences of wrong inputs increase. For all entries associated with addresses or similar locations, postal codes should be verified against states and street addresses, and addresses should also be checked against names where feasible. Redundant verification should also be extended by adding sensors and communication paths in higher-consequence situations.
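A sketch of one such redundant cross-check, the postal-code-against-state example from above (the prefix table is a tiny illustrative sample, not a complete ZIP directory):

```python
# Hypothetical map from US ZIP prefix to the state it belongs to.
# A real deployment would use a full postal reference table.
ZIP_PREFIX_TO_STATE = {"100": "NY", "941": "CA"}

def address_consistent(zip_code: str, state: str) -> bool:
    """Redundant check: does the claimed state match the ZIP code's region?"""
    expected = ZIP_PREFIX_TO_STATE.get(zip_code[:3])
    return expected is not None and expected == state
```

The redundancy is the point: the state field adds no new information beyond the ZIP, so a mismatch between them signals a wrong or falsified entry.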
Transformation technologies with attribution, such as cryptographic checksums and certificates, should be used to verify software and patches from commodity sources, such as software packages, disks, CDs, and vendor patches. As surety levels increase, added verification processes should be used, and in medium- and high-surety environments, well-defined processes for acceptance of external software and hardware should be required. Transforms should also be used as part of the change control process to verify that alteration does not take place between inception and execution. This includes integrity shells, whitelisting, and other similar methods. Note that when transforms encrypt, they are problematic for other content controls such as filtering and syntax checking.
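The simplest such transform is a cryptographic checksum compared against a digest published by the vendor. A minimal sketch (the package name is illustrative):

```python
import hashlib

def verify_checksum(package_bytes: bytes, published_digest: str) -> bool:
    """Verify a package against the SHA-256 digest published by its source.

    Attribution rests on the digest arriving over a trusted channel
    (e.g., signed release notes); the hash itself only proves integrity.
    """
    return hashlib.sha256(package_bytes).hexdigest() == published_digest
```

Any single-bit alteration of the package between inception and execution changes the digest and causes verification to fail.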
Change control processes are necessary in any medium or high consequence situation because they provide increased assurance that only authorized and properly tested changes take place. When used in combination with transforms to verify against unauthorized changes in operational systems, they form a testable basis for belief that the system operates as intended. Sound change control typically requires substantially more effort than simplistic approaches, and it is therefore reserved for situations in which the risk warrants the costs. Change control should also be applied to any software provided to others, and certainly to any widely distributed hardware or software.
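One testable mechanism behind this combination is a hash manifest recorded at change-approval time and compared against the operational system later. A sketch under the assumption that file contents are available as bytes (the file names are illustrative):

```python
import hashlib

def manifest(files: dict) -> dict:
    """Record a change-control baseline: path -> SHA-256 of contents."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def unauthorized_changes(baseline: dict, current_files: dict) -> list:
    """List every path that was added, removed, or altered since the baseline."""
    current = manifest(current_files)
    return sorted(path for path in set(baseline) | set(current)
                  if baseline.get(path) != current.get(path))
```

An empty result is the testable basis for belief that only authorized changes took place; any non-empty result names the files to investigate.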
Verification and testing processes are necessary in any medium or high consequence situation because they provide increased assurance that only authorized and properly tested original mechanisms are in place. They should normally be used in conjunction with sound change control and/or transformation technologies with attribution to verify against unauthorized changes in operational systems. Hardware and software should be put through systematic verification and testing processes to assure that all identifiable input sequences, or classes of input sequences, result in proper states and outputs. While it is infeasible in practice to generate complete tests for many such systems, measures of coverage should be attained and applied to understand the extent to which surety has been achieved.
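The notion of covering classes of input sequences, rather than every individual input, can be sketched as follows (the input classes here are a deliberately trivial example):

```python
def classify(n: int) -> str:
    """Partition the input space into equivalence classes for testing."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

def class_coverage(tested_inputs) -> float:
    """Fraction of input classes exercised by a test set: a surety measure."""
    all_classes = {"negative", "zero", "positive"}
    hit = {classify(n) for n in tested_inputs}
    return len(hit) / len(all_classes)
```

Exhaustive testing of all integers is infeasible, but a coverage figure of 1.0 over the defined classes is an attainable, measurable statement about how much surety the testing has achieved.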
Structural mechanisms such as network separation, digital diodes, one-way UDP channels, and network zoning approaches limit or eliminate the flow of information between different areas and thus limit the ability of unauthorized content to enter areas. This is a fundamental approach that should be applied with increasing surety as risks increase. For enterprise production environments, at least zoning and subzoning mechanisms should be used to limit inbound content and to limit interactions between business functions and their infrastructures.
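The send side of a one-way UDP channel can be sketched as below; the structural guarantee comes from the fact that the sender never opens a receive path, so nothing can flow back (a real digital diode enforces this in hardware, which software alone cannot fully replicate):

```python
import socket

def send_one_way(payload: bytes, dest) -> None:
    """Transmit over a one-way UDP channel to a high-side collector.

    Fire-and-forget: no recv() call exists anywhere on this side,
    so no information can return through this channel.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.sendto(payload, dest)
    finally:
        s.close()
```

UDP suits this role precisely because it requires no acknowledgement; a TCP connection could not even be established without a return path.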
Microzoning and virtualization mechanisms limit the flow and retention of undesired and useless content by keeping it within the microzones and, with non-state-retaining virtual machines (VMs), by destroying all content other than that explicitly retained at the end of the period of use. This is a good approach for limiting untrusted content, applications, and access over periods of use, at the cost of modest overhead.
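The non-state-retaining pattern can be illustrated with a throwaway workspace that stands in for the microzone; a real deployment would use a VM or container, but the lifecycle is the same (this is a sketch, not a containment mechanism in itself):

```python
import os
import shutil
import tempfile

def process_untrusted(blob: bytes) -> int:
    """Handle untrusted content inside a throwaway workspace.

    Only the explicitly returned result survives; everything else in
    the zone is destroyed at the end of the period of use.
    """
    workdir = tempfile.mkdtemp(prefix="microzone-")
    try:
        path = os.path.join(workdir, "content.bin")
        with open(path, "wb") as f:
            f.write(blob)
        return os.path.getsize(path)   # the one thing we choose to retain
    finally:
        shutil.rmtree(workdir)         # tear down: no state is retained
```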
Counterintelligence methods reduce the enterprise's exposure to targeted content attacks. Examples include removing email addresses from external Web sites, reducing the profile of the enterprise, and using anonymizing mechanisms for postings to external forums so that responses are limited and internal addresses and structure are not revealed.
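A small sketch of the first example, scrubbing email addresses from outbound Web content before publication (the replacement alias and the pattern's coverage are illustrative assumptions):

```python
import re

# Matches common email address forms; not a full RFC 5322 parser.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def scrub_addresses(text: str) -> str:
    """Replace individual email addresses with a generic contact alias
    so external pages do not reveal internal names or address structure."""
    return EMAIL.sub("contact@example.com", text)
```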
Place defenses in the logical perimeters.
This approach places defenses at the perimeter, typically in the DMZ for a layered architecture. It has the advantage of being relatively centralized and manageable while retaining control by the enterprise. But it also means that much of the traffic inspection, and many of the decisions, must be handled at the firewall when they could otherwise be avoided. It typically involves creating proxy gateway mechanisms for most services allowed to pass in and out of the enterprise, with filtering embedded in those mechanisms.
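The filtering decision embedded in such a proxy gateway can be sketched as a simple allow/deny policy (the host allowlist and blocked path fragments are illustrative placeholders):

```python
# Hypothetical policy a DMZ proxy gateway might enforce per request.
ALLOWED_HOSTS = {"www.example.com"}
BLOCKED_PATH_PARTS = ("/admin", "..")

def allow_request(host: str, path: str) -> bool:
    """Perimeter filtering decision: deny unknown hosts and risky paths."""
    if host not in ALLOWED_HOSTS:
        return False
    return not any(part in path for part in BLOCKED_PATH_PARTS)
```

Because every permitted service passes through a proxy like this, the enterprise keeps one central point of control, which is both the strength and the bottleneck of the perimeter approach.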
Place defenses in the network.
This approach places defenses throughout the network and turns the network into an enforcement mechanism at many or all levels. This increases the need for resources and adds technical management challenges but provides greater defense-in-depth. It typically means the use of intrusion and anomaly detection, internal firewalls, and other similar mechanisms.
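A toy version of the anomaly detection an internal sensor might run, flagging an interval whose event count far exceeds the recent average (the window and threshold factor are illustrative assumptions):

```python
from collections import deque

class AnomalyDetector:
    """Flag an interval whose event count far exceeds the recent average."""

    def __init__(self, window: int = 10, factor: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-interval counts
        self.factor = factor                 # how far above average is anomalous

    def observe(self, count: int) -> bool:
        """Record one interval's count; return True if it looks anomalous."""
        if len(self.history) >= 3:           # need some history to judge
            avg = sum(self.history) / len(self.history)
            anomalous = count > self.factor * max(avg, 1.0)
        else:
            anomalous = False
        self.history.append(count)
        return anomalous
```

Many such sensors, placed throughout the network rather than only at the perimeter, are what turn the network itself into an enforcement mechanism.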
Place defenses on the endpoints.
Mobile endpoints presumably have to protect themselves, and other endpoints may reasonably be expected to protect themselves regardless of what the rest of the environment does or does not do. This typically involves antivirus and antispyware on end devices that are susceptible, anti-spam in email clients, and other similar controls as identified herein depending on the specific requirements.