In most cases, integrity matters most to the utility of content: even if content is available, kept confidential, properly audited, and under use control, it has little utility if it is wrong, and if it is wrong in specific ways it can be very harmful. Integrity is often broken down into integrity of the source, protection from inappropriate or unauthorized changes to the content, and assurance that the content accurately reflects reality for the purpose at hand. Source integrity expresses the association of the reliability of content with its source and is one approach to assuring the correspondence of content with reality. Many cryptographic technologies support integrity in the sense of freedom from unauthorized change and attribution to source; however, cryptography has serious limitations in integrity protection. (5.6) Change control is a vital component of an effective integrity control scheme because it provides redundancy-based controls over changes, verifying that they are reasonable, appropriate to the need, and operate correctly in the environment before they are deployed. Changes can also have recursive, complex, and indirect effects that lead to unintended consequences. For example, computer viruses use changes in software to spread transitively from program to program. (5.5) This is an unintended but predictable consequence of combining general-purpose function with transitive information flow and sharing.
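The cryptographic side of integrity can be sketched with a keyed message authentication code, which detects unauthorized change and attributes content to the holder of the key; per the limitation noted above, it cannot tell whether the content accurately reflects reality. A minimal sketch, with an illustrative key and message:

```python
import hashlib
import hmac

# Illustrative key only; real keys are provisioned and protected separately.
SOURCE_KEY = b"key-known-only-to-the-source"

def tag_content(content: bytes) -> str:
    """Bind content to the keyed source with an HMAC-SHA256 tag."""
    return hmac.new(SOURCE_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Any unauthorized change to the content invalidates the tag."""
    return hmac.compare_digest(tag_content(content), tag)

original = b"wire $100 to account 42"
tag = tag_content(original)
assert verify_content(original, tag)                      # unchanged: verifies
assert not verify_content(b"wire $900 to account 42", tag)  # change detected
```

Note that a valid tag only proves the content is unchanged since it was tagged by the key holder; it says nothing about whether the tagged content was right in the first place.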
If information is not available in a timely fashion, its utility decreases but may not completely disappear. Availability is typically measured with mathematical formulas for the availability and reliability of the function when needed, usually expressed as a percentage of down time per unit time. For example, hours of system outage per year is used for some systems. It is sometimes normalized for utility in the enterprise, such as user outage hours per month. The down-time fraction can also be calculated from mean time to failure (MTTF) and mean time to repair (MTTR) as MTTR/(MTTF+MTTR); equivalently, availability is MTTF/(MTTF+MTTR). Assuming that everything is properly accounted for, these are after-the-fact measurements, and they are less useful for prediction, which is critical for design.
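These formulas are straightforward to compute. A small sketch, using illustrative MTTF and MTTR figures rather than data from any real system:

```python
def unavailability(mttf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is down: MTTR / (MTTF + MTTR)."""
    return mttr_hours / (mttf_hours + mttr_hours)

def outage_hours_per_year(mttf_hours: float, mttr_hours: float) -> float:
    """Normalize the down-time fraction to hours of outage per year."""
    return unavailability(mttf_hours, mttr_hours) * 365 * 24

# Illustrative figures: a failure every 1000 hours, 2 hours to repair.
u = unavailability(1000.0, 2.0)
print(f"unavailability: {u:.5f}")   # ~0.00200
print(f"availability:   {1 - u:.5f}")
print(f"outage h/yr:    {outage_hours_per_year(1000.0, 2.0):.1f}")  # ~17.5
```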
If confidentiality is lost, some content may become useless or even dangerous, but this is rare. In most cases the consequences are limited to potential liability. When classified information, trade secrets, or similar content is involved, consequences are higher. Confidentiality is usually controlled based on the clearance of the identity, certainty of the authentication of that identity, classification of the content, and need for the authorized purpose. The means of creating and operating this basis is often more easily attacked than the real-time protection in an operating system or application. Information flow controls are the only really effective way to limit the movement of information from place to place. All other techniques are leaky in one way or another and most can be defeated to great effect by any reasonably astute attacker. These controls are implemented at routers through network separation technologies (e.g., VLANs with quality of service controls to eliminate covert channels), in computer systems through access controls, in physical technologies by separation of systems and networks by distance and with shielding, and in applications through application-level access control.
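The decision basis described above can be sketched as a check combining clearance, certainty of authentication, classification, and need for the authorized purpose. The level names, strength scores, and identifiers below are illustrative assumptions, not a standard scheme:

```python
from dataclasses import dataclass

# Illustrative ordering of classification levels.
LEVELS = {"public": 0, "internal": 1, "secret": 2, "top-secret": 3}

@dataclass
class Subject:
    clearance: str
    auth_certainty: int   # e.g. 1 = password only, 2 = password + token
    needs: set            # purposes this subject is authorized for

@dataclass
class Content:
    classification: str
    min_auth: int         # authentication strength required at this level
    purpose: str

def may_read(subject: Subject, content: Content) -> bool:
    """Grant access only when clearance, authentication, and need all suffice."""
    return (LEVELS[subject.clearance] >= LEVELS[content.classification]
            and subject.auth_certainty >= content.min_auth
            and content.purpose in subject.needs)

analyst = Subject("secret", auth_certainty=2, needs={"fraud-review"})
report = Content("secret", min_auth=2, purpose="fraud-review")
assert may_read(analyst, report)
assert not may_read(Subject("internal", 2, {"fraud-review"}), report)
```

As the text notes, the machinery that creates and operates this basis (clearances, classifications, identity records) is often easier to attack than the runtime check itself.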
Use control (U)
If use control is lost, either content is not usable by those who are supposed to be able to use it, which corresponds to a loss of availability, or content is usable by those who should not be able to use it. This can lead to loss of integrity, availability, or confidentiality, depending on the specifics of the uses permitted. Use control generally associates authentication requirements with identified parties for authorized uses. The basic notion underlying use control is that identified individuals, or systems acting on their behalf, are granted appropriate use based on their identity and the demonstrated extent of authenticity of that identity. If the current level of authentication is inadequate to the need, additional authentication is required to meet the level required for the use. Use may also be disabled via fail-safe mechanisms if warranted, for example by disabling system use for a period of time.
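The step-up pattern in the paragraph above, requiring additional authentication when the current level is inadequate, can be sketched as follows; the factor names and strength scores are invented for illustration:

```python
# Illustrative mapping of authentication factors to a strength score.
FACTOR_STRENGTH = {"password": 1, "otp": 2, "hardware_token": 3}

def current_level(factors_presented: list) -> int:
    """Strength is the sum of the distinct factors the party has presented."""
    return sum(FACTOR_STRENGTH[f] for f in set(factors_presented))

def authorize_use(factors_presented: list, required_level: int) -> str:
    """Permit the use, or report that additional authentication is needed."""
    level = current_level(factors_presented)
    if level >= required_level:
        return "permitted"
    return f"step-up required: level {level} < {required_level}"

assert authorize_use(["password"], 1) == "permitted"
assert authorize_use(["password"], 3).startswith("step-up required")
assert authorize_use(["password", "otp"], 3) == "permitted"
```

A fail-safe lockout, as mentioned above, would be layered on top of this check, refusing all use for a period regardless of the factors presented.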
Loss of accountability reduces the certainty with which proper operation can be verified either now or in the future. Accountability is often considered in terms of attribution of actions to actors, the accurate identification and recording of the situation, and the association of the activity with the actor in the situation.
Loss of transparency reduces the trust others are likely to place in processes and the results they produce. Transparency is often considered in terms of openness about process, implementation, and history, allowing the truth of what happened, by whom, when, where, how, and why to be revealed, and allowing others to make their own judgments rather than trusting yours.
Loss of custody implies loss of control: an inability to verify that what is being presented is what it purports to be and nothing else, and that others have not had access to or tampered with it. More generally, it undermines the ability to be certain. Custody is often considered in terms of source, chain of events and possessions, and status over time.
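One way to make a chain of events and possessions verifiable is a hash-linked record of custody events, where each record binds to the digest of its predecessor so that altering any past record breaks every later link. A sketch, with hypothetical event fields:

```python
import hashlib
import json

def chain_append(chain: list, event: dict) -> None:
    """Append a custody event linked to the digest of the previous record."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"event": event, "prev": prev, "digest": digest})

def chain_intact(chain: list) -> bool:
    """Re-derive every digest; any alteration of history breaks the links."""
    prev = "0" * 64
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if rec["digest"] != expected or rec["prev"] != prev:
            return False
        prev = rec["digest"]
    return True

chain = []
chain_append(chain, {"who": "alice", "what": "collected", "when": "2024-01-01T09:00"})
chain_append(chain, {"who": "bob", "what": "received", "when": "2024-01-01T12:00"})
assert chain_intact(chain)
chain[0]["event"]["who"] = "mallory"   # tamper with history
assert not chain_intact(chain)
```

By itself this gives tamper evidence, not tamper prevention; establishing who may append, and anchoring the chain's head somewhere trustworthy, are separate custody problems.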
Associated with individual content at maximum granularity.
Control objectives can be associated at maximum granularity, for example, by granting bank customers access to only their own account information. However, maintaining this across the enterprise for all content is potentially very costly and complex to manage.
Associated with groups of content such as databases, files, or directories.
This approach is typically taken for systems management purposes and is aligned with how computers work, as opposed to how people or businesses work.
Associated with applications.
This approach associates controls with applications, components of which may reside across locations, entities, infrastructures, systems, databases, and anything else. Applications tend to be logical groupings based on business functions.
Associated with systems.
This approach breaks down decisions by associating systems with content. It is particularly effective when a single system, or virtual system, is used for a certain class of content.
Associated with network zones and subzones.
This approach uses the zoning architecture to differentiate controls to provide common protective mechanisms associated with assuring the desired properties. It gains an economy of scale through commonality of mechanism.
Associated with business areas or customer sets.
This approach aligns protection objectives with businesses or customers. Alignment with businesses sometimes makes sense at a gross level, but usually only when a business is highly specialized and in a tight niche with common protection objectives for all relevant content. Customer sets are similar in that certain classes of customers may have very similar needs, which allows tight alignment of protection objectives.
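The trade-off among these granularities can be illustrated with a control lookup that resolves the most specific association first and falls back to coarser ones; the policy table, identifiers, and fallback order below are invented for illustration:

```python
# Illustrative policy table keyed by (granularity, identifier).
POLICIES = {
    ("content", "acct-1001"): {"read": {"customer-1001"}},      # per-content
    ("system", "core-bank"):  {"read": {"ops", "audit"}},       # per-system
    ("zone", "internal"):     {"read": {"employees"}},          # per-zone
}

def applicable_policy(content_id: str, system: str, zone: str) -> dict:
    """Resolve the finest-grained policy that exists, falling back to coarser ones."""
    for key in [("content", content_id), ("system", system), ("zone", zone)]:
        if key in POLICIES:
            return POLICIES[key]
    return {}  # default deny: no association found at any granularity

# A specifically controlled account beats the system- and zone-level defaults.
assert applicable_policy("acct-1001", "core-bank", "internal") == {"read": {"customer-1001"}}
# Content without its own entry inherits the system-level association.
assert applicable_policy("acct-9999", "core-bank", "internal") == {"read": {"ops", "audit"}}
# Content on an unlisted system falls back to the zone-level association.
assert applicable_policy("acct-9999", "hr-portal", "internal") == {"read": {"employees"}}
```

The cost and complexity trade-off is visible here: every per-content entry is one more row to manage, while coarser associations cover many items with a single rule at the price of less precise control.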