Sat Aug 30 13:02:59 PDT 2014

Human factors: User decision-making: What decisions do users make and how do they make them?


Options:

Real-time decisions:


Decision quantity and presentation:


Decision dimensions: [Objective / Subjective], [Quantitative / Qualitative], [Nominal / Ordinal / Interval / Ratio], [Hierarchical / Flat], [Simple / Complex], [Explanatory / Predictive], [Group / Individual], [Casual (ad-hoc) / Formal (well-defined)], [Optimizing / Satisficing / Incremental / Cybernetic / Random], [Amplitude / Architecture], [Designed / Programmed / Ad-hoc], [Personal / Work | Individual / Group / Enterprise] process decisions are made using [Text / Visualization] for [Strategic / Tactical] purposes driven by [Communications / Data / Models / Knowledge / User]-based [Externally / Internally] defined evaluation criteria at [Beyond human speed / Reflex speed / Trained response speed / Cognitive resonance speed / Consideration speed / Group consideration speed / Analysis speed / Too long for typical human decision processes] tempo by decision-makers with [Naive / Novice / Professional / Expert / Group] expertise in a [Static / Dynamic] decision space with [Stand-alone / Strictly competitive / Strictly cooperative / Mixed competitive, cooperative] objectives.


Basis:

Any useful user input has consequences. If and to the extent those consequences affect protection objectives, that input should be considered for control in the protection program. Decisions made by users in near-real-time are often protection-related, made quickly, and made often. Such decisions may have long-term, widespread effects on protection and are thus prone to failure with substantial negative consequences.

Generally, decision-making can be categorized in terms of the following dimensions:

This is codified in the following sentence structure, with details available through the JDM section: [Objective / Subjective], [Quantitative / Qualitative], [Nominal / Ordinal / Interval / Ratio], [Hierarchical / Flat], [Simple / Complex], [Explanatory / Predictive], [Group / Individual], [Casual (ad-hoc) / Formal (well-defined)], [Optimizing / Satisficing / Incremental / Cybernetic / Random], [Amplitude / Architecture], [Designed / Programmed / Ad-hoc], [Personal / Work | Individual / Group / Enterprise] process decisions are made using [Text / Visualization] for [Strategic / Tactical] purposes driven by [Communications / Data / Models / Knowledge / User]-based [Externally / Internally] defined evaluation criteria at [Beyond human speed / Reflex speed / Trained response speed / Cognitive resonance speed / Consideration speed / Group consideration speed / Analysis speed / Too long for typical human decision processes] tempo by decision-makers with [Naive / Novice / Professional / Expert / Group] expertise in a [Static / Dynamic] decision space with [Stand-alone / Strictly competitive / Strictly cooperative / Mixed competitive, cooperative] objectives.
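A few of the dimensions above can be sketched as Python enums to show how a given decision might be classified; the enum names and the sample classification are illustrative only, and a complete taxonomy would cover every bracketed dimension.

```python
# Illustrative (not authoritative) encoding of two of the dimensions
# above; a real JDM taxonomy would enumerate all of them.

from enum import Enum

class Tempo(Enum):
    BEYOND_HUMAN = "beyond human speed"
    REFLEX = "reflex speed"
    TRAINED = "trained response speed"
    CONSIDERATION = "consideration speed"
    ANALYSIS = "analysis speed"

class Expertise(Enum):
    NAIVE = "naive"
    NOVICE = "novice"
    PROFESSIONAL = "professional"
    EXPERT = "expert"
    GROUP = "group"

# A typical real-time "save this risky file?" prompt might classify as:
file_save_prompt = {"tempo": Tempo.TRAINED, "expertise": Expertise.NAIVE}
```

Classifying each user-facing decision along such dimensions makes it possible to check, mechanically, whether the decision fits the tempo and expertise actually available to the user.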

Human decision-making limits some of these dimensions. For protection-related decisions, most users typically have less than a second to make a real-time decision. For example: should I save this potentially risky file, should I run this potentially dangerous program, should I authorize this potentially hazardous action, and so forth. Even when people have more time, they typically want to make such decisions quickly to avoid interrupting their thought and work rhythm. As a result, the decisions given to users on a real-time basis tend to be tightly constrained along the dimensions above.

Because these are the sorts of decisions that most users can make in real-time, the decisions presented to those users should be suited to these limitations, or the limitations themselves should be changed through user education, training, and technical means.

Unfortunately, most existing software has not been designed with this in mind. As a result, a tradeoff between cost, complexity, customization, and other related factors has to be considered in determining when it is worthwhile to force this level of care in the design of the user interface.


Users make mistakes, and the more decisions they have to make, the more mistakes they are likely to make. Thus it is beneficial to reduce the number and complexity of decisions and to make the symbology and mechanisms of decisions uniform along the dimensions relevant to the user and the organization.

User decisions are minimized: Generally, this approach determines when users really need to make decisions and, when they do not, decisions are not presented. For example, an enterprise may have a policy against downloading PDF files, which eliminates the need to prompt for a decision about whether to interpret such a file when it does not come from a trusted source. The higher the consequence, the more care should be taken in deciding whether to let users decide, and under what conditions, with what information, and on what basis.
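The PDF example above can be sketched as a policy lookup that suppresses the prompt whenever policy already settles the outcome; the names (`ENTERPRISE_POLICY`, `decide`) are hypothetical, not from any real product.

```python
# Hypothetical sketch: only present a prompt when enterprise policy
# does not already decide the outcome for the user.

ENTERPRISE_POLICY = {
    # action -> forced outcome; absent means the user must be asked
    "download_pdf_untrusted": "deny",
    "run_signed_program": "allow",
}

def decide(action):
    """Return ("policy", outcome) when policy settles the question,
    or ("prompt", None) when the user genuinely needs to decide."""
    if action in ENTERPRISE_POLICY:
        return ("policy", ENTERPRISE_POLICY[action])  # no prompt shown
    return ("prompt", None)                           # real user decision
```

Every action routed through such a check is one fewer real-time decision the user can get wrong.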

Fail-safe defaults are uniformly displayed for use: For each decision with enough aggregated potential negative consequence to justify the effort, condition-based fail-safes are defined, and the interface presents the fail-safe as the default in a manner the user can universally recognize and most readily apply. Less safe alternatives are thus readily recognized and only taken if the user takes explicit action to override the fail-safe. In high-consequence situations, justification for non-default decisions might also be required, and additional safeguards such as submit-commit cycles applied when appropriate.
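One way to make that presentation uniform is to build every decision dialog from the same structure, with the fail-safe always first and preselected; this is a minimal sketch under assumed names (`present_decision` and its fields are illustrative).

```python
# Hypothetical uniform dialog structure: the fail-safe option always
# appears first and is preselected; high-consequence overrides demand
# a justification and a separate submit-commit confirmation step.

def present_decision(options, fail_safe, high_consequence=False):
    # Put the fail-safe first so its position is identical in every dialog.
    ordered = [fail_safe] + [o for o in options if o != fail_safe]
    return {
        "options": ordered,
        "default": fail_safe,                     # preselected safe choice
        "needs_justification": high_consequence,  # explain any override
        "submit_commit": high_consequence,        # two-step confirmation
    }
```

Because the safe choice always occupies the same position, overriding it requires a visibly different action, which is the property the paragraph above calls for.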

User-settable defaults are used: User-settable defaults provide the means for the user to make a decision once and have it uniformly applied unless and until changed. While it would be helpful for all such situations to provide such settings, many systems don't support this approach for many decisions. Such a methodology also implies a way to examine and alter such settings. Such defaults should be set to either present or not present the decision to the user.
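The "decide once" mechanism, including the implied ability to examine and alter settings, can be sketched as a small store of remembered choices; the class and method names here are illustrative.

```python
# Hypothetical "decide once" store: a remembered user default answers a
# recurring decision silently until the user changes or clears it.

class UserDefaults:
    def __init__(self):
        self._choices = {}

    def remember(self, decision, choice):
        self._choices[decision] = choice    # "remember my answer"

    def forget(self, decision):
        self._choices.pop(decision, None)   # prompt again next time

    def resolve(self, decision):
        """Return the remembered choice, or None if the user must be asked."""
        return self._choices.get(decision)
```

The `forget` and `resolve` pair is what makes the settings examinable and alterable rather than a one-way trap.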

Enterprise-set defaults are used: Enterprise-set defaults provide the means to eliminate decisions or force defaults for decisions at a level the user does not control. This is often appropriate when the enterprise sets technical security policy that the user is required to follow. It has all the potential advantages of user-set defaults at a higher volume. Such defaults should be set to either present or not present the decision to the user, depending on whether the user has any alternatives by technical protection policy.
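The relationship between the two default layers can be sketched as a precedence rule: enterprise-set defaults override user-set ones, and a prompt appears only when neither layer decides. The function name and dictionary shapes are assumptions for illustration.

```python
# Hypothetical precedence sketch: enterprise policy outranks user
# preference; the user sees a prompt only when technical policy
# leaves the decision open.

def resolve(decision, enterprise, user):
    if decision in enterprise:
        return enterprise[decision]  # policy-forced: user cannot override
    if decision in user:
        return user[decision]        # user's own standing default
    return "prompt"                  # no default at either level: ask
```

This matches the text's requirement that the decision not be presented at all when technical protection policy leaves the user no alternative.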

Uniformity of display based on clarity of potential effect is used: This approach creates a set of symbols, presentation methods, ordering, coloring, and related interface characteristics to provide uniform display of decisions to the user. In essence, for rapid decision-making, it leads to the ability to make reliable trained response speed decisions at the level of risk tolerance desired, with the option to make consideration speed decisions if desired.
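A uniform symbology can be sketched as a single fixed mapping from consequence level to color, symbol, and prominence; the specific levels and mappings below are illustrative, not a recommended palette.

```python
# Hypothetical uniform symbology: each consequence level has exactly one
# color, symbol, and rank, so trained users can respond at trained
# response speed without re-reading each dialog.

RISK_DISPLAY = {
    "safe":    {"color": "green",  "symbol": "OK",   "rank": 0},
    "caution": {"color": "yellow", "symbol": "WARN", "rank": 1},
    "danger":  {"color": "red",    "symbol": "STOP", "rank": 2},
}

def render_order(decisions):
    """Sort pending decisions so the riskiest appear first (most prominent)."""
    return sorted(decisions, key=lambda d: -RISK_DISPLAY[d["risk"]]["rank"])
```

Because the mapping is global rather than per-application, a red STOP means the same thing everywhere, which is what enables reliable trained-response-speed decisions while leaving consideration-speed review available.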

All of these methods require some level of user knowledge, awareness, training, and practice to do well. For the most part, this effort is necessary for better user-level real-time decision-making; it is less onerous than the effort required when these methods are not used, and it is more likely to yield reliable protection decisions.

Copyright(c) Fred Cohen, 1988-2012 - All Rights Reserved