How do security decisions go wrong?

Information warfare is primarily a construct of a ‘war mindset’. However, the development of information operations from it has meant that these concepts have been transferred from military to civilian affairs. The contemporary entanglement of the media and the military in the ‘War on Terrorism’ has made the distinction between war and peace difficult to draw. Below, the application of deception is described in the military context, but it must be added that the dividing line is blurred.

The correct control of security often depends on decisions made under uncertainty. By using quantified information about risk, one may hope to achieve more precise control through better decisions.

Security is both a normative and a descriptive problem. We would like to understand normatively how to make correct decisions about security, but also to understand descriptively where security decisions may go wrong. According to Schneier, security risk is both a subjective feeling and an objective reality, and sometimes those two views differ so that we fail to act correctly. Assuming that people act on perceived rather than actual risks, we will sometimes do things we should avoid, and sometimes fail to act as we should. In security, people may both feel secure when they are not, and feel insecure when they are actually secure. Given the recent attempts to quantify security properties, also known as security metrics, I am interested in how to achieve correct metrics that can help a decision-maker control security. But would successful quantification be the end of the story?

The aim of this note is to explore the potential difference between correct and actual security decisions when people are supposed to decide and act based on quantified information about risky options. If there is a gap between correct and actual decisions, how can we begin to model and characterize it? How large is it, and where might someone exploit it? What can be done to close it? As a specific example, this note considers the impact of using risk as a security metric for decision-making in security. The motivation for using risk is two-fold. First, risk is a well-established concept that has been applied in numerous ways to understand information security and is often assumed to be a good metric. Second, I believe that it is currently the only well-developed candidate that aims to capture two aspects necessary for the control of operational security: asset value and threat uncertainty. Good information security is often seen as risk management, which depends on methods to assess those risks correctly. However, this work examines potential threats and shortcomings concerning the usability of correctly quantified risk for security decisions.

I consider a system that a decision-maker needs to protect in an environment with uncertain threats. Furthermore, I assume that the decision-maker wants to maximize some kind of security utility (the utility of the security controls available) when deciding between different security controls. These parts of the model vary greatly between scenarios, and little can be done to model detailed security decisions in general. Still, I think this is an appropriate framework for understanding the need for security metrics. One way, perhaps the standard way, to view security as a decision problem is that threats arise in the system and environment, and that the decision-maker needs to address those threats with the available information, using some appropriate cost-benefit tradeoff. However, this common view overlooks threats arising from faults made by the decision-maker herself. I believe that many security failures should be seen in the light of the limits (or potential faults) of the decision-maker when she, with the best intentions, attempts to achieve security goals (maximizing security utility) by deciding between different security options.

I loosely think of correct decisions as maximization of utility, in a way to be specified later.
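
To make this criterion concrete, here is a minimal sketch in Python of choosing among security controls by maximizing expected utility; the control names, probabilities, and losses are all invented for illustration:

```python
# Minimal sketch: choosing a security control by maximizing expected utility.
# Control names, probabilities, and losses are invented for illustration.

controls = {
    "do_nothing":    {"cost": 0,      "outcomes": [(0.10, 100_000), (0.90, 0)]},
    "firewall":      {"cost": 5_000,  "outcomes": [(0.02, 100_000), (0.98, 0)]},
    "full_redesign": {"cost": 40_000, "outcomes": [(0.001, 100_000), (0.999, 0)]},
}

def expected_utility(option):
    """Risk-neutral utility: the negated sum of upfront cost and expected loss."""
    expected_loss = sum(p * loss for p, loss in option["outcomes"])
    return -(option["cost"] + expected_loss)

# The normatively 'correct' decision under this model maximizes expected utility.
best = max(controls, key=lambda name: expected_utility(controls[name]))
for name, option in controls.items():
    print(f"{name:13s} EU = {expected_utility(option):>9.1f}")
print("chosen:", best)
```

Under these made-up numbers the moderate control wins: the expensive redesign removes more risk than it is worth, while doing nothing leaves too much expected loss.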

Information security is increasingly seen not only as the fulfillment of Confidentiality, Integrity, and Availability, but as protection against a number of threats by making correct economic tradeoffs. A growing body of research into the economics of information security during the last decade aims to understand security problems in terms of economic factors and incentives among agents making decisions about security, typically assumed to aim at maximizing their utility. Such analysis treats economic factors as equally important in explaining security problems as the properties inherent in the systems that are to be protected. It is thus natural to view the control of security as a sequence of decisions that have to be made as new information appears about an uncertain threat environment. Seen in this light, and given that obtaining security information usually comes at a cost, I think that any usage of security metrics must be related to allowing more rational decisions with respect to security. It is in this way that I consider security metrics and decisions in the following.

The basic way to understand any decision-making situation is to consider what kind of information the decision-maker has available to form the basis of judgments. For people, both the available information and, potentially, the way in which it is framed (presented) may affect how well decisions serve their goals. One common requirement on security metrics is that they should be able to guide decisions and actions toward security goals. However, it is an open question how to make a security metric usable, and ensuring that such usage will be correct (with respect to achieving goals) comes with challenges. The idea of using quantified risk as a metric for decisions can be split into two steps. First, perform objective risk analysis, assessing both system vulnerabilities and available threats, in order to measure security risk. Second, present these results in a usable way so that the decision-maker can make correct and rational decisions.
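
As a sketch of the first step, one common textbook quantification is the annualized loss expectancy (ALE), the expected yearly loss from a given threat; the asset value and breach rate below are invented:

```python
# Sketch of the first step: quantifying risk as annualized loss expectancy (ALE),
# a common textbook measure. All numbers below are invented.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# Hypothetical example: a database worth 200,000, a breach destroying 30% of
# its value, and such breaches expected roughly once every four years.
ale = annualized_loss_expectancy(asset_value=200_000,
                                 exposure_factor=0.3,
                                 annual_rate=0.25)
print(f"quantified risk (ALE): {ale:.0f} per year")  # 15000
```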

While both of these steps present considerable challenges to using good security metrics, I consider why decisions using quantified security risk as a metric may go wrong in the second step. Lacking information about the security properties of a system clearly limits security decisions, but I fear that introducing metrics does not necessarily improve them; this may be because 1) the information is incorrect or imprecise, or 2) its usage will be incorrect. This work takes the second view, and I argue that even with perfect risk assessment, it is not obvious that security decisions will always improve. I am thus seeking properties in risky decision problems that predict whether the overall goal, maximizing utility, will be fulfilled. More specifically, I need to find properties in quantifications that may put decision-making at risk of going wrong.

The way to understand where security decisions go wrong is to model how people are predicted to act on perceived rather than actual risk. I thus need both normative and descriptive models of decision-making under risk. For normative decisions, I use the well-established economic principle of maximizing expected utility. For the descriptive part, I note that faults in risky decisions not only happen in various situations, but have remarkably been shown to happen systematically, as described by models from behavioral economics.
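
To make the descriptive side concrete, one standard model is the probability weighting function from Tversky and Kahneman's cumulative prospect theory (1992), under which people systematically overweight small probabilities and underweight large ones. A minimal sketch, using their published parameter estimate:

```python
# Sketch: perceived vs. stated probability under the Tversky-Kahneman (1992)
# probability weighting function from cumulative prospect theory.

def weight(p, gamma=0.61):
    """w(p) = p^g / (p^g + (1-p)^g)^(1/g); gamma=0.61 is their estimate for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.10, 0.50, 0.90, 0.99):
    print(f"stated p = {p:.2f}  perceived w(p) = {weight(p):.3f}")

# Small probabilities come out overweighted (w(0.01) ≈ 0.055) and large ones
# underweighted (w(0.99) ≈ 0.91): the systematic deviation that can make
# decisions on quantified risk diverge from the expected-utility optimum.
```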

I have considered what happens when quantified risk is used by people making security decisions. An exploration of the parameter space in two simple problems showed that results from behavioral economics may have an impact on the usability of quantitative risk methods. The visualized results do not lend themselves to easy and intuitive explanations, but I view them as a first systematic step towards understanding security problems with quantitative information.

There have been many proposals to quantify risk for information security, mostly in order to allow better security decisions. But blind belief in quantification itself seems unwise, even if it is done correctly. Behavioral economics shows systematic deviations in how people weight probabilities when acting on explicit risk. This is likely to threaten security and its goals, as security is increasingly seen as the management of economic trade-offs. I think that these findings can be used to partially predict or understand wrong security decisions that depend on risk information. Furthermore, this motivates the study of how strategic agents may manipulate, or attack, the perception of a risky decision.
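
As a hypothetical illustration of such a prediction, the following sketch contrasts a risk-neutral expected-value ranking of two invented security options with their prospect-theory valuation; overweighting a small loss probability is enough to reverse the preference:

```python
# Sketch: a predicted preference reversal on invented numbers. Option A is a
# sure cost of 50 (e.g., buying a control); option B accepts a 5% chance of
# losing 900. Parameters are the Tversky-Kahneman (1992) estimates for losses.

def w(p, delta=0.69):
    """Probability weighting for losses."""
    return p**delta / (p**delta + (1 - p)**delta) ** (1 / delta)

def v(x, beta=0.88, lam=2.25):
    """Prospect-theory value for losses (steeper than for gains)."""
    return -lam * (-x)**beta

# Normative, risk-neutral expected value: accepting the risk (B) is better.
print("EV  A:", -50.0, " B:", 0.05 * -900)           # -50 vs -45
# Descriptive, prospect theory: the overweighted 5% makes B look worse.
print("PT  A:", round(v(-50), 1), " B:", round(w(0.05) * v(-900), 1))  # ~ -70 vs ~ -100
```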

Even though any descriptive model of human decision-making is approximate at best, I still believe this work gives a well-articulated argument regarding the threats of using explicit risk as a security metric. My approach may also be understood in terms of standard system specification and threat models: economic rationality is in this case the specification, and the threat consists of biased responses to risk information. I also studied reframing as a way of correcting the problem in two simple security decision scenarios, but found only partial predictive support for fixing problems this way. Furthermore, I have not found comparable numerical examinations in the behavioral economics literature to date.
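
The reframing idea can be illustrated with prospect theory's value function, which is concave for gains but convex and steeper for losses (loss aversion), so the same outcome can be perceived very differently depending on the reference point. A sketch with an invented scenario and the standard 1992 parameter estimates:

```python
# Sketch: the same security outcome under two frames. A control saves 200 of
# 600 at-risk records for sure; framed either as "200 records saved" (gain
# frame) or as "400 records lost" (loss frame, reference point = all saved).
# Scenario and numbers are invented; parameters are the 1992 estimates.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper (loss-averse) for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

print("gain frame, v(+200):", round(value(200), 1))   # ~  106
print("loss frame, v(-400):", round(value(-400), 1))  # ~ -438

# The identical outcome feels acceptable in one frame and dreadful in the
# other, which is why reframing may (partially) steer risky security choices.
```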

Further work on this topic needs to empirically confirm or reject these predictions in a security context, and to study to what degree they occur (even though previous work makes the hypothesis plausible at least to some degree). Furthermore, I think that similar issues may arise with other forms of quantified information for security decisions.

These questions may also be extended to consider several self-interested parties in game-theoretic situations. Another topic is the use of different utility functions, including cases where it may be normative to be economically risk-averse rather than risk-neutral. With respect to the problems outlined, rational decision-making is a natural way to understand and motivate the control of security and the requirements on security metrics. But when selecting the format of information, the problem is also partly one of usability. Usability faults often turn into security problems, and this is likely to hold for quantified risk as well. In the end, the challenge is to provide users with usable security information and, more broadly, to investigate what kind of support is required for decisions. This is clearly a topic for further research, since introducing quantified risk is not without problems. Using knowledge from economics and psychology seems necessary to understand the correct control of security.

Sajad Abedi
National Security and Defense Think Tank