ISSN: 2332-0761
Research Article - (2016) Volume 4, Issue 1
Policy design takes place in an environment of extreme uncertainty and complexity. This paper addresses how we integrate human decision-making capacity with the realities of complex systems in order to yield positive outcomes and avoid decision paralysis. It begins with a discussion of naturalistic, or expert, decision making and describes biases that can be introduced into the process through the use of heuristics, biases which can result in suboptimal choices. From there, a high-level overview of complexity science is presented as an alternative paradigm to help explain why heuristics can be a double-edged sword in decision making in a complex implementation environment. Finally, a new tool is described that may be useful in reviewing policy in the context of a vague and uncertain political and societal environment.
Keywords: Policy design and implementation, Decision making, Complexity science, Agent-based modeling
Policy makers from time immemorial have faced the issue of complexity. When Clausewitz [1] wrote about friction, he was describing the very essence of complexity. (As a side note, although the term "fog of war" is frequently attributed to Clausewitz, he never uses the phrase [2].) A critical problem faced by policy makers in the derivation of a policy initiative is that the world does not behave in a Newtonian fashion; understanding how the individual parts work does not necessarily mean understanding the operation of the whole. Traditional top-down theories of societal behavior, such as Realism or Neo-liberalism, have a tendency to view political interests as disconnected from political structure instead of being generated from it, which can cause historical discontinuities to be dismissed as outlier phenomena [3,4]. "Confidence in the decomposability of social systems is both implicit in those theories and usually shaken by observation" [5]. The issue is not whether the world is becoming more complex; as societies advance they become more intricate, interconnected, and therefore complex by definition. The problem confronting policy makers, in the words of H. L. Mencken, is that "there is always a well-known solution to every human problem: neat, plausible, and wrong" [6].

Nassim Taleb [7] describes the problem as "causal opacity" because the linkage between cause and effect is difficult or impossible to see. When pundits argue that the world has always been complex, the complexity they are describing is causal opacity. Even as our tools for determining cause and effect have improved, the increasing interconnection of systems (human-built networks) has decreased the traceability of cause to effect, stymying efforts to determine the linkages. People have a difficult time making trade-offs when choosing between outcomes, particularly when the outcomes are not well specified or known and the decision environment is complex [8]. In these types of circumstances, heuristics have typically been more effective than linear, statistical models for decision makers [9,10]. Taleb [11] has written extensively about the limits of statistical models. In his writings, he frequently refers to "the fourth quadrant" as the area in which statistical decision-making models fail. The primary cause for this failure, or "fragility" as he refers to it, is our inability to calculate the consequential risks of rare events [11]. Stated another way, he defines this inability as a framing problem that causes people to take risks they do not understand and would not take if they did.
The concept of the fourth quadrant needs the perspective of the other three to give it meaning. In the first quadrant, risk has a Gaussian (bell-shaped) distribution. The payoffs are simple win/lose and there are no outliers. The first quadrant is the world of casino games. The second quadrant also has simple win/lose payoffs, but the distribution is 'fat-tailed' or indeterminate. With this type of distribution, risks are poorly understood because an outlier event can cause a severe impact. If you cannot swim, you should not try to walk across a river that is an average of two feet deep: the unexpected ten-foot pool will severely impact your journey. Quadrant three returns to the normal distribution, but now the payoffs are complex. The outcomes conform to traditional statistical models and can be predicted with a high degree of certainty. The complex system known as the modern automobile is an example of quadrant-three risk; there are many interdependent parts, but the failure rates are well known and predictable. While failures can be catastrophic, building in redundancies limits exposure and mitigates systemic risk. The world behaves in a mechanistic and predictable fashion.

When we move into the world of social policy, we enter the fourth quadrant. Extreme events are rare but their impacts are immense. The risks associated with extreme events generally go unseen, or if they are seen, are ignored because they are unlikely. Taleb uses the story of the turkey and the butcher to illustrate the point. A turkey is fed and pampered by the butcher all his life. From the turkey's point of view, the butcher treats turkeys very well and the future looks bright, with a long life expectancy. One catastrophic event, Thanksgiving, changes the turkey's outlook drastically. These types of events can happen in social systems as well; the events surrounding the financial crisis of 2008 are a case in point. The degree of leverage and interconnectedness of the financial system hid the risks that were accumulating. When Lehman failed, the financial system faced collapse absent massive intervention by the U.S. government. Such are the dangers that lurk in the fourth quadrant [7,12]. The issue at hand is how best to deal with complexity in the formulation of policy; restated, how do we blend human decision-making capacity with the realities of complexity to yield positive outcomes and avoid decision paralysis? The answer begins with understanding and acknowledging human limitations.
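The quadrant distinction can be made concrete with a minimal simulation (an illustration added here, not drawn from Taleb's own materials; all parameters are arbitrary). In a thin-tailed, quadrant-one world the sample mean settles quickly and the worst single event is a negligible share of the total; in a fat-tailed, quadrant-four world a single outlier, the turkey's Thanksgiving, can dominate everything observed so far.

```python
# A minimal sketch (illustrative, not from the paper) contrasting
# thin-tailed and fat-tailed outcome distributions.
import random

random.seed(42)

def gaussian_outcomes(n):
    # Thin tails (quadrants one/three): no single draw dominates.
    return [random.gauss(1.0, 1.0) for _ in range(n)]

def pareto_outcomes(n, alpha=1.1):
    # Fat tails (quadrant four): with alpha near 1, rare draws dominate.
    return [random.paretovariate(alpha) for _ in range(n)]

for label, sample in (("thin-tailed", gaussian_outcomes(10_000)),
                      ("fat-tailed ", pareto_outcomes(10_000))):
    mean = sum(sample) / len(sample)
    worst = max(sample)
    print(f"{label}: sample mean = {mean:8.2f}, largest single event = "
          f"{worst:10.2f} ({worst / sum(sample):5.1%} of the total)")
```

Running the sketch repeatedly with different seeds shows the fat-tailed mean lurching around as rare extremes arrive, which is precisely why statistical decision models calibrated on past observations fail in the fourth quadrant.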
This paper begins with a discussion of naturalistic or expert decision making, describing some of the biases that can result in suboptimal choices. From there, a high-level overview of complexity science is presented as an alternative paradigm to help explain how heuristics can be a double-edged sword in decision making. Finally, a new tool is described that may be useful in reviewing strategy in the context of a vague and uncertain political and societal environment.
Naturalistic decision making in practice is an expert making sense of a problem or circumstance that confronts him or her and then applying the next logical steps to solve that problem. Efforts to improve the expert's process have looked at specific areas of the decision process in order to reduce cognitive biases. These studies have led to a better understanding of the use of 'sense-making' and risk management techniques, as well as how an individual's training, experience, and personality interact with these techniques. Experts typically acquire this expertise through a combination of formal schooling, mentoring, and on-the-job training. There is a real and distinct difference between experience and expertise. Experience can be described as having lived through an event; expertise is having learned something from the process.
Naturalistic decision making is primarily a descriptive model in that it attempts to describe the process that expert decision makers use in dealing with uncertainty, lack of information, and many conflicting alternatives [13]. Studies detail findings such as the tendency of more experienced decision makers to use forward-chained reasoning to rapidly decide upon a course of action. This process is efficient for the expert because expertise supplies a large reference set of situational templates to choose from; using inference rules (if/then statements), the expert rapidly selects the appropriate template for the situation at hand. This is in contrast to the novice's backward-chaining approach: visualizing a desired end state and then searching for data to determine a course of action. The primary difference between the two methods is that forward chaining is a data-driven process and backward chaining is goal-driven. It may seem counterintuitive, but when dealing with uncertainty, lack of information, and many conflicting alternatives, the data-driven process of the expert is more efficient. Evidence points to the conclusion that the way a person sizes up a situation is often more critical than the way a person selects between courses of action [14]. The sizing up of a situation involves the sense-making aspect of naturalistic decision making. Additional studies have approached naturalistic decision making from the theoretic standpoint that a series of disciplined questions helps experts make sense of complex and ambiguous situations. It is that sense-making process, rather than decision making, that is the critical distinction in successfully determining the optimum course of action in a complex situation [15].
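The contrast between data-driven and goal-driven inference can be made concrete with a toy rule base. The rules and facts below are hypothetical placeholders invented for this illustration; the sketch shows only the shape of the two inference styles, not any validated decision model.

```python
# Hypothetical if/then rules: (premises, conclusion).
RULES = [
    ({"crowd_growing", "supplies_low"}, "unrest_likely"),
    ({"unrest_likely"}, "reinforce_distribution_points"),
    ({"roads_blocked"}, "supplies_low"),
]

def forward_chain(facts):
    """Expert, data-driven style: fire any rule whose premises hold,
    accumulating conclusions until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Novice, goal-driven style: is the goal an observed fact, or can
    some rule whose premises are themselves provable conclude it?"""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES)

observed = {"crowd_growing", "roads_blocked"}
print(forward_chain(observed))   # derives 'reinforce_distribution_points'
print(backward_chain("reinforce_distribution_points", observed))  # True
```

Forward chaining works outward from the data toward whatever actions follow, which is why it suits the expert who already holds a rich template set; backward chaining must guess a goal first and then search for support.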
Naturalistic decision making utilizes two decision-making subtypes: system-one and system-two. The combination of system-one thinking, used to adapt the basic template of questions, with system-two thinking, used to rationally analyze and make sense of the situation, compares favorably with the concept of forward-chained reasoning by expert decision makers [14,15]. Bazerman and Moore describe system-one thinking as "fast, automatic, effortless, implicit, and emotional" [11]. System-one thinking is the way most decisions are made. In everyday life, thousands of decisions must be made. It would be physically impossible for each of these decisions to be thoughtfully reasoned through – there is neither the time nor the inclination for that to occur. That reality is what Herbert Simon described as bounded rationality: people frequently lack the resources of time, information, and processing capacity to make fully rational decisions according to the classical model. Instead they satisfice: they review alternatives until they come across one that is satisfactory for the current need and suffices to meet some acceptable level of outcome. System-one thinking utilizes intuitive thinking and heuristics (or rules of thumb). Given the time constraints imposed by the fast pace of many international events, policy makers frequently rely on system-one decision making. There is a body of theory which holds that it is critical for expert decision makers to listen to their intuition, and that a critical component of that intuition is expertise, manifested as a set of subconscious decision heuristics. The expert's knowledge is referred to as "intuition-as-expertise and the related notion of intuition as an aspect of sensemaking" [16].
System-two thinking is a slower, more thoughtful, and reasoned way to arrive at a decision. The standard model for classical decision making is a six-step process. First, the nature and scope of the problem is identified and, if need be, the problem is subdivided into sub-problems. Second, data is gathered and alternatives are identified. Third, all the alternatives are evaluated in light of the possible outcomes and unintended consequences. Fourth, the best alternative is chosen. Step five is the implementation of the decision and step six is the evaluation of the decision. By adhering to this more traditional or classical approach, decisions made using system-two thinking, while not necessarily yielding better outcomes, should be less subject to the influence of cognitive biases. The difficulty for most managers is knowing when to engage in system-two thinking versus system-one. System-two thinking requires the utilization of many scarce organizational resources – the most precious being time – to arrive at decisions. The ability to perform well at this balancing act can be critical in a leader's career.
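The six steps can be laid out schematically in code. The problem, alternatives, and scores below are hypothetical stand-ins; the sketch is meant only to make the shape of the classical process explicit, not to suggest that real policy decisions reduce to a lookup.

```python
# A schematic, runnable sketch of the six-step classical (system-two)
# process. All names and numbers are hypothetical placeholders.
def classical_decision(problem, alternatives):
    framing = f"Deciding: {problem}"              # step 1: scope the problem
    # step 2: data gathering is represented by the score supplied with
    # each alternative (net benefit minus cost, say).
    evaluated = dict(alternatives)                # step 3: evaluate each one
    best = max(evaluated, key=evaluated.get)      # step 4: choose the best
    print(framing, "->", best)                    # step 5: implement (stub)
    return best, evaluated[best]                  # step 6: evaluate the choice

options = [("option A", 40 - 25), ("option B", 10 - 0), ("option C", 25 - 12)]
classical_decision("illustrative policy choice", options)
```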
As discussed earlier, heuristics are simplifying strategies people utilize when making decisions. These simplifying strategies, or rules of thumb, help people make sense of a complex environment and are a natural coping mechanism [16]. Heuristics themselves are not inherently dangerous in decision making. The danger of heuristics lies in their misapplication and in the fact that people are frequently unaware that they are using them. Insight into the subtle nature of heuristics, and the insidious way they can begin to negatively affect decision making, comes from research showing that risk tolerance, and with it risk taking, increases across subsequent decisions. An example of this phenomenon was demonstrated by NASA when the organization's growing tolerance of foam shedding from the external tank precipitated the Columbia mishap [17]. This is the type of system-one framing bias which, once introduced, causes errors in the way a decision maker sizes up a situation. It is manifested by an increased risk tolerance, leading to warning signs of impending danger being ignored or downplayed, and to continued operation with a known defect in a hazardous operating environment [14,17,18].
Naturalistic decision making theory, however, generally sees heuristics in a positive and necessary light. Decision makers use system-one thinking to make adaptive changes to the basic template of questions and then use system-two thinking to rationally analyze and make sense of the situation. This combination of system-one and system-two thinking compares favorably with the concept of forward-chained reasoning by expert decision makers [14,15]. While heuristics are generally treated as effective cognitive tools, the theory does acknowledge issues that come with their use. One of the problematic biases is expectancy bias. This bias stems from the assumption that the future will resemble the past, and it occurs when making inductive inferences as part of the process of naturalistic decision making. A related inductive inference is that similar things will have similar traits. Assumptions such as these are the bases of generalizations, which can have a dramatic influence on inductive inferences and the accompanying system-one thinking [19]. Because such assumptions are necessary to naturalistic decision making, the biases they carry will inevitably be introduced. The critical point is that the biases be acknowledged and accounted for in the decision-making process [19-21].
In addition to the use of heuristics, naturalistic decision making relies heavily on inductive reasoning during the sense-making aspect. Mood, both good and bad, has been shown to induce biases in decision making. A study conducted by Estrada, Isen, and Young [22] demonstrated that positive affect facilitated the integration of information and the reduction of anchoring bias. Other studies have identified the negative influence that combat stress can have on the ability of decision makers to properly size up a situation, due to the introduction of biases [23-25]. An example of a negative outcome precipitated by the introduction of bias is the shooting down of Iran Air Flight 655 by the USS Vincennes in 1988. The study found that scenario fulfillment, or expectancy bias, was evident in the decision-making process that led a U.S. Navy warship to shoot down the Iranian airliner [23]. This bias was introduced into a highly trained and experienced team of decision makers. A better understanding of how biases are introduced, and of the human factors surrounding the decision process, is therefore critical both to improving naturalistic decision making and to evaluating the decisions reached.
One view of the decision-making process is that good decisions are ones made through a valid and repeatable process. The outcome or consequences of the decision are not relevant to whether the decision is a good one. The choice of whether or not to build a hurricane abatement project provides an example. If the decision to build is based upon the historic monetary realities of the probable damage faced by building versus not building and the long-term likelihood of a hurricane striking, then the decision to build or not build is a good decision, regardless of whether a hurricane strikes the area. There may be argument that more esoteric considerations are not represented in the process, which may indeed be valid, but the fact remains that the process is grounded in fiscal reality and is valid. The problem with this view is that loss has a much larger impact on the human psyche than does gain [13]. While not funding the hurricane abatement project may be perfectly rational from an economic perspective, the gain from an alternative use of the funding is quickly forgotten when the loss to the storm is realized. These types of problems are amplified when overlaid on the realm of policy design and implementation.
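A worked version of the hurricane example makes the process-based view explicit. The probabilities and dollar figures below are invented for illustration; the point is that the quality of the decision lies in the expected-cost comparison, not in whether a storm actually arrives.

```python
# Expected-cost comparison for the abatement decision.
# All figures are hypothetical illustration values.
P_STRIKE = 0.02             # assumed annual probability of a major hurricane
DAMAGE_UNPROTECTED = 5_000  # $M damage if a hurricane hits with no project
DAMAGE_PROTECTED = 500      # $M residual damage with the project in place
PROJECT_COST = 30           # $M annualized cost of building and upkeep
HORIZON = 30                # years considered

def expected_cost(build):
    annual_damage = P_STRIKE * (DAMAGE_PROTECTED if build
                                else DAMAGE_UNPROTECTED)
    annual_project = PROJECT_COST if build else 0
    return HORIZON * (annual_damage + annual_project)

print("build:    ", expected_cost(True))   # 30 * (10 + 30) = 1200
print("not build:", expected_cost(False))  # 30 * 100       = 3000
```

Under these invented numbers, building is the good decision by process, and it remains the good decision even in the (likely) futures where no hurricane ever strikes; the asymmetric sting of realized loss described above is exactly what the process view asks us to set aside.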
Good decisions are possible in the naturalistic decision setting because the experts making decisions typically use a valid and repeatable process. The preponderance of theory in the literature suggests that expert decision makers, when working under uncertain and shifting conditions, determine the suitability of alternative templates for sense making. This process enables them to choose an interpretation, categorize the situation, and react quickly, working forward from existing conditions rather than backward from desired goals. The process is both valid and repeatable, leading the experts who use it to make good decisions. The danger comes into play when groupthink or some other cognitive bias insidiously creeps into the process and undermines the interpretive progression.
One of the more subtle biases comes from human cognition regarding control of events. Nassir Ghaemi [26] reports that across more than one hundred separate studies, the vast majority of people consider themselves more likely than others to experience positive events. Additionally, most mentally healthy people overestimate their level of control over events when they experience success. This combination helps to explain how intelligent and rational local, national, and even world leaders can get caught up in an action plan whose success depends upon a large number of events going exactly right. The Japanese attack on Pearl Harbor is an example of such optimistic thinking. By calculating that the attack and subsequent destruction of the United States Pacific Fleet would demoralize Americans to the point that they would allow Japan to continue building the Greater East Asia Co-Prosperity Sphere unmolested, Japanese leaders did the one thing that would galvanize an otherwise isolationist nation into action [27].
Are changes in the naturalistic decision-making process required to keep strategic planning relevant in highly turbulent times? How can an examination of the complexity and interconnectedness of the strategic environment be creatively and safely infused into questions regarding the role of the military in sustaining democratic values? Does examining the effects of complexity on that role work for good or ill? Traditional political theories, which form the basis of the template folios decision makers draw upon, have focused on the predictable and controllable dimensions of policy. Although these dimensions are critical in policy development, they provide only a partial explanation of reality. Complexity science invites us to examine the unpredictable, disorderly, and unstable aspects of the environment, complementing our traditional understanding to provide a more complete picture.
Complex adaptive systems, which describe almost all social systems, are different in that each of their individual components follows its own unique set of rules that can vary as the components interact within the larger system. There is no overarching set of rules governing the outcome – complex adaptive systems are non-deterministic. The non-deterministic, or stochastic, non-linear behavior of these systems can be extremely difficult to predict. Small variations in initial conditions can cause dramatic changes in outcomes. These variations become problematic to predict because their outcomes vary as the product of the system variables rather than the sum of those variables. The problem, as Holland describes it, is that "It is so much easier to use mathematics when systems have linear properties that we often expend considerable effort to justify an assumption of linearity" [28].
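Sensitivity to initial conditions is easy to demonstrate with the logistic map, a standard textbook example of nonlinear dynamics (used here as an added illustration, not a model from the paper). Two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen iterations.

```python
# Logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime.
r = 3.9                             # growth parameter, chaotic regime
x, y = 0.200000000, 0.200000001     # nearly identical starting points

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.2e}")
```

The gap grows exponentially until the two runs bear no resemblance to one another, which is the formal counterpart of Holland's warning: a linear approximation of such a system is not merely imprecise but qualitatively wrong.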
Linear thinking, which can be described with the invocation of a machine metaphor, has been a very comfortable paradigm that has served humanity well in the general sense. Since Newton, the machine metaphor has been used as the lens through which to make sense of our physical and social worlds, including international relations. The machine metaphor is simply the idea that if the functioning of the individual parts of a system is understood, then the functioning of the full system is understood. The reverse is also held to be true: if the overall functioning of a system is understood, then the functioning of individual parts can be derived. Linear thinking leads to the development of top-down models, which by their very nature have great difficulty predicting emergent behaviors.
The difficulty of predicting emergent behaviors is so great that the term "black swan" came into vogue thanks to Taleb's [29] bestselling book, The Black Swan: The Impact of the Highly Improbable. It is precisely these types of emergent events (9/11, the 2008 financial crisis, the Japanese tsunami and subsequent nuclear mishap) that cause dramatic change; what Gersick [30] describes as punctuated equilibrium. These types of events do not produce a teleological progression of outcomes. Once unforeseen events such as these dramatically alter the systems in which they occur, the change they cause is not always for the better. No linear, top-down model could have predicted such events, because they were caused by outlier variables; nor are such models particularly good at predicting the outcomes that flow from unexpected events once they occur. In contrast, complex adaptive systems such as human societies have a number of linked attributes or properties. It is not possible to identify a starting point for the series of attributes within the system because the attributes are all linked; each attribute functions as both a cause and an effect of the others. These attributes of complex adaptive systems stand in stark contrast to the implicit assumptions underlying traditional strategic thought and Newtonian science. Cause and effect are mutual and reciprocal rather than one-way, which serves as the ultimate demonstration of Taleb's causal opacity.
The seeds of complexity science have been around for a long time. The founders of complexity science were often far ahead of their time, and their work was frequently misunderstood or misapplied. Why is now the right time for practitioners to grapple with the inferences of complexity science and apply them to policy design in general and policy implementation in particular? Complexity, ambiguity, and connectedness are not new to the late 20th and early 21st centuries. What has changed over time are the methods used to cope with these issues. The machine metaphor has been a powerful force in helping us understand and solve many of the issues involved in manufacturing, administration, and organizations. But the machine metaphor has reached its useful limits. We have found the world to be much more correlated than previously thought; micro and macro phenomena are connected, and there is a definite sense of compression of both space and time due to the new reality of instant mass communication via social networks [31].
The dominance of the machine metaphor is waning because its limits are becoming more obvious and the influence of complexity science is gaining prominence. While we continue to use the machine metaphor where appropriate, a seismic shift away from it has begun, because the metaphor is at best unhelpful and at worst misleading in an ever-increasing number of instances. Linear, top-down theories have a great deal of difficulty predicting events such as the 'Arab Spring' and its aftermath. Complexity science, on the other hand, with its grounding in nonlinear dynamics, provides a framework to better understand issues of emergence and self-organization and the interdependencies these engender.
There are critics who say the world has always been complex and that our old tools and theories are just as effective now as they have always been. They claim that considerations of complexity are a trap that paralyzes decision makers [32]. Debating whether the world has become more complex is a fallacious exercise. As societies advance they necessarily become more complex. As the complexities of societies increase, the tools policy makers use to bring order to those societies become more elaborate (complex), which enables further societal development (increasing complexity). Offsetting these forces driving toward greater order and increasing complexity are the forces of social entropy, which push cooperation and advancement toward conflict and chaos. Which way you view the evolution of the strategic environment is subject to Miles' Law: "Where you stand depends on where you sit" [33]. Regardless of the point of view taken, it is reckless to ignore the lessons available from complexity science. Complexity is no more a trap than is gravity; it is a phenomenon to be understood and accounted for.
Complex adaptive systems are exquisitely sensitive to initial conditions. While one system may closely resemble another, the two can have dramatically different outcomes in response to the same input stimuli. Even the same system can produce disparate outcomes, varying with when and where the inputs are applied. For policy makers, this problem frequently manifests itself by masking the true cause-and-effect relationship. It is difficult to balance ways, means, and ends when multiple feedback loops obscure cause and effect. Once again, policy makers are faced with the classic problem of the fog of war that befuddles the best efforts of humans to peer clearly into the realm of complex adaptive systems. This does not mean it is impossible to choose the proper path toward a successful policy. It does, however, mean that the strategy chosen is subject to the play of chance and probability, which is part of a genuinely Clausewitzian concept, the "remarkable trinity" [1].
To assist in peering through the "fog" of complexity, policy makers utilize naturalistic decision making when designing policy. Embedded within this type of decision making (sense making) is the use of heuristics, which allows the policy maker to cut through the vast amounts of frequently ambiguous data generated by the complexity of the strategic environment. As discussed earlier, these heuristics are subject to, and serve to magnify, the biases their human users are prone to introduce. Expectancy bias, anchoring bias, confirmation bias, the illusion of control, and many others may conspire to distort the policy maker's problem-framing efforts. System-one decision processes are critical to the sense-making portion of naturalistic decision making, yet they are the most susceptible to cognitive biases. While cognitive biases are frequently helpful in human decision making, they can be troublesome in large-scale complex systems due to the opacity of causation and the hidden or overlooked elements of risk [9,11]. If the problem framing is flawed, selection of the proper heuristic template is unlikely. With these known handicaps plaguing policy makers, it is little wonder that many freeze or hedge their bets. Worse, others ignore these issues entirely, trusting their own exceptionalism to carry the day. No matter how astute and well-reasoned the resulting policy may be, if the circumstance (problem) is improperly framed, the probability of success is low.
Given the circumstances of ever-increasing environmental complexity and time compression, coupled with forces of social disaggregation and the enormous downside risks involved in strategic decision making, it would seem that there is no alternative for policy makers and strategists but to muddle through. Fortunately, there are several tools available to help test problem-framing assumptions and to identify unforeseen or emergent possibilities. Agent-based modeling is one of these tools for conducting social simulation. Within complex systems, which are an apt description of social systems, identifying cause-and-effect relationships can be extremely difficult [34]. Agent-based modeling creates an abstracted replica of reality in which agents, with simple rules guiding their behavior, interact with their simulated environment and each other [35]. These interactions build upon each other and begin to generate patterns and actions that can be dramatically removed from the original simple parameters assigned. The micro-motives assigned to the agents generate macro-behaviors, a dynamic that closely resembles the processes that occur when individual humans, each with their own motivations, interact to generate large-scale social behavior [36]. This ability to connect micro-actions with macro outcomes in an artificial society, built from the bottom up in the same manner that human societies are constructed, allows a researcher to conduct "what if" experiments and observe not only the outcomes but the evolution of the outcomes.
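A minimal sketch in the spirit of the micro-motives/macro-behavior result cited above [36] is Schelling-style residential sorting: agents with only a mild local preference produce pronounced large-scale clustering that no individual agent chose. The grid size, vacancy rate, and threshold below are arbitrary illustration values.

```python
# Schelling-style sorting: mild individual preference (content if at
# least 30% of neighbors are like me) yields strong segregation.
import random

random.seed(1)
SIZE, EMPTY, THRESHOLD = 20, 0.1, 0.3
grid = [[random.choice("AB") if random.random() > EMPTY else None
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    me = grid[r][c]
    nbrs = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]
    same = sum(1 for n in nbrs if n == me)
    occupied = sum(1 for n in nbrs if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

for _ in range(50_000):                       # let the system evolve
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] is not None and unhappy(r, c):
        er, ec = random.randrange(SIZE), random.randrange(SIZE)
        if grid[er][ec] is None:              # move to a random empty cell
            grid[er][ec], grid[r][c] = grid[r][c], None

for row in grid:                              # clusters emerge bottom-up
    print("".join(cell or "." for cell in row))
```

The printed grid shows contiguous blocks of A and B even though no agent wanted segregation, only a modest minimum of similar neighbors; the macro pattern is dramatically removed from the micro rule that produced it.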
The ability to manipulate a relatively small number of variables and watch the impact of those changes as their influence permeates the system is a characteristic of agent-based modeling that makes it particularly useful in deriving an understanding of cause and effect within complex systems. One of the central tenets of complex systems is that they contain multiple feedback loops, which can put a great deal of temporal distance between a cause and its effect [34]. Further compounding the difficulty of identifying cause and effect is the interaction among the variables, which can mask the true relationship and even make it counterintuitive. Agent-based modeling, by its very structure and design, provides a way to peer through this veil of complexity and identify the latent causes of system behavior. Even if a complex system cannot be readily defined in full, some of its behavioral elements can be. Behavioral elements derived from literature reviews and expert interviews are used to create the behavioral rules that the agents, or adaptive actors, utilize in the simulations. Agent-based modeling is guided by five development principles: (1) simple rules guide agent behavior and can generate complex behaviors; (2) no single agent directs the other agents – there is no agent hierarchy; (3) each agent possesses bounded rationality in that it can only respond to local situations in the environment and to other agents in close proximity; (4) there is no global rule for agent behavior; and (5) emergent behavior is demonstrated by any behavior that occurs above the level of the individual [35,37]. From these principles, agent-based modeling builds a macro social interactive structure from the interaction of individual units, from the bottom up, versus the top-down approach typical of social science studies [38].
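The five principles can be annotated directly against a toy model. The opinion-dynamics rules below are hypothetical, chosen only because they let each principle be pointed at in a few lines.

```python
# A toy opinion model (rules hypothetical) annotated against the five
# agent-based-modeling principles listed above.
import random

random.seed(7)
N = 100
opinions = [random.choice([0, 1]) for _ in range(N)]   # agents on a ring

def step(opinions):
    i = random.randrange(N)
    # (1) a simple rule governs behavior; (3) bounded rationality: the
    # agent sees only one adjacent neighbor, never the global state.
    j = (i + random.choice([-1, 1])) % N
    # (2) no agent directs the others; (4) there is no global rule:
    # every agent applies the same local rule, uncoordinated.
    opinions[i] = opinions[j]          # adopt the neighbor's opinion
    return opinions

for _ in range(20_000):
    opinions = step(opinions)

# (5) emergence: contiguous blocs of shared opinion form at a scale
# above any individual agent.
print("".join(map(str, opinions)))
```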
The modeling process itself can yield insights into system dynamics that benefit expert decision makers. In the process of defining the behavioral rules for the agents, the experts' heuristic templates are explicitly reviewed. By using a system-two approach to review the experts' system-one templates, biases can be examined to determine their usefulness in describing reality. Once the behavioral templates are captured, the agent behavioral rules are defined and the scenario developed. This process can be as simple as participatory modeling, which is essentially a role-playing exercise with a sound conceptual foundation [39]. Once the behavioral rules are refined, they are codified and further tested in a simulation environment. The goal of the exercise is to reproduce a targeted historic outcome, thereby validating the behavioral rules established for the agents. Once this process is complete, more elaborate computer simulations can be designed using these rules, building a simulator in which experiments can be run on various policies or courses of action. The point of these experiments is not to predict the future; no model of any kind can reliably claim to accomplish that feat with a high degree of fidelity. The point is to gain a better understanding of the dynamics of the decision environment, much like a pilot rehearsing a combat mission in a simulator. Throughout this process, the scrutiny of the experts' templates and their fundamental assumptions provides clarity to an otherwise hidden, or at least obscured (foggy), process. These types of simulations can provide a viable method for assessing and understanding the dynamics of various strategic policies and operational strategies, as well as reducing the level of uncertainty associated with many complex and 'wicked' problems such as terrorism or peacekeeping and stability operations [40]. The promise of enabling analysis of patterns of structural emergence and destruction is real, and it supports an improved adaptive response to the environment [35].
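The validation step, reproducing a targeted historic outcome, can be sketched as a simple calibration sweep. Everything below is a stand-in: the "historical target" and the one-parameter model are invented so that the loop structure is visible; in practice the function being swept would wrap the agent simulation itself.

```python
# Hedged sketch of validation by calibration: sweep a behavioral
# parameter, run the model, keep the value that best reproduces a
# known historical outcome. All names and numbers are hypothetical.
import random

HISTORICAL_TARGET = 0.62   # hypothetical observed outcome (e.g., a rate)

def run_model(compliance, trials=2_000, seed=0):
    rng = random.Random(seed)
    # Stand-in for the agent simulation: an outcome rate driven by one
    # behavioral parameter plus sampling noise.
    hits = sum(rng.random() < compliance * 0.9 for _ in range(trials))
    return hits / trials

best = min((abs(run_model(c) - HISTORICAL_TARGET), c)
           for c in [i / 20 for i in range(21)])
print(f"best-fitting parameter: {best[1]:.2f} (error {best[0]:.3f})")
```

Only once rules survive this kind of historical check would the more elaborate "what if" experiments on candidate policies carry any weight.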
In Barbara Tuchman’s seminal work, The March of Folly, she lists three attributes of folly that are defined as the pursuit of policy by governments that are counter to their self-interests. The first is that “it must be perceived as counter-productive in its own time, not merely by hindsight” [27]. Since the scope of her writing covered vast periods of history, this rule is critical to avoid the curse of hindsight. The second attribute of folly is “a feasible alternative course of action must have been available” and third, “the policy in question should be that of a group, not an individual ruler, and should persist beyond any one political lifetime” [27]. The critical lesson to take from the realization of limitations of human cognition within an environment defined by complexity is that we must continuously reevaluate the societal environment as policy is implemented. Altering course to accommodate unforeseen events should not be seen as a strategy failure but rather as an implementation success. Despite what some rational choice theorist, and their ‘top-down’ utility maximization approach to policy development may insist, hedge-betting or other precautionary actions are not signs of being caught in an intellectual trap. Caution is a sign of rationality in the face of a security environment that is vague, uncertain, complex, and ambiguous. The process of model design can facilitate gaining insight; the application of various policy alternatives in the modeled environment can provide a synthesis of the possible outcomes and the correlations implied between them, providing a vehicle to understand the dynamics of the policy environment. Given our bounded rationality, heuristics have served us well in the complex environment of human society as we struggle to find the correct course of action. The application of agent based modeling to supplement the heuristic selection process in policy formulation can help provide the leverage necessary for decision makers to successfully engage an ever more complex environment.