On 5th June, Dr. Jan Kwakkel shared some insights on decision making under deep uncertainty. His presentation was followed by a plenary discussion. This summary of the talk and discussion was written by Pauline Dame.
Modern policy analysis, previously known as operational research (OR), stems from the attempt to use quantitative mathematical analysis to inform public decision making and improve governments' efficiency. Given a well-defined political objective and sufficient knowledge of the likelihood of the possible policy outcomes, traditional OR methods required analysts to aggregate multiple success criteria into a single measure of policy success. By representing uncertainties about the world and political choices as variables of a mathematical function, OR researchers designed models that relate public choices to their consequences, turning political decision making into the mathematical optimization of a welfare function.
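As a minimal sketch of this classic OR recipe (the policies, criteria, and weights below are invented purely for illustration), multiple success criteria are collapsed into one welfare score with fixed weights, and decision making becomes a search for the score-maximizing option:

```python
# Hypothetical policy options scored on three criteria (all values invented).
policies = {
    "build_dam":   {"cost": -3.0, "flood_safety": 4.0, "ecology": -2.0},
    "raise_dikes": {"cost": -1.5, "flood_safety": 2.5, "ecology": -0.5},
    "do_nothing":  {"cost":  0.0, "flood_safety": 0.0, "ecology":  0.0},
}

# The weights encode the relative importance of each value -- the very
# judgment that, under deep uncertainty, parties cannot agree on.
weights = {"cost": 1.0, "flood_safety": 1.0, "ecology": 1.0}

def welfare(outcomes):
    # Aggregate all criteria into a single welfare score.
    return sum(weights[k] * v for k, v in outcomes.items())

# Political decision making reduced to a mathematical optimization.
best = max(policies, key=lambda p: welfare(policies[p]))
print(best)
```

Note that which policy "wins" depends entirely on the chosen weights, which is precisely why this aggregation step becomes contested under deep uncertainty.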
In spite of its successful application to complicated military and logistic problems, OR was soon confronted with the difficulties of deep uncertainty and the unsuitability of a purely rational approach. Deep uncertainty arises when the parties to a decision do not know, or cannot agree on, the mathematical model to use, the probability distributions of its inputs, or which outcomes to consider and their relative importance. It results from value diversity, incompatible world views, differing perceptions of the same problem, and a lack of knowledge about the future state of an ever-changing world. Under deep uncertainty, modelers are required to explicitly balance the relative importance given to economic, social, and environmental values. They are confronted with the incompatibility of different success criteria that a single welfare function can no longer aggregate. By aggregating over time, space, actors, and objectives, they become responsible for disregarding micro-scale inequalities, be they social, economic, or geographic, and expose themselves to political contestation of their methods. Consequently, problem solving and problem formulation become inherently intertwined. Finally, modelers have to deal with change and new information. Even though time resolves uncertainty, it also brings changes in dominant values, and with them the need to redefine goodness criteria and design new success measures. Not only does this limit the range of solutions explored, but it also calls into question the approach's ability to support unbiased analyses.
Policy analysis under deep uncertainty
Acknowledging OR's limitations, present-day policy analysis has developed a set of dedicated methods that aim to inform decision making while accounting for change, ambiguity, and value diversity. First, by systematically exploring ranges of parameters instead of assuming their probability distributions or weighing their relative importance, exploratory modeling determines their relative effect on a model's outcome and identifies regions of the input space that lead to particular scenarios (Kwakkel and Pruyt, 2013). It is used to understand the effect of each assumption on the set of policies and uncertainties under investigation, for worst-case discovery, or to identify the predicted conditions under which a policy fails. Secondly, adaptive planning handles change and new information by analyzing the consequences of different sequences of policy actions depending on the different ways in which uncertainty may resolve (Haasnoot et al., 2013). It makes it possible to identify long-term lock-ins caused by short-term decisions, thereby placing a premium on flexibility. Finally, decision aiding leaves the choice of the aggregation procedure, and the corresponding weighting or ordering of each criterion, to political arbitrage. It also lets policymakers assess the likelihood of each scenario based on their field knowledge. In this way, models are used to draw inferences, engineer a policy window, and inform the negotiation process rather than to point out an optimal solution.
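The exploratory modeling idea above can be sketched in a few lines. The toy water-supply model, its parameters, and their ranges below are all invented for illustration; the point is the workflow: sweep each uncertain parameter across a plausible range (rather than assign it a probability distribution), then characterize the region of the input space where the policy fails.

```python
import random

# A toy, purely hypothetical water-supply model.
def supply_deficit(demand_growth, rainfall_factor, extra_capacity):
    demand = 100 * (1 + demand_growth) ** 10        # demand after 10 years
    supply = 120 * rainfall_factor + extra_capacity
    return demand - supply                           # > 0 means the policy fails

random.seed(42)
failures = []
for _ in range(10_000):
    # Uniform sampling over ranges -- an exploration of possibilities,
    # not a statement of beliefs about likelihoods.
    g = random.uniform(0.00, 0.05)   # annual demand growth
    r = random.uniform(0.6, 1.2)     # future rainfall relative to today
    c = random.uniform(0, 40)        # extra capacity built by the policy
    if supply_deficit(g, r, c) > 0:
        failures.append((g, r, c))

# Crude scenario discovery: characterize the failure region, e.g. by the
# typical rainfall conditions under which the policy breaks down.
print(f"failure rate: {len(failures) / 10_000:.2%}")
print(f"mean rainfall factor among failures: "
      f"{sum(r for _, r, _ in failures) / len(failures):.2f}")
```

In practice, dedicated tools replace this uniform sweep with structured experiment designs and machine-learning-based scenario discovery, but the logic of mapping inputs to regions of failure is the same.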
From theory to application
Even though it has come a long way, model-based support for policy analysis is not a silver bullet. The exploration of a model's parameter space is constrained by the curse of dimensionality and yields no additional knowledge about outcomes' likelihood. Models are simplified versions of the world that assume parameter independence and remain unable to account for unknown unknowns. They are rarely flawless and occasionally output absurd scenarios. Future research therefore aims to improve prediction verifiability by comparing different models' outcomes for a single problem and by confronting predictions with reality through post-decision analysis, implementing feedback loops between choices and their consequences. More fundamentally, and by its very nature, policy analysis challenges scientists and politicians to consider each other's perspectives and overcome critical differences in order to work better together. Scientists strive for generalizability and mathematical elegance, while decision-makers ask for case specificity and clear interpretability of the results. Analysts are expected to be purely objective, while politicians are chosen for their support of a specific world view. Yet if modeling is to have real-world impact and be taken seriously, it must account for decision makers' goals and expectations, namely to identify policies with high success potential and to resolve uncertainty about the consequences of their choices. Because bringing models into the political world bears real-world consequences, it becomes crucial to raise awareness among decision-makers about the underlying assumptions and inherent limitations of our analyses.
By giving more room to political choice, the latest methods of policy analysis engage in a process of co-discovery that not only holds politicians accountable for their choice of assumptions, rather than having those assumptions embedded in a model's specifics, but also makes the analysis transparent.
Complexity researchers, where does it leave us?
Because complex systems science (CSS) aims at applying methods and models from the natural sciences to tackle societal challenges, it shares several challenges with policy analysis and has much to learn from its experience. On the other hand, it also has much to offer when it comes to modeling techniques. Network analysis can model cross-scale institutional feedbacks. By adopting a systems perspective, it can be used to understand how different choices made by multiple governments may interact with each other in the future and affect the system as a whole. Computational methods inspired and used by CSS, such as coarse-grained modeling, provide solutions to shorten the runtime of extensive models. Existing toy models attest to otherwise unsuspected behaviors of complex interconnected systems, such as tipping points, high sensitivity to initial conditions, and self-reinforcing nonlinear effects. Besides, they can be used to educate students as well as decision-makers about the possibilities and limitations of system modeling, thus narrowing the gap between science and society.
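A classic toy model makes the tipping-point behavior mentioned above concrete. The sketch below (all parameter values chosen for illustration) integrates a harvested population with logistic growth, dx/dt = r·x·(1 − x/K) − h: for a harvest rate h just below the critical value r·K/4 a stable equilibrium persists, while a slightly larger h sends the population into collapse.

```python
# Toy tipping-point model: logistic growth with constant harvesting,
# dx/dt = r*x*(1 - x/K) - h, integrated with a simple Euler scheme.
def simulate(h, r=1.0, K=1.0, x0=0.8, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * (r * x * (1 - x / K) - h)
        x = max(x, 0.0)   # a population cannot be negative
    return x

# For r = K = 1 the critical harvest rate is r*K/4 = 0.25.
below = simulate(h=0.24)   # just below the threshold: population survives
above = simulate(h=0.26)   # just above it: population collapses
print(f"h=0.24 -> x = {below:.3f}   h=0.26 -> x = {above:.3f}")
```

The qualitative jump between two nearly identical parameter values is exactly the kind of behavior that is invisible to linear intuition and that such toy models can convey to students and decision-makers alike.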
Haasnoot, M., Kwakkel, J. H., Walker, W. E., & ter Maat, J. (2013). Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Global Environmental Change, 23(2), 485-498.
Kwakkel, J. H., & Pruyt, E. (2013). Exploratory Modeling and Analysis, an approach for model-based foresight under deep uncertainty. Technological Forecasting and Social Change, 80(3), 419-431.