Strengthening ‘good’ modelling practices in robust decision support: A reporting guideline for combining multiple model-based methods
Introduction
Models are used widely within decision support processes to enhance the understanding of the complexities of real-world problems and to support evidence-based decision making [69]. The use of models in decision support is challenged by a variety of uncertainties, driven by incomplete knowledge about the characteristics of real-world problems and by potential surprises and shocks in the problems’ environment [47], [76]. Exploratory modelling is a computational approach, with a growing literature, that treats uncertainty in decision support (and in models in general) by running large numbers of computational experiments under varying assumptions and hypotheses [2], [28], [29]. Exploratory modelling has been widely adopted in decision support through robust decision support frameworks [27]. These frameworks use exploratory modelling to analyse how robustly decisions perform across ranges of possible futures and how sensitive decisions are to unforeseen futures [79]. Among these frameworks are Robust Decision Making (RDM) [45], Many Objective Robust Decision Making (MORDM) [31], and Dynamic Adaptive Policy Pathways (DAPP) [20].
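As a minimal illustration of this idea, the sketch below evaluates a model over an ensemble of sampled assumptions rather than a single best-estimate future. The asset-cost model, parameter names, and sampling ranges are all hypothetical, chosen only to make the exploratory loop concrete:

```python
import random

def asset_cost_model(demand_growth, unit_cost, horizon=20):
    """Hypothetical system model: cumulative cost of meeting demand
    that grows at an uncertain annual rate over a planning horizon."""
    demand, total = 1.0, 0.0
    for _ in range(horizon):
        demand *= 1.0 + demand_growth
        total += demand * unit_cost
    return total

def explore(n_experiments=1000, seed=42):
    """Exploratory modelling: run many computational experiments under
    varying assumptions and report the spread of outcomes."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_experiments):
        growth = rng.uniform(0.00, 0.05)   # uncertain demand growth
        cost = rng.uniform(80.0, 120.0)    # uncertain unit cost
        results.append(asset_cost_model(growth, cost))
    return min(results), max(results)

low, high = explore()
print(f"Outcome range across 1000 sampled futures: {low:.0f} .. {high:.0f}")
```

The point of the ensemble is not a single prediction but the range of outcomes that the uncertain assumptions can produce, which downstream robustness analysis then interrogates.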
Robust decision support frameworks use a variety of methods from exploratory modelling to generate decision insights. Here, we use ‘method’ as a generic term for techniques for generating scenarios, generating decisions, measuring performance, and analysing decision vulnerabilities, i.e., the four components of robust decision support frameworks presented by Herman et al. [27]. For example, decisions or scenarios can be generated by systematically sampling from many assumptions or by searching through these assumptions to find those which maximise or minimise a particular system property of interest (these techniques are explained further in Section 2.2). There have also been a variety of extensions addressing the limitations of existing robust decision support frameworks, either through the use of new sets of methods or by rearranging the mixing design of the original methods, i.e., the ways that multiple methods interact with each other, the level of overlap between methods, and the type of information passed between methods, amongst other mixing features. Two examples are: a study by Watson and Kasprzyk [78], which modified the interactions and iterations of methods to incorporate multiple problem formulations and to enhance robustness in the MORDM framework; and a study by Moallemi and Malekpour [56], which tailored RDM with new sets of participatory methods to suit long-term energy planning.
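The ‘search’ alternative to blind sampling can be sketched as follows. The toy model and the assumption grid are hypothetical; the sketch uses an exhaustive grid for simplicity, whereas real studies often use evolutionary or other directed search:

```python
import itertools

def asset_cost_model(demand_growth, unit_cost, horizon=20):
    """Hypothetical system model (for illustration): cumulative cost of
    meeting demand that grows at an uncertain annual rate."""
    demand, total = 1.0, 0.0
    for _ in range(horizon):
        demand *= 1.0 + demand_growth
        total += demand * unit_cost
    return total

def worst_case(model, growth_levels, cost_levels):
    """Search through the assumption space for the combination that
    maximises the outcome of interest (here, total cost)."""
    return max(itertools.product(growth_levels, cost_levels),
               key=lambda gc: model(*gc))

g, c = worst_case(asset_cost_model, [0.0, 0.025, 0.05], [80.0, 100.0, 120.0])
print(f"Worst-case assumptions: growth={g}, unit cost={c}")
```

Sampling characterises the whole space of assumptions; search pinpoints the specific assumptions under which a decision breaks down, which is why frameworks often mix the two.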
The presence of multiple methods requires choices to be made about which methods to select [27] and how to mix them together [58]. Such methodological and mixing design choices can significantly influence the quality of results in the decision support process, as improvements observed in previous research show (see, e.g., [11], [78]). A lack of clarity about these choices risks their misuse by practitioners and a misunderstanding of their capabilities and limitations [8]. Moreover, a lack of understanding of the variety of ways these methods can be combined and implemented can limit their potential use and advancement.
A number of researchers from the broader area of modelling and simulation have recently discussed ways to enhance the quality of modelling projects and to move towards ‘good’ modelling practice by articulating the dimensions and methods to be integrated [8], [12], [19], [24], [61], [81]. There has also been research on reporting guidelines for simulation-based studies to enhance the transparency and reproducibility of modelling results (see Section 2.1). About 50% of researchers believe there is a “reproducibility crisis” [1], which may be due, at least in part, to the lack of a universal guideline for approaching these kinds of studies. Given the limited scope of previous work on reporting guidelines, there have been calls to foster reusable and reproducible research [7]. While the use of multiple methods in robust decision support frameworks is popular (see [53] for a review), there has not yet been sufficient effort to establish good modelling practice, and no previous work has developed reporting guidelines in the area of robust decision support. Two examples of efforts to establish good modelling practices in robust decision support are the system diagram by Walker [74] and the XLMR framework by Lempert et al. [45]. However, both focused on framing the decision problem rather than the entire decision support process. Another example from the same literature is a taxonomy of robust decision support frameworks presented by Herman et al. [27], who provided a systematic big picture of the methodological choices available in robust decision support, but did not discuss choices related to mixing design.
While these previous works can be used towards developing good modelling practices in robust decision support, more research is still needed on clarifying the choices available to the modeller (methodological and mixing design), the characteristics of those choices, and the justifications for the choices made in the modelling process.
This article articulates steps towards good modelling practice in the mixed use of multiple methods for coping with uncertainty in decision support. The article presents the steps as a reporting guideline to help modellers and stakeholders reflect on their choices regarding the methods used and the design of method interactions (i.e., mixing design) and, further, to better convey the justification of their choices among alternative options. Such a reporting guideline can deliver multiple benefits to the mixed use of methods in robust decision support. The guideline can help the development and application of robust decision support that conforms to a universal standard. Such universal standards enable generalisability, reproducibility, and comparability of different practices and can improve the recognition of the value of a specific method in dealing with uncertainties in a multi-method framework. The guideline can assist researchers in increasing the possibility that other researchers (and practitioners) can reuse their work and extend their frameworks [57]. The guideline can also classify and reflect on various practices of decision support, helping modellers to identify possible ways of mixing methods under uncertainty and helping practitioners to enhance the scientific soundness and defensibility of their results [8]. Given the aforementioned benefits, at the very least we hope that this work provokes further discussion towards formalising some form of reporting guideline in robust decision support, and towards promoting its adoption among both researchers and practitioners. However, we do not expect to convince all readers to adopt the whole suggested guideline, as the significance of some steps may vary between modelling projects.
We develop the steps and the step choices of the guideline based on two studies: Herman et al. [27], which discusses different methodological choices of robust decision support frameworks (explained in Section 2.2), and Morgan et al. [58], on the dimensions to be considered in the mixing of methods in operations research (explained in Section 2.3). We demonstrate how multiple methods are combined with each other based on these steps in an illustrative application of RDM in asset life cycle planning. We choose RDM as it is an established and widely used framework with multiple methods, with which readers are likely to be more familiar. RDM can therefore act as a benchmark to better represent the value of the suggested guideline.
The rest of the article is organised as follows. Section 2 presents foundational ideas on which the reporting guideline is based. Section 3 explains the steps and choices of the guideline for reporting the mixing of methods in robust decision support. Section 4 demonstrates the suggested guideline in practice. Section 5 concludes the article by discussing future research directions.
Section snippets
Previous work on reporting guidelines
Since there is no previous reporting guideline in the area of robust decision support (with exploratory modelling methods), the current research was inspired by previous reporting guidelines from the broader areas of simulation, modelling, and operations research (see [57] for a review). The guideline suggested in the current article draws on the key benefits of these previous works and addresses their limitations while remaining focused on reporting the multiple use of
The reporting guideline for mixing methods in robust decision support
The guideline of the current article builds on the benefits of previous reporting guidelines and addresses their limitations, as specified in Section 2.1. The suggested guideline articulates steps for reporting the mixed use of multiple methods in robust decision support. The steps of the suggested guideline are inspired by the characteristics of mixed-method designs proposed by Morgan et al. [58]. The guideline presents (or only exemplifies in some cases) possible choices with examples from previous
An illustrative case
This section demonstrates the steps and choices articulated in the suggested reporting guideline in practice, through a mixed use of methods in an application of RDM in asset life cycle planning, with a hypothetical case of a fleet of submarines. We initially provide background information on the decision problem, RDM, and the methods we used. We then use the suggested guideline (see Section 3) to report the mixing of methods.
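A central measurement in RDM-style analyses is a robustness metric computed across many sampled futures. The sketch below, with an entirely hypothetical performance model and decision encoding (capacity and flexibility of a maintenance posture), illustrates one common metric, the domain criterion: the fraction of futures in which a decision meets a performance threshold:

```python
import random

def performance(decision, future):
    """Hypothetical performance model: net benefit of a fleet
    maintenance decision (capacity, flexibility) under one future
    (demand, disruption). For illustration only."""
    capacity, flexibility = decision
    demand, disruption = future
    return capacity - abs(capacity - demand) - disruption * (1.0 - flexibility)

def robustness(decision, futures, threshold=0.0):
    """Domain criterion: share of sampled futures in which the decision
    performs at or above the threshold."""
    ok = sum(1 for f in futures if performance(decision, f) >= threshold)
    return ok / len(futures)

rng = random.Random(0)
futures = [(rng.uniform(0.5, 2.0), rng.uniform(0.0, 1.0)) for _ in range(500)]
flexible = (1.2, 0.8)  # same capacity, higher flexibility
rigid = (1.2, 0.1)
print(f"flexible: {robustness(flexible, futures):.2f}, "
      f"rigid: {robustness(rigid, futures):.2f}")
```

In a full RDM analysis this measurement step would be followed by vulnerability analysis (e.g., scenario discovery) on the futures where the candidate decision falls below the threshold.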
Future research directions
This work presented a reporting guideline to clarify the characteristics of methodological and mixing design choices made when multiple methods are used for robust decision support. As mentioned in this article, the ways that a mixing process is implemented in practice are often under the influence of the modeller and stakeholder biases and hidden motives [16]. The sequences of methodological and mixing design choices in practice under the influence of biases can form multiple paths and create
Acknowledgements
The authors express their appreciation to the guest editors for selecting the earlier version of this article at MODSIM 2017 for this special issue. Special thanks to Joseph Guillaume (Aalto University) for many useful comments on the earlier version of the article. The authors also thank reviewers for their constructive comments. The authors greatly appreciate useful discussions and technical support from colleagues at Capability Systems Centre, UNSW Canberra. The authors acknowledge the use
References (81)
- et al., Thinking inside the box: A participatory, computer-assisted approach to scenario discovery, Technol. Forecast. Soc. Change (2010)
- et al., Chapter two: Good modelling practice
- et al., Assessing the robustness of adaptation decisions to climate change uncertainties: A case study on water resources management in the East of England, Global Environ. Change (2007)
- et al., A model-based analysis of biomethane production in the Netherlands and the effectiveness of the subsidization policy under uncertainty, Energy Policy (2015)
- et al., Including robustness considerations in the search phase of Many-Objective Robust Decision Making, Environ. Model. Softw. (2018)
- et al., An overview of the system dynamics process for integrated modelling of socio-ecological systems: Lessons on good modelling practice from five case studies, Environ. Model. Softw. (2017)
- et al., A standard protocol for describing individual-based and agent-based models, Ecol. Model. (2006)
- et al., A new analytic method for finding policy-relevant scenarios, Global Environ. Change (2007)
- et al., Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world, Global Environ. Change (2013)
- et al., Adaptive robust design under deep uncertainty, Technol. Forecast. Soc. Change (2013)
- Integrated assessment and modelling: Overview and synthesis of salient dimensions, Environ. Model. Softw.
- Many objective robust decision making for complex environmental systems undergoing change, Environ. Model. Softw.
- Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques, Reliab. Eng. Syst. Saf.
- The exploratory modeling workbench: An open source toolkit for exploratory modeling, scenario discovery, and (multi-objective) robust decision making, Environ. Model. Softw.
- Comparing robust decision-making and dynamic adaptive policy pathways for model-based decision support under deep uncertainty, Environ. Model. Softw.
- Why pay attention to paths in the practice of environmental modelling?, Environ. Model. Softw.
- Path dependence and biases in the even swaps decision analysis method, European J. Oper. Res.
- Linking science with environmental decision making: Experiences from an integrated modeling approach to supporting sustainable water resources management, Environ. Model. Softw.
- An uncertain future, deep uncertainty, scenarios, robustness and adaptation: How do they fit together?, Environ. Model. Softw.
- Multimethodology: towards a framework for mixing methodologies, Omega
- An agent-monitored framework for the output-oriented design of experiments in exploratory modelling, Simul. Model. Pract. Theory
- Model-based multi-objective decision making under deep uncertainty from a multi-method design lens, Simul. Model. Pract. Theory
- Narrative-informed exploratory analysis of energy transition pathways: A case study of India’s electricity sector, Energy Policy
- A participatory exploratory modelling approach for long-term planning in energy transitions, Energy Res. Soc. Sci.
- A toolkit of designs for mixing discrete event simulation and system dynamics, European J. Oper. Res.
- Cross-training policies for repair shops with spare part inventories, Int. J. Prod. Econ.
- From decision theory to decision aiding methodology, European J. Oper. Res.
- Modelling with stakeholders, Environ. Model. Softw.
- Modelling with stakeholders – next generation, Environ. Model. Softw.
- Incorporating deeply uncertain factors into the many objective search process, Environ. Model. Softw.
- 1,500 scientists lift the lid on reproducibility, Nature
- Exploratory modeling for policy analysis, Oper. Res.
- Computer-assisted reasoning, Comput. Sci. Eng.
- Hybrid simulation modelling in operational research: A state-of-the-art review, European J. Oper. Res.
- Classification and Regression Trees
- Open is not enough, Nat. Phys.
- Model-based assessment of the submarine support system
- Bump hunting in high-dimensional data, Stat. Comput.
- Documenting a computer-based model
- From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments, Earth’s Future