Introduction

Integrated Assessment (IA) models are computer models that serve as tools to analyse complex real-world problems and to portray their social, economic, environmental and institutional dimensions. Technically, IA models often consist of several interlinked sub-models, with the outputs of one sub-model serving as inputs to another. This allows knowledge from different scientific disciplines to be generated, structured and integrated in order to provide a comprehensive analysis of the problem at hand. Over the past three decades, there has been growing interest in IA models as computer-based information- and decision-support systems for assessing the environmental, economic and social consequences of problems such as climate change, transboundary air pollution or water resource management. Well-known examples are the Regional Acidification Information and Simulation Model (RAINS, Amann et al. 2004), the Dynamic Integrated Model of Climate and the Economy (DICE, Nordhaus 1992, 1994) and the Integrated Model to Assess the Global Environment (IMAGE-2, Alcamo et al. 1998). Initially, IA models were used as purely analytic tools (Jakeman and Letcher 2003). In recent years, this aim has shifted towards providing decision-support to users other than analysts and model developers (Harremoës and Turner 2001; Sundqvist et al. 2002).

Given the complexity of the problems addressed and of the models themselves, IA models are subject to various types and sources of uncertainty, which may considerably hamper their reliability and acceptance. For IA models to become useful tools, therefore, an assessment of their uncertainties is indispensable. Accordingly, uncertainty analysis in IA models has received considerable attention in the scientific literature. Key topics have been the development of (a) typologies of uncertainties (Beck 1987; Alcamo and Bartnicki 1990; Lam et al. 1996; Casman et al. 1999; Kann and Weyant 2000; Aaheim and Bretteville 2001; Morgan 2003; Walker et al. 2003), (b) tool catalogues and guidelines for selecting appropriate methods for uncertainty analysis (van der Sluijs et al. 2003; Refsgaard et al. 2007) and (c) frameworks for the systematic assessment of uncertainties (van der Sluijs 1997; van Asselt 2000; van Aardenne 2002; Janssen et al. 2005; Krayer von Krauss and Janssen 2005; van der Sluijs et al. 2005; Gabbert 2006).

Since many IA models provide scientific input to public decision-making processes, they can also be characterised as “science–policy interfaces” (van der Sluijs 2002; Watson 2005) or “bridge-building tools between science and policy” (Rotmans and van Asselt 2001). This function can only be fulfilled if the information supplied by model developers and analysts meets the information requirements of model users. Otherwise, the decision-support provided by IA models will be inappropriate. Examining the conceptual frameworks mentioned earlier and the large number of studies analysing uncertainties in IA models, however, we observe that they have predominantly been designed from the model developers’ perspective. Little attention has been paid to the question of which type of uncertainty information is in fact demanded by model users, for example by policy makers as ultimately the most important user group of IA models.

The objectives of our paper are, therefore, twofold. The first is to suggest an approach to uncertainty analysis in IA models that explicitly accounts for the model users’ perspective on uncertainties. We assume that insight is needed into what type of uncertainty information model users consider relevant before uncertainties can be analysed. Following Gabbert (2008), users’ demands for uncertainty information are called “uncertainty information needs”. Different possibilities exist for investigating these needs; which is most appropriate has to be decided case by case. The second objective of the paper is to illustrate the proposed approach with an example. We take the case of the SEAMLESS Integrated Framework, an IA modelling framework for assessing and comparing alternative agricultural and environmental policy options from an ex-ante perspective (van Ittersum et al. 2008a; Ewert et al. 2009). During the SEAMLESS project, model developers maintained close contact with different user groups, in particular with policy experts at the European and national levels. Policy experts’ uncertainty information needs were investigated as part of this interactive process and by means of a questionnaire. In this paper, we present the results of this case study and discuss implications for user-oriented uncertainty analysis in IA modelling.

The remainder of the paper is organised as follows. In the next section, we introduce an approach to more effective uncertainty analysis in IA models that is based on model users’ uncertainty information needs. We review the scientific literature addressing the need for a user perspective on uncertainty analysis in IA models and explain in which way our approach provides a novel contribution. We then introduce the SEAMLESS Integrated Framework and discuss why and how a user-oriented approach to uncertainty analysis was adopted. Following this, we discuss the results of investigating users’ uncertainty information needs and the lessons learnt for targeting effective uncertainty analysis in IA modelling. The final section concludes and discusses which results of the SEAMLESS-IF case study may apply to a broader class of IA models.

Uncertainties in IA models and the objective of uncertainty analysis

IA models are always simplified representations of reality. As a consequence, they suffer from imperfections in many ways, causing model inputs (and, consequently, model outcomes) to vary. In the social sciences, it has become common terminology to call such variations “uncertainty” when probabilities are unknown and only subjective probability estimates can be made. Uncertainty has to be distinguished from “risk”, which denotes variations where probabilities are known (see Knight 1921, 2002; see also Brooke 2008).

As indicated in the introduction, several suggestions for categorising uncertainties have been made in the IA literature (see Gabbert and Kroeze (2003) for a survey). It has become widely accepted to distinguish between “types” of uncertainties, denoting the manifestation of uncertainties in a model, and “sources” of uncertainties, indicating their origin or location within the model. Walker et al. (2003) suggested an uncertainty matrix which distinguishes different types and sources of uncertainties in order to facilitate uncertainty classification. Walker et al. (2003) originally suggested a third category (the “nature” of uncertainties), examining whether the uncertainties identified are epistemic or stochastic. As pointed out by Refsgaard et al. (2007), this terminology can be misleading because of terminological overlaps between the three uncertainty categories. In this paper, therefore, we follow the suggestion of Refsgaard et al. (2007) and distinguish two main uncertainty categories, i.e. types and sources of uncertainties.

According to Walker et al. (2003), types of uncertainties can be further split into three sub-categories: statistically quantifiable uncertainty, uncertainty due to the definition and modification of the scenarios incorporated in a model (scenario uncertainty) and uncertainty due to an imperfect understanding of the underlying problem (recognised ignorance) (see also Table 5 in section “User-defined uncertainty information needs”). These types of uncertainties can have different locations in an IA model. While “context” addresses model boundaries, i.e. uncertainties caused by an imperfect representation of the problem of concern, model and input uncertainties arise from structural and technical imperfections within a model. Parameter uncertainty is caused by imperfections of the data and methods that are, for instance, used to calibrate a model. Finally, uncertainties in model outcomes result from uncertainties in the above-mentioned sources. While the practical usefulness of this classification scheme has been debated in the literature (Norton et al. 2006; Krayer von Krauss et al. 2006), it illustrates the variety of uncertainties that model developers and analysts can potentially take into account, making uncertainty analysis a time- and resource-consuming task. A simple data structure illustrating how such a two-dimensional classification can be recorded is sketched below.
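
As a purely illustrative aside (not part of SEAMLESS-IF or of the cited frameworks), the following Python sketch shows one way such a type × source classification could be recorded; all class and field names are our own hypothetical choices.

```python
from dataclasses import dataclass
from enum import Enum

class UncertaintyType(Enum):
    """Types of uncertainty (their manifestation), after Walker et al. (2003)."""
    STATISTICAL = "statistically quantifiable uncertainty"
    SCENARIO = "scenario uncertainty"
    RECOGNISED_IGNORANCE = "recognised ignorance"

class UncertaintySource(Enum):
    """Sources of uncertainty (their location in the model)."""
    CONTEXT = "model context (system boundaries)"
    MODEL_STRUCTURE = "model structure"
    TECHNICAL_SETUP = "technical model setup"
    INPUTS = "model inputs"
    PARAMETERS = "model parameters"

@dataclass
class UncertaintyEntry:
    """One cell of the uncertainty matrix: what is uncertain, and where."""
    description: str
    utype: UncertaintyType
    source: UncertaintySource

# Example entry: imperfect calibration data manifesting as parameter uncertainty.
entry = UncertaintyEntry(
    description="imperfect data used to calibrate the model",
    utype=UncertaintyType.STATISTICAL,
    source=UncertaintySource.PARAMETERS,
)
print(f"{entry.description}: {entry.utype.value} / {entry.source.value}")
```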

Assuming that IA models serve as “interfaces”, i.e. as tools within a highly interactive process of information generation and exchange, we define the general objective of uncertainty analysis as identifying model imperfections of any type and source (either quantitatively or qualitatively). Making these imperfections transparent identifies possibilities for model improvement, which, in turn, increases confidence in model outcomes.

Effective uncertainty analysis in IA models: the need for a model user perspective

In this paper, we distinguish two main stakeholder groups within the IA process: (1) model builders, who develop and maintain the model and who update its databases, and (2) model users. Model users can be analysts, i.e. scientists using an IA model for research purposes and as advisors of public decision-makers, or public decision-makers, who use IA models as scientific underpinning in concrete decision contexts. The IA model transforms different inputs (assumptions, parameters, data and mathematical relationships) into information that aids preparing and making decisions on a complex problem. We consider uncertainty analysis to be an integral part of this transformation process. Uncertainty analysis should provide information that is relevant for making decisions on the problem at hand. Hence, uncertainty analysis is considered effective if the information provided reflects the uncertainty information needs of model users.

In recent years, increasing attention has been given to characterising effective model-based decision-support (for example Jones et al. 1999; Tuinstra et al. 2006). Uncertainty analysis has been pointed out as highly relevant for improving the interface between scientific research and policy-making. The question of how effective uncertainty analysis can or should be achieved has, however, not been addressed. This also holds for the growing literature on Participatory Integrated Assessment (PIA), which examines how different groups of stakeholders can or should be included in the IA process (see, for example, Hisschemoeller et al. 2001; Toth 2001; van de Kerkhof 2004; Newig et al. 2005).

Many scientists have recognised the need to gain insight into the model users’ perspective on uncertainty analysis in IA models (Shackley and Gough 2002; Walker et al. 2003; Krayer von Krauss and Janssen 2005; Gabbert 2008). Turnpenny et al. (2004), who surveyed the needs of organisations in the United Kingdom for information from integrated assessments of climate change, conclude that users regard a clear treatment of uncertainty as vital. Furthermore, they point out that users’ trust and confidence in the results of research are not exogenously given but must be developed and carefully maintained. Likewise, model users have repeatedly stressed the need for a more systematic, user-oriented analysis of uncertainties in IA models (IIASA 2002; CEC 2004).

However, only a few attempts have been made so far to systematically investigate these uncertainty information needs. In an earlier paper, Gough (1999) investigated policy-makers’ motivation for using IA approaches (including computerised models) as decision-support tools, conducting interviews with 12 representatives from the European Community and two Brussels-based Non-governmental Organizations (NGOs). Taking a more general view on IA processes, the study did not intend to provide detailed insight into the uncertainty information needs of model users. In a recent paper, Gabbert (2008) proposed a normative approach to identifying uncertainty information needs of model users. Taking the precautionary principle as a key guiding rule for decision-making in many different policy fields (for example air pollution reduction, chemical safety, biodiversity), Gabbert (2008) investigated the precautionary principle’s perspective on risk and uncertainty and identified a general set of uncertainty information needs for precautionary policy-making. Finally, Stalpers et al. (2009) developed a framework for reconciling model results with the information needs of model users. The framework is based on the “Delft Dialogues”, an empirical study of the participative process for preparing the Kyoto Protocol negotiations (van Daalen et al. 1998). Similar to Gough (1999), the study of Stalpers et al. (2009) takes a general view on the information needs of model users, not specifically focusing on uncertainty information needs.

Compared to Gabbert (2008), the approach suggested in this paper takes a more general perspective, since it does not focus on a particular (policy) decision context. Contrary to Stalpers et al. (2009), we propose to identify model users’ uncertainty information needs prior to performing uncertainty analysis in an IA process. This makes it possible (1) to investigate how users’ uncertainty information needs differ from the uncertainties considered most relevant by model developers and (2) to focus uncertainty analysis on those types and sources of uncertainties that are considered relevant and meaningful by particular user groups.

Identifying users’ uncertainty information needs: the case of the SEAMLESS Integrated Framework

Objectives and structure of SEAMLESS-IF

The SEAMLESS Integrated Framework (in the following SEAMLESS-IF) has been developed as a computerised, integrated framework to analyse agricultural and environmental policy options and questions from an ex-ante perspective (van Ittersum et al. 2008a; Ewert et al. 2009). Its aim is to support the development of sustainable agricultural and environmental policies at the European, national and regional levels. SEAMLESS-IF includes a large set of outcome indicators that capture the key economic, environmental and social issues of the questions at stake. These indicators can be selected from the “SEAMLESS library”, which includes officially accepted indicators for impact assessment (European Environment Agency 2005). A selection of these indicators is shown in Table 1.

Table 1 Selected environmental and economic outcome indicators calculated in SEAMLESS-IF

The framework uses a software architecture which allows interlinking several components, each of which focuses on specific processes or scales. The SEAMLESS-IF components include a European database, an indicator list and a set of models:

1. APES (Agricultural Production and Externalities Simulator) is a deterministic and dynamic cropping-system simulation model for calculating agricultural production and its externalities in response to weather, soils and agro-management (Donatelli et al. 2010);

2. FSSIM (Farm System Simulator) is a bio-economic farm model that uses mathematical programming to quantify the integrated agricultural, environmental and socioeconomic responses of farming systems, partly using the output from APES (Louhichi et al. 2009; Janssen et al. 2010);

3. EXPAMOD is an econometric expansion model used for up-scaling the supply responses from FSSIM to the European scale (Pérez Domínguez et al. 2009); and

4. SEAMCAP is a comparative static equilibrium model providing information on supply and demand relationships, based on the CAPRI (Common Agricultural Policy Regionalized Impact) model and applied to the agricultural sector of the European Union (Heckelei and Britz 2001; Britz et al. 2007).

Thus, SEAMLESS-IF outcomes are quantified indicators derived from different models. Figure 1 illustrates the SEAMLESS-IF model chain and the linkages of the sub-models involved.

Fig. 1 Main model chain in SEAMLESS-IF

For a particular assessment problem (e.g. a new policy proposal), a baseline and a policy scenario are compared for a defined time horizon (e.g. 2013 or 2020) (Therond et al. 2009). The baseline scenario can be interpreted as a projection of relevant drivers that are exogenous to agricultural systems (for example population growth or economic growth). It includes already agreed future European agricultural policies. The policy scenario is equivalent to the baseline scenario but includes one or several proposed policies. The scenarios are assessed through a set of indicators. Most of these indicators are quantified either by the SEAMCAP model (at the EU level) or by FSSIM (for specific farm types in certain regions). The SEAMCAP model simulates market prices for agricultural commodities at the EU and global levels. The FSSIM model simulates farmers’ responses to these prices in a specific region of the EU by integrating agricultural activities (alternative crop and animal production systems at field level, characterised by their input and output coefficients), farm resource endowments, objective functions and policy constraints. Both models are linked through price–supply relationships (elasticities) calculated in EXPAMOD. Following a simulation of market prices by SEAMCAP, FSSIM is rerun with these updated prices to simulate supply and externalities at the farm level. A stylised sketch of this coupling is given below.
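
To make this iterative coupling more concrete, the following Python sketch mimics the control flow with crude numerical stand-ins. None of the functions or numbers belong to the actual SEAMLESS-IF components; they are hypothetical placeholders showing only how FSSIM supply responses, EXPAMOD elasticities and SEAMCAP price updates feed into one another.

```python
def fssim_supply(price: float) -> float:
    """Stand-in farm model: aggregate supply rises linearly with price."""
    return 100.0 + 15.0 * price

def expamod_elasticity(price: float) -> float:
    """Stand-in up-scaling step: point price elasticity of supply."""
    supply = fssim_supply(price)
    return 15.0 * price / supply  # d(supply)/d(price) * price / supply

def seamcap_price(price: float, elasticity: float, demand: float = 250.0) -> float:
    """Stand-in market model: tatonnement-style price update moving the
    price towards clearing a fixed demand, damped by the supply elasticity."""
    supply = fssim_supply(price)
    excess_demand = (demand - supply) / supply
    return price * (1.0 + excess_demand / max(elasticity, 0.1))

price = 8.0  # illustrative baseline price
for _ in range(3):  # iterate the FSSIM-EXPAMOD-SEAMCAP loop
    price = seamcap_price(price, expamod_elasticity(price))

# Final FSSIM rerun with the updated price yields farm-level supply (and,
# in the real framework, the associated externality indicators).
print(f"equilibrium price: {price:.2f}, farm supply: {fssim_supply(price):.1f}")
```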

Modeller–user interaction

The intended users of SEAMLESS-IF can be characterised as (1) integrative modellers and (2) policy experts. While integrative modellers are assumed to use the results of SEAMLESS-IF predominantly for academic purposes (research and education), policy experts are anticipated to use the outcomes of the policy scenarios analysed in SEAMLESS-IF for information- and decision-support. Given this latter function of the modelling framework, close interaction between SEAMLESS-IF developers and policy experts was maintained throughout the project, aiming (1) to regularly inform policy experts about the purpose and the development of the tool and (2) to obtain feedback from policy experts on whether these developments met their needs. This two-way communication was mainly realised through half-day “user forum meetings” of modellers and policy experts held twice a year. The persons invited to these regular meetings were representatives of various Directorates-General (DGs) of the European Commission whose work is linked to that of SEAMLESS-IF (for example members of DG Agriculture and Rural Development, DG Environment and DG Economics and Finances). Furthermore, members of organisations associated or linked to the European Commission, such as the Joint Research Centre (JRC) and the European Environment Agency (EEA), were invited. To further improve transparency during SEAMLESS-IF development, “targeted meetings” and individual interviews with user forum participants complemented the user forum meetings (Alkan Olsson et al. 2009).

Early in the project (i.e. in 2005, when the project started), policy experts pointed out that they consider transparency regarding the methods used in the IA modelling process, including the assessment of uncertainties, essential for better understanding the model and for creating confidence in its outcomes (Bäcklund et al. 2010). Accordingly, an iterative approach to developing SEAMLESS-IF was adopted. Intermediate steps were discussed with potential SEAMLESS-IF users during the meetings by means of demos and prototypes. The regular interaction between model developers and users, in particular the policy experts, shaped the IA modelling process that was finally adopted, consisting of a pre-modelling, a modelling and a post-modelling phase (see Fig. 2).

Fig. 2 Integrated assessment procedure adopted in SEAMLESS-IF (Source: van Ittersum et al. 2008a)

For performing IA studies, the SEAMLESS-IF tool was implemented with a computerised graphical user interface (GUI). When the tool is used for analysing specific policy options, close interaction between model developers and users is particularly important during the pre-modelling phase. This is to ensure a well-articulated problem definition, its formal translation into the model chain, a concise definition of the scenarios to be investigated and a clear specification of the indicators to be compiled. In addition, model developers can explicitly document the parameterisation through the GUI during the modelling phase. In order to improve potential model users’ understanding of the SEAMLESS-IF setup, the development process of the tool and all components were extensively documented (http://www.seamless-ip.org). This was intended to strengthen users’ trust and confidence in SEAMLESS-IF as a science–policy interface.

The need for a user-oriented approach to uncertainty analysis in SEAMLESS-IF

The component-based design of SEAMLESS-IF allows for analysing different modelling pathways in parallel. While resulting in a complex model structure (see Fig. 1), this was explicitly endorsed by model users (Alkan Olsson et al. 2009). As outlined in the introduction, however, model complexity makes IA models vulnerable to various types and sources of uncertainty. Discussions during the user forum and the targeted meetings clearly revealed uncertainty assessment in SEAMLESS-IF to be an issue of high concern. Although uncertainty analysis was not explicitly discussed at every meeting, policy experts repeatedly underlined that, besides a transparent and well-documented model setup, uncertainty analysis should serve the purpose of creating confidence by demonstrating the reliability of model results (Alkan Olsson et al. 2009).

For some of the individual models included in SEAMLESS-IF, uncertainty analyses have been documented in earlier studies, reflecting the model developers’ uncertainty perception and providing the uncertainty information that developers consider most relevant and feasible. These studies demonstrate that model developers have put a clear focus on quantifiable uncertainties; more specifically, parameter and scenario uncertainty received primary attention. In APES, for example, extensive sensitivity analyses of crop and soil parameters were performed using screening methods, regression-based methods and variance-based methods (Donatelli et al. 2009). Furthermore, uncertainty in process simulation, which can be examined by comparing alternative crop and soil components, was addressed (Donatelli et al. 2010). Likewise, the developers of FSSIM applied restricted and default sensitivity analysis to the linear programming models, and alternative calibration methods were compared (Kanellopoulos et al. 2009). Uncertainty analysis in the agricultural sector model SEAMCAP put emphasis on selected parameters in the trade module (Britz and Witzke 2008) and on the calibration method in the supply module (Jansson 2007). The sketch below illustrates what a variance-based sensitivity analysis of this kind looks like in practice.
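
As an illustration of the variance-based methods mentioned above, the following Python sketch computes first-order and total-order Sobol sensitivity indices with the open-source SALib library. The parameter names, ranges and the toy response function are our own hypothetical stand-ins; the analyses reported for APES (Donatelli et al. 2009) were of course performed on the actual crop and soil components.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical crop/soil parameters with illustrative ranges.
problem = {
    "num_vars": 3,
    "names": ["rooting_depth", "n_mineralisation_rate", "radiation_use_eff"],
    "bounds": [[0.5, 2.0], [0.001, 0.01], [1.0, 3.0]],
}

# Saltelli sampling design for Sobol index estimation.
param_values = saltelli.sample(problem, 1024)

def toy_crop_response(x):
    """Hypothetical stand-in for a model output such as nitrate leaching."""
    depth, n_rate, rue = x
    return 50.0 * n_rate / depth + 2.0 * rue + 5.0 * n_rate * rue

Y = np.array([toy_crop_response(x) for x in param_values])

# Decompose the output variance into parameter contributions.
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>22s}: first-order = {s1:.2f}, total-order = {st:.2f}")
```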

Assuming (1) that SEAMLESS-IF has been developed as a tool for supporting decision-making based on impact assessments and (2) that uncertainty analysis is directed towards strengthening science–policy interaction, it had to be clarified whether potential SEAMLESS-IF users would regard this rather narrow perspective on uncertainty assessment as appropriate. If so, the above-mentioned standard quantitative methods could straightforwardly be applied to the enlarged dataset of the SEAMLESS-IF components. If not, insight would be required into how model users’ uncertainty information needs differ from the uncertainty information that developers had provided in earlier studies. Hence, investigating the user perspective on uncertainty analysis in SEAMLESS-IF in more detail aimed at launching a learning process between model developers and users. This should facilitate uncertainty classification and help SEAMLESS-IF developers to (re-)structure uncertainty assessment in this IA model. Furthermore, besides uncertainty classification, SEAMLESS-IF developers needed guidance for uncertainty prioritisation, i.e. information about which uncertainties should be addressed first. Because of the regular and constructive interaction with a group of policy experts throughout the SEAMLESS project (see section “Modeller–user interaction”), these potential users were informed about the model framework, its structure and its outcomes. They were also aware of possible limitations of the model. This motivated us to investigate the policy experts’ uncertainty information needs. Given the time frame of the project, using a questionnaire was considered most appropriate.

Identification of model users’ uncertainty information needs in SEAMLESS-IF

The questionnaire consisted of two main parts (Table 2).

Table 2 Structure of the questionnaire for identifying uncertainty information needs of policy experts involved in the SEAMLESS project

Generally, for each question a set of answers was offered from which participants could select relevant items by making single or multiple choices. We purposely avoided offering participants a spectrum of answers indicating, for example, the degree of relevance, because this would have introduced considerable vagueness into the evaluation of results. Instead, being aware of the subjectivity inherent in uncertainty perceptions and uncertainty information needs, our aim was to ensure maximum comparability of responses across participants by standardising the possible answers. While this might not reveal a perfect and complete picture of users’ uncertainty information needs, the objective was to create a sound basis for contrasting the user perspective on uncertainty assessment with that of the SEAMLESS-IF developers. In addition, at the end of each question participants were invited to add comments or to explain their view in more detail. This allowed participants to document opinions differing from the standardised answers provided.

The general questions of part 1 concerned the professional backgrounds of participants. In addition, we proposed the definitions of “uncertainty” and “uncertainty analysis” presented in section “Uncertainties in IA models and the objective of uncertainty analysis”, asking participants whether they agreed, partially agreed, disagreed or did not know. If they only partially agreed or disagreed, participants were encouraged to explain their views and to add further aspects that should be included in the proposed definitions.

Part 2 of the questionnaire addressed uncertainty analysis in SEAMLESS-IF. Participants were offered a list of different aspects which may create confidence in model outcomes. The list addressed aspects reflecting participants’ personal experience both with IA models in general and with SEAMLESS-IF in particular. Moreover, the list addressed analytic aspects as well as the interaction between SEAMLESS-IF model developers and users (see Table 3).

Table 3 List of topics offered in the questionnaire which can make model users feel confident with a model and its outcomes

Participants were asked to select those topics that seemed relevant to them; multiple selections were allowed. They were also given the opportunity to add relevant aspects not mentioned in the list. In addition, different possible sources (locations) of uncertainties in SEAMLESS-IF were suggested, and participants were asked to indicate to which of the options offered a model developer or an analyst should give priority. Using the uncertainty categorisation modified after Walker et al. (2003), the sources suggested in the questionnaire were model context (system boundaries), model structure, the technical setup of the model and model inputs (see Table 5 in section “Lessons learnt in SEAMLESS-IF for user-oriented uncertainty analysis”).

Participants were also asked to indicate which mode of uncertainty analysis documentation they would find most convenient, distinguishing between probabilistic analysis, checklists for model quality assessments, model comparison, scenario analysis and expert elicitation. Again, participants were given the opportunity to express their own views or to add suggestions.

Considering that participants might be short of time, the questionnaire was designed in such a way that filling it in would take approximately 15–20 min. This was not meant as a time limit; rather, it was intended to motivate policy experts to participate. As an alternative to filling in the questionnaire, we offered the possibility of a telephone interview. During a user forum meeting in November 2007, the questionnaire was introduced and discussed (Alkan Olsson et al. 2009). Subsequently, the questionnaire was sent to selected members of the European Commission (including the Joint Research Centre), to the European Environment Agency (EEA) and to selected partners of the SEAMLESS-IF consortium in France who were in contact with policy experts at the national and regional scales. Only policy experts who had been in contact with SEAMLESS (for example by participating in the user forum and the targeted meetings) were asked to participate in the case study. This was to ensure some familiarity with SEAMLESS-IF and the research questions addressed.

Because of their tight schedules, all persons contacted preferred to return filled-in questionnaires by e-mail.

Results and discussion

User-defined uncertainty information needs

Of the eleven people contacted at the European level, six completed and returned the questionnaire. Respondents from the European Commission represented different Directorates-General (DG Agriculture and Rural Development, DG Environment, DG Economics and Finances). In addition, we received four completed questionnaires from policy experts in France working at the national and the regional level (Table 4). All respondents had an academic background (university degree or PhD) and had either worked with scientific models before (e.g. crop models or agricultural sector models) or were familiar with using the results of scientific models in their daily work. Hence, we could presume respondents to be well qualified to respond to the issues addressed in the questionnaire. Though more completed questionnaires would certainly have been desirable, the responses received provided valuable insight into the users’ demand for uncertainty information. This holds in particular if we keep in mind that examining users’ uncertainty information needs is a field in which only little data have become available so far (see also the literature review in section “Effective uncertainty analysis in IA models: the need for a model user perspective”). In the following, the discussion of results refers to the whole sample of participants. Further details on the results obtained from individual respondents can be provided on request.

Table 4 Response to the questionnaire

Generally, the answers received to the first part of the questionnaire illustrate a high variability of uncertainty perceptions across respondents. For example, one participant claimed that our uncertainty definition “mixes up errors and uncertainties”, stating that “…imperfections in model input (observations, data, interpretation of statistical information) are errors”, while “there is uncertainty when the representation of the system observed is uncertain”. Another participant suggested characterising “uncertainties” as “reliability defects” due to parameterisation errors, which have to be distinguished from “design defects (model structure errors)”. This clearly differs from the uncertainty definition offered in the questionnaire. Even though the divergence of users’ understanding of uncertainty may be a result of the relatively small number of participants, we regard it as an important observation, indicating that the users’ uncertainty information needs are likely to differ from the information that model developers provided in earlier studies.

Our proposed definition of the “objectives of uncertainty analysis” received more agreement. Two statements of disagreement, however, expressed fundamental controversy. One respondent claimed that identifying uncertainties and exploring possibilities for model improvement is part of the model validation process and does not belong to uncertainty analysis. Another respondent stated that uncertainty analysis should deal both with the “external stochasticity” of a model and with “model uncertainty”, i.e. parameters and functions which are subject to “a specific model design”.

From the comments received, we conclude that respondents were aware of different types and sources of uncertainties in SEAMLESS-IF. Furthermore, respondents explicitly stated that uncertainties can also be located outside model boundaries. For example, one comment remarked that “uncertainties in computer models generally reflect uncertainties in our understanding of (…) the processes”. Another participant pointed out that “even in case of perfect model input (…) model results remain uncertain with regard to their ability to reflect reality”. Nevertheless, the respondents’ comments imply that they did not consider all types and sources of uncertainty equally relevant. For example, one respondent questioned whether “design defects (i.e. model structure errors)” should be part of uncertainty analysis. Instead, it seemed much more plausible for this policy expert to focus on those uncertainties which are caused by a bad parameterisation. Another comment stated that “uncertainty should be more narrowly defined to factors which in the real world cause uncertainty such as weather, animal disease, exchange rates. (…) As a user I want to know the impact of (these) policy relevant uncertainties on model outcomes”. This suggests that, for policy experts using SEAMLESS-IF, uncertainties located within the model seemed more relevant than uncertainties due to lack of knowledge and ignorance. In particular, uncertainties inherent in technical parameters and forcing functions were of high interest to the consulted experts. Hence, applying the classification scheme modified after Walker et al. (2003) illustrates that respondents showed a preference for information about statistical and scenario uncertainty (see Table 5 in the next section). In addition, SEAMLESS-IF users asked for information on how these uncertainties affect model outcomes. This does not mean that uncertainties due to an imperfect understanding of the problem or the model context were considered unimportant. On the contrary, respondents explicitly pointed out that making model limitations transparent, as was done during the development process of SEAMLESS-IF, is helpful for a better interpretation of model outcomes, even if this should not be the focus of uncertainty analysis.

Several factors may explain the interest of SEAMLESS-IF users in certain types of uncertainties. For example, given their academic backgrounds and experience in using IA models, the users may have felt capable of making their own judgments about uncertainties due to lack of knowledge and ignorance. Hence, keeping in mind that investigating users’ uncertainty information needs in SEAMLESS-IF is meant as an illustrative case rather than an exhaustive study, uncertainty information needs may well be different for users less experienced with models or working in other policy contexts. Alternatively, Owens (2005) assumed that users may often not be interested in uncertainties lying beyond model boundaries because they primarily want to use model results to rationalise preconceived policy decisions. Which of these possible explanations is most appropriate, however, is an empirical question that is beyond the scope of this paper.

Table 5 Types and sources of uncertainties in SEAMLESS-IF considered relevant by users (U) and model developers (D)

Regarding possible sources of uncertainties, the respondents showed a weak preference for “model structure”, “technical setup” and “model inputs”. Several respondents indicated that they consider these sources equally relevant. Only one respondent considered system boundaries to be an important location of uncertainties.

Of the possible aspects that make model users feel confident in model results, the issue “interaction and communication between model developers and users” received by far the highest agreement. More specifically, the option “model developers inform about uncertainties in SEAMLESS-IF and show their impact on model results” received the highest number of scores (seven respondents). Furthermore, good communication and information flow seemed to be very important (four respondents). Earlier experience with IA models, especially with models included in SEAMLESS-IF, and the policy relevance of the issues addressed in SEAMLESS-IF received moderate agreement. Analytic aspects such as data quality were only of minor relevance to the respondents. Comments added to this part of the questionnaire stressed transparency about model limitations as an important issue, which is in line with the statements received in the first part of the questionnaire discussed earlier.

Finally, respondents were asked to indicate their preferred way of uncertainty documentation. The answers received did not point to clear priorities. Three respondents stated that they did not know which option to prefer. The remaining items, “model comparison”, “scenario analysis, including extreme options”, “probabilistic analysis” and “expert elicitation”, received almost equal scores. The option “checklists for model quality assessments” received no score. This corresponds to the low relevance policy experts attached to data quality as a means of creating confidence in SEAMLESS-IF results.

Lessons learnt in SEAMLESS-IF for user-oriented uncertainty analysis

As outlined earlier in this paper, the development of SEAMLESS-IF was a participatory process. Hence, while policy experts’ uncertainty information needs were expressed most explicitly in the questionnaire, the results discussed below must be regarded as part of the model developer–user interaction throughout the SEAMLESS project.

In view of the primary aim, i.e. gaining insight into the uncertainty information needs expressed by SEAMLESS users, we may summarise our results in three observations. First, categorising users’ uncertainty information needs using the modified classification scheme of Walker et al. (2003) demonstrates that an exhaustive uncertainty analysis, addressing every possible type and source of uncertainty, was not preferred by SEAMLESS-IF users (see Table 5). Evidently, users considered focusing the analysis on selected types and sources of uncertainties to be more effective. This may also be regarded as an outcome of the interactive development process of SEAMLESS-IF as described earlier in the paper (see section “The need for a user-oriented approach to uncertainty analysis in SEAMLESS-IF”), which considerably strengthened policy experts’ understanding of the model. In the particular case of SEAMLESS-IF, information on uncertainties due to model context and model boundaries was not considered a priority need by the users.

Second, users’ uncertainty information needs differed from the SEAMLESS-IF developers’ priorities as reflected in the published uncertainty analyses of some SEAMLESS-IF components. Comparing the uncertainty information needs of both groups illustrates that policy experts had a broader view on relevant sources of uncertainties (see Table 5). In particular, policy experts included model structure and the technical realisation of a model in the set of relevant sources of uncertainties. This illustrates that, for the potential users, an analysis of parameter uncertainty alone, as preferred by the developers of the SEAMLESS-IF components, was not regarded as sufficient for creating confidence in model outcomes.

Third, the findings illustrate that regular uncertainty communication and information exchange between model developers and users during the IA modelling process is just as important for creating confidence in an IA model as uncertainty analysis itself. The need for communicating uncertainty to model users has repeatedly been pointed out in the literature (for example by Manning 2003; Walker et al. 2003; Patt and Dessai 2005; Janssen et al. 2005; Refsgaard et al. 2007). The SEAMLESS-IF case study suggests that such two-way uncertainty communication should be part of the modelling process instead of being appended to it after completing the modelling phase. Beyond that, the findings indicate that uncertainty communication as a part of the modelling process can help to narrow down the types and sources of uncertainties to be assessed, which aids in performing uncertainty analysis more efficiently. Furthermore, since most IA models are developed in long-lasting projects, embedding uncertainty communication in the modelling process facilitates identifying changing uncertainty information needs over time, for example as a result of learning.

Furthermore, the users’ feedback stimulated reflection among SEAMLESS-IF developers on how to structure uncertainty analysis in this model. Our results allow for an identification and categorisation of user-relevant types and sources of uncertainties in SEAMLESS-IF. The findings, however, were not detailed enough for ranking the different uncertainty categories; clearly, this requires further research. Policy experts repeatedly emphasised their need for understanding the impact of model- and input-related uncertainties on model outcomes (i.e. the quantified indicators included in the SEAMLESS-IF library, see Table 1). Therefore, a stepwise approach to uncertainty analysis in SEAMLESS-IF was proposed (van Ittersum et al. 2008b). The objective was to further narrow down uncertainty analysis from the outcome side of SEAMLESS-IF. More specifically, outcome indicators should be ranked according to their policy relevance, where “policy relevance” should again be determined by model users. In a subsequent step, analytic methods should be applied to examine, for a set of selected key outcomes, which user-relevant uncertainty types and sources trigger variations of these key indicators. As a start, nitrate leaching and farm income were identified as two highly policy-relevant SEAMLESS-IF outcomes. Both are calculated by the FSSIM model, with some input from APES, as a function of biophysical, agricultural and economic processes. Since policy experts considered statistical and scenario uncertainties located in model structure as well as in the technical model setup and inputs to be relevant, an inventory of the analytic steps necessary for assessing these uncertainties was made for these two key outcome indicators. For example, statistical uncertainties located in model structure largely depend on whether and how other forms of nitrogen (ammonia, nitrogen in organic matter etc.) are taken into account, and on how the temporal variability of nitrate leaching has been modelled in APES. An analysis of this uncertainty type/source combination would, therefore, require assessing the impact of different structural modifications in APES on the results revealed in SEAMLESS-IF for nitrate leaching. Also, the calibration procedure used in FSSIM was identified as very important for the simulation of outcomes. Accordingly, different calibration options would have to be tested in order to assess their impact on model outcomes; a stylised sketch of such an exercise is given below.
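
To illustrate what testing such options might look like, the following Python sketch reruns a mocked model chain under alternative calibration and nitrogen-representation options and summarises the resulting spread of a key indicator. The option names, offsets and the run_model_chain placeholder are entirely hypothetical and do not reflect the actual FSSIM or APES implementations.

```python
from itertools import product
from statistics import mean, stdev

def run_model_chain(calibration: str, nitrogen_forms: str) -> float:
    """Hypothetical placeholder for a SEAMLESS-IF run returning the
    nitrate leaching indicator (kg N/ha) under one option combination."""
    baseline = 42.0  # illustrative value, not a real SEAMLESS-IF result
    calib_offset = {"default": 0.0, "alt_calibration_1": 1.8,
                    "alt_calibration_2": -2.5}[calibration]
    n_offset = {"nitrate_only": 0.0, "all_n_forms": 3.1}[nitrogen_forms]
    return baseline + calib_offset + n_offset

calibrations = ("default", "alt_calibration_1", "alt_calibration_2")
nitrogen_options = ("nitrate_only", "all_n_forms")

# Rerun the chain for every structural/calibration option combination.
results = {opts: run_model_chain(*opts)
           for opts in product(calibrations, nitrogen_options)}

values = list(results.values())
print(f"nitrate leaching: mean = {mean(values):.1f} kg N/ha, "
      f"sd = {stdev(values):.1f}, range = {max(values) - min(values):.1f}")
# A wide range relative to policy-relevant thresholds would flag this
# type/source combination as a priority for more detailed analysis.
```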

This structuring of uncertainty analysis was adopted at a relatively late stage of the project. The user-oriented uncertainty analysis could, therefore, not be fully implemented during the lifetime of the project. An important lesson for future projects is that an uncertainty analysis guided by user needs should be set up in an early project phase and should accompany the entire development of an IA model. Notwithstanding, we conclude that user interaction, complemented by an explicit investigation of users’ uncertainty information needs, is essential for user-oriented uncertainty analysis. This can be regarded as an important input to future applications and projects using SEAMLESS-IF.

Conclusions

In this paper, we argue that uncertainty analysis in IA models should be user-driven in order to contribute effectively to model-based decision-support. This requires investigating users’ uncertainty information needs. As an illustrative example, we discuss the case of the SEAMLESS Integrated Framework (SEAMLESS-IF). The uncertainty information needs of policy experts, the most important user group of this IA model, were examined in an interactive process during the development of SEAMLESS-IF and by means of a questionnaire. This allowed for identifying and categorising policy experts’ uncertainty information needs, which, in turn, facilitated the structuring of uncertainty analysis in SEAMLESS-IF. It should be pointed out that, while providing interesting and useful insight into model users’ uncertainty information needs, the case study presented is just a first step towards user-oriented uncertainty analysis in IA models. Further research is needed in several respects. First, the current empirical basis for comparing model developers’ and users’ uncertainty preferences is still weak. Hence, exploring the uncertainty information needs of a broader group of SEAMLESS-IF users would be useful for better supporting user-oriented uncertainty analysis in this IA model. Second, the uncertainty information needs identified in SEAMLESS are case-specific. Generally, users’ uncertainty information needs may vary depending on the user group and the IA model of concern. Therefore, applying the approach suggested in this paper to other IA models as well as to other user groups would be an interesting challenge for future work.

Nevertheless, SEAMLESS-IF is a typical IA model in that it incorporates different sub-models, combines approaches from different disciplines and addresses different stakeholders and user groups. Therefore, the case study warrants some indicative conclusions that apply to a broader class of IA models used to support policy decision-making. First, the standard uncertainty analysis provided by model developers can differ from users’ uncertainty information needs. Second, an exhaustive uncertainty classification and analysis in IA models may not always be necessary or desirable. Instead, focusing on selected but user-relevant uncertainties may be more effective for fostering the understanding of a model, for creating confidence in its outcomes and for decision-support. This makes uncertainty analysis more efficient by, for example, reducing the time and computational capacity needed. Third, user participation during model development, as is typical for IA models anyway, must include uncertainty analysis from an early stage in order to allow for implementing users’ uncertainty information needs. Thus, uncertainty analysis must be part of the overall IA modelling process.