System Appraisal

Output preparations

These activities involve completing the scientific description of the Simulation Analysis (Model plus Interpretive) and addressing requirements of the Output Step.

Complete the Interpretive Analyses

These activities complete and describe the scientific assessments of the results of the Simulation Model (both the mother Simulation Model and its scenario versions).

Activity Explanations
The major objective of the SAF is to provide a methodology for both diagnostic and prognostic assessments of complex systems. Because systems science is still a relatively young field, it is important that the presentation of SAF results is well described and documented in a credible manner. This emphasis is broadened by the objective of creating a more viable science-policy interface, in which scientific credibility must be translated into non-scientific terms.

The second portion of the Interpretive Analysis is completed in this work task. It includes the assessments of the ESE Models, the evaluation of the Simulation Model and its scenario runs, and the results of the Collateral Analyses conducted in support of the Simulation Model objectives.

Describe and Interpret the Hindcast and Scenario modeling results

A first step is to explain how the scenarios were chosen. Why was a certain scenario eventually presented to the stakeholders? What were the reasons for it? What were the criteria for the selection? What were the alternatives? A description of the selection procedure will be needed in front of the stakeholder audience in the Output Step. It must be noted that each of the chosen scenarios is only one suggestion and may not be the best possible solution. You should document your choices, for example why you decided not to include a particular variable in the model and what would have been needed (more data, additional surveys, etc.) to include it. It is important to clearly describe the assumptions, decisions, and added value that have led to a certain model representation and to the selected scenarios. The process needs to be explained in a transparent way in the Output Step, when the scenarios are shown to the stakeholders.

A second step is to explain what is being quantified in the model and what is shown by the results. As described in the Formulation Step, this implies a confrontation between the simulations of the model and the expectations derived from the mental models produced by people, in particular when the virtual model detects unexpected behaviours. Increased understanding of and confidence in the model by policy-makers and stakeholders can be obtained by exploring the system dynamics. For example, simplified versions of the model can be used to highlight some aspects of the dynamics and to explain some counter-intuitive results obtained when the whole system is simulated.
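To make this concrete, the sketch below (illustrative only; it is not drawn from any SAF application, and all equations and parameter values are invented for the example) reduces a system to two state variables, prey and predator, and reproduces a classic counter-intuitive result: harvesting both populations raises the time-averaged prey level (Volterra's principle). A reduced model of this kind can make such behaviour transparent before the full Simulation Model is shown.

```python
# Minimal predator-prey sketch (hypothetical; not a SAF model).
# Demonstrates a counter-intuitive result that is hard to see in a full
# ecosystem model: proportional harvesting of BOTH populations raises
# the time-averaged prey level (Volterra's principle).

def rhs(x, y, a, b, c, d, h):
    """Lotka-Volterra rates with harvest mortality h on both species."""
    return ((a - h) * x - b * x * y,
            c * b * x * y - (d + h) * y)

def mean_prey(h, a=1.0, b=0.5, c=0.2, d=0.4, x0=2.0, y0=1.0,
              dt=0.01, t_end=300.0):
    """RK4 integration; returns the time-averaged prey level."""
    x, y = x0, y0
    acc = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        k1 = rhs(x, y, a, b, c, d, h)
        k2 = rhs(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], a, b, c, d, h)
        k3 = rhs(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], a, b, c, d, h)
        k4 = rhs(x + dt * k3[0], y + dt * k3[1], a, b, c, d, h)
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        acc += x
    return acc / steps

print("mean prey, no harvest   :", round(mean_prey(0.0), 2))  # ~4.0
print("mean prey, harvest h=0.1:", round(mean_prey(0.1), 2))  # ~5.0, higher
```

Walking stakeholders through a two-equation version like this is often enough to explain why the full model behaves "strangely" under a harvesting scenario.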

Complete Collateral Analyses | Example

The Collateral Analyses are the supplementary assessments and analyses that enrich the Simulation Model and its Scenario results. The suite of analyses supporting the Simulation Model will vary from system to system, as it did for the individual ESE Component Models. Here, we describe some categories and provide examples.

Error Analysis. Prognostic descriptions have a particularly strong requirement for credibility, particularly with regard to obtaining the confidence of the end-users, such as Policy-Makers, Stakeholders, and the public. Some of this effort falls in the Output Step, but the basic information on the actual or expected reliability must be produced in the Appraisal Step. This requires an evaluation of error over the complete range of validity for the simulation model.

The evaluation of the range of validity is taken here as the evaluation of "application niche uncertainty", which refers, as explained in Pascual et al. (2003), to the set of conditions under which the use of a model is scientifically sound. This evaluation can be considered as a part of the sensitivity analysis. It can be carried out (STOWA/RIZA, 1999) by feeding the model with extreme values of inputs in order to find which conditions cause it to crash or to show undesirable behaviour; the Stella manual (Stella, 2001, p. 150) makes a similar recommendation.
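A minimal sketch of such an extreme-value test follows, assuming the Simulation Model can be wrapped in a callable; `run_model` here is a hypothetical placeholder, not part of any real modelling toolkit. The model is fed combinations of deliberately extreme inputs, and any crash or physically impossible output helps delimit the application niche.

```python
# Hedged sketch of an extreme-value "crash test". `run_model` is a
# hypothetical placeholder standing in for the real Simulation Model.
import itertools
import math

def run_model(nutrient_load, river_flow):
    # Placeholder dynamics: dividing by flow means zero flow crashes.
    return {"biomass": nutrient_load / river_flow}

extremes = {
    "nutrient_load": [0.0, 1e-6, 1e6],   # deliberately extreme values
    "river_flow":    [0.0, 1e-6, 1e6],
}

for combo in itertools.product(*extremes.values()):
    inputs = dict(zip(extremes, combo))
    try:
        out = run_model(**inputs)
        if any(math.isnan(v) or math.isinf(v) or v < 0 for v in out.values()):
            print("undesirable behaviour at", inputs)
    except Exception as exc:              # a crash also delimits validity
        print("model crashed at", inputs, "->", exc)
```

The set of input combinations that run cleanly, versus those that crash or produce impossible values, gives a first empirical map of the conditions under which use of the model is sound.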

The evaluation of error centres on the total effects of uncertain factors on the model results, rather than on the relative sensitivity of individual factors (STOWA/RIZA, 1999). The different sources of uncertainty are described above.

The evaluation of uncertainty depends on the calibration method used. The uncertainty of a calibrated parameter vector can be represented by a variance-covariance matrix (STOWA/RIZA, 1999), which can be used to give an uncertainty or confidence interval on the model results. Another method is the min-max approach, in which a Monte Carlo simulation is used to construct an uncertainty interval. The uncertainty can also reflect the existence of parameter ranges, rather than uncertainty in their determination. Often, statistical assumptions, such as a lognormal distribution of error in a parameter value, are needed. First-order uncertainty analysis, based on a truncated Taylor series expansion, is useful when the coefficient of variation of each parameter is known. The fuzzy set approach allows the notion of graduation to express whether an element belongs to a set (see the example in Freissinet et al., 1999). In some cases, as in the Millennium Ecosystem Assessment, the determination of uncertainty may be carried out in a subjective way (see below). Practical advice for carrying out uncertainty analyses can be found, for example, in STOWA/RIZA (1999) and Odum and Odum (2000).
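Two of these approaches, Monte Carlo propagation and first-order (Taylor series) analysis, are sketched below for a toy scalar output. The function `response` and all (mean, sd) values are invented for illustration; the lognormal error assumption mirrors the one mentioned above.

```python
# Hedged sketch: Monte Carlo and first-order uncertainty propagation
# for one scalar model output. `response` is a toy stand-in, and the
# (mean, sd) values are illustrative, not from any real calibration.
import math
import random

random.seed(1)

def response(growth_rate, mortality):
    """Toy model output, e.g. an equilibrium stock level."""
    return 100.0 * growth_rate / mortality

params = {"growth_rate": (0.8, 0.10), "mortality": (0.2, 0.05)}  # (mean, sd)

# Monte Carlo: sample parameters (lognormal error assumption), run the
# model, and report a 90% uncertainty interval from the percentiles.
runs = sorted(
    response(**{k: random.lognormvariate(math.log(m), s / m)  # sigma ~ sd/mean
                for k, (m, s) in params.items()})
    for _ in range(5000)
)
print("MC 90% interval:", round(runs[250], 1), "-", round(runs[4750], 1))

# First-order analysis (truncated Taylor series), assuming independent
# parameters: Var(Y) ~ sum over i of (dY/dp_i)^2 * Var(p_i).
base = {k: m for k, (m, s) in params.items()}
var_y = 0.0
for k, (m, s) in params.items():
    eps = 1e-6 * m
    bumped = dict(base, **{k: m + eps})
    grad = (response(**bumped) - response(**base)) / eps  # finite difference
    var_y += (grad * s) ** 2
print("first-order sd:", round(math.sqrt(var_y), 1))
```

The Monte Carlo interval makes no linearity assumption but costs many model runs; the first-order estimate is cheap but trustworthy only when the coefficients of variation are small.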

After the validation and sensitivity analyses, the results of the simulation model need to be subjected to a number of additional checks. Some of them have already been pointed out in the Systems Formulation Step:

- Simulations need to be compared with expectations, bearing in mind the different groups of stakeholders.

- Does the model point to the existence of previously unrecognized behavior?

- Can the model reproduce the behavior of other examples of systems in the same class as the model?

- Are the policy recommendations sensitive to plausible variations in parameters and to structural changes representing alternative formulations? (A sketch of such a check follows this list.)
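As a sketch of that last check (both policy names and the response function are hypothetical toy constructs, not from an actual SAF application), one can re-evaluate the ranking of two candidate policies over plausible parameter draws and count how often the recommendation flips:

```python
# Hedged sketch: robustness of a policy ranking to parameter variation.
# `outcome` and both policy names are hypothetical toy constructs.
import random

random.seed(7)

def outcome(policy, sensitivity):
    """Toy policy-relevant output (higher is better)."""
    if policy == "reduce_load":
        return 0.50 * sensitivity   # benefit scales with system sensitivity
    return 0.45                     # "restore_habitat": roughly insensitive

base_rank = outcome("reduce_load", 1.0) > outcome("restore_habitat", 1.0)
flips = sum(
    (outcome("reduce_load", s) > outcome("restore_habitat", s)) != base_rank
    for s in (random.uniform(0.5, 1.5) for _ in range(1000))
)
print("ranking flipped in", flips / 10.0, "% of draws")
```

If the flip rate is non-negligible, as in this toy case, the recommendation is not robust and should be presented to stakeholders with that caveat.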

Pascual et al. (2003) advise external peer review as a mechanism for independent assessment of models, particularly when these models are to be used as a basis for regulatory or policy/guidance decision-making. Recommended mechanisms for accomplishing peer review include using ad hoc panels or holding a technical workshop.

According to Refsgaard and Henriksen (2004), the decision as to when a model is good enough must be taken in a socio-economic context. Accuracy requirements may differ from case to case, depending on the intended use of the model and on how much is at stake. The appropriate degree of evaluation cannot be defined by modellers or scientists alone, but needs to take the views of decision-makers into account.

Risk Analysis. In the Design Step, possible risks to the system represented (the Virtual System) were defined. Some of these risks define the validity limits (above) of the Model in the sense of the model structure, e.g. a change in the resilience of the trophic web caused by the introduction of an alien species not included in the model. Others may have been included in the Scenarios, e.g. the risk of habitat loss from urbanizing the shoreline.

System Dependence. It is important to provide a clearer definition of the degree to which the Simulation Model addresses the Impact and Policy Issues, and of how the social and economic responses of the system (those resulting from forcings on the system and those generated by it) should be considered within these assessments and demonstrated in a scientific manner. Again, the discussion of results must include some assessment and criteria concerning the relevance to Sustainability of all the ESE Components.

ESE Interrelationships. The ESE assessments of the Appraisal Step will not have completed the discussion of the potentially important feedback loops between the ESE Components; likewise, some of these will not be represented by the restricted linkages used in the Simulation Model. A discussion of these is essential to the holistic aspect of the SAF, i.e. they exist but were not represented because they were not fundamental to the immediate functionality represented by the Simulation Model. The final end-user audience may not want to restrict their field of interest to that represented by the Simulation Model, and neither need the end-user discussion. In other words, the fuller holistic functionality of the CZ must be recognized at least in a qualitative manner, even if it is not represented quantitatively.

Draft the conclusions of the Simulation Analyses | Example

Here, the major scientific conclusions are defined and explained at a preliminary level for their use in the Output Step. These conclusions will be integrated into the material for the Science-Policy Interface so that the combined results can be finalized for both the Scientific Article and the Science Policy Report.

Next step