System Appraisal

Systems simulations
These activities construct the Simulation Model from the latest versions of the ESE Component Models, taking into account the interactive feedbacks between these components.

Construct Simulation Model (SM)

These activities create a working Simulation Model and test it for stability, validity, and agreement of its output variables with observations.

Activity Explanations
These tasks combine all previous model preparation into the CZ Simulation Model. This model may have several versions because of the scenario specifications; the Simulation Model is specialized to accommodate the scenario requirements that follow. To understand this distinction, one can think of the product of these activities as the ‘mother’ model that represents the functionality surrounding the ecological impact and examples of the economic and social responses to that impact. The scenario versions are specific runs of the mother Simulation Model, which require certain changes in the input, internal functions, or output of the Simulation Model.

Review Inputs, linkages, outputs for Simulation Model | Example
The SAF recognizes three main types of model subsystems or components: the Natural Component (NC), the Economic Component (EC), and the Sociological Component (SC). These three components are coupled in the Simulation Model (to be developed for each study site). The division into NC, EC, and SC is adopted for simplicity, but it should be taken into account that, as described in Wang et al. (2001), the level of integration between these subsystems can be variable.

Based on the degree of integration between their economy-ecology subsystems, Wang et al. (2001) describe different types of models: models with unilateral interactions and integrated models. Wang et al. (2001) do not mention the sociological subsystem, but their scheme could be extended to consider it as well. System models with “unilateral interactions” may be ecologically or economically oriented. In the first case, the model focuses on the changes in the ecological subsystem caused by the economic subsystem. In the second case, the ecological subsystem provides inputs for the economic subsystem. However, there is no real feedback between the two subsystems. Integrated models link individual elements of both the ecological and the economic subsystems. In this case, Wang et al. distinguish integration through a “production function” (which transforms inputs of factors such as human labour and ecological resources into outputs such as investments and consumption) from integration via an “objective function”, for example by including ecological factors (such as resources or emissions) in economic functions such as a “consumer’s utility function”.

Construct Simulation Model from the ESE Component models | Example | Example 2
From the System Formulation step you should already possess the building blocks of your model in the modelling package you are using, that is, sub-components that you have run, calibrated, and validated. The environmental, social, and economic sections of the model need to be finalized and brought together into a system-based model before it can be used to inform stakeholders and decision-makers.

Linking ESE components is not an easy task. For example, it may be difficult to use EXTEND to integrate modelling routines that use different time steps. It is best to use a time step that is sufficiently small to be applied to all the model components. This may not be very efficient; for example, in the Scheldt delta, the adopted solution was to build an estuarine compartment block that implements the time integration itself and has a user dialog to set the time step. The Guadiana Estuary model uses blocks that accumulate the outputs of the environmental component and control when they are sent to the socio-economic model.
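
The same accumulation idea can be sketched outside EXTEND. Below is a minimal Python illustration (not taken from any study-site model) in which an environmental component runs on a daily step and a hypothetical accumulator hands its summed output to the socio-economic component once per year; all function names, step sizes, and numbers are assumptions for illustration only.

```python
# Minimal sketch of coupling components with different time steps: the
# environmental component runs daily, an accumulator collects its output,
# and the socio-economic component receives the total once per year.
# All names and numbers are illustrative placeholders.

def environmental_step(day):
    """Placeholder daily output of the environmental component (e.g. a load)."""
    return 1.0 + 0.1 * (day % 365) / 365.0

def socio_economic_step(annual_load):
    """Placeholder yearly response of the socio-economic component."""
    return -0.5 * annual_load  # e.g. a cost proportional to the load

accumulated = 0.0
for day in range(3 * 365):                  # three simulated years
    accumulated += environmental_step(day)  # fine-step accumulation
    if (day + 1) % 365 == 0:                # coarse-step hand-over
        response = socio_economic_step(accumulated)
        print(f"year {(day + 1) // 365}: load={accumulated:.1f}, response={response:.1f}")
        accumulated = 0.0                   # reset the accumulator block
```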

As you have been considering the system as a whole, and modelling the three aspects of the system mentioned above, it is logical to assume that these three aspects interact with each other. This is the basis of systems thinking, the bringing together of all aspects of a system and modelling them as a whole as opposed to their component parts. The problem comes in how we should go about integrating these three components.

The easiest way to create this integration is to consider a ‘common currency’ between these models. This can take the form of a variable that is present in both models, such that the output of one component is the input of another, remembering that there may well be a feedback loop in which the output of that second component also acts as an input to the first. Often, it may be more realistic to have two or more linking variables.
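
As a toy illustration of such a linking variable with feedback (assumed equations and coefficients, not a prescription of how any ESE component should be formulated), the Python sketch below lets an ecological stock drive an economic effort, while that effort feeds back as a pressure on the stock:

```python
# Toy 'common currency' linkage with feedback: the ecological component
# produces a stock, the economic component derives an effort level from it,
# and the effort acts back on the stock. Purely illustrative numbers.

stock = 100.0     # linking variable: output of the ecological component
effort = 0.0      # linking variable: output of the economic component

for year in range(20):
    # ecological component: logistic growth minus a harvest pressure (feedback)
    stock += 0.3 * stock * (1 - stock / 200.0) - 0.05 * effort * stock
    # economic component: effort responds to how much stock is available
    effort = 0.8 * effort + 0.2 * max(stock - 50.0, 0.0)
    print(f"year {year:2d}: stock={stock:6.1f}, effort={effort:6.1f}")
```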

Verify the Simulation Model, conduct Sensitivity tests and Error Analysis, and Document Results
Verification. The initial stage of validating the models is that of verification. This is simply running the model and making sure that it functions as would be expected on a qualitative level.

The simplest way to undertake a verification check is to carry out a simulation run and confirm with the experts within your system that the variables in the model are behaving as they would expect. It is not necessarily the modeller’s responsibility to check this, as the modeller is not expected to be an expert in the minutiae of the system being modelled; the other members of your team are there for this purpose.

A good way to undertake this sort of verification is to have the members of your stakeholder group present in a meeting while the model is being run. This allows discussion within the group about the model’s functioning and may produce feedback that would not be encountered if participants were approached individually. More often than not, it also allows changes to be made to the model while all members are present and results in a model that the group considers correct by consensus. It may be necessary to repeat this exercise two or three times until consensus is reached. Be aware at this stage that some experts will dispute the accuracy of the model and might try to influence its construction. It is therefore advisable that a trained facilitator is present at the session to ensure that all experts provide input to model reconstruction and that all parties are aware of the benefits of joint working in this area. See the SAF Protocol on CZ System Output concerning requirements for the facilitator.

The scale at which these checks are made depends on the scale of the data available. Ideally, a dataset of variables entering and exiting the model will be used, as this allows a simple validation check of the whole system. If this is not the case, and only partial datasets are available, or if validation using this method shows unacceptable error, then a more complex validation procedure on a function-by-function basis must take place. In the case of a lack of data, this allows verification of each link in the system; in the case of a failed validation on a macro scale, it allows the faulty functions within the system to be tracked down.

It is worth reiterating here that the point of the SAF is that it is a Systems Approach. One important premise in systems thinking is that emergent properties become evident when the system is modelled as a whole that are not present when the sub-components of that whole are modelled individually. This presents a problem when it comes to validation based on real data. If we validate the ESE components individually, then the interplay between these components is present in the real data against which the model is being validated, but not in the model itself. This results in increased error in the model.

Sensitivity Analysis. Sensitivity analysis examines how much the model output is affected by changes in a parameter value or a forcing function. If a minor change in a parameter value results in a large change in model output, the model is said to be sensitive to this parameter.

In relation to model validation, sensitivity tests are mainly used to evaluate how uncertainties in the estimated forcing functions or parameter values affect the model output. Much of the uncertainty in model output is introduced by inaccurate approximations of the forcing functions. This inaccuracy is mainly caused by a lack of data, poor temporal or spatial resolution of the data, poor data quality, etc. The potential variability in the forcing functions should be estimated and used in a sensitivity analysis.

Several parameters in complex system-dynamics models represent quantities that are difficult, expensive, or impossible to measure and therefore have to be estimated through a model calibration procedure. It is important to evaluate the sensitivity of the model to these parameters in order to increase its credibility.

The model parameters that have a high sensitivity should be identified and documented in preparation for the output, since the reliability of model scenarios depends strongly on how precisely these parameters can be estimated; this often pinpoints the need for high-quality data to better determine these critical parameters.
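
A simple one-at-a-time sensitivity test can be scripted around any simulation model. The sketch below is a hedged Python illustration: run_model and the two parameter names are placeholders for your own calibrated model, and the ±10 % perturbation is an arbitrary choice.

```python
# One-at-a-time sensitivity sketch: perturb each calibrated parameter by
# +/-10 % and record the relative change in a chosen model output.
# 'run_model' and the parameter names are placeholders.

def run_model(params):
    """Stand-in for the Simulation Model; returns one output of interest."""
    return params["growth_rate"] * 100.0 / (1.0 + params["mortality"])

baseline = {"growth_rate": 0.5, "mortality": 0.1}
ref_output = run_model(baseline)

for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * factor
        change = (run_model(perturbed) - ref_output) / ref_output
        print(f"{name} x {factor}: output change {change:+.1%}")
```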

Quantification of Error. Quantification of model error (i.e. measures of the difference between output from a hindcast simulation and empirical data) is part of the validation procedure. A series of techniques are described and applied in Allen et al. (2007). As an example, the Nash-Sutcliffe Model Efficiency measure (ME), a simple way of assessing model performance, is calculated as:

ME = 1 - Σ(M - D)² / Σ(D - D')²

where D are the observational data, D' is the mean of the observational data, and M is the model output. Allen et al. categorise model performance levels as: ME > 0.65 excellent; 0.65-0.5 very good; 0.5-0.2 good; < 0.2 poor. Another technique is to calculate the percentage model bias (i.e. the model error normalized by the data) as:

percentage model bias = 100 × Σ(D - M) / Σ D

Values can be categorised as: < 10 excellent; 10-20 very good; 20-40 good; > 40 poor. It should be noted that the choice of categories is highly subjective.
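
Both measures are straightforward to compute once paired observations and model output are available. The Python sketch below implements the two formulas above with invented example numbers; replace them with your own hindcast output and monitoring data.

```python
# Sketch of the two error measures: Nash-Sutcliffe model efficiency (ME) and
# percentage model bias, computed from paired observations (D) and model
# output (M). The example numbers are invented.

def nash_sutcliffe(obs, mod):
    mean_obs = sum(obs) / len(obs)
    num = sum((m - d) ** 2 for d, m in zip(obs, mod))
    den = sum((d - mean_obs) ** 2 for d in obs)
    return 1.0 - num / den

def percent_bias(obs, mod):
    return 100.0 * sum(d - m for d, m in zip(obs, mod)) / sum(obs)

obs = [2.1, 3.4, 5.0, 4.2, 3.1]   # observational data D
mod = [2.0, 3.6, 4.7, 4.5, 3.0]   # model output M

print(f"ME = {nash_sutcliffe(obs, mod):.2f}")       # > 0.65 would be 'excellent'
print(f"Pbias = {percent_bias(obs, mod):.1f} %")    # |Pbias| < 10 would be 'excellent'
```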

Conduct a hindcast simulation with Policy or other change | Example | Example 2
Hindcast Model. Comparisons between model output and observational data are one of the most effective ways to test whether the model sufficiently resembles reality. The purpose of a hindcast simulation is to validate the model by producing model-simulated data that are suitable for comparison with empirical data. A hindcast simulation requires forcing data, which are used to approximate the forcing functions that drive the model during the simulation. In addition, empirical data suitable for model comparison are required. These data should correspond to the model state variables or process rates, and they should have been measured during the same forcing conditions as the forcing data.

It is highly recommended that the forcing data and data used for model comparison have been measured during a major policy change in order to test that the model responds correctly to this change.

To conduct a hindcast simulation it is necessary to construct proper initial values for each state variable and to construct the forcing functions. An initial value defines the state variable at the beginning of the simulation period (i.e. at time t = 0). Mathematically speaking, the initial value for the state variable Cs is described by the function Cs(x,y,z,t = 0) = ps(x,y,z). Unfortunately, this function is often unknown and has to be approximated based on data and data inter-/extrapolation, a good guess, or model simulations (a spin-up period). A simple way of constructing initial values for a validation simulation is to find data observed close to the beginning of the simulation period and then make a simple linear interpolation (or extrapolation) in time and space. In general, it is an advantage if the simulation period starts during a period when the system dynamics are slow (for example, in winter for an ecological submodel) or after a given temporal boundary (for example, the beginning of new fiscal or labour regulations for economic and sociological submodels).

Forcing functions (or external variables) describe how the external world influences the state variables described by the model. In a hindcast simulation, all forcing functions have to be based on observations from the simulation period in order to simulate what has happened previously (hindcast). This is not a requirement in e.g. forecast scenarios.

Forcing functions are represented either by a prescribed value (e.g. sunlight, temperature) or by a flux condition (e.g. nutrient loading from a river). In both cases the forcing functions have to produce a value or flux at each time step of the simulation, and often also over large spatial domains. Data are normally not available at such short time scales and with such high spatial resolution, and it is therefore often necessary to interpolate or extrapolate the observed data. Various interpolation and extrapolation techniques are available, but simple linear interpolation is often recommended.
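
For illustration, the Python sketch below turns a handful of invented forcing observations into one value per daily model time step by linear interpolation (here using numpy.interp); the observation days and temperatures are assumptions, not real data.

```python
import numpy as np

# Sparse forcing observations interpolated linearly to every model time step.
# Observation days and values are invented for illustration.

obs_days = np.array([0.0, 30.0, 90.0, 180.0, 365.0])   # days with measurements
obs_temp = np.array([4.0, 5.5, 12.0, 20.0, 4.5])       # e.g. water temperature

model_days = np.arange(0.0, 365.0, 1.0)                # one value per daily step
forcing_temp = np.interp(model_days, obs_days, obs_temp)

print(forcing_temp[:5])   # forcing values for the first five model days
```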

Boundary conditions are a special type of forcing traditionally used in the context of models based on partial differential equations (i.e. models with a spatial component). For each state variable of the model, it is necessary either to prescribe the boundary value of the state variable or to describe the trans-boundary transport of the substance represented by the state variable. In coastal ecosystem models, it is very common that the boundary condition at the open border separating the coastal area from the sea is prescribed with a boundary value. This boundary value (which changes over time) can be approximated based on inter-/extrapolations of available concentration measurements close to the model domain. The open boundary at river discharge areas is normally prescribed with a flux condition approximated from measurements of the water inflow and the concentration in the river. If the EXTEND model of the coastal zone does not explicitly deal with space, then the transport into and out of the “model domain” has to be parameterized.
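
The two kinds of boundary forcing can be illustrated with a trivial worked example (all numbers invented): a prescribed concentration at the open-sea border, and a river flux obtained as discharge times concentration.

```python
# Illustrative boundary forcing: a prescribed value at the open-sea border and
# a flux condition at the river border (discharge x concentration).
# All numbers are invented placeholders.

sea_boundary_conc = 0.8      # prescribed boundary value, e.g. mmol N per m3
river_discharge = 50.0       # measured water inflow, m3 per s
river_conc = 120.0           # measured river concentration, mmol N per m3

river_flux = river_discharge * river_conc   # mmol N per s entering the domain
print(f"river nutrient flux: {river_flux:.0f} mmol N/s")
print(f"open-sea boundary value: {sea_boundary_conc} mmol N/m3")
```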

Once the initial values and forcing functions have been constructed and implemented in the simulation model, the model output from the hindcast simulation can be compared with empirical data for validation purposes. Observational data that are often used for comparison with model output are “pool” measurements that correspond to the model state variables (e.g. algae concentration and mussel biomass), but it is also important to have rate measurements for model comparison (e.g. measurements of primary production or grazing rates). In practice it is often necessary to “translate” model output to the semi-equivalent measured parameter, because an exact match between model output and measurements is rare. A typical example is modelled and observed concentrations of algal biomass. Most models calculate algal biomass in carbon or nitrogen units, whereas algal biomass is measured as chlorophyll a. In this case, algal carbon (or nitrogen) has to be “translated” to chlorophyll a before comparison with empirical data.
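
Such a “translation” is usually a simple unit conversion. The sketch below converts modelled algal carbon to chlorophyll a with an assumed carbon-to-chlorophyll ratio of 50, a commonly used but site-dependent placeholder that should be replaced by a locally appropriate value.

```python
# Translating model output (algal carbon) into the measured quantity (chl a)
# using an assumed carbon-to-chlorophyll ratio. Ratio and data are placeholders.

C_TO_CHL = 50.0                               # g C per g chl a (assumption)

modelled_algal_carbon = [12.0, 35.0, 80.0]    # e.g. mg C per m3 from the model
modelled_chl_a = [c / C_TO_CHL for c in modelled_algal_carbon]

print(modelled_chl_a)   # mg chl a per m3, now comparable with measured chl a
```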

Run Scenario Simulations

The purpose of these activities is to test and run versions of the Simulation Model for the selected scenarios and to document its results.

Review and evaluate priority and feasibility of scenarios | Example | Example 2
The role of scenario building in environmental studies is to develop exploratory views of the future. Scenarios are not to be confused with predictions or forecasts but are an attempt to describe a “range of possible futures”.

Within the SAF frame, a scenario can be defined as a set of forcings, boundary conditions, initial values, model parameters and constraints that can be used with a numerical model of a CZ system to assess what will happen in response to a change in these forcings, etc. It is thus a combination of policy options (i.e. modifications in components of the virtual system itself) and changes in forcing functions used to explore the potential future of the system through the representation of system trajectories.

Within SPICOSA, the chosen scenarios can explore those changes in policy options and forcing functions by defining several sets of input data; the model can then be run with each of these sets to show the changes incurred in the system.
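
In practice this amounts to running one model with several named input sets. The Python sketch below is a hedged illustration: run_simulation and the scenario contents are placeholders, and a real scenario set would also include forcings, boundary conditions, and parameter changes.

```python
# Running the same Simulation Model with several scenario input sets.
# 'run_simulation' and the scenario values are illustrative placeholders.

def run_simulation(inputs):
    """Stand-in for the coupled ESE Simulation Model."""
    return inputs["nutrient_load"] * (1.0 - inputs["treatment_efficiency"])

scenarios = {
    "baseline":      {"nutrient_load": 100.0, "treatment_efficiency": 0.30},
    "new_treatment": {"nutrient_load": 100.0, "treatment_efficiency": 0.60},
    "wet_climate":   {"nutrient_load": 140.0, "treatment_efficiency": 0.30},
}

for name, inputs in scenarios.items():
    print(f"{name}: effective load = {run_simulation(inputs):.0f}")
```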

So far in the project, the issue resolution task of the Design Step has focused on the environmental impacts and their socio-economic consequences that were of concern to the stakeholders and coastal managers. Through a collaborative process with the stakeholders, the key issue to be modelled has been agreed upon.

Since the local system managers control some of the forcings, model parameters, or boundary conditions of the model representing the dynamics of the key issue, the issue resolution task also included discussions with the stakeholders about the different scenario options for change in these environmental impacts. Scenario building is a powerful support tool for involving stakeholders or policy makers, stimulating discussion, and facilitating the assessment of the relevant action, policy, decision, or governance issue at stake (European Environment Agency, 2001).

Within your system, three different areas of scenarios can be identified:

1) The first one relates to public policy and describes a change in the management options and regulations. Local policy makers could for instance introduce new constraints on water treatment (see box on effects of sewerage management options in Barcelona beaches), encourage the development of organic agriculture or develop a new protected area.

2) The second relates to the occurrence of natural events: meteorological events or global change. For instance, the stakeholders might be concerned about a meteorological event such as a storm or tsunami occurring in the short term. Long-term change can also be a concern; climate change or sea-level rise, for instance, should be taken into account when there is a likelihood of change. Where the extent of the change is uncertain, the scenarios should cover the range of realistic possibilities. To assess such long-term or more global changes, specific types of scenarios can be used, such as emissions scenarios. The IPCC Special Reports on Emissions Scenarios, for instance, describe four socio-economic scenario families (based on assumptions about determinants such as population, economic growth, technological change, or environmental policies). The Millennium Ecosystem Assessment also developed a range of different scenarios to assess outcomes of global ecosystem services and their impact on human well-being.

3) The third relates to interactions between nature and society. Stakeholders may, for instance, want to know what impact the increase of a specific type of HA may have on the system. For example, if the policy issue deals with eutrophication, the reduction of point and/or diffuse nutrient loadings or the increase/decrease of the leisure boating industry in the coastal zone might be studied. The impacts of these options, for instance on the aquaculture sector or the tourism industry, will be reflected in the chosen indicators (ecological, economic, or social).

Most of the scenarios will be evaluated by comparison with baseline scenarios representing future states of society and the environment in which policies either do not exist or do not influence society or the environment. Multiple baselines can be developed to reflect different trends, some with a lower probability and some with a higher one (e.g. different trends in nutrient inputs or greenhouse gas emissions). If the time horizon is long, multiple baselines become more necessary, since uncertainty about environmental, social, and economic systems increases with time.

The three types of scenario described above can be further classified into levels of increasing difficulty with respect to the modelling exercise (see the Formulation Step guidelines):

• The scenarios involving merely changing input values (any forcing functions, parameters, initial or boundary conditions of state variables) or testing the output sensitivity to change.

• The scenarios that require modifying an internal component – like inserting an alternative technology, making another type of economic or social analysis, or exploring another scale of policy options.

• The scenarios that require adding an internal component – like a different land use problem.

• The scenarios relating to changing to an unrelated Impact such that a different cause-effect chain or assessment would be required; or changing the economic method or social assessment.

Further changes to the scenarios (whether at the storyline level or at the level of the numerical estimates) are possible during the Appraisal Step but should not go beyond the first level of difficulty described above.

Generate necessary Input data for selected Scenarios | Example | Example 2
Once calibrated and validated, the model will be used to make simulations. The results should meet the requirements of the stakeholders and provide an insight into the policy issue they wanted to focus on.

New data are often needed to run the model with the different scenarios, such as ‘simulated inputs’ to take the projection into the future into account, boundary conditions, or additional time series (note that these data issues regarding the scenarios should have been tackled during the Formulation Step).
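
A simple example of such simulated inputs is scaling an observed baseline series according to the scenario storyline; the sketch below applies an assumed 25 % load reduction to an invented annual loading series.

```python
# Generating scenario input data from an observed baseline series by applying
# an assumed policy-driven reduction. Numbers are illustrative only.

baseline_load = [90.0, 110.0, 130.0, 95.0]   # observed annual nutrient loads
reduction = 0.25                              # 25 % reduction scenario

scenario_load = [load * (1.0 - reduction) for load in baseline_load]
print(scenario_load)
```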

Prepare, conduct, and test scenario versions of SM | Example | Example 2 | Example 3
The main model structure often represents the baseline or business-as-usual scenario. This structure might have to be adjusted to match the other chosen scenarios (or the latest changes made during the Appraisal Step, following the conclusions of the Interpretive Analyses). The model structure might need to be changed even when the scenario involves no apparent “structural” change to the system. For instance, extreme high flows may not have been included in the available calibration and validation data even if they are needed to run the chosen scenarios. Since high flows may bring different flow paths or processes, the model needs to be adjusted to take this into account.

The adjustment of the structure implies additional testing of the revised model and re-assessment of its soundness. However, it is impossible to recalibrate or revalidate the adjusted model, because no data exist for the possible future situation characterised by the scenario.
