System Output

Do Stakeholder Forum

Introduction

Stakeholder Forum

You are depending on the reference group being interested in you and in what you are presenting. It is therefore crucial that you show an honest interest in them and in their stakes; of course this has been an important aspect throughout the prior steps as well. If the audience consists of a variety of different stakeholders (farmers, fishermen, environmental managers, local and regional politicians,…), it is crucial to communicate on a meta-level so that everybody receives ‘sufficient’ information. It will also be important to define a (more or less) common regional interest (e.g. funding opportunities for the whole region). If the group is too diverse or tending towards conflict, it is recommended that an initial meeting be held for each respective group before bringing the groups together to present the scenarios at the above-mentioned meta-level.

Entering the real world: Conduct the Stakeholder Forum

Recapitulation of the entire SAF process for transparency

You will need to briefly explain to the stakeholders what has been done so far.
Ideally you chose a policy issue at the beginning together with your stakeholders, then defined different policy/management options in order to build scenarios in the models and, after consultation with stakeholders, selected those to be presented to the Stakeholder Forum in this present step of the process. The scientist should not have been the person to select which scenarios are to be presented.
Prior to starting the presentation of the scenarios, an initial and general recapitulation of what you bring into this step of the Systems Approach Framework is an essential starting point in the Stakeholder Forum to build confidence and transparency.

Where/when has the reference group been involved?
You should highlight the steps in which the reference group members have given feedback and information. As said before, it is very important at this stage to explain the entire process. When presenting the model, inform the audience about which stakeholders gave input and feedback at which step of the process and where the quality of the model could be improved as a result of the information given by the reference group members. It has to be made clear that the stakeholders’ involvement has been of added value for the scientists and that without these inputs and feedback the scientists would not have been able to produce the model FOR the reference group.

Taking advantage of Issue ID and Design Step activities
It is now important to go back thoroughly to what has been done in the Issue ID and Design Step activities.
Read the full article on using social tools in the Output Step

Show the usefulness (and limits) of scenarios to the audience

The target-group-specific presentations must clearly show the usefulness of the scenarios (and of the communication across the science-policy interface) for the stakeholders and policy makers, and the usefulness of using scenarios for their decision-making and for the implementation of those decisions.
If this is not taken into consideration, the reference group will most likely lose interest in the scenarios.

Scenario presentation

Interpret and present the social-ecological modelling results to the audience
When presenting the modelling results to the audience you need to be aware of the following aspects. Every scientist and/or facilitator should be aware that reference group members come from specific stakeholder groups and from different environments. They have their own language and binary codes, through which they articulate and negotiate their interests within their own groups. The scientists need to be aware of this challenge when entering the dialogue with the reference group: stakeholders only listen and translate within these ‘codes’, and if they hear something different it will be sorted out and ‘removed’ (Luhmann). On the other hand, most stakeholders have multiple stakes, e.g. as fishers, parents, hikers etc. The presentation should therefore use examples that bring the different coastal spheres together. One therefore has to accept that it is essential, and inevitable, that the modelling results, i.e. the scientific results, are simplified, especially if the objective is that the audience understands the presentation well.

General explanation of uncertainties and assumptions

Scientists frequently stress all the conditions, uncertainties etc. so strongly that lay persons can get the impression that nothing can really be concluded; reservations that are normal in scientific presentations can thus cloud the message for the non-scientist. This can be confusing, and valid conclusions should be made clear. However, it is also very important to explain that the models are based on assumptions and that our present knowledge limits the predictive power of the models and/or imposes some level of uncertainty, or confidence limits, on the predictions. This difficult balance between making the limitations of the predictive value of the model clear to the audience on the one hand, and demonstrating the usefulness of the tool on the other, is very important for the presentation. Make sure that the audience realises that current knowledge sets limits on how precise a model can be and that using modelling software always implies simplifications; but also make sure that, once this context is clear, all following conclusions are stated as valid within that context. A good way to gain credibility is to demonstrate how the model portrays existing conditions which the audience can recognise.

It has to be made clear to the audience that scenarios are by no means a certain or accurate prediction of the future. Their objective is in fact to present possible future conditions in such a way that the audience is able to imagine these conditions and to find subjective interrelations and connotations, and thus to create a basis for discussion and deliberation. They are about forming one’s own imaginations and desires with regard to the described futures, depending on whether the presented scenarios and their details seem desirable, ambivalent or even frightening.
It also has to be made clear that none of the scenarios needs to be accepted completely. Some people in the audience may favour a scenario and identify themselves with a certain person or attitude in it, others may not. Keep in mind that such reactions, and the possible constraints or hidden pitfalls they reveal, can support the scenarios and the discussion about them, about action options and about possibilities for implementation and shaping. Scenarios are a tool for empowerment towards a proactive, shaping perception of the future and for the transition to the deliberation process that follows their presentation.
A significant characteristic of scenarios is that they allow a positive look into the future. Desirable opportunities and benefits can be put up front without letting the scenarios become ‘utopia’ and without fading out disadvantages, risks or problems: these should be faced, not removed. ‘No utopia’ means that scenarios are linked to existing social contexts, to trends and to developments of the region or community. They can also be linked to technologies and innovations that group members consider possible or feasible.
Speaking about trends implies that the logical time scale for the presented scenarios is a medium time horizon: ten to at most twenty years.
When dealing constructively and critically with the scenarios, the audience could ask the following questions during their presentation:

Uncertainty in ecological models
Uncertainty is present at all stages of the assessment process, whether it be uncertainty about the magnitude of physical impacts and their geographical and temporal distribution or uncertainty over the value of changes in ecosystem goods and services. Whatever methodology is used to conduct the assessment, all results should have been subjected to a rigorous sensitivity analysis. Sensitivity analysis allows this uncertainty to be explored in a constructive manner and can be used to identify the parameters of the system which are particularly subject to uncertainty and that have a significant impact on the overall outcome of the assessment.
Degradation and loss of ecosystems, and subsequent loss of their associated services, constitute a reduction in natural capital. Whether or not this implies an unsustainable path depends on the extent to which one believes that the ecosystem services provided by natural capital can be substituted for by other forms of capital. Whatever the case there is a great deal of uncertainty about both the consequences of ecosystem service degradation and loss, and the ability to generate substitutes. Given this uncertainty, and the potential for catastrophic change, many would argue for a precautionary approach, in which case current rates of biodiversity and other natural capital depletion are a source of serious concern for sustained maintenance of human welfare.
Ideally, ecosystems would be managed with sustainable development in mind. In practice, there are a number of acknowledged reasons why ecosystem degradation continues unabated, chiefly market failure and poor governance. One of the key causes of market failure is lack of information, so the provision of information on the economic value of ecosystems can, under certain circumstances, contribute to better decision-making. The current lack of knowledge relates both to ecosystem functions and to economic values. Poor knowledge of the mechanisms by which biodiverse ecosystems are maintained is a barrier to the development of effective management and assessment protocols.

Risk can be seen as the likelihood of occurrence of a set of factors leading to an undesirable outcome. In this context risk can be seen as a combination of the probability of a hazard occurring and the probability of the system being vulnerable to that hazard, which would then result in a negative outcome. Uncertainty, in general terms, is the lack of confidence in the likelihood of future events occurring and it is in a way more difficult to quantify. This uncertainty may arise from different aspects and steps in the process.
In terms of modelling, it is clear that different levels of complexity can be considered. First we do not have an absolute understanding or knowledge of the full workings and functioning of reality. That level of knowledge is not a requirement for the construction of a good working model. In fact simplicity tends to be a more desirable way to progress when modelling, given our ability to comprehend and assimilate simpler structures and processes. However, would the adopted ‘simple’ model be appropriate to represent ‘reality’? It is important to realize that models are often used to represent the physical system as well as our understanding of the science and theory linking the different compartments and defining the links. Will our knowledge and representation of ‘this science’ be appropriate? Another possible source of uncertainty relates to the parameters needed in the modelling process. How much knowledge do we have about them and can we improve this knowledge via some kind of predictive relationship? This would reduce uncertainty. For example, we may find (possibly via sensitivity analysis) that a model is very sensitive to a key parameter. This parameter may be difficult to measure, or else in order to improve the model, we may decide that it would actually be better if we could predict this parameter by devising a (mathematical) relationship with another parameter we can actually reliably measure. However, how reliable is this relationship and how can it be improved?
Therefore when modelling natural systems it is always better to start with the simplest conceptualizations and mathematical representation given these systems’ complexity. Models can then be made more complex following a stepwise approach, by changing assumptions and representations if and as these prove unable to simulate realistic behaviour of the target system. In this context 'realistic' is defined in terms of ability to simulate the correct qualitative behaviour of the system as it changes in time under natural forcing or in response to a perturbation, and to generate values of the state variables that are within observed envelopes of variation. This is an iterative learning process that would lead to an improved representation of the system and therefore would improve our understanding of the system functioning. Following this process would result in reduced uncertainty which would increase our confidence in the model and in its outcome.
The design of the model, as well as the assessment of its suitability to the task in hand, including the undertaking of sensitivity analysis, is the responsibility of the scientists. During the presentation it can be made clear to stakeholders what the sources of uncertainty may be and what was done to address them during the process. When presenting the model to the stakeholders it is most likely, and indeed desirable, that an interactive process takes place. Stakeholders may provide inputs and suggestions that lead to changes in the modelling approach; the processes previously followed to reduce uncertainty (described above) may then need to take place again (not in the presence of the stakeholders, although a simple explanation should be provided). When running sensitivity analyses there are standard procedures that should be followed; however, it is important to involve stakeholders in decisions such as which parameters they think are important and for which they would like to see the overall impact of altering the parameter by a pre-defined percentage. This allows the reference group to evaluate the model’s sensitivity to specific parameters and thereby improve their understanding of uncertainty.
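
As an illustration of this kind of one-at-a-time sensitivity check, the sketch below perturbs each parameter of a toy model by a pre-defined percentage and reports the relative change in a chosen output indicator. The model function, the parameter names and the 10% perturbation are invented for the example and are not part of the SAF toolset.

```python
# Minimal one-at-a-time (OAT) sensitivity sketch.
# The 'model' below is a hypothetical stand-in for a calibrated simulation
# model; parameter names and values are illustrative only.

def model(params):
    """Toy output indicator (e.g. mean summer chlorophyll) driven by a
    nutrient load, a flushing rate and a grazing rate."""
    return params["nutrient_load"] / (params["flushing_rate"] + params["grazing_rate"])

baseline = {"nutrient_load": 100.0, "flushing_rate": 2.0, "grazing_rate": 0.5}
perturbation = 0.10  # +/- 10 %, e.g. a percentage agreed with the reference group

base_out = model(baseline)
for name in baseline:
    changed = dict(baseline)
    changed[name] = baseline[name] * (1.0 + perturbation)
    rel_change = (model(changed) - base_out) / base_out
    print(f"{name}: +10% input -> {rel_change:+.1%} change in output")
```

Presenting such a table of relative responses lets the reference group see at a glance which of ‘their’ parameters the model reacts to most strongly.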

Presenting (running) the scenarios for the audience


You now take the scenarios you have developed, apply them to the model, and examine the model outputs. You then interpret these outputs in such a way that they can be provided to stakeholders and policy makers and be considered useful by this audience.
Make sure that you present the different scenarios with the same sequences and visualisations (for comparability), as explained. Point out those aspects that change considerably from one scenario to another.
Before presenting your findings to a forum audience, you must prepare your information in such a way that you can explain the advantages of the model and of the outputs you have produced. But equally you must provide information on the possible negative outputs of the scenarios. You need to be careful to provide a balanced view of what the results of your scenarios show. The following steps provide a procedure that will make your presentation easier.


How to present model findings to an audience
When we consider showing the findings of our models to an audience, we need to think about what it is we will be showing them.  Numbers in themselves are difficult to interpret quickly and accurately, which is why we tend to use graphs and figures to present findings in publications. However, what numbers should we be reporting on?
Run the models with pre-defined scenarios in front of the audience. Use the graphical output of the model to visualise what is happening.
Use running averages to smooth out variability caused by seasonality; if the model runs on a monthly time step, a 12-point running average will remove the annual seasonal signal.
A simple end point is often not useful; instead we need to design a way of showing what has happened during the period that the model has run. Where a simple change over time is not evident, taking a running average and reporting how that average changes may be useful. The simplest measure is the rate of change of the variable (its first derivative), which, where the curve is not described mathematically, can be approximated by the gradient of the change in that variable over time (see the sketch below).
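
A minimal sketch of the smoothing and rate-of-change calculations just described, assuming monthly model output; the synthetic series, its length and the variable it stands for are all invented for illustration.

```python
import numpy as np

# Synthetic monthly output: a slow downward trend plus a seasonal cycle,
# standing in for any state variable produced by the simulation model.
months = np.arange(120)                                  # ten years, monthly time step
series = 10.0 - 0.01 * months + np.sin(2 * np.pi * months / 12)

# A 12-point running average removes the annual seasonal signal.
window = 12
running_avg = np.convolve(series, np.ones(window) / window, mode="valid")

# Rate of change (first derivative), approximated by the gradient of the
# smoothed series per time step.
rate_of_change = np.gradient(running_avg)

print("change in the smoothed variable over the run:", running_avg[-1] - running_avg[0])
print("mean rate of change per month:", rate_of_change.mean())
```

Plotting the smoothed series alongside the raw output is usually the clearest way to show the audience the underlying trend without the seasonal noise.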

Identify scenario inputs and outputs
Each of the scenarios run on the models will have input parameters that deviate from each other. These will be important to emphasise to the forum, as they will frequently represent costs or savings that are the responsibility of the audience. Equally you should identify the main output variables and emphasise their importance, referring back to discussions from the Design Step.

Read more about alternative policy instruments
Read more about using the Issue ID and Design Step tools in the Output Step

Assumptions
As part of the process of designing and constructing your models, you had to make certain assumptions about how the system functions. As these assumptions could greatly affect the outputs of your models, it is important to communicate them to your audience, including the extent to which they will affect the output of the model.

a) Identify and quantify benefits for scenarios
These will be the positive aspects of your scenarios, which the audience can discuss to support the adoption of a particular course of action. The problem with these benefits is that usually only part of them are direct economic benefits that are already valued by market prices. In most cases you will have to make clear that a considerable share of the benefits consists of indirect economic effects or of ecological and social benefits that are not valued in monetary terms in the first place. It is a methodologically difficult and often politically challenging process to find adequate dimensions in which to measure and communicate the multidimensional benefits of management options. Therefore the valuation methodologies must be explained clearly and transparently.
To acknowledge the standard perspective on benefits, the presentation of financial benefits may be a good starting point for the discussion. But since usually at least some, if not a considerable share, of the benefits associated with environmental policy concern public goods that have no direct market value, do not raise the expectation that financial benefits will easily outweigh the costs of implementation. It is often the case that direct financial benefits are modest compared to the costs of implementing a policy measure. So broaden the picture step by step to show the parts of the valuation that are often neglected: namely the indirect financial effects, if you also take into account regional or national economic effects and positive side-effects on complementary activities.
The field of environmental benefits is somewhat more complicated. As mentioned above, you have to make very clear which methods your team has employed to assess non-market values. If you have expressed the values in monetary terms, they will be easily comparable with the financial benefits (and costs). But also in this case you should discuss the assumptions of the method and its limitations, and include a sensitivity analysis or other means of dealing with the uncertainties embodied in the method. If you did not monetise the environmental benefits, you have to discuss the weighting of the different dimensions of benefits in the course of a cost-effectiveness or multi-criteria analysis, to make sure that non-financial benefits are not neglected or ranked lower than already monetised costs and benefits just because they show no monetary value at first sight.
Social benefits can comprise some of the indirect financial benefits mentioned above, so beware of double counting here. The aspect to be discussed in detail here is mainly the distribution of benefits within or among the affected communities, be it a local, regional, national or global community. The main question is: who will benefit and who will not? This question is often neglected and accordingly leads to unexpected problems in practical political processes. Especially when considered together with the question of who will pay for the measures, it can explain consent or opposition from positively or negatively affected stakeholder groups. This may be the case even if the policy leads to a theoretically “efficient” overall outcome in allocative terms. The distributional aspects should in any case be analysed and discussed thoroughly. Many of these social aspects can be assessed by an actor or stakeholder analysis, which could be presented in this context to cover the social dimensions of the investigated effects more comprehensively.


b) Identify and quantify costs for scenarios
To present how the scenarios will affect the system, we must evaluate the costs that these changes will incur. These are the negative qualities of the scenarios, be they financial (e.g. costs through loss of amenity), social (e.g. loss of jobs, and thus of well-being) or ecological (e.g. habitat loss). Costs can also occur as “costs of doing nothing”, so they may be higher if no environmental policy measures are implemented. But often the implementation itself causes considerable costs, since it requires investments or implies restraints on current ecosystem uses.
The most directly visible costs of policy measures are the financial costs, most easily displayed as investment and maintenance costs. But the costs of non-usage or of restrictions on current uses should also be calculated and discussed. These indirect costs also cause financial losses, which are more difficult to detect but will result in political opposition from negatively affected stakeholders. Therefore these indirect costs should also be analysed and discussed as far as possible.
Closely connected with the indirect financial costs may be the social costs of a policy option. Here, similarly to the benefits discussed above, the distribution of the costs should also be discussed carefully. Since the incidence of costs and benefits may often not match, in the sense that the cost-bearing stakeholder groups might be very different from the beneficiaries, compensatory policy options should be discussed in that context. Concerning costs too, an actor network analysis can reveal potential social conflict and show decision makers possible paths to socially responsible and consensual management options.
Environmental costs occur mainly in the case of ‘doing nothing’ compared with additional environmental management options, so there is a close correspondence with the environmental benefits of ‘doing something’. But the possibility that doing good to one environmental dimension might harm another should also be discussed (e.g. CO2-saving offshore wind parks vs. submarine noise emissions). Finally, environmental costs are of course present if investment projects intended to raise profits or regional economic product are to be assessed by environmental impact analysis or cost-benefit analysis. Here the same methodological problems as with the environmental benefits occur and have to be discussed transparently. Environmental costs are difficult to value, and it therefore has to be ensured in the decision-making process that this does not lead to them being ignored.


c) Present and discuss the time scales of costs and benefits
A major barrier to the acceptability of a scenario will be where a short-term cost leads to a long-term gain. Running scenarios in your simulation models will provide an approximate time frame in which the change in human activities or capital input will have an effect on the system. Naturally, stakeholders who put resources or effort into a management scenario will wish to see the fastest possible result from this input. The uncertain prospect of a positive system response at some unspecified point in the future might not be a convincing incentive to follow this course of action.
Running the simulation model and tabulating results that show when impacts could reach their target values might be an effective way to make time scales transparent. Similarly, showing the audience the simulation model running with and without certain inputs could be an effective way of demonstrating the importance of each group's actions.
But there will be no way around a discussion of time preferences. The results of a cost-benefit analysis depend heavily on the discount rates applied. A short-sighted time preference (a high discount rate) will weight short-term costs higher than long-term benefits. A time preference that also takes into account the interests of future generations puts more weight on the long-term benefits, and the current investment costs thereby lose much of their prominence. To make the consequences of different time-preference assumptions transparent, the results of cost-benefit analyses should be accompanied by a sensitivity analysis of the implied discount rates (e.g. ranging between close to zero and 5 percent). It might also be useful to provide time-dependent curves that show the development of costs and benefits over time; in this way the effects of different time preferences can be made palpable and thereby more easily discussable.
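
The effect of the discount rate can be made concrete with a small present-value calculation. In the sketch below the cost and benefit streams, the 20-year horizon and the range of rates are all invented for illustration.

```python
# Present value of an illustrative policy measure: an up-front investment,
# annual maintenance, and benefits that only start after a few years.
years = 20
costs = [100.0] + [2.0] * (years - 1)         # investment in year 0, then maintenance
benefits = [0.0] * 5 + [15.0] * (years - 5)   # benefits start in year 5

def net_present_value(costs, benefits, rate):
    """Discount the annual net flows back to the present."""
    return sum((b - c) / (1.0 + rate) ** t
               for t, (c, b) in enumerate(zip(costs, benefits)))

for rate in (0.0, 0.01, 0.03, 0.05):
    print(f"discount rate {rate:.0%}: NPV = {net_present_value(costs, benefits, rate):8.1f}")
```

Showing the same cost-benefit result at several discount rates, as above, makes the weight given to future generations an explicit and discussable choice rather than a hidden assumption.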


d) Compare costs and benefits for scenarios
As discussed above, costs and benefits should be presented in tables and/or diagrams. The final results of a cost-benefit analysis can be presented in absolute terms over a certain planning period, as discounted present or annual values, or as benefit-cost ratios. The choice of presentation format depends on the problem perspective of your audience and should be made accordingly.
To reveal the uncertainties and the assumed time preferences, the range of uncertainty and the effects of different discount rates should be presented in additional charts or included right from the beginning. But be careful not to overload single tables or diagrams; otherwise disorientation or rejection by the audience is likely.
Often a fundamental opposition to the valuation of environmental or other non-market goods or services is present in the audience. And of course a fully monetised cost-benefit analysis is not the only way to value and weight non-marketed and intangible ecosystem services. So, to avoid rejection of your research results simply because of the valuation method employed, and to draw a broader picture of possible valuation approaches, you could additionally or alternatively present a multi-criteria analysis. This method does not try to collapse everything into one dimension (i.e. money); it shows the effects of baseline developments and management scenarios on the multiple dimensions of the ecological, economic and social systems. The problem with this method lies in the complexity of its results. It is known from psychological experiments that it is difficult to judge alternatives against more than about seven criteria at once, yet a multi-criteria analysis can easily supply dozens or hundreds of criteria. Without aggregation and/or weighting with the help of indicators, decision makers and stakeholders will be overstrained and disoriented. On the other hand, aggregation and weighting is itself a difficult process relying heavily on strong and influential assumptions and value judgements. So multi-criteria analysis also has methodological problems that should be revealed and discussed openly.
A special and widely used case of multi-criteria analysis is cost-effectiveness analysis. It takes monetary values on the cost side and confronts them with non-monetary values on the benefit side. It can be used to discuss how to reach a certain environmental target as cheaply as possible, or how to reach the greatest environmental improvement with a fixed budget. The difference from a full-blown multi-criteria analysis lies in reducing the dimensions judged relevant to usually only one or two. In the presentation of a cost-effectiveness analysis, it must be made clear that this means disregarding all other (possibly also relevant) dimensions of effects. The results should therefore be discussed by asking how far other relevant economic, environmental and social dimensions would be affected.
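
To illustrate how a weighted multi-criteria comparison can be aggregated without monetising everything, the sketch below scores three scenarios against a handful of criteria. The criteria, scores, weights and the min-max normalisation are all invented choices for the example; in practice they would come from the models and from deliberation with the reference group.

```python
# Illustrative weighted multi-criteria comparison of three management scenarios.
criteria = ["cost (M EUR)", "nutrient reduction (%)", "jobs retained"]
weights = [0.4, 0.4, 0.2]                 # assumed weights from deliberation
higher_is_better = [False, True, True]

scores = {
    "business as usual": [5.0, 0.0, 100.0],
    "moderate measures": [20.0, 30.0, 95.0],
    "strict measures":   [60.0, 55.0, 80.0],
}

def normalise(values, better_high):
    """Rescale one criterion across scenarios to 0..1, where 1 is best."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span if better_high else (hi - v) / span for v in values]

# Normalise each criterion column, then recombine into per-scenario rows.
columns = list(zip(*scores.values()))
norm_columns = [normalise(col, hi_better)
                for col, hi_better in zip(columns, higher_is_better)]
norm_rows = list(zip(*norm_columns))

print("criteria:", ", ".join(criteria))
for name, row in zip(scores, norm_rows):
    total = sum(w * v for w, v in zip(weights, row))
    print(f"{name:20s} weighted score: {total:.2f}")
```

The weighted scores should be presented together with the weights themselves, so that the audience can question and, if necessary, change them.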


e) Discussing outputs as information
Data do not become information until they have been interpreted. We have looked at how we could change numbers into symbols to clarify changes in the variables of our models. But how do we discuss these outputs in the context of the system we are studying, and how will we know if the management strategy we are developing will be a success? An effective way to do this can be through the use of examples and reference points.

Examples using this technique are most powerful at the local level, for example suggesting that the water quality in the bay in which you are running the SAF may return to a clarity that was present several decades ago, when the water was considered to be clean. Another way these examples can be used is to compare possible scenarios with indicators currently at similar levels in other locations. These again provide direct comparison and can be used by the audience to determine whether a scenario is desirable or not, particularly when the examples can be backed up with photographs.

Look up the economic tools and how to use them in the Output Step
Cost Benefit Analysis
Economic Valuation
Financial Analysis
Input Output Analysis
Multicriteria Analysis

Explanation of uncertainties of the scenario
To describe the outputs of a model to an audience effectively, you must describe not only those outcomes but also the uncertainties associated with them. It is important that for each scenario the uncertainties and assumptions are explained, and that it is visible what impact they have on the predictive capability of the model and on the possible futures if a policy option is implemented. Make sure that you explain in general terms the limitations of your model, and show that even with these limitations you are able to simulate the actual state of the coastal area in question with a calibrated and validated model. Also make clear that model runs and the predictions given as scenarios will be presented as ranges rather than exact numbers. It is very possible that the audience will ask for probabilities, i.e. when presenting the scenarios you will need to be very well prepared to answer spontaneous questions on uncertainties and assumptions, beyond the information you have already given in the presentation.

Error envelopes of models
Similar to the assumptions made while constructing the models, the errors associated with running the models need to be carefully explained to the audience. These errors may not be intuitively obvious to your audience, who may assume that errors are not important. The Appraisal Step is largely concerned with the identification and minimisation of error in models, and is therefore worth discussing with the stakeholders when reporting back to them. We should, at the very least, give an indication of the effect the error envelopes could have on our results. An effective way of doing this is to show error densities on a graph of your prediction (a fan chart); the modelling error calculated during the Appraisal Step can be applied to each time step of your modelled projection and included in the graph.
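
A minimal matplotlib sketch of such a fan chart, assuming the Appraisal Step error is available as a single per-time-step standard deviation and that uncertainty accumulates with lead time; the projection, the error value and the growth assumption are all illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative central projection and error envelope; in a real case the
# projection comes from the model and the error from the Appraisal Step.
years = np.arange(2025, 2046)
projection = np.linspace(8.0, 5.0, len(years))                 # e.g. a nutrient concentration
step_error = 0.3                                               # assumed error per time step
envelope = step_error * np.sqrt(np.arange(1, len(years) + 1))  # error widens with lead time

fig, ax = plt.subplots()
for k, shade in zip((2, 1), (0.15, 0.35)):                     # 2- and 1-sigma bands
    ax.fill_between(years, projection - k * envelope,
                    projection + k * envelope, alpha=shade, color="steelblue")
ax.plot(years, projection, color="navy", label="central projection")
ax.set_xlabel("year")
ax.set_ylabel("modelled indicator")
ax.legend()
plt.show()
```

The widening shaded bands make it immediately visible that confidence in the projection decreases the further ahead it looks.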

Unexpected changes
As well as the assumptions and likely errors, we must also make clear that things may happen that are simply beyond our capacity to predict, and that such events may fundamentally alter what happens to the system being modelled, far outside the range of the errors discussed above. They can take place in any of our social, economic or environmental components, for example:

These shifts are generally so massive that it is easy to convey to your audience that they will invalidate the outputs of your model.


Comparison

Comparing Scenarios

Now that we have a series of figures describing what our model is doing as it runs, either as end-point values or as a description of how a variable is changing, we may want to consider comparisons between our scenarios. In order to support decisions based on these scenarios, we need to present information to the audience in such a way that the scenarios can be compared.
We are fortunate in that our indicators have been developed with the reference group in such a way that we have agreed objectives against which the scenarios can be compared.
Other variables within our models may, however, be much more ambiguous. Consider fisheries, for example: to an ecologist, a decrease in landings from a fishing fleet may be considered a positive outcome, in that the ecological pressure applied to a fish stock is reduced. However, this standpoint may be the exact opposite of that of a fisherman, to whom a decrease in landings could mean a catastrophic reduction in his or her income and therefore in his or her family's social wellbeing.
It is not for us as providers of information to make these value judgements, but only to provide information in such a way that it can be discussed by the stakeholders and policy makers. We must therefore be very careful to present information in an unbiased way.
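
One neutral way to support such a side-by-side discussion is a plain table of the agreed indicators per scenario, left for the forum to interpret. The sketch below uses pandas; the indicator names and values are invented for illustration.

```python
import pandas as pd

# Illustrative end-point indicator values per scenario; in practice these come
# from the model runs and from the indicators agreed with the reference group.
comparison = pd.DataFrame(
    {
        "chlorophyll (ug/l)": [6.2, 4.8, 3.9],
        "fishery landings (t/yr)": [1200, 1100, 950],
        "treatment cost (M EUR/yr)": [0.0, 1.5, 4.0],
    },
    index=["business as usual", "moderate measures", "strict measures"],
)
print(comparison.to_string())
```

Presenting the raw indicator values in this form, without ranking or colour-coding, leaves the value judgements to the stakeholders and policy makers.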

Wrap-up

Run a requested scenario
A way to show the usefulness of scenarios to the audience is to make use of the rather fast response time of the models. This means that the audience can request a given scenario by changing parameters that can be controlled, e.g. an xx% reduction in nutrient emission or a yy% increase in fishing effort in the model, and the results can be shown immediately.
If running a model ‘on demand’ for the audience, you will have to be even better prepared for feedback questions, because you will need to give ad-hoc interpretations of the results when a certain parameter is changed or a policy option is modified in the course of the meeting.
When presenting the scenarios in such a way, you will come to the point where the question is raised: ‘What do I need to do in order to change this scenario, this future?’ This implies an opportunity for the reference group to ‘influence’ the system. The scientist will have to be well prepared to answer these action-oriented questions; they are tackled in more detail in the Appraisal Step during the interpretive analysis.
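
A minimal sketch of how such an ‘on demand’ run could be wrapped for a forum session, assuming the calibrated model is exposed as a function of a few controllable parameters; the function, its parameters and the response it returns are all invented stand-ins.

```python
# Hypothetical wrapper for running a scenario requested by the audience.
# 'run_model' stands in for the calibrated simulation model.

def run_model(nutrient_reduction=0.0, extra_fishing_effort=0.0):
    """Toy response: an illustrative water-quality indicator (higher is better)."""
    return 5.0 + 4.0 * nutrient_reduction - 1.5 * extra_fishing_effort

def requested_scenario(nutrient_reduction_pct, extra_fishing_effort_pct):
    """Translate the percentages requested in the forum into a model run."""
    value = run_model(nutrient_reduction_pct / 100.0, extra_fishing_effort_pct / 100.0)
    print(f"{nutrient_reduction_pct}% nutrient reduction, "
          f"{extra_fishing_effort_pct}% extra fishing effort -> indicator {value:.2f}")

# Example request from the floor: 30 % less nutrient emission, 10 % more fishing effort.
requested_scenario(30, 10)
```

Keeping the wrapper this simple matters in practice: the run has to finish, and be interpretable, within the attention span of a meeting.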

Further demands
If there is a demand for an in-depth analysis of the implementation of measures or of the interpretation of different scenarios, this should be done by splitting up into separate sessions or working groups. Every subgroup needs to be treated equally.
If you decide from the beginning to hold separate forums with the different stakeholder groups before bringing them together, make sure that exactly the same scenarios are used in every group; only the way they are presented should be adapted to each group’s specific interests and stakes.

Next step