Global Sensemaking

Tools for Dialogue and Deliberation on Wicked Problems

One legitimate type of node on a climate change argument map could be some output value from a climate model, which can support or undermine some other node. In that sense, the model output is equivalent to some observation ("the ice sheets are retreating"). However, if we then wish to evaluate the quality of this piece of evidence, the task is, by several orders of magnitude, harder than checking some measurement. We would need to check all the model assumptions, their implementation in the simulation code, the input data and so on - an almost impossible task, given the complexity of most serious climate models, the fact that they are implemented in procedural code rather than declaratively, the run time of the models, the large amount of single-sourced data, etc.

Although I am not a climate modeller, I am involved in environmental modelling, and take it for granted that such models should be part of the deliberation process. However, it's not clear to me how the climate argumentation community would see this happening. Should we be constructing extensive argument (sub)maps for each model, setting out the justification for each and every assumption, etc.? Or is the evaluation of the quality of a model treated as something entirely outside the argument mapping methodology?


Replies to This Discussion

Thanks Robert: this is an excellent (and thorny) issue for the group to consider.

I know from earlier conversations with Mark Klein that the MIT team have been giving this issue detailed consideration; so I'll alert Mark to your questions now.

David
The Center where I work (http://cci.mit.edu/) has a project that is taking that (huge) challenge on. Our idea is to work, in baby steps, towards an open-source climate-economy-model creation effort, where people can share models (probably system dynamics models), comment on their design, make suggestions for improvements, and ultimately (perhaps) even collaboratively create improved models. This is a long-term project, and to be honest I'm not sure how well it will take off, because probably only a relatively small number of people are qualified to critique and improve climate-economy models.

I personally believe that if you use a simulation model output in an argument map, you (or someone) should attach the key assumptions made by that simulation as "pros" supporting that output. People can then add pros and cons arguing about whether those assumptions are well-founded. You're right, it would be an enormous task to do that exhaustively for a serious model, but I think it is worth doing even if it is incomplete.
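Mark's suggestion can be sketched in code. This is a minimal illustration, not any particular tool's API: the node kinds, the claim texts, and the helper names here are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    kind: str                 # "claim", "pro", or "con" (illustrative kinds)
    children: list = field(default_factory=list)

    def attach(self, child):
        self.children.append(child)
        return child

# The model output enters the map as an ordinary claim...
output = Node("Model X projects 2.1 C warming by 2100", "claim")

# ...and its key assumptions are attached as pros supporting it.
for assumption in [
    "Climate sensitivity is 3 C per CO2 doubling",
    "Emissions follow scenario A1B",
]:
    output.attach(Node(assumption, "pro"))

# Others can then argue about the assumptions themselves.
output.children[0].attach(
    Node("Published sensitivity estimates range from 2 to 4.5 C", "con"))

def count(node, kind):
    """Count nodes of a given kind in the subtree."""
    return (node.kind == kind) + sum(count(c, kind) for c in node.children)
```

Even this toy structure shows the point: the argument about the model's assumptions lives in the same tree as the argument that uses the model's output, so an incomplete set of assumption nodes is still better than none.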
Thanks Mark, that's very helpful background and a good practical suggestion.

Another mapping approach to consider might be to represent the assumptions of the model as co-premises.

Including sensitivity analysis in the map, to show how the outputs of the model vary with changes in the assumptions / co-premises, would be instructive too.

David
Such a display of sensitivity (say through colour, size or visibility) could also give indications of life cycles (long-term versus short-term influence).
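The sensitivity idea above could be computed rather than hand-annotated. Here is a rough sketch of one-at-a-time sensitivity analysis over a toy model; the model, parameter names, and baseline values are purely illustrative stand-ins, and the resulting scores are the kind of number that could drive the colour or size of a co-premise node.

```python
def toy_model(params):
    """Stand-in for a real simulation: 'warming' as a simple product
    of two parameters (purely illustrative, not a real model)."""
    return params["sensitivity"] * params["emissions"]

baseline = {"sensitivity": 3.0, "emissions": 0.7}

def sensitivity_scores(model, params, delta=0.1):
    """Perturb each parameter by +/-delta (relative) and record how
    far the output moves: a crude importance score per co-premise."""
    scores = {}
    for name, value in params.items():
        hi = dict(params, **{name: value * (1 + delta)})
        lo = dict(params, **{name: value * (1 - delta)})
        scores[name] = abs(model(hi) - model(lo)) / (2 * delta)
    return scores

scores = sensitivity_scores(toy_model, baseline)
```

A real effort would use a proper global method (Sobol indices, Morris screening) rather than one-at-a-time perturbation, but the mapping from "score per assumption" to "visual weight per co-premise node" would be the same.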
Your site looks like exactly the right thing; I found the Collaboratorium article summary.

Regarding the simulation model, hopefully the pros and cons would converge as the system got smarter at detecting their patterns. Topic/subject mapping could help, although in the end I think you'd still be stuck with some unvouched-for co-premises, which would amount to a kind of default best-fit mythology of the moment. I.e., mythologies provide the as yet unwarranted assumptions, and so provide simplification.

Would creating super workstations for the people qualified to critique and improve climate-economy models be a design goal? But then you'd need similar workstations for all the people validating their co-premises :-) Workstations participating in related tasks would then form a distributed situation room, a kind of climate-traffic control room.
Hi Robert,

Mark Klein's approach has an explicit role for embedding simulations and scenarios in their IBIS outline:

http://cci.mit.edu/research/climate.html

Simon
Thanks for the pointers to Mark Klein's work - fascinating stuff.

There is, I believe, a distinction between embedding simulations and embedding models in sense-making systems. If it's the simulation that's embedded, then the sceptic has to be able to check out the model which led to the simulation results. As discussed above, one way of doing this is to make the model assumptions explicit. However, a list of model assumptions is not the same thing as the formal specification of the model itself. Sending the sceptic to look at computer code is hardly an option, given that many environmental models run to 10k-100k lines of code.

However, as it happens, most environmental models can be represented declaratively, as directed graphs capturing (e.g.) the dependencies between variables. Since argument maps themselves are directed graphs, it is tempting to play with the idea that symbolically-represented models could simply form part of a larger argument map. Then, a sceptic browsing an argument could drill down into a model which forms part of the argument, using the same user interface as that used for drilling down into the argument map itself.
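To make the "model as directed graph" idea concrete, here is a tiny sketch in which a model is declared as variable -> (dependencies, function), so the same graph a sceptic drills into can also be evaluated. The two-variable model is illustrative; the forcing formula 5.35 ln(C/C0) is the standard simplified expression for CO2 radiative forcing, but the 0.8 response factor is just a placeholder number.

```python
import math

def evaluate(model, inputs):
    """Resolve each variable by recursively evaluating its dependencies
    (depth-first topological evaluation with memoisation)."""
    cache = dict(inputs)

    def resolve(name):
        if name not in cache:
            deps, fn = model[name]
            cache[name] = fn(*(resolve(d) for d in deps))
        return cache[name]

    for name in model:
        resolve(name)
    return cache

# Toy declarative "model": forcing depends on CO2 concentration,
# temperature change depends on forcing.
model = {
    "forcing": (["co2"], lambda co2: 5.35 * math.log(co2 / 280.0)),
    "delta_t": (["forcing"], lambda f: 0.8 * f),   # 0.8 is a placeholder
}

result = evaluate(model, {"co2": 560.0})
```

Because the model is just a graph of named dependencies, an argument-map browser could render each variable as a node and each dependency as an edge, letting the sceptic inspect the relationship behind any edge rather than reading procedural code.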
This movie shows how Compendium was embedded in a semantic web infrastructure for emergency response (to an aircrash rather than a climatic disaster, but use your imagination!), with a variety of tools feeding event data (like a climate computer model might) into Compendium, which served as the sensemaking 'glue'.

http://www.e-response.org/demo/movie

from:

The Application of Advanced Knowledge Technologies for Emergency Response
http://eprints.aktors.org/602

"Making sense of the current state of an emergency and of the response to it is vital if appropriate decisions are to be made. This task involves the acquisition, interpretation and management of information. In this paper we present an integrated system that applies recent ideas and technologies from the fields of Artificial Intelligence and semantic web research to support sense- and decision-making at the tactical response level, and demonstrate it with reference to a hypothetical large-scale emergency scenario. We offer no end-user evaluation of this system; rather, we intend that it should serve as a visionary demonstration of the potential of these technologies for emergency response."

© 2024   Created by David Price.