Global Sensemaking

Tools for Dialogue and Deliberation on Wicked Problems

As one might guess from my background as a political scientist and a lexicographer, I am interested in tools that help to clarify arguments. This group rightly gives prominence to tools for making arguments explicit, so they can be developed by a group. When there are disagreements, they may be because people disagree about the quality or relevance of data, or they may have different perspectives on the problem, resulting in use of different words and different meanings for common words.

One of the tools I have been most interested in is WebGrid, developed by Brian Gaines and Mildred Shaw at the University of Calgary. It is a tool for knowledge acquisition, based on eliciting an object-attribute grid from an expert. It also has a component called Sociogrid that provides comparisons among experts' constructs (concepts). This allows people to clarify whether a dispute may be rooted in terminological differences, or in conceptual differences that can be clarified in dialogue.

If this approach to organizing dialogue is of interest at some point in the development of this group's deliberations on tools and dialogue processes, I'd be happy to participate and help in whatever way I can.



Replies to This Discussion

Andy, your comment about how people, as individuals in conversations, conveyed a richer, broader sense of their circumstances than when not in their paid mode of operation is a fascinating point. Nina Eliasoph noted something similar in a book called "Avoiding Politics" -- she explores the phenomenon of 'political evaporation', where individuals' circles of concern shrank as they engaged more and more people -- as the audience grew, the concern shrank. And, as her ethnography shows nicely in the context of political apathy in everyday life, this had to do with the norms for talk, communication, and argument that groups/communities develop.

So, yes, I agree with you that our tools need to address this. How far can our formalizations go in drawing out the tacit dimensions of argument that deal with proof, presumption, value premises, and agreement and disagreement?

Your comment also opens up the issues that have emerged in the context of corporate social responsibility about whether social and environmental 'bottom lines' can/should hold up against/with the profit bottom line. Some argue these will always be in opposition; others argue that it's possible to integrate these multiple bottom lines. Let's say the latter is possible. That would involve cultural shifts in value assumptions, premises, and forms of presumptive/default reasoning. This would call for new forms of argumentative practice (i.e., ways of contesting and pursuing claims, and grounds for resolving differences in one direction or another - that is, not all claims are equal).
The notion of "avoiding politics" may be worth commenting on, since I spent a fair amount of my teaching time explaining to my students that they can't really avoid politics, unless they define "politics" as external to their experience and exercise of influence in everyday life. Again, it all depends on what we mean by "x"... :-) I have eventually come to believe that words don't "have" meaning so much as they indicate a position on our search for the "way of life" that anchors them in a meaningful world - or the search for "a" way of life that could anchor us in a meaningful world.

I'm glad you have linked this discussion with the notion of multiple bottom-lines. I won't comment further since this is in the end probably the most important topic of discussion - how to create a measure of activity that incorporates both the monetary valuing of scarce material wealth and the non-scarce social valuing of ways of life. I'd appreciate mention of any papers/books that seem worth reading together.
[Laughing in agreement] It reminds me of how certain people in government and media would call a reduction in the *rate* of growth a "cut".

You make a great point Andy, in that (we) "software engineers needed precise definitions in order to create useful, interoperable products" but are often flummoxed by these deliberate obfuscations.
The differences in political and engineering speech acts seem like a rich area to mine. Bob makes a good point about the value of ambiguity. The very thing that irks the engineer seeking a very precise common understanding is a godsend to the negotiator who needs to find just some common ground to move forward.

When I was working on natural language understanding I thought that the truly hard part (of the vast variety of hard parts in that domain) was handling context. Context provides the framework in which language can be disambiguated. But how to understand and represent the context of a speech act? There's the rub; it's hard to do. Again the engineer wants it to be as explicit as possible: "in these circumstances we shall use this term to mean X." In fact Sun, faced with conferring certification on different implementations of the Java engine, took the (useful IMO) position that the meaning of a function was determined by its test suite -- and everyone could inspect the test suite. Hard to be more precise than that.

The negotiator/politician might be motivated in the other direction. In order to create a statement of joint purpose or agreement between parties with very different views, making the context fuzzier allows for a greater range of statements to be "meaningful" -- where meaningful means something like "mutually consistent within the context in which they were expressed."

I tend to look at these things operationally. What's the goal, and what are the instruments to achieve it? On one end of the spectrum we have engineers who need clarity to get their work done and, on the other, politicians who need fuzziness to achieve their ends. At a high level, though, the goals are the same: in what context can we agree that a set of statements is coherent and consistent within that context? Depends on what you want to do.

Is all this just too spacey?
I think looking at things "operationally" is a good approach, within a semiotic framework. Unfortunately "operationalism" has been linked with a positivist philosophy of science, which generally avoids discussion of the human meaning-making dimensions of the "operations" performed by scientists. In a social-semiotic approach (the label given to the work I'm most attracted to and aligned with), the context would be the system of speech-acts and communication norms that regulate them.

And I think you're on target to point out that the best context depends on what you want to do. Sometimes fuzziness in communication systems is appropriate and sometimes it's stifling. When we're talking about a "sense-making" communication environment, perhaps there is a range of contexts, some more task-oriented and some more like "rules of a game". When there are or may be conflicting interests, the communication context becomes game-like rather than task-like.

There was a very interesting link on the "Topic Maps" list today, pointing to the possible use of topic maps in programming the rules of a game. Since I'm not a programmer, I'm sure I haven't understood this post at a level beyond the surface. But I'd be interested in a programmer's view of whether topic map semantics could be used in defining a "rules engine" to facilitate the "language game". (Sorry ... I'm not sure I even know how to frame the question....)
Thanks for pushing the discussion along, Andy.
Andy's reminder of expert systems forms a great response to the notion of finding a way to program Monopoly or many other games. If "life is a game" holds as a working metaphor, then, um, maybe what we need is a giant expert system, or maybe a federation of expert systems that cover each situation. I think I recall that being one of the great visions of the past. In fact, I recall writing my first inference engine in Forth by transliterating the "animals" inference engine in Byte magazine back in the early '80s. I recall being completely swept up in the notion that I could write an interpreter for phrases that, when captured in logical statements, allowed my computer to appear "smart".

But, simple expert systems turned out to be fragile. Then I learned about "deep reasoning" where, when the defaults fail, you resort to deeper knowledge and construct an explanation from there.
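For anyone who never met one, the forward-chaining style of those early "animals" engines can be sketched in a few lines. The rules and facts below are illustrative assumptions, not the original Byte program; the sketch also shows the brittleness mentioned above, since any fact outside the rule base simply produces no inference:

```python
# A toy forward-chaining inference engine in the spirit of the Byte
# "animals" program. Each rule is (set of conditions, conclusion);
# the engine fires rules repeatedly until no new facts appear.
# The rule base and facts are made up for illustration.

RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts):
    """Forward-chain over RULES until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fur", "gives_milk", "eats_meat", "has_stripes"}))
```

When the defaults fail (a fact the rules never mention), the engine just stalls at a fixed point, which is where the "deep reasoning" fallback described next comes in.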

That leads to the need to be sure you are talking about the same thing: if you talk about, say, "Jack's baseball book", you're not talking about me and you then need to resort to a lookup of some kind to apply some rule to a different subject.

That, I think, is the proper role for topic maps: keep the universe of discourse well organized in a subject centric way; then, build the inference engine such that, when it sets about instantiating variables, e.g. IF *x authored *y, it has a reasonable subject-centric way to, at the very least, sort out when it's in ambiguous territory and resort to other rules to carry out what comes next.

In some sense, we will always need rules, and we will always need a well organized universe of discourse.
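A minimal sketch of the idea above: before binding a variable like *x in "IF *x authored *y", the engine consults a subject-centric registry so that an ambiguous name triggers fallback rules instead of a silent wrong match. The registry structure and identifiers are invented for the example:

```python
# Hypothetical subject registry: one entry per subject, keyed by a
# subject id, listing the surface names under which it may appear.
SUBJECTS = {
    "person:jack-park-1": {"names": {"Jack Park"}, "type": "person"},
    "book:jacks-baseball-book": {"names": {"Jack's baseball book"}, "type": "book"},
}

def resolve(name):
    """Return all subject ids that a surface name could refer to."""
    return [sid for sid, s in SUBJECTS.items() if name in s["names"]]

def bind_authored(author_name, work_name):
    """Try to bind *x and *y in 'IF *x authored *y' subject-centrically.

    If either name is ambiguous or unknown, report that so the engine
    can resort to other rules rather than guess.
    """
    authors = resolve(author_name)
    works = resolve(work_name)
    if len(authors) != 1 or len(works) != 1:
        return ("ambiguous", authors, works)
    return ("bound", authors[0], works[0])

print(bind_authored("Jack Park", "Jack's baseball book"))
```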

It is entirely true what they say about the idea of assigning a universal identifier, a URI, to each and every subject. Suspecting that there are probably a zillion different subjects for each of the 7 billion people on this planet, some subjects of which overlap (they are the same subject, just expressed in different terms), it seems unlikely that we will ever have a registry with up to 7 billion zillion URIs. That might be more identifiers than there are molecules in the universe, or something like that.

Instead, we need a way to do parametric lookup: what are the properties of those subjects, etc. Once we do have a URI for a subject, within our tribe, say, then we can safely trade on it, much to the chagrin of those outside our tribe, at least until they go lookup what that URI means.
I'm putting my money on topic maps for the time being.
My reply down the thread should have been placed here. Operator error. Then you can read Jack's next post in its proper context.
I don't know enough about topic map semantics to answer your question, Bob, but here's a reframe of part of this discussion that may shed some light from a different angle.

The article you linked to naturally led me to think about software design, and your use of the term "rules engine" brought to mind several things: production rules in a parser, the inference rule engine in an expert system, and business rule engines in enterprise software systems.

Consider the "sensemaking communication environment" as it might exist in an international standards body or policy making group where the goal is to generate a report on a particular topic. IPCC, W3C, and WTO workgroups are examples covering a range of political, commercial, and scientific/engineering domains. What would an expert system to facilitate that kind of work look like?

A workgroup has a set of topics that serve as a basis for discussion and a goal related to those topics (producing the report that represents the outcome of their work). Each workgroup participant brings to the table a body of knowledge, a set of goals (some essential, some not), and a set of strategies to achieve their goals. This is the sensemaking version of Monopoly.

An expert system for guiding the workgroup chairperson or facilitator might contain rules like:

* IF the report covers all our topics AND NOT in conflict, THEN we're done.

* IF in conflict THEN pull back the focus by using more general terms OR try to surface participants' essential goals that are not being met OR break down the point of conflict into more manageable pieces.

* IF NOT in conflict AND NOT done THEN narrow the focus to look for more precision and clarity OR put another topic on the table for discussion.

Some bits of state information the system would have to represent and maintain include topic relationships, participants' known goals, participants' positions on topics covered so far, conflict among positions, active strategies, .... (Solve a few knowledge representation problems, add a few hundred rules, and set this on top of Compendium or Debategraph and couple it with TopicSpaces and we've got a great sensemaking game.)
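The three facilitation rules above can be sketched as a toy rule function. The state flags and strategy strings here are assumptions for illustration; a real system would maintain the richer state just described:

```python
# Toy encoding of the facilitator's rule set. State is a dict of
# boolean flags; the returned string names the strategy to apply.
# Flag names and strategy wording are invented for this sketch.

def next_action(state):
    """Apply the three facilitation rules, in order."""
    if state["all_topics_covered"] and not state["in_conflict"]:
        return "done"
    if state["in_conflict"]:
        return "generalize terms / surface unmet essential goals / decompose the conflict"
    # Not in conflict, not done: seek precision or open a new topic.
    return "narrow focus for precision, or put another topic on the table"

print(next_action({"all_topics_covered": False, "in_conflict": True}))
```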

Thinking about a "workgroup facilitation engine" might be a way to move toward a more precise model of sensemaking. The workgroup itself can be a metaphor for any number of activities. The workgroup's product, the report, is a metaphor for shared understanding. Its process involves the rich set of activities around building and using relationships among topics/ideas to achieve understanding.

One particularly nice quality of an expert system is its ability to provide a trace of how/why a certain rule fired. That in itself is a valuable aid to understanding, rather like having Jeff Conklin around to say, "The reason I drew the map this way was...."
Continuing link repair: see Jack's comment above. [Sorry for the confusion and the potential of making it worse by trying to fix it!]
I trust the flippancy was obvious in my parenthetical comment above: "Solve a few knowledge representation problems, add a few hundred rules, and set this on top of Compendium or Debategraph and couple it with TopicSpaces and we've got a great sensemaking game."

To be less flip, I was thinking about how a fairly small expert system could help a user manipulate a suite of sensemaking tools effectively, and how the construction of that rule set might shed some light on what sensemaking is and means.

Writing in 1991 Ed Feigenbaum said, "These two shortcomings of expert systems of the first era — brittleness and isolation — provide shape for the next decade of knowledge-based systems research. While the technology of the first era is being inserted, improved, and applied, a second era of knowledge based systems is being invented. The thrusts of the second era research are concepts of large knowledge bases, knowledge sharing, and the interoperability of knowledge bases that are geographically distributed."

My favorite line — "Every expert system appears to be a custom-crafted cottage industry event!" — is about something that seems to be as true today as it was in 1991, and not just in expert systems. GSm is, I think, an attempt to move sensemaking beyond its custom-crafted cottage industry status.

Later Feigenbaum says, "Knowledge sharing ... means more than just the knowledge bases of several expert systems interoperating to solve a problem. Knowledge sharing also means the computer-facilitated cooperation of many people in the building of a large body of codified knowledge. The vision is that hundreds or thousands of knowledge base builders would cooperate."

I wonder if GSm can be characterized by paraphrasing Feigenbaum like this: Gsm is "the computer-facilitated cooperation of some people in the building of a small body of codified knowledge around the development and use of sensemaking tools. The vision is that dozens of sensemaking tool builders and users would cooperate." Substitute Feigenbaum's statement about knowledge sharing for "sensemaking" and it's a nicely recursive definition.

That aside, I agree with Jack about the need for topic maps (or some similar "ground of reference") in any such system. I have to do a little work and learn (1) how to distinguish topic maps from semantic nets and (2) how to understand a topic map as a computational unit. By the latter, I mean: what are their properties, and how do I configure them so a topic map can be integrated into another program, say, an expert system?
Nicely stated, Andy. Particularly the opportunity to listen to Ed Feigenbaum, the "father" of expert systems.

Let me think out loud about this quote: GSm is "the computer-facilitated cooperation of some people in the building of a small body of codified knowledge around the development and use of sensemaking tools. The vision is that dozens of sensemaking tool builders and users would cooperate."

I think I see two phases of GSm. Feel free to offer a correction to the vision I carry in my head: Phase I -- precisely as stated in the quote; Phase II -- others, including GSm tool builders, using GSm tools, engaged in the building of a large body of codified knowledge around the issues of our times. Well, that's all I have to say about the quote; it's a beauty -- it's just that I see it as half of the story.

Topic maps. They are semantic nets. So are concept maps, dialogue maps, mind maps, and so forth. They are symbolic representations of human thought--that which we sometimes call "knowledge", organized in a relational way; things are connected to other things, and those connections are important.

Concept maps, including dialogue maps and mind maps, place less emphasis on the nature of the relationship between the represented concepts (nouns over verbs, perhaps), while mind maps don't even give you a nice way to state a relationship; whatever it is, it's just implied by the line connecting one thing to another and by the shape of the emerging graph: forks seem to indicate sibling relations, for instance.

Topic maps don't let you get away with just drawing a labeled arc between two concepts; they insist that you create an "ontology" of relation types (called associations by XML topic maps, and assertions by the TMRM topic maps, which are also known as subject maps).

An aside: I'm getting tired of the Ning user interface, at least as it behaves on Firefox 3: when I select a phrase and click the italics button, it jumps to the top of the screen; if I'm not paying attention, I end up wrecking whatever I said and have to start over. That's not an acceptable experience.

Back to topic maps and relations. You not only need to declare an ontology of relation types, you also need to pay attention to role types: the graph that wires two concepts together through some typed relation also declares the roles played. XTM, the XML topic maps specification, requires this, though I've seen people ignore the roles. Subject maps force no such requirement: you are free to create a graph any way you wish, one that even qualifies as a concept map.

Cohere just happens to allow you to specify roles together with typed relations when connecting annotations. I like that. Very topic mappish!

What's different? Simply this: if you call something a topic/subject map, you are making a statement to this effect: for any given subject (defined as anything you can think of or talk about that you wish to represent symbolically), there will be one, and only one proxy (representation) for that subject in any given map. That's what's different, though it doesn't imply that concept maps couldn't say the same thing.

This forces attention to subject identity: in order to guarantee a lone representation for a human who goes by the name "Jack Park", it should be abundantly clear that name-based identification will not be sufficient for a map that promises to cover all possible people with that name. I return to my earlier comment about URIs, one for each subject. The topic maps folks invented the PSI (published subject indicator), essentially the same thing as a URI. PSIs are really good, as are URIs, for things we all agree on and, in general, know about. But if they are the only way we separate subjects, what happens when someone wants to query our topic map for a particular "Jack Park" person? Like a Google query, they're going to get possibly many hits. That's not acceptable; we need to, and can, do better than that by allowing "keyword" searches, just as Google and all other query engines do. The big difference is the potential for reduction of infoglut in query responses: a better-organized search result. To get just a tiny hint at how that works, visit the Carrot2 search engine and type in any query you like.
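To make the "one proxy per subject" discipline and the typed, role-bearing associations concrete, here is a minimal sketch. The class, PSIs, and relation/role names are invented for the example, not any actual topic map API:

```python
# A toy topic map: each subject gets exactly one proxy, merged on a
# shared identifier (PSI/URI); associations carry a relation type and
# named roles. All identifiers below are made up.

class TopicMap:
    def __init__(self):
        self.proxies = {}        # PSI -> the single proxy for that subject
        self.associations = []   # typed, role-bearing relations

    def add_topic(self, psi, name):
        # One proxy per subject: reuse the existing proxy for this PSI,
        # so a new surface name merges into it rather than forking.
        proxy = self.proxies.setdefault(psi, {"psi": psi, "names": set()})
        proxy["names"].add(name)
        return proxy

    def associate(self, rel_type, roles):
        """roles maps a role type (e.g. 'author') to the PSI playing it."""
        self.associations.append({"type": rel_type, "roles": roles})

tm = TopicMap()
tm.add_topic("http://example.org/psi/jack-park", "Jack Park")
tm.add_topic("http://example.org/psi/jack-park", "J. Park")  # same subject, merged
tm.add_topic("http://example.org/psi/xtm-spec", "XTM specification")
tm.associate("authored-by",
             {"work": "http://example.org/psi/xtm-spec",
              "author": "http://example.org/psi/jack-park"})
print(len(tm.proxies))  # one proxy per subject: 2, not 3
```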

[Your Ning UI experience is not due to FF3, same here in FF2. They don't keep a pointer to the current location in the edit pane. When you are willing, please reveal the source of your special powers in finding & pointing to interesting tools I've never heard of like]

Thanks for background and disambiguation regarding topic maps, concept maps and their ilk. In that regard it seems like Debategraph, through its constraints on relationships, offers something different because the relationships are specialized and in effect create a nontrivial map grammar, maybe similarly for IBIS style Compendium.

"Topic maps don't let you get away with just drawing a labeled arc between two concepts; they insist that you create an "ontology" of relation types."

If they are "just" semantic nets, that raises my concern that they are no better than semantic nets, in that the ontology has to come from somewhere. That is, as a software developer I can see flexibility as a feature and, if necessary, can specify an ontology appropriate to my project. But that takes us back to expert systems as a cottage industry. How do I build on the work of others and incorporate it into what I'm doing? In other words, where do I find, and how do I select, an existing ontology? If I do that, how do I know what properties it has and how they might change over time? I'm asking this in exactly the same spirit as one might ask where do I get and how do I use an XML parser. There is still a cottage industry of sorts for XML parsers, but the interface to using them has been standardized, and for many cases the default parser a system offers is expected to be adequate (thinking Java standards here). And there are well-known rules about how software components change.

In creating an expert system I have to interview and distill the knowledge of domain experts to discover appropriate inference rules. Back in the 1980s I'd also be using that same interview information to create an appropriate semantic net to support the knowledge representation end of the system. How does the recasting of semantic nets as topic maps help me, as an engineer, create "better" systems or speed the development of equivalent systems?

I know little more than that there are standards like OWL and RDF, and that if I bothered to own a personal domain I could establish "Andy Streich" as a unique ID. But what if I wanted to add that I'd been a professional ski instructor? There are people who have taught skiing, people who charged money to teach skiing, people who have been employed by ski schools to teach skiing, and people who have passed nationally certified exams to qualify as professional ski instructors — all different types that can be easily conflated. Back when I did that sort of thing it was important to me to ensure my students knew I was in that last category, a certified professional.

In a generalized topic map space, how do we determine what "certified professional ski instructor" means? Who gets to say? And for the period that the term applied to me? The exam changed over time, as did the examiners. And this was alpine as opposed to nordic.

I'm not criticizing topic maps but rather trying to understand if the field has changed in a significant way since my grad school days 20 years ago. Then we were experimenting with systems where we had to specify the lexicon, the grammar, and the semantic net (every aspect of it) along with the expert system rules and whatever state it needed to maintain — and we had to handle (or more likely avoid) modifications to all these subcomponents. In short every example system was typically purpose-built from the ground up.

My complaint, if you can call it that, is either that I haven't kept up or that the field hasn't progressed much in the last couple of decades in a way that affects software engineering. It's got to be the former, yet I see no obvious evidence of progress. The space still seems like a cottage industry. Please tell me I just haven't looked hard enough.





© 2024   Created by David Price.