Global Sensemaking

Tools for Dialogue and Deliberation on Wicked Problems

As one might guess from my background as a political scientist and a lexicographer, I am interested in tools that help to clarify arguments. This group rightly gives prominence to tools for making arguments explicit, so they can be developed by a group. When disagreements arise, they may be because people disagree about the quality or relevance of data, or because they bring different perspectives to the problem, which lead them to use different words and to attach different meanings to common words.

One of the tools I have been most interested in is WebGrid, developed by Brian Gaines and Mildred Shaw at the University of Calgary. (http://pages.cpsc.ucalgary.ca/~gaines/WebGrid/) It is a tool for knowledge acquisition, based on eliciting an object-attribute grid from an expert. It also has a component called Sociogrid that provides comparisons among experts' constructs (concepts). This allows people to clarify whether a dispute may be rooted in terminological differences, or conceptual differences that can be clarified in dialogue.
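
For anyone who hasn't worked with repertory grids, here is a minimal sketch in Python of the underlying idea - not WebGrid's actual code, and the elements, construct labels, and ratings below are invented purely for illustration. A grid rates a shared set of elements on bipolar constructs, and two experts' constructs can then be compared by how similarly they rate the same elements:

# A minimal sketch, not WebGrid itself: a repertory grid maps each bipolar
# construct label to ratings of a shared list of elements on a 1-5 scale.
# All names and numbers here are made up for illustration.

ELEMENTS = ["tool A", "tool B", "tool C", "tool D"]   # ratings below follow this order

expert_1 = {  # construct label -> rating of each element (1 = left pole, 5 = right pole)
    "informal / formal":       [2, 4, 5, 1],
    "open-ended / structured": [1, 4, 5, 2],
}

expert_2 = {
    "loose / rigorous": [2, 5, 5, 1],
    "visual / textual": [3, 1, 2, 5],
}

def construct_similarity(ratings_a, ratings_b, scale=(1, 5)):
    """Similarity of two constructs over the same elements:
    1.0 = identical ratings, 0.0 = maximally different."""
    span = scale[1] - scale[0]
    diffs = [abs(a - b) for a, b in zip(ratings_a, ratings_b)]
    return 1.0 - sum(diffs) / (span * len(diffs))

# Compare every construct of expert 1 with every construct of expert 2.
# High similarity under different labels suggests a terminological difference;
# low similarity under similar labels suggests a conceptual one.
for label_1, r1 in expert_1.items():
    for label_2, r2 in expert_2.items():
        print(f"{label_1!r} vs {label_2!r}: {construct_similarity(r1, r2):.2f}")

As I recall, WebGrid's own analyses go much further (clustering, principal components, and the Sociogrid comparisons), but this pairwise comparison is the basic move that lets a group see whether people who use different words are in fact rating things the same way.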

If this approach to organizing dialogue is of interest at some point in the development of this group's deliberations on tools and dialogue processes, I'd be happy to participate and help in whatever way I can.

Regards,
Bob

Replies to This Discussion

I have wondered if it would be possible for people to come into a dialogue map, find two dichotomous responses to some question, and use Kelly's methods, as implemented in WebGrid, to indicate something like a preference for either response.
Hi Bob,

I'm interested in learning more about tools for semantics. It has always seemed such an abstract and fussy area, but it has become more and more clear to me that the failure to detect a lack of shared semantics is a huge challenge for communication and collaboration.

By the way, have you come across the Center for Semantic Excellence (http://www.semanticexcellence.org/index.html)? I'm curious about the nature of their work.

Jeff
Hi Jeff,
Thanks for the link. I try to follow the different technologies to some extent. Progress seems to be coming in entity recognition and topic identification - and some of it is becoming more accessible, like Reuters' OpenCalais engine. But my own interest is in the other side of the equation - in helping people to understand their own use of words, and how this awareness can facilitate dialogue and intellectual progress. To put the case at its most extreme: as a political scientist, I can assure you that as fast as machines are given the tools to disambiguate a word or phrase, or any other unit of discourse, politicians can - and will - re-ambiguate it. But that extreme case aside, machine algorithms for disambiguation will always be useful.

But my own interest is in personal tagging and personal thesaurus tools. This interest comes from taking seriously a feature of thought, identified by C. S. Peirce as "abduction" - the generation of hypotheses and speculations that may go beyond the ability of a standardized vocabulary to convey effectively. So in these cases we use words in proposing a meaning ... sort of like saying, "just for now, for discussion purposes, what if we think of 'x' as meaning 'd, e, and f'?" Personal uses of language are evidence of this mode of thought.

So I'm interested in developing personal tagging and personal thesaurus tools that allow us to tag and organize our personal vocabularies, and then make these terms and their relations explicit and available for others to see, compare, and discuss.
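
Just to make that concrete, here is one way a personal thesaurus entry might be represented - only a sketch in Python, with invented field names and sample entries, not a description of any existing tool:

# A sketch only: an entry records a provisionally proposed meaning
# ("just for now, what if we take this term to mean ...") plus its relations
# to other terms, so that two people's usages can be laid side by side.

from dataclasses import dataclass, field

@dataclass
class Entry:
    term: str
    proposed_meaning: str                        # working definition, open to revision
    related: set = field(default_factory=set)    # loosely associated terms
    broader: set = field(default_factory=set)    # more general terms

def compare(entry_a: Entry, entry_b: Entry):
    """Show where two people's entries for the same term overlap and diverge."""
    a_terms = entry_a.related | entry_a.broader
    b_terms = entry_b.related | entry_b.broader
    return {
        "shared": a_terms & b_terms,
        "only_a": a_terms - b_terms,
        "only_b": b_terms - a_terms,
    }

bob = Entry("sensemaking", "collective interpretation of a confusing situation",
            related={"dialogue", "abduction"}, broader={"inquiry"})
jeff = Entry("sensemaking", "organizing signals into a shared model",
             related={"semantics", "dialogue"}, broader={"modelling"})

print(compare(bob, jeff))   # surfaces "dialogue" as shared ground to start from

The point is not the particular data structure, but making the relations explicit enough that they can be seen, compared, and discussed.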

I hope this isn't too abstruse. The most effective implementation of tools that allow people to make their personal concepts explicit is the repertory grid program developed to support personal construct psychology. Brian Gaines and Mildred Shaw developed this approach as a knowledge acquisition interface for expert system tools.

Bob
Hi Bob,

Are you familiar with the cognitive mapping work developed at Bath University and subsequently at Strathclyde University—by Colin Eden and Fran Ackermann—which builds on Kelly's Personal Construct Theory?

There's an early overview here [Click for pdf]:

David
I think Cognitive Mapping comes out of the tradition of "content analysis", which attempts to build a map of someone's beliefs/ideology from their written communication. Harold Lasswell, one of the founders of content analysis in the '40s and '50s, initiated this approach. Of course the text analysis tools used now are far more sophisticated.

What I had in mind for this context was something more like the use of repgrid in this article:


"Managing Terminological Interference in Goal Models with Repertory Grid", 14th IEEE International Requirements Engineering Conference (RE'06) pp. 303-306
Nan Niu, University of Toronto
Steve Easterbrook, University of Toronto

Abstract
"Terminological interference occurs in requirements engineering when stakeholders vary in the concepts they use to understand a problem domain, and the terms they use to describe those concepts. This paper investigates the use of Kelly’s Repertory Grid Technique (RGT) to explore stakeholders’ varying interpretations of the labels attached to softgoals in a goal model. We associate softgoals with stakeholders’ personal constructs, and use the tasks that contribute to these goals as elements that stakeholders can rate using their constructs. By structurally exchanging grids data among stakeholders, we can compare their conceptual and terminological structures, and gain insights into relationships between problem domain concepts."

In other words, when collaborators begin to explore a domain, some procedure for clarifying terminological as well as conceptual commitments is helpful. Jack is familiar with this approach, but perhaps other people haven't been exposed to repertory grids.
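
As a rough illustration of the kind of comparison the paper describes (my own sketch, not the authors' code), exchanged grids can be compared construct by construct, in the spirit of Shaw and Gaines' consensus / conflict / correspondence / contrast scheme:

# Sketch only: given two stakeholders' ratings of the same elements under
# possibly matching construct labels, classify each pairing. The threshold
# and the 1-5 scale are arbitrary choices for illustration.

def classify(label_a, ratings_a, label_b, ratings_b, threshold=0.8, scale=(1, 5)):
    span = scale[1] - scale[0]
    diffs = [abs(a - b) for a, b in zip(ratings_a, ratings_b)]
    similar_use = 1.0 - sum(diffs) / (span * len(diffs)) >= threshold
    same_label = label_a.strip().lower() == label_b.strip().lower()
    if same_label and similar_use:
        return "consensus"        # same term, same concept
    if same_label and not similar_use:
        return "conflict"         # same term, different concepts (terminological interference)
    if not same_label and similar_use:
        return "correspondence"   # different terms, same concept
    return "contrast"             # different terms, different concepts

# Two stakeholders who both call a softgoal "usability" but rate the tasks
# contributing to it very differently:
print(classify("usability", [1, 4, 5], "usability", [5, 2, 1]))   # -> "conflict"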

Bob
Another approach rooted in the semantics of argumentation in communities of practice is developed by the STAR lab group in Belgium. Here's one view:
Aldo De Moor, "Ontology-guided meaning negotiation in communities of practice" (2005)
Proc. of the Workshop on the Design for Large-Scale Digital Communities at the 2nd International Conference on Communities and Technologies (C&T 2005)
http://www.starlab.vub.ac.be/staff/ademoor/papers/ct05.pdf
Abstract:

Communities of practice require many specialized communication services, including customized workflow management systems, discussion services, and knowledge management systems. Communication ambiguities create a mismatch between these services and community requirements, and are caused by unclear (e.g. incomplete, inconsistent, overlapping) definitions of communication patterns. These are sets of related communicative workflow and norms definitions describing the acceptable and desired communicative interactions within a community. Addressing communication ambiguities requires a process of meaning negotiation, in which community members arrive at the requisite amount of agreement on pattern definitions to continue or improve collaboration. Ontologies are instrumental in facilitating this negotiation process in large-scale online communities. In the DOGMA approach, we are exploring ways to develop ontology-guided meaning negotiation.
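
If I understand the DOGMA approach correctly, domain knowledge is captured as "lexons" - roughly (context, term, role, co-role, co-term) facts - and negotiation is about which of these a community will commit to. Here is a toy sketch in Python of where such a negotiation would focus; it is my own illustration with invented examples, not STAR Lab's system:

# Sketch only: DOGMA-style lexons as 5-tuples. Two community members each
# propose lexons; the ones they don't share are the candidates for
# meaning negotiation.

from collections import namedtuple

Lexon = namedtuple("Lexon", "context term role co_role co_term")

member_a = {
    Lexon("community", "member", "submits", "is submitted by", "proposal"),
    Lexon("community", "moderator", "approves", "is approved by", "proposal"),
}
member_b = {
    Lexon("community", "member", "submits", "is submitted by", "proposal"),
    Lexon("community", "member", "approves", "is approved by", "proposal"),
}

print("agreed:", member_a & member_b)          # shared pattern definitions
print("to negotiate:", member_a ^ member_b)    # e.g. who approves a proposal?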
From the abstract Bob quotes: "Terminological interference occurs in requirements engineering when stakeholders vary in the concepts they use to understand a problem domain, and the terms they use to describe those concepts."

This (negotiating requirements) has been a central part of my professional life, most notably around JavaCard. We had Visa and MasterCard (competitors), their card and security suppliers (all competing with each other), their technology providers (IBM, Sun, et al., all competing with each other), their customers (major world banks, all competing with each other), ... We had all of them in a room (many rooms in many countries over a period of years) working out requirements for a standard they could all accept and which could actually be implemented. Talk about a definitional nightmare of precisely the sort that makes the tech news: each stakeholder trying to get definitions framed in ways that give them some advantage, even while acknowledging (these were intelligent people) that software engineers needed precise definitions in order to create useful, interoperable products. Shades of meaning cast on shades of meaning. Ironic laughter and raised eyebrows in back rooms. That's how the competitive world works. Cooperation only so far as necessary, and in the spirit of each of us seeking advantage over the other.

If I could put a hard requirement on our GSm work, it would have to do with the necessity of acknowledging this aspect of the real business world and the need to bend it toward a true spirit of doing the "right" thing. Time and again, as I got to know individuals participating in these negotiations, I was struck by how aware they were of global problems and by their honest, personal desire to address them; then they went back to the negotiating table and represented only their employer's interests. Lots of money was at stake.

My story is typical. Tales of Microsoft's behavior in W3C standards bodies abound. (Microsoft was only tangentially involved in my work.)

Our tools need to address this, and I don't see any of them doing so in an explicit way. I'm not sure it's possible, but it bears consideration. It has to do with power relationships - flagging and highlighting them, and going beyond the naive assumption that everyone has an equal vote about what's important and what's not, and that it's all just about science and reason.
Just to signal that Peter and I are open to the cooperative process you describe, Andy, and that I'm sure Bob's guidance would be a great help to us as a group in our deliberations around a common global sensemaking terminology / conceptual framework.
Andy,
As a former teacher of "political science", I certainly understand the issues you are referring to. One of the central issues I have dealt with is the nature (and meaning) of "power". To what extent did "money" shape the participants' views of their communities/world? And to what extent did "grandchildren" (or "community" or "world") shape the participants' views of "money"? Ok... I'll stop... :-)

The postulate I'd propose here is that explicitness is a value of high order. To the extent that we are explicit in our meanings, to that extent we can know with whom to negotiate about what. The engineering perspective is of great value in creating the cultural norm of explicitness. What engineers - and academics generally - may not "explicitly" appreciate is the potentially positive role of ambiguity and vagueness, because these can be an opening out of our current (potentially contradictory and destructive) modes of thinking, and into the imaginative/abductive processes required to rethink and reset the trajectory of our sensemaking.

I'm not sure if this is a good environment for this kind of comment. I have the feeling that the same discussion is occurring in several places, none of which feels to me like a conversation. For me, email does a better job of creating this feeling of conversation. With an email list, I have the feeling that a specific set of persons are self-selecting into the discussion, while others are merely listening. With Ning and other web "places", I never have a sense of who is "listening". Perhaps any sensemaking process goes through socially defined stages - with a small group needing some period of free-flowing discussion to establish the rapport necessary to begin treatment of more specialized topics.

Cheers,
Bob
Bob, nice point - and indeed, the problem of audience plays directly into the problem of word choice. Your message not only comments on this but also illustrates this important problem - very cool. BTW, my work has been in a related area - communication, argumentation, and pragmatics.

To say something requires a sense of the situation, especially of the other to whom we speak. If we think about it, it's rather difficult to speak in de-contextualized propositions, even though, ironically, much of the theory of communication floating around in the world makes such talk an ideal (or, let's say, over-idealizes it).

Following on your point, I would agree that a key to sense-making lies in an ability to see ambiguity and vagueness as a fact of human interaction that is often functional and not inherently a problem to be squashed and finally solved.

At least in informal interaction, it is the fact of ambiguity that keeps us tuned in, checking in with each other, and inventing ways to repair, align with, and/or exploit the ambiguity.

So maybe inventing and agreeing on vocabulary is not so much about figuring out the ultimate meaning of something; rather, vocabularies are practical tools that communities invent, such as semantic ontologies, to move forward (or at least move on). Vocabularies are devices for managing conflict at the borders of different kinds of communities, as in Andy's example. This is also an implication of Aldo de Moor's work.

We can turn this on ourselves as we build tools and ontologies to support sense-making.
+1
Gordon Pask included notions just like that in Conversation Theory: speaker domain models, and speaker models of the listener's domain model - knowing your audience.
