Projects:ConsensusPollTour
Who
ConsensusPollTour is a One and Done Project
Why we're doing this
We want an easy way to walk people through the concepts of ConsensusPolling.
It will be done when:
All tour pages are done, with consistent navigation and style, covering all areas of consensus polling.
Project Tasks
Ted went through the tour in detail with Julia today. We made some changes to the tour directly, added notes to some pages, and have some notes on sticky notes. Here are those stickies that need to be gone through, hopefully today by Ted.
- create ConsensusPolling:Introduction - needs work
- the problems with voting page changed to ConsensusPolling:Challenges of Decision-Making - page now needs work
- on the document is alive page, use a more realistic example - maybe deciding on the yearly budget for an organization, for example
Done
- re-wrote the ConsensusPollParticipant page
- get rid of the roles page, move participant to the right place, move facilitator to the right place, move enthusiast to a concluding slide
Discussion
please see User talk:Martin#consensus polls
Good work on this tour - it is not an easy subject to explain! I'm trying to offer some helpful critique. How might we make the tour shorter? My gut-level hunch is that many in the general public will have glazed-over eyes before they complete this entire tour. Is there another potential introductory approach to test - perhaps running newcomers through a fake example poll, where automation in effect takes the place of real people during various stages of explanation (showing an actual change in poll status, etc.)?
I have this nagging question in the back of my own mind, so I wonder if others in the general public will have the same one after going through this tour. What if 20k or a million folks are in a consensus poll, and 20 percent of the people edit the document as they go from a "not yet" to a "yes" status? Each such edit might change many of the prior "yes" votes back to "not yet", and then those people make a new document edit to get back to their prior "yes" status. It seems that 20% of such large participation numbers could drag the process on so long that many people simply won't tolerate the delay. Perhaps the tour could explain what happens in such larger-scale situations?
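One rough way to see whether this flip-back churn really could stall a large poll is a toy model. The sketch below is purely hypothetical: the flip-back rate of 0.001 per edit and the deterministic averaging are invented for illustration and say nothing about how real polls behave. It simply shows that past a certain group size the flip-backs can outpace the new "yes" votes, so consensus is never reached.

def rounds_to_consensus(n_participants, flip_back_rate, max_rounds=100_000):
    """Toy model of the churn described above.

    Each round, one "not yet" participant edits the document and moves to
    "yes", but the edit knocks roughly flip_back_rate * current_yes of the
    earlier "yes" participants back to "not yet"."""
    yes = 0.0
    for rounds in range(1, max_rounds + 1):
        yes = yes + 1 - flip_back_rate * yes
        if yes >= n_participants:
            return rounds
    return None  # never converged within the round limit

for n in (20, 300, 20_000):
    result = rounds_to_consensus(n, flip_back_rate=0.001)
    print(n, "participants:", result if result is not None else "no consensus within the round limit")

With these made-up numbers, 20 participants converge in about 21 edit rounds, 300 in about 357, and 20,000 never converge at all, which is exactly the worry raised above.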
The "age old" problem with the consensus process has been that "it takes too long" (time is money, so it is also often too costly). In my talks about the consensus process with other people in the general populace, it has been the promise of novel automation to speed up the consensus process that captures their interest, rather than the offer of a better chance to give their input. The latter is already available "offline" with traditional consensus, over and above a voting process, and again this "too long" process is the sticky issue for those I have talked to.
In general, software and computer automation have tended to speed up traditionally "very long" processes (starting long ago with eliminating a lot of accounting burdens involving time-consuming arithmetic). In contrast, I don't see anywhere in this consensus polling tour how this process adds any efficiency to the already occurring consensus process. If there is promise of increased efficiency, can we explain more of the specifics?
Here is one attempt to illustrate a potential problem. The consensus topic is, "should our group board the tour bus leaving at 10:00am or 2:00pm?" - and this poll topic is posted 1 day prior to the group's scheduled trip. With 20 people in the consensus group, chances are consensus can be had before the 10:00am bus leaves. With 300 people in the consensus group, both the 10:00am and 2:00pm tour buses might leave before consensus is reached (more people, longer time lag). In the latter case a voting process might be better, because then at least some of the people will catch a bus, even if many have a radically lower level of participation and are not so happy with the vote (how many will be happy if they miss both buses?). MartinPfahler
- Martin, we don't know yet how it will work when it gets big. That's why we're working incrementally, on problems that we care about. As those problems get more complex and involve more people, the process will evolve. So far, we've just hacked this into the wiki, but as it grows, it's going to need more tech tools. We've already identified a couple of problems with it that need to be fixed. Which pages would you leave out, to make the tour shorter? I really like what you say about the problems of face-to-face consensus. We need to add that to our page on it. The problem with adding it is that I don't see this as a way to speed things up, necessarily. TedErnst
Ted, this tour gets back to a discussion I had with Ray some time back, where we were thinking that likely no single tour is best for the different ways that brains work, which might fall into perhaps 3 to 5 different categories. For example, my own brain type does not like to read a bunch of explanation; by the time I actually go to "do it" (whatever that is) I have forgotten half of the instructional information given in the tour (so perhaps you can classify my brain as feeble and having only short-term memory). Thus I prefer working explanations - take me through an actual simulation of the process, have me input relevant data or info, and give explanation at various points of the simulation as the need arises.
This is likely a more complex tour experience to create, because the techies must build automation to simulate the parts of the process where humans would normally be involved.
In terms of "face to face" consensus, the problem has been that it's "too slow". Using online consensus enables one to reach consensus at a greater distance, but from what I see so far, not at any increased speed - and it is this latter issue that is desired by many folks (me included), because it holds the promise of lots of folks making faster progress than is possible with "old fashioned" consensus processes. Again in this context, is the aboutus consensus polling effort intended to gain increased process speed? MartinPfahler
- I suppose it depends on how you measure speed. Time on the calendar from start to finish is unlikely to be faster than one of the other decision-making methods we're familiar with, but I'm guessing we will show fewer person-minutes spent on the process, and a significantly higher buy-in from those affected. That's just a hunch. No data yet. TedErnst
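To make the person-minutes idea concrete, here is a hypothetical back-of-the-envelope sketch. Every number in it is made up purely for illustration (there is no data yet, as noted above): a two-hour face-to-face meeting with 20 people consumes 2,400 person-minutes in one afternoon, while a consensus poll spread over two weeks of calendar time that only asks each person for a few short visits could use half that.

# Hypothetical numbers for illustration only.
participants = 20

# Face-to-face meeting: everyone sits through the whole thing.
meeting_minutes = 120
meeting_person_minutes = participants * meeting_minutes  # 2400

# Consensus poll: each person makes a few short asynchronous visits,
# spread over roughly two weeks of calendar time.
visits_per_person = 6
minutes_per_visit = 10
poll_person_minutes = participants * visits_per_person * minutes_per_visit  # 1200

print("meeting:", meeting_person_minutes, "person-minutes in one afternoon")
print("poll:   ", poll_person_minutes, "person-minutes over about 14 calendar days")

The calendar time is longer, but the total attention demanded from the group is smaller; that trade-off is the hunch being described.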
Ted, you stated, "but I'm guessing we will show fewer person-minutes spent on the process"
I'm not sure I understand this explanation, Ted. I assume you mean comparing the consensus polling process to a "face to face" consensus process. For people that are physically located far apart, it is obvious that "on-line" methods reduce the need to travel - but that is inherent in all online activity, so I don't think it is a novel process differentiator.
If your comment is meant instead as a comparison to a typical voting process, it takes relatively little time for a person to read their choices and then cast their vote - much less time than any consensus process I have seen. Admittedly, people taking part in voting might not "buy in" as strongly as with a consensus polling process, but even then lots of people still do it - a national-scale presidential election is just one example.
In many activities of importance, calendar time is the critical issue, and it can in fact be so important that one is willing to sacrifice a high degree of "buy in" just to reduce the costs and downsides of time lag. For example, anybody who has ever been involved with the construction industry or real estate development will tell you that many of the decisions made along the way are heavily influenced by trying to reduce the total process time (lag time in construction can quickly eat up one's profit margin - ditto for all manner of other projects).
Ted, I like this back-and-forth dialog we are having about consensus polling because it helps to flush out various issues. For example, it might be that the general populace would be more excited about the potential to radically reduce the consensus time line on the calendar than about gaining a much higher level of "buy in". If the former is the case (at this early stage I don't think one can credibly say which way the general public would veer), then I suspect no amount of consensus polling refinement (assuming no calendar time compression is gained) will gain large public participation and interest - but that's just a hunch so it really holds no weight.
Ted, after reading this http://www.cio.com/article/print/121550 at the aboutus "dailybuzz":
These moments are routine in meetings. But in virtual meetings such consensus can't be read. Lacking this signal to wrap up, an online discussion can be endless. Even worse, because the participants can't read the mood of the "room," the conversation ends up reflecting the interjections of the most frequent and forceful participants, rather than the overall judgment of the group, which is usually different from, and often better than, the judgment of the noisiest few.
I wonder, for consensus polling, why not have this automation: each person has on their wiki page (computer monitor) a "mood" slide bar that they can move with their mouse. It is explained that this mood bar reflects their current state of engagement in the consensus process (I'm bored, etc.). There might even be more than one slide bar for each person in the consensus group. The automation then in effect averages all the individual slide bars and displays the total "group mood" - so one can get a better idea of the visual "stuff" that is typical of a face-to-face meeting.
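A minimal sketch of what that averaging automation might look like, assuming each slider simply reports a value from 0 (bored, disengaged) to 100 (fully engaged). The participant names, slider values, and the "bored" threshold below are hypothetical examples, not an existing wiki feature:

# Hypothetical per-participant "mood" slider values, 0 (bored) to 100 (engaged).
moods = {
    "MartinPfahler": 20,
    "TedErnst": 35,
    "Julia": 40,
}

def group_mood(moods):
    """Average all individual slider values into one group-level number."""
    return sum(moods.values()) / len(moods)

def participant_mood(moods, name):
    """Look up one participant's current slider value (the search-by-name idea)."""
    return moods.get(name)

avg = group_mood(moods)
print(f"group mood: {avg:.0f}/100")
if avg < 40:
    print("most of the group looks bored - a good moment to interject something new")
print("MartinPfahler's mood:", participant_mood(moods, "MartinPfahler"))

The same lookup table also answers the question of finding the mood of one particular participant by name.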
A scenario where this might be useful: you are wondering, "I'm getting bored or tired of this long consensus process, and if I were in charge I would interject a new topic or course of discussion to move it along - but I won't, because I'm afraid that right now I am the only one with such thoughts, and I don't want to make waves." In contrast, if I were looking at the group mood indicator and saw that 90% of the consensus group folks were also bored, I would feel more confident in purposely interjecting something new.
One might also search on a participant's name, to discover the mood of that particular participant. MartinPfahler