[Wikipedia-l] Sanger response and rating system experiment

David Gerard fun at thingy.apana.org.au
Thu Jan 6 12:14:02 UTC 2005


Daniel Mayer (maveric149 at yahoo.com) [050106 19:22]:

> NPOV is a much better guarantee of accuracy than trusting a supposed expert
> (although I do highly value feedback from field experts - I just don't take
> their ideas as the last word). 
 

Absolutely.


> Many in academia are used to being the gatekeepers and stewards of 
> information. Wiki opens those gates to anybody with an Internet connection. So
> many in academia will always recoil in horror at the mere concept - that is
> their problem, their failing, not ours. 
 

I particularly favour Clay Shirky's description of the process, in
http://www.corante.com/many/archives/2005/01/03/k5_article_on_wikipedia_antielitism.php :

   It's been fascinating to watch the Kubler-Ross stages of people
   committed to Wikipedia's failure: denial, anger, bargaining,
   depression, acceptance. Denial was simple; people who didn't think it
   was possible simply dis-believed. But the numbers kept going up. Then
   they got angry, perhaps most famously in the likening of the Wikipedia
   to a public toilet by a former editor for Encyclopedia Britannica.
   Sanger's post marks the bargaining phase; "OK, fine, the Wikipedia is
   interesting, but whatever we do, let's definitely make sure that we
   change it into something else rather than letting the current
   experiment run unchecked."

   Next up will be a glum realization that there is nothing that can stop
   people from contributing to the Wikipedia if they want to, or to stop
   people from using it if they think it's useful. Freedom's funny like
   that.

   Finally, acceptance will come about when people realize that
   head-to-head comparisons with things like Britannica are as stupid as
   comparing horseful and horseless carriages -- the automobile was a
   different kind of thing than a surrey. Likewise, though the Wikipedia
   took the -pedia suffix to make the project comprehensible, it is
   valuable as a site of argumentation and as a near-real-time reference,
   functions a traditional encyclopedia isn't even capable of. (Where,
   for example, is Britannica's reference to the Indian Ocean tsunami?)


> That said, we can and should continue to find ways to make our articles better.
> Milestone snapshots (aka Wikipedia 1.0) selected via a credible process would
> help a great deal toward that (as the FAC/featured article process already has
> for the best articles we have). 


Absolutely. Is Magnus Manske's experimental rating software (active on
test:) any closer to going into the running build?

As Jimbo has said a couple of times (to me at the last London meet, and
reported at a previous meet), the best thing to do with a rating system at
the moment is ... nothing. Run the rating system for a time period, gather
the data, *don't reveal it yet* for fear of affecting the rating
experiment, *then* release the data for scrutiny and ideas: see how people
rate things given a simple system, see whether the results of that rating
accord with common sense, and see whether they approximate the desired
Rating System That Scales (the way FAC doesn't quite).
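
For what it's worth, the sort of dataset a quietly-running experiment like
that would accumulate is trivial to summarise once the embargo is lifted.
A minimal sketch in Python (purely illustrative - the 1-5 scale, the log
format and the article/rater names below are my own assumptions, not
Magnus's actual schema):

    # Hypothetical sketch of summarising rating-experiment data after the
    # quiet phase ends; nothing here reflects the real MediaWiki schema.
    from collections import defaultdict
    from statistics import mean, median

    # Hypothetical raw log: (article title, rater, score on a 1-5 scale).
    ratings_log = [
        ("Indian Ocean tsunami", "Alice", 4),
        ("Indian Ocean tsunami", "Bob", 5),
        ("Horseless carriage", "Carol", 3),
    ]

    def summarise(log):
        """Aggregate per-article scores for post-experiment scrutiny."""
        by_article = defaultdict(list)
        for title, _rater, score in log:
            by_article[title].append(score)
        return {
            title: {"n": len(scores),
                    "mean": mean(scores),
                    "median": median(scores)}
            for title, scores in by_article.items()
        }

    # During the experiment this summary stays unpublished; afterwards it
    # becomes the dataset people can sanity-check against common sense
    # and against FAC.
    for title, stats in summarise(ratings_log).items():
        print("%s: n=%d mean=%.2f median=%s"
              % (title, stats["n"], stats["mean"], stats["median"]))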

I assume the devs would prefer we shake the worst bugs out of 1.4b3
first and get a handle on the hardware situation (since the charitably
inclined are presently tapped out by tsunami donations ;-), but is there
anything stopping it then?


- d.




