There's a lot of research in education on peer assessment-- I remember
reading studies showing that students' assessments of peers' work are
similar to teachers' assessments, *when students are given
guidelines.* So I'd expect that the interface design (how well
expectations are communicated in the design of the rating system) will
influence how well newbies are able to contribute in meaningful ways.
-andrea
On 12/16/05, Jimmy Wales <jwales(a)wikia.com> wrote:
Andrea Forte wrote:
Exactly! I think that's what I just proposed.
:-) Or, instead of open
ratings, you could use some sample of articles and ask third-party
experts to rate them along various dimensions of quality (accuracy,
comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation"
feature will go live. This is essentially just a system for gathering
public feedback and *doing nothing with it* (at first). The idea is to
simply record feedback on all the articles and then take a look at it
with minimal a priori preconceptions about what it will tell us to do.
A fantastic research project would be to select N articles at random and
have either "experts" or some sort of control group do a similar rating,
and look at the correlation. Another aspect of this research would be
to compare the ratings of anons, newbies and experienced wikipedians.
If the result is that the ratings of the general public are highly
correlated with the ratings of experts, that's a good thing, because
it's easier to get ratings from the general public than to do some kind
of old-fashioned expert peer review. I would expect, myself, that *in
general* the ratings would be similar but that there will be interesting
classes of deviations from the norms.
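(The public-vs-expert comparison described above could be computed with a
simple correlation coefficient; a minimal sketch follows, where the rating
lists are invented placeholders standing in for the real article-validation
data, not actual Wikipedia figures:)

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean ratings (1-5 scale) for N = 6 randomly sampled articles;
# one list per rater group (general public vs. third-party experts).
public_ratings = [4.1, 3.2, 2.8, 4.6, 3.9, 2.1]
expert_ratings = [3.8, 3.0, 3.1, 4.4, 4.0, 2.5]

r = pearson(public_ratings, expert_ratings)
print(f"public vs. expert correlation: r = {r:.2f}")
```

(The same function could be run separately on the ratings of anons, newbies,
and experienced wikipedians to make the group comparison Jimbo mentions.)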
--Jimbo
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l(a)Wikimedia.org
http://mail.wikipedia.org/mailman/listinfo/wiki-research-l