Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-)
Or, instead of open
ratings, you could use some sample of articles and ask third-party
experts to rate them along various dimensions of quality (accuracy,
comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation"
feature will go live. This is essentially just a system for gathering
public feedback and *doing nothing with it* (at first). The idea is to
simply record feedback on all the articles and then take a look at it
with minimal a priori preconceptions about what it will tell us to do.
A fantastic research project would be to select N articles at random and
have either "experts" or some sort of control group do a similar rating,
and look at the correlation. Another aspect of this research would be
to compare the ratings of anons, newbies, and experienced Wikipedians.
If the result is that the ratings of the general public are highly
correlated with the ratings of experts, that's a good thing, because
it's easier to get ratings from the general public than to do some kind
of old-fashioned expert peer review. I would expect, myself, that *in
general* the ratings would be similar but that there will be interesting
classes of deviations from the norms.
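The comparison described above boils down to computing the correlation between two sets of ratings for the same sample of articles. A minimal sketch of that analysis, using invented 1-to-5 quality ratings for a hypothetical sample of six articles (all values are illustrative, not real data):

```python
# Hedged sketch: Pearson correlation between expert and public ratings
# for the same N sampled articles. Ratings below are invented examples.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 1-5 quality ratings for six sampled articles.
expert_ratings = [4, 2, 5, 3, 1, 4]
public_ratings = [5, 2, 4, 3, 2, 4]

r = pearson(expert_ratings, public_ratings)
```

A value of r near 1 would support the hope expressed above: that public ratings track expert judgment closely enough to substitute for formal peer review, with the interesting cases being the articles where the two diverge.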
--Jimbo