On 12/12/05, Ray Saintonge <saintonge(a)telus.net> wrote:
Anthony DiPierro wrote:
On 12/12/05, Ray Saintonge <saintonge(a)telus.net> wrote:
I generally agree with your comments, although this one strikes me as
backwards. I see ratings as a way of determining whether an article is
in fact stable. If an article must first be judged stable, what would
be the mechanism for making that decision?
You seem to be confusing "good" and "stable". It's easy to see whether
an article is in fact stable: just look at when it was last edited. I
suppose you could get even more detailed and look at the types of
edits that have been performed (minor fixes indicate stability; major
changes and new content indicate a lack of it), but even that isn't
what ratings are about. Ratings are about whether or not a version is
good, not whether or not it's stable.
And in order for ratings to be useful, you have to have a lot of
ratings on the same version. That's why you need stability before
ratings can be effective.
The two go hand in hand, or become part of a feedback loop. A "poor"
rating will have the effect of destabilizing an article.
I don't see how. I don't even understand how this is supposed to be
applied. When you see a bad article, do you rate the version before
or after you fix it? Or do you rate both? Or do you go through the
history and start rating all the versions?
Is there a project with an example of ratings running? I've seen
article validation in practice, and already that's way too much.
Ratings seem to only be worse.
This is perhaps a chicken-or-egg kind of problem. One would need an
easily applied criterion to measure stability.
Number of characters changed in the past two weeks?
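That criterion could be sketched in a few lines. The following is a
minimal, hypothetical illustration (the revision data structure and
function name are invented for this sketch, not taken from MediaWiki or
any real tool): it totals the characters inserted or deleted across
revisions made within the window.

```python
import difflib
from datetime import datetime, timedelta

def churn(revisions, now, window=timedelta(days=14)):
    """Total characters inserted or deleted within the window.

    `revisions` is a hypothetical list of (timestamp, text) pairs in
    chronological order. A low churn score suggests a stable article.
    """
    total = 0
    prev_text = None
    for ts, text in revisions:
        if prev_text is not None and now - ts <= window:
            matcher = difflib.SequenceMatcher(None, prev_text, text)
            for op, i1, i2, j1, j2 in matcher.get_opcodes():
                if op != "equal":
                    # count characters removed plus characters added
                    total += (i2 - i1) + (j2 - j1)
        prev_text = text
    return total
```

Any real threshold ("stable below N characters of churn") would of
course be a matter of community judgment, not something the code can
decide.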
We all know that the edits on [[George W. Bush]] can be chaotic. For
comparison, I looked at the recent edit history of [[Martin Van
Buren]]: it had 26 edits in the last month. I didn't look at the
details, but it would still take time for someone else to do so if the
article were being considered for rating. To be effective, a rating
system should be able to automatically adjust its results for
stability.
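One possible reading of "automatically adjust for stability" is to
discount a rating by recent churn, on the reasoning that a rating of a
heavily edited article applies to a version readers may no longer be
seeing. This is only a sketch of that idea; the function name, the
weighting formula, and the scale constant are all assumptions invented
here, not part of any proposed system:

```python
def adjusted_rating(rating, chars_changed, scale=500.0):
    """Discount a rating by recent edit churn.

    `chars_changed` is the recent churn (e.g. characters changed in
    the past two weeks); `scale` is an arbitrary tuning constant.
    With no churn the rating passes through unchanged; heavy churn
    drives the effective rating toward zero.
    """
    weight = 1.0 / (1.0 + chars_changed / scale)
    return rating * weight
```

Whether such a discount is the right adjustment, rather than, say,
simply flagging the rating as applying to a superseded version, is
exactly the kind of question Anthony raises below.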
In what way do you think a rating system should be adjusted to account
for stability? I can think of a lot of different arguments, all of
which would be applicable to different situations. I'm just not sure
you can boil this stuff down to a number.
The lack of an agreed mechanism for doing that has been a major factor
in not getting the 1.0 project off the ground.
Ec
I agree. That's why I haven't really opposed adding ratings in.
Agreeing on something is better than nothing here. In the worst-case
scenario, ratings come out, everyone realizes why they weren't such a
good idea, and then new ideas can come forward.
Yes. Ratings like any other tool will have bugs.
Ec
I'm not talking about bugs. I've seen article validation in action,
and I think it's fairly useless. It works, but it's not useful. I
think article ratings will be even more useless, because the data will
be even more spread out.
Maybe I'm wrong. If so, then I'll be happy to jump on the article
rating bandwagon. There's only one way to find out for sure, and
that's to try it.
Anthony