Dear All,
We finally managed to put up demos of WikiTrust on some Wikipedias, to
start gathering feedback.
WikiTrust (http://wikitrust.soe.ucsc.edu/) is an open-source tool that
computes, for every word of text:
- Who is the author of the word
- In which revision was the word introduced
- How well the word (and the surrounding text) has been revised.
For the latter, we give untrusted text an orange background; the
shade of orange gradually fades to white for text of increasing trust
values.
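As a rough illustration of the shading (this is a toy mapping of my own, not WikiTrust's actual palette or trust scale), one could blend from orange toward white as trust grows:

```python
def trust_to_color(trust, max_trust=9):
    """Map a trust value in [0, max_trust] to a background color.

    0 gives strong orange; max_trust gives white (fully trusted text).
    """
    # Fraction of full trust, clamped to [0, 1].
    f = max(0.0, min(1.0, trust / max_trust))
    # Blend from orange (255, 165, 0) toward white (255, 255, 255).
    r = 255
    g = int(165 + (255 - 165) * f)
    b = int(255 * f)
    return "#%02x%02x%02x" % (r, g, b)

print(trust_to_color(0))  # strong orange: #ffa500
print(trust_to_color(9))  # white: #ffffff
```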
To see the demos, go to
https://addons.mozilla.org/en-US/firefox/addon/11087 and install the
WikiTrust add-on for Firefox. You can then browse the
Italian (it.wikipedia.org) and Portuguese (pt.wikipedia.org) Wikipedias;
we are working on adding other Wikipedias to the demo soon.
Some notes on the demo:
We developed the demo to help us test code suited to running at the
Wikimedia Foundation (WMF). But the code does not run there now: the demo
is implemented by polling our servers at UC Santa Cruz to obtain the text
information. This has a few consequences:
- The demo is slow, as it involves a lot of back-and-forth between WMF
and UCSC servers. It would be much faster if it ran at the WMF directly.
- As the code is not running at the WMF, our servers are not notified
when someone edits a page. Thus, when you request information on a
revision, we occasionally tell you that we don't have the information, and
to try again in ten seconds or so. In the meantime, our server at UCSC
fetches the revision from the WMF and analyzes it. Again, this would not
happen if WikiTrust were running tightly integrated with the WMF.
- Since we cannot authenticate users (the authentication cookies are
sent to the WMF, not to us), we had to turn off a button that let users
vote for the correctness of text (inspired by the work on flagged revisions:
indeed, we could tie the flag to this vote action).
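The try-again-in-ten-seconds behaviour described above amounts to a simple poll-and-retry loop on the client side. A sketch in Python, with the HTTP call and the reply format invented for illustration:

```python
import time

def poll_trust(rev_id, fetch, retries=3, wait=0.01):
    """Poll until the trust server has analyzed a revision.

    `fetch(rev_id)` stands in for the HTTP request to the trust
    server; it returns {"status": "ready", "trust": [...]} once the
    revision is analyzed, or {"status": "not_ready"} while the
    server is still fetching and analyzing it.
    """
    for _ in range(retries):
        reply = fetch(rev_id)
        if reply.get("status") == "ready":
            return reply["trust"]
        time.sleep(wait)  # "try again in ten seconds or so"
    raise TimeoutError("revision %d not analyzed in time" % rev_id)

# Simulated server: the first request misses, the second has the data.
calls = {"n": 0}
def fake_fetch(rev_id):
    calls["n"] += 1
    if calls["n"] < 2:
        return {"status": "not_ready"}
    return {"status": "ready", "trust": [7, 9, 9, 4]}

print(poll_trust(123, fake_fetch))  # [7, 9, 9, 4]
```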
The purpose of the demo is to help us test the code and experiment, and
feedback is most welcome. Please be tolerant: I am sure there are still
kinks, and we will do our best to iron them out. You can find links to
mailing lists, bug-tracking systems, code, etc at
http://wikitrust.soe.ucsc.edu/
Some notes on text trust:
Text trust is computed on the basis of a reputation system for Wikipedia
users. Users gain reputation when they make contributions that last in the
system. Thus, new users must do some amount of good work before they gain
reputation. The trust of the text then depends on the reputation of the
user who inserted the text, and on the reputation of all the users who
subsequently revised the text. Text can become fully trusted only when it
has been revised by multiple high-reputation authors. Thus, WikiTrust
highlights changes in articles, and makes it difficult for authors of
malicious changes to cover their tracks.
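As a toy illustration of this idea (a deliberate oversimplification of mine, not the actual WikiTrust algorithm, which is described in the project's papers): new text starts with trust below its author's reputation, and each revision by a higher-reputation user pulls the trust part of the way up, so full trust requires several high-reputation revisions.

```python
def revise_trust(trust, reviser_rep, pull=0.5):
    """Pull a word's trust toward the reputation of a reviser.

    Trust only increases: a high-reputation reviser raises it part
    of the way toward their own reputation, while a reviser with
    lower reputation than the current trust leaves it unchanged.
    """
    if reviser_rep > trust:
        trust += pull * (reviser_rep - trust)
    return trust

# New text starts below its author's reputation...
author_rep = 4.0
trust = 0.5 * author_rep  # 2.0
# ...and approaches full trust only after several high-reputation revisions.
for rep in (8.0, 9.0, 9.0):
    trust = revise_trust(trust, rep)
print(trust)  # 8.0
```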
If you click on a word of text, you are taken to the revision where the
word was inserted. We could also show pop-up balloons announcing the
author of each word, but we thought that so obvious a proclamation of
authorship could lead to silly competition over who last replaced or
reworded a sentence.
We also had the option of voting for the validity of a revision, thus
raising its trust, but since our code does not run at the Wikimedia
Foundation (WMF), we have no way of authenticating votes, and this feature
is thus currently inactive.
---
I hope you enjoy the demo, and I hope this can be a useful instrument for
those who patrol, maintain, and improve Wikipedia pages, as well as for
those who are just visitors.
All the best,
Luca, Bo, and Ian
Hello,
I am a researcher in the GroupLens lab (http://grouplens.org) at the
University of Minnesota. You might recognize previous work in Wikipedia
like "Creating, Destroying, and Restoring Value in Wikipedia"
(http://www-users.cs.umn.edu/~reid/papers/group282-priedhorsky.pdf).
As part of our continuing work within Wikipedia, my colleagues and I are
conducting an academic (non-commercial) study in which we have developed
a modification that is designed to help users work together more
effectively by changing the interface for reverting other editors.
If you choose to participate in the study, you will automatically be
assigned a Wikipedia gadget consisting of a subset of the
modifications we have developed. As part of the study, we will be
logging your usage of the tool (i.e., when you revert other
editors). We will also be available for tech support and bug fixes.
There will most likely be a survey at the end of the study, and the
complete tool will then be made available.
Consent form/installer: http://wikipedia.grouplens.org/NICE/consent/
Wikipedia user script page: http://en.wikipedia.org/wiki/User:EpochFail/NICE
-Aaron Halfaker
GroupLens Research
University of Minnesota
Dear All,
My name is Avanidhar Chandrasekaran
(http://en.wikipedia.org/wiki/User_talk:Avanidhar).
I work with GroupLens Research at the University of Minnesota, Twin Cities.
As part of my research, I am involved in analyzing the usefulness and
necessity of author reputation in Wikipedia.
To this end, I have built an interface that colors the words in an article
based on their age.
As experienced contributors to Wikipedia, you are invited to participate in
this study, which involves the following:
1) Please visit the following instances of Wikipedia and evaluate the
interface components that have been incorporated into each of them. Each
of these uses its own algorithm to color text.
a) The Wikitrust project
http://wiki-trust.cse.ucsc.edu/index.php/Main_Page
b) The Wiki-reputation project at Grouplens research
http://wiki-reputation.cs.umn.edu/index.php/Main_Page
2) Once you have evaluated the two interfaces, kindly complete this survey
on Wikipedia quality
http://www.surveymonkey.com/s.aspx?sm=hagN5S1JZHxH6pF9SmXkkA_3d_3d
We hope to get your valuable feedback on these interfaces and on how
Wikipedia article quality can be improved.
Thanks for your time.
Avanidhar Chandrasekaran,
GroupLens Research, University of Minnesota
It is possible to calculate a kind of user contribution weight (ucw) in
articles. That is, if I as Jeblad write the whole article about Norway
I would get a ucw of 1.0; if I write half of the article I would get
0.5, and if I write nothing I would get 0. Those numbers could then be
used to weight page-view counts to show how much impact I as a
contributor have on the total impact of Wikipedia, and possibly give
me an overall rank of "importance" or place me on some scale of
importance. Perhaps a kind of top-tier scaling.
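In outline, such a contribution weight could be computed like this (a Python sketch of my own; how individual words are attributed to authors is the hard part and is assumed given here):

```python
def contribution_weights(word_authors):
    """Compute each author's share of an article's current text.

    `word_authors` lists, per word of the article, who originally
    wrote that word. The ucw is simply the fraction of words an
    author contributed.
    """
    counts = {}
    for author in word_authors:
        counts[author] = counts.get(author, 0) + 1
    total = len(word_authors)
    return {a: n / total for a, n in counts.items()}

def impact(weights, page_views):
    """Scale page views by contribution weight."""
    return {a: w * page_views for a, w in weights.items()}

words = ["Jeblad"] * 3 + ["Other"]  # Jeblad wrote 3 of the 4 words
w = contribution_weights(words)
print(w["Jeblad"])                # 0.75
print(impact(w, 1000)["Jeblad"])  # 750.0
```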
Of course this has very little to do with real trust; it is only a measure
of my impact on the overall product. It is, though, a very visible
factor that may have some merit as a kind of internal honour for good work.
I think most of this is already implemented by Luca de Alfaro and his
colleagues; it simply has to be reorganized. It is not trust, but the
weighting mechanisms are much the same.
John
As some of you might remember, we have been working on author
reputation and text trust systems for wikis; some of you may have seen
our demo at WikiMania 2007, or the on-line demo
http://wiki-trust.cse.ucsc.edu/
Since then, we have been busy at work to build a system that can be
deployed on any wiki, and display the text trust information.
And we finally made it!
We are pleased to announce the release of WikiTrust version 2!
With it, you can compute author reputation and text trust of your
wikis in real-time, as edits to the wiki are made, and you can display
text trust via a new "trust" tab.
The tool can be installed as a MediaWiki extension, and is released
open-source, under the BSD license; the project page is
http://trust.cse.ucsc.edu/WikiTrust
WikiTrust can be deployed both on new, and on existing, wikis.
WikiTrust stores author reputation and text trust in additional
database tables. If deployed on an existing wiki, WikiTrust first
computes the reputation and trust information for the current wiki
content, and then processes new edits as they are made. The
computation is scalable, parallel, and fault-tolerant, in the sense
that WikiTrust adaptively fills in missing trust or reputation
information.
On my MacBook, running Ubuntu under VMware, WikiTrust can analyze
some 10-20 revisions per second of a wiki; so with a little patience,
unless your wiki is truly huge, you can just deploy it and wait a
bit.
Go to http://trust.cse.ucsc.edu/WikiTrust for more information and for
the code!
Feedback, comments, etc are much appreciated!
Luca de Alfaro
(with Ian Pye and Bo Adler)
There are several attempts to make bots that detect copyright
violations. The problem is that many such "infringements" are in fact
legal (quotations, for example), and then the writers get upset
because they have used the material in a completely legal way.
I have made a Javascript-based solution that seems to solve the problem
by placing a user in the loop. The only thing the script does is mine
the web for possibly similar texts.
Basically the script takes the added text, extracts the plain text,
excludes some of it, breaks it into sentences, uses the sentences
to build a query, rematches the results to the sentences, accumulates
those matches, and gives a warning if a match limit is reached.
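The pipeline described above might look like the following in outline. This is a Python sketch of my own (the original is a user script in JavaScript), with the web-search step stubbed out by a fake index:

```python
import re

def split_sentences(text):
    """Very rough sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def check_added_text(added_text, search, min_len=5, match_limit=2):
    """Mine the web for text similar to an edit's added text.

    `search(sentence)` stands in for a web-search call returning
    candidate source texts. Short sentences are excluded, matches
    are accumulated, and a warning is given at the limit.
    """
    matches = 0
    for sentence in split_sentences(added_text):
        if len(sentence.split()) < min_len:
            continue  # exclude short fragments: too many false hits
        for hit in search(sentence):
            if sentence in hit:  # rematch the result to the sentence
                matches += 1
    if matches >= match_limit:
        return "warning: %d sentence(s) found elsewhere" % matches
    return "ok"

# Fake search index containing one of the sentences verbatim.
def fake_search(sentence):
    return ["The quick brown fox jumps over the lazy dog"]

text = "The quick brown fox jumps over the lazy dog. Hi."
print(check_added_text(text, fake_search, match_limit=1))
```

A human still reviews every warning, which is the point of keeping the user in the loop.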
At the moment I am trying to extend the system to older edits, and also
to make it a bit more resistant to small changes in the text. It is
already fairly resistant to small reorganizations of the text.
John
In a message dated 8/27/2008 12:02:04 P.M. Pacific Daylight Time,
john.erling.blad(a)jeb.no writes:
Basically the script takes the additional text, extract the plain text,
excludes some of the text, breaks it into sentences, uses the sentences
to build a query, rematches the result to the sentences, accumulates
those and gives some warnings if a match limit is reached.>>
--------------
Wouldn't that also flag situations where the author of the work, as
published at some other site, is legitimately contributing it to the
wiki as well? That is, they are giving up their copyright.