On Thu, Jun 9, 2011 at 7:19 AM, Mihály Héder <hedermisi@gmail.com> wrote:
Dear Wikitext experts,

please, check out Sztakipedia, a new Wiki RTE at:
http://pedia.sztaki.hu/ (please check the video first, and then the tool itself)
[snip]

That's really cool! The analysis & interactive suggestion tools are especially interesting; some of those could really aid certain kinds of article editing, and they'd also be useful for other tasks like research or more education-focused work, where pulling in information from other parts of the wiki's resources would help.

Our plan right now is to create an API for our recommendation services
and helpers, and a MediaWiki JS plugin to bring their results into the
current wiki editor. This way I hope the results of this research -
which started out as rather theoretical - will be used in a real-world
scenario by at least a few people. I hope we will be able to extend
your planned new RTE the same way in the future.

Awesome! I would definitely love to see that happen; while we're still in the early phases on the parser itself, we're also starting to work on the editor integration API, so somewhere down the road we should have a standard-ish way to plug some of that in.

Please share with me your thoughts/comments/doubts about Sztakipedia.

I also wanted to ask some things:
-Which helper feature do you think is most wanted:
infobox/category/link recommendation? External data import from
Linked Open Data (like our Book Recommender right now, which has
millions of book records in it)? Field _value_ recommendation for
infoboxes from the text? Other?

We'll probably need to collect a few more reactions from users first, but offhand I can definitely see real use for:
* helper tool for making references/citations (those book records sound like they could really help in looking up info & formatting a book or periodical reference!)
* suggestion of which infobox-style templates and categories to apply to a page based on content & already-found related pages -- we have such a *huge* array of templates and categories on the big Wikipedias, and it's easy not to know where to look. Having something that suggests 'infobox-city' when you're pretty clearly writing about a city could be a lot more useful than a simple typeahead list... it could even suggest it before the user knows to look for it!


-How do you measure the performance of a parser? I saw hints of some
300 parser test cases somewhere...

We have parser regression test cases in tests/parser/parserTests.txt (in maintenance/ on older released versions), which you can run with the parserTests.php script; but for performance testing you'd want to use some real-world articles.
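
For reference, each test case in that file begins with a line reading "!! test", so counting them is easy to script. Here's a quick sketch in Node-style JS -- the file path is the current trunk location, adjust as needed:

  // Count the parser test cases in parserTests.txt.
  // Each case begins with a line reading "!! test".
  var fs = require('fs');
  var text = fs.readFileSync('tests/parser/parserTests.txt', 'utf8');
  var count = 0;
  text.split('\n').forEach(function (line) {
    if (line.replace(/\s+$/, '') === '!! test') { count++; }
  });
  console.log(count + ' parser test cases');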

Roughly speaking, you want to get some idea of both _average cases_ and _worst cases_. Overall server load is ruled by the average case, but it's the worst cases -- the slowest pages to render -- that have the most impact on user experience.

* grab a few dozen or hundred articles as a general subset (possibly weighted by popularity?)
* grab several of the largest, most complex articles from English Wikipedia

There tends to be at least some overlap here. ;) Large numbers of template invocations, large numbers of images, and large numbers of parser functions & tag extensions are usually the worst cases. Individual templates can also hide a lot of complexity that's not obvious from looking at the source of the base page.
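
If you want rough numbers without instrumenting the parser itself, here's a little browser-JS sketch that times action=parse API calls over a sample of titles and reports the average and the worst case. The titles are just placeholders, client-side timing includes network overhead, and results may come from the parser cache, so treat the numbers as ballpark only:

  // Rough client-side timing of page rendering via the MediaWiki API
  // (action=parse), fetched as JSONP. Sample titles are placeholders.
  var titles = ['Barack Obama', 'United States', 'World War II'];
  var timings = [];

  function timeNext() {
    var title = titles.shift();
    if (!title) { report(); return; }
    var start = new Date().getTime();
    var script = document.createElement('script');
    window.parseDone = function () {
      timings.push({ title: title, ms: new Date().getTime() - start });
      document.body.removeChild(script);
      timeNext();
    };
    script.src = 'http://en.wikipedia.org/w/api.php?action=parse'
      + '&page=' + encodeURIComponent(title)
      + '&format=json&callback=parseDone';
    document.body.appendChild(script);
  }

  function report() {
    var total = 0, worst = timings[0];
    for (var i = 0; i < timings.length; i++) {
      total += timings[i].ms;
      if (timings[i].ms > worst.ms) { worst = timings[i]; }
    }
    console.log('average: ' + Math.round(total / timings.length) + ' ms');
    console.log('worst: ' + worst.title + ' (' + worst.ms + ' ms)');
  }

  timeNext();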

-Which is the best way to mash up external services to support the wiki editor
interface (because if you call an external REST service from JS in MediaWiki,
it will be blocked by the same-origin policy, I'm afraid)?

If the JS can call via JSONP (executing via a <script> tag and using a callback to pass data back to the caller), that should be fine. It's also possible to use cross-origin permission headers (CORS), e.g. http://www.w3.org/TR/cors/
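
For illustration, here's a minimal JSONP sketch -- the service URL and parameter names are made up, and the server has to wrap its JSON reply in the named callback:

  // Load an external REST endpoint via a <script> tag (JSONP).
  // The URL and parameters are hypothetical; the server must reply
  // with something like: jsonpCb12345({...});
  function fetchSuggestions(term, onData) {
    var cbName = 'jsonpCb' + new Date().getTime();
    window[cbName] = function (data) {
      window[cbName] = undefined;
      document.body.removeChild(script);
      onData(data);
    };
    var script = document.createElement('script');
    script.src = 'http://recommender.example.org/suggest'
      + '?q=' + encodeURIComponent(term)
      + '&callback=' + cbName;
    document.body.appendChild(script);
  }

  fetchSuggestions('infobox city', function (data) {
    console.log(data);
  });

With CORS, the service instead sends a header like Access-Control-Allow-Origin: * (or the wiki's specific origin) on its normal JSON responses, and supporting browsers will then allow a plain XMLHttpRequest across origins.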

-- brion