I found this task in [[mw:Parsoid/Todo]]:
* Move list handler from tokenizer to sync23 phase token transformer to support list items from templates.
Does this mean that a template like {{Foreach|delim=*}} can generate some list items, with the calling page having control over whether those items form their own list or continue another list? If so, do the attached tests describe the feature correctly?
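For concreteness, here is the sort of case I have in mind (the template call and its expansion are hypothetical):

```
* first item, written directly on the calling page
{{Foreach|delim=*|apples|oranges}}
<!-- Suppose the template expands to:
* apples
* oranges
Should the result be one continuous three-item list, or a
one-item list followed by a separate two-item list? -->
```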
Regards,
Adam Wight
Forwarding this to the wikitext-l list just in case.
-------- Original Message --------
Subject: [Wikitech-l] Cutting MediaWiki loose from wikitext
Date: Mon, 26 Mar 2012 16:45:51 +0200
From: Daniel Kinzler <daniel(a)brightbyte.de>
Reply-To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
Organization: Wikimedia Deutschland e.V.
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>,
mediawiki-l(a)lists.wikimedia.org
CC: Lydia Pintscher <lydia(a)pintscher.de>, Abraham Taherivand
<abraham.taherivand(a)wikimedia.de>
Hi all. I have a bold proposal (read: evil plan).
To put it briefly: I want to remove the assumption that MediaWiki pages always contain wikitext. Instead, I propose a pluggable handler system for different types of content, similar to what we have for file uploads. That is, I propose to associate a "content model" identifier with each page, and to have handlers for each model that provide serialization, rendering, an editor, etc.
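To illustrate the dispatch idea, here is a minimal Python sketch of such a pluggable handler system. The real implementation is PHP; every name below is invented purely to show the architecture, not taken from the prototype branch:

```python
# Hypothetical sketch of a pluggable content-handler registry:
# each page carries a content-model identifier, and a handler
# registered for that model does serialization and rendering.
import json

HANDLERS = {}

def register_handler(model_id, handler):
    """Associate a content-model identifier with a handler."""
    HANDLERS[model_id] = handler

class WikitextHandler:
    """Handler for the default content model."""
    def serialize(self, content):
        return content  # wikitext is stored as-is
    def render(self, content):
        return "<p>%s</p>" % content  # stand-in for the real parser

class JsonHandler:
    """Handler for structured (Wikidata-style) content."""
    def serialize(self, content):
        return json.dumps(content)
    def render(self, content):
        return "<pre>%s</pre>" % self.serialize(content)

register_handler("wikitext", WikitextHandler())
register_handler("json", JsonHandler())

def render_page(model_id, content):
    """Dispatch on the content model stored with the page."""
    return HANDLERS[model_id].render(content)
```

The point of the dispatch table is that the wiki core never needs to know what a page contains; it only looks up the model identifier and delegates.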
The background is that the Wikidata project needs a way to store structured data (JSON) on wiki pages instead of wikitext. Having a pluggable system would solve that problem along with several others: doing away with the special cases for JS/CSS, maintaining categories etc. separately from the body text, managing Gadgets sanely on a wiki page, and more (see the link below).
I have described my plans in more detail on meta:
http://meta.wikimedia.org/wiki/Wikidata/Notes/ContentHandler
A very rough prototype is in a dev branch here:
http://svn.wikimedia.org/svnroot/mediawiki/branches/Wikidata/phase3/
Please let me know what you think (preferably here on the list, not on the talk page there, at least for now).
Note that we *definitely* need this ability for Wikidata. We could do it differently, but I think this would be the cleanest solution, and it would have a lot of mid- and long-term benefits, even if it's a short-term pain. I'm presenting my plan here to find out whether I'm on the right track, and whether it is feasible to put this on the roadmap for 1.20. It would be my (and the Wikidata team's) priority to implement this and see it through before Wikimania. I'm convinced we have the manpower to get it done.
Cheers,
Daniel
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
In-place editing is going to be the future of all collaborative content, or at least, it's hard to imagine otherwise.
This is possibly the greatest step forward that the VisualEditor promises to bring to MediaWiki. Odd that it was hardly mentioned in the tl;dr thread "Odd plans on Future/Parser_plan - are they true?". Perhaps some people have not tried the new editor?
http://www.mediawiki.org/wiki/Special:VisualEditorSandbox
Let me please not reopen the questions of l33tism or wikitext whimsy; I simply wanted to state that clicking directly on the cursor location you want to edit is a big deal. One's poor brain does not have to shift gears; the computer actually saves you a little bit of your life (how often can you say that?); this is a good interface.
...and having a parser in JavaScript will really broaden the horizons for Wikipedia: embedding wiki content in another application, offline editing, and abandoning legacy MediaWiki code, to name a few.
-Adam
So, in the course of trying to implement user signature processing in
the PEGjs grammar (https://bugzilla.wikimedia.org/show_bug.cgi?id=35392),
it occurred to me that this is in a whole new class of transformation
function. "~~~~" is actually replaced in the wikitext during the
round-trip of saving on the server. Ha ha, not the quaint little
evening project I thought I was taking on!
Looking at includes/parser/Parser.php::pstPass2, in addition to user signatures, pre-save transformations can also insert the current timestamp, do a literal transclusion using the {{subst:}} syntax, or complete the "context" of a wikilink.
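To make the signature part of this concrete, here is a greatly simplified Python sketch of the tilde substitutions (the real pstPass2 is PHP, uses the user's configurable signature, and also handles {{subst:}} and wikilink context, none of which is modeled here):

```python
# Simplified sketch of the signature portion of a pre-save transform:
# the tilde shortcuts are replaced in the submitted wikitext once, at
# save time, so the stored text contains the literal expansion.
from datetime import datetime, timezone

def pre_save_transform(wikitext, username):
    sig = "[[User:%s|%s]]" % (username, username)
    ts = datetime.now(timezone.utc).strftime("%H:%M, %d %B %Y (UTC)")
    # Longest runs first, so "~~~~~" is not consumed as "~~~~" + "~".
    wikitext = wikitext.replace("~~~~~", ts)              # timestamp only
    wikitext = wikitext.replace("~~~~", sig + " " + ts)   # sig + timestamp
    wikitext = wikitext.replace("~~~", sig)               # signature only
    return wikitext
```

This is exactly why the transformation is awkward for the VisualEditor: the substitution mutates the wikitext on the server, so the client's copy is stale after the first save.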
Implementing this class of transformation in the VisualEditor gets
tricky because the client wikitext must be kept in sync. Is there
already a provision made for mutating wikitext after a round-trip?
Thanks,
Adam Wight
Dear authors,
No action from Extension:VisualEditor. I get (different) fatal JavaScript errors on both 1.18.1 and 1.20; where can I file the bug? Is there already a how-to install doc?
Here are the errors; the stack traces don't look that helpful. Using MediaWiki r114124 and VisualEditor r114350:
mw.loader::execute> Exception thrown by ext.visualEditor.sandbox: Cannot read property 'HeadingNode' of undefined
TypeError
arguments: Array[2]
0: "HeadingNode"
1: undefined
length: 2
__proto__: Array[0]
get message: function getter() { [native code] }
get stack: function getter() { [native code] }
set message: function setter() { [native code] }
set stack: function setter() { [native code] }
type: "non_object_property_load"
__proto__: Error
I won't even mention the error I get with MW 1.18.1.
Regards,
Adam Wight
Hi lists,
We are proud to announce that we now host the data we extract from Wiktionary publicly on wiktionary.dbpedia.org.
We offer Linked Data: http://wiktionary.dbpedia.org/resource/word
a SPARQL endpoint: http://wiktionary.dbpedia.org/sparql
and N-Triple Dumps: http://downloads.dbpedia.org/wiktionary/
There is also a wiki explaining some details:
http://wiki.dbpedia.org/Wiktionary/
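As a quick illustration, here is how one might build a query against the SPARQL endpoint from Python. The query shape and the resource URI pattern are my assumptions based on the links above, not taken from the announcement:

```python
# Build (but do not send) a GET request for the public SPARQL endpoint,
# asking for all triples about one Wiktionary resource.
from urllib.parse import urlencode

SPARQL_ENDPOINT = "http://wiktionary.dbpedia.org/sparql"

def build_sparql_request(word):
    """Return a request URL for all predicate/object pairs of a word.

    The query is a generic "describe everything" pattern; consult the
    project wiki for the actual vocabulary the extraction uses.
    """
    query = (
        "SELECT ?p ?o WHERE { "
        "<http://wiktionary.dbpedia.org/resource/%s> ?p ?o } LIMIT 50"
        % word
    )
    return SPARQL_ENDPOINT + "?" + urlencode(
        {"query": query, "format": "application/sparql-results+json"})
```

Sending the resulting URL with any HTTP client returns the bindings as JSON, which is convenient for downstream processing.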
We have currently extracted data from the English and German Wiktionaries (28M triples and 3.7M triples, respectively), but plan to extend that to at least the five biggest Wiktionaries within the next weeks, as our approach focuses on extensibility. The data for each word is structured hierarchically (as Wiktionary is) and contains information about language, part of speech, definitions, translations, synonyms, hypernyms, hyponyms, etc.
There might be some quality issues, but we want to release early, so bear with us and report major problems.
Thanks go to the Wiktionary community, which does a great job creating this dataset; we hope to enable new use cases and consequently promote contribution to the Wiktionary project.
Regards,
Jonas Brekle
Department of Computer Science, University of Leipzig
Research Group: http://aksw.org
Hello all,
Is there a query language for wiki syntax? (NOTE: I really do not mean the Wikipedia API here.)
I am looking for an easy way to scrape data from wiki pages. In this way, we could apply a crowd-sourcing approach to knowledge extraction from wikis.
There must be thousands of data-scraping approaches. But is there one amongst them that has developed a "wiki scraper language"? Maybe with some sort of fuzziness involved, if the pages are too messy.
I have not yet worked with the XML transformation of the wiki markup:
  action=expandtemplates
    generatexml - Generate XML parse tree
Is it any good for issuing XPath queries?
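In principle, yes: the parse tree is plain XML, so any XPath engine can query it. Here is a small Python sketch; note that the sample tree below is a hand-written approximation of what generatexml returns, not real API output:

```python
# Query a template parse tree with ElementTree's limited XPath subset.
import xml.etree.ElementTree as ET

# Hand-written approximation of an action=expandtemplates&generatexml
# parse tree; the real output may differ in detail.
SAMPLE = """
<root>Intro text.
  <template><title>Infobox person</title>
    <part><name>name</name><equals>=</equals><value>Ada Lovelace</value></part>
    <part><name>born</name><equals>=</equals><value>1815</value></part>
  </template>
</root>
"""

tree = ET.fromstring(SAMPLE)
titles = [t.text for t in tree.findall(".//template/title")]
values = [v.text for v in tree.findall(".//part/value")]
```

A full XPath 1.0 engine (e.g. lxml) would allow richer queries, such as selecting the value of a named template parameter directly.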
Thank you very much,
Sebastian
--
Dipl. Inf. Sebastian Hellmann
Department of Computer Science, University of Leipzig
Projects: http://nlp2rdf.org , http://dbpedia.org
Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
Research Group: http://aksw.org
Hello,
As the VisualEditor team agreed last week to go forward with the approach of using ContentEditable in a view layer, I performed some major cleanup in the source code (structural changes and class naming):
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/113189
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/113191
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/113192
Basically, the ContentEditable approach is no longer a "hack" built around EditingSurface and now has its own set of view files.
The EditingSurface demo is not working properly anymore, but I don't think it has to (this is due to changes in the model classes, which now refer to ve.ce instead of ve.es).
Let me know if you have any questions.
Thanks,
Inez