I worked with Ward a bit on this exploratory stuff and am one of the authors of Kiwi. I met some of you at the Data Summit in Sebastopol back in February.  I've been largely lurking here due to time constraints.

Ward's exploratory parser does work pretty much as Neil described.  It's made for galloping through huge amounts of data while gathering samples of where a parser works and where it doesn't.  I do think it would be useful for testing a grammar for correctness against the full Wikipedia.  Keep in mind that it would also need to work inside the XML parser that Ward wrote if you want to parse the Wikipedia dumps (or use a different one built around peg/leg).

Regarding Kiwi, I'm happy to help however I can with the little time I have at the moment.  Our grammar for Kiwi is pretty good, I think.  There are only a few places where we have to manipulate it, and that is done in the normal way supported by peg/leg: with a C predicate.  Generally this is just to control which expressions have to start at the beginning of a line.  We never turn parts of the grammar on or off explicitly; rules either match or they don't.  Because it was done as a separate parser, it does not support some things that require being built into the code (e.g. image resizing).
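
For anyone who hasn't looked at peg/leg, a predicate like that looks roughly like the fragment below.  This is only a sketch, not the real Kiwi grammar: the rule names and the at_line_start() helper are invented, but the &{ ... } form is how peg/leg lets a C expression decide whether matching may continue.

    # Hypothetical peg-syntax fragment, not the real Kiwi grammar.
    # &{ ... } is a semantic predicate: the embedded C expression must be
    # non-zero for the match to continue.  at_line_start() is an assumed
    # helper that the surrounding C code keeps up to date.

    Heading <- &{ at_line_start() } '=' < (!'=' .)+ > '='   { emit_heading(yytext); }
    Bullet  <- &{ at_line_start() } '*' ' '* < (!'\n' .)* > { emit_list_item(yytext); }

The angle brackets capture the matched text into yytext for the action to use; everything else is an ordinary PEG rule.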

As for being disappointed that we built an HTML translator rather than a parser that produces an AST: that was never on the table.  Thomas and I built this mostly in our free time, with only a bit of time at work, to solve a practical problem: rendering our wikitext fast enough at AboutUs to remove our caching layer.  I'm surprised, Neil, that you think Ward was disappointed with this, as he was always supportive of our efforts; indeed, he introduced us to PEGs and spent some time helping us get started writing grammars and understanding the pitfalls.  I'm sorry it doesn't solve the problem you guys have off the shelf, but hopefully it helps open some doors, or at least serves as a model of how such a grammar can be written.

If I can be of help, please just give me a shout.

Cheers,
Karl


On Tue, Jul 12, 2011 at 4:35 AM, Neil Kandalgaonkar <neilk@wikimedia.org> wrote:
Trevor & I talked with him extensively about this. BTW, around here,
he's just Ward. :)

He too was disappointed that his team wrote rules to directly transform
wikitext into HTML.

The parse-everything-in-Wikipedia thing isn't quite what it sounds like.
If I recall correctly it works like this:

As part of his job at AboutUs, he was really looking for patterns of
Wikitext that he could use to snag business information. One target was
the Infobox on Wikipedia. So, the tool was a way of cataloging the
various ways that people structure an Infobox template.

Because he wrote this in C, he added rules to the grammar to discard
information in favor of keeping a data structure of constant size.
That's mostly what the <<< >>> markers in the grammar mean. Anyway,
this then serves as a sampling of the majority of the structures one
is interested in. The more rules you write, the more of the "unknown"
stuff falls into that fixed-size store of unparsed structures. IIRC he agreed it
might not be so useful if you were writing a grammar for PHP or JS (I
assume the same is true for Python).
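
To make the constant-size idea concrete, here's roughly the kind of
bookkeeping I mean, sketched in C. This is not Ward's actual code and
all the names are invented; the point is just that unrecognized text
lands in a fixed number of fixed-length slots, and once those are full
you only count what you drop, so memory stays flat no matter how much
of the dump streams through.

    /* Sketch only, not Ward's code; every name here is invented.    */
    /* Unrecognized wikitext is copied into a fixed number of slots; */
    /* once the slots are full we just count what we discard.        */

    #include <stdio.h>
    #include <string.h>

    #define MAX_SAMPLES 64    /* fixed number of kept examples */
    #define SAMPLE_LEN  256   /* fixed length of each example  */

    static char          samples[MAX_SAMPLES][SAMPLE_LEN];
    static int           sample_count = 0;
    static unsigned long dropped      = 0;

    /* Called from a catch-all grammar action when nothing else matched. */
    static void record_unknown(const char *text)
    {
        if (sample_count < MAX_SAMPLES) {
            strncpy(samples[sample_count], text, SAMPLE_LEN - 1);
            samples[sample_count][SAMPLE_LEN - 1] = '\0';
            sample_count++;
        } else {
            dropped++;   /* discard, but remember how much was discarded */
        }
    }

    static void report_unknown(void)
    {
        int i;
        printf("%d samples kept, %lu dropped\n", sample_count, dropped);
        for (i = 0; i < sample_count; i++)
            printf("  unparsed: %s\n", samples[i]);
    }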



On 7/11/11 5:24 PM, Erik Rose wrote:
> On Jul 11, 2011, at 5:17 PM, Brion Vibber wrote:
>> We are however producing a different sort of intermediate structure rather than going straight to HTML output, so things won't be an exact match (especially where we do template stuff).
>
> Nor are we going straight to HTML, which is one reason we didn't steal this stuff. :-)

--
Neil Kandalgaonkar  |) <neilk@wikimedia.org>

_______________________________________________
Wikitext-l mailing list
Wikitext-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitext-l