Hi all
One question: Is SOAP really the best for wikipedia here?
As far as I know, SOAP responses can't be cached with Squid, and normally you have to use a big library to make the request and handle the response.
When I think about millions of requests to the Wikimedia servers with SOAP... ;)
Why not think about REST or, put simply, XML over HTTP? You could
use all the existing mechanisms for creating a website, but generate XML
instead of the website itself. It's not really harder to use than SOAP,
but you don't need any additional knowledge or tools to handle it. And
I think caching is very important for Wikimedia, and to Squid, XML over
HTTP looks the same as HTML over HTTP.
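To illustrate the idea: an XML-over-HTTP interface is just a plain GET URL returning XML, so Squid can cache it like any other page. The endpoint path, parameter names, and response format below are made up for illustration; they are not an actual MediaWiki interface.

```python
import urllib.parse
import xml.etree.ElementTree as ET

def article_url(title):
    # Build a plain GET URL; any HTTP client (and Squid) can handle it.
    # The host, path, and parameters here are hypothetical.
    query = urllib.parse.urlencode({"title": title, "format": "xml"})
    return "http://example.wikipedia.org/export?" + query

# A response would just be XML, parseable with any standard XML library:
sample = "<page><title>Foo</title><text>Article text here.</text></page>"
root = ET.fromstring(sample)
print(root.findtext("text"))
```

The point is that no SOAP toolkit is needed on either side: the client needs only an HTTP library and an XML parser, both of which every language already ships with.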
Does anyone have a link to the discussions where SOAP was chosen?
Thx :)
Ævar Arnfjörð Bjarmason wrote:
On 1/26/06, Michiel van Hulst
<michiel(a)vanhulst.nu> wrote:
So could anyone tell me the story of the API ;-)
Of course I may not have all the facts of the matter, but this is
basically what happened: on 2005-06-23 Jimbo Wales, on behalf of
Wikimedia/Wikipedia, announced that an API (most likely SOAP) would be
written to access Wikimedia content. As far as I know the foundation
did not actually follow up on this; somebody obviously has to write it,
and they didn't hire anyone to do that (yet?). Perhaps they were
hoping that someone would do it for free upon them announcing it, but
that obviously hasn't happened. Long story short: lots of talk but not
much of anything else.
I actually started writing a SOAP API that used standard MediaWiki
functions at one point which worked for some limited things like
getting article text but didn't finish it because other things came
up.
There are several issues with implementing a robot API like that. One
is that a lot of our logic is still tied to our current XHTML output
code, which would have to be split into a backend and presentation
frontends. Another is that it's inherently hard to support some simple
things, like getting the first paragraph of an article (or a summary),
because we don't store those things relationally, and user agents
having to implement their own parser for our wiki syntax isn't really
practical due to its complexity.
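The backend/frontend split described above might look roughly like this. This is a hypothetical sketch of the architecture, not MediaWiki's actual code structure; none of these function names come from MediaWiki itself.

```python
# Backend: returns structured data only, with no markup decisions.
def get_page(title):
    return {"title": title, "wikitext": "'''Bold''' text."}

# Interchangeable presentation frontends built on the same backend:
def render_xhtml(page):
    return "<h1>%s</h1><p>%s</p>" % (page["title"], page["wikitext"])

def render_xml(page):
    return "<page><title>%s</title><text>%s</text></page>" % (
        page["title"], page["wikitext"])

page = get_page("Foo")
print(render_xml(page))
```

With that separation, an API output format becomes just one more frontend over the shared backend, instead of something bolted onto the XHTML rendering path.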
See the page on meta[1] and bug 208[2]
1. http://meta.wikimedia.org/wiki/KDE_and_Wikipedia
2. http://bugzilla.wikipedia.org/show_bug.cgi?id=208
------------------------------------------------------------------------
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)wikimedia.org
http://mail.wikipedia.org/mailman/listinfo/wikitech-l