Hi all,
I've been working for the past month on a browser-based editor for
JSON, called JSONwidget. For those of you unfamiliar with JSON, it's a
data serialization format, which is a fancy way of saying text markup
for structured data. It's one of two alternatives to XML that are
gaining traction as simpler, more compact text formats for data
serialization (the other being YAML).
Anyway, what my tool does is take a JSON file and a JSON-formatted
schema, and render a user interface for editing the JSON, producing
neatly formatted JSON on the client for submission back to the server.
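To make that concrete, a document/schema pair might look like this (the
field names are made up, and the schema syntax shown is only a sketch of
the idea, not necessarily JSONwidget's exact format):

{ "title": "Hello world", "published": true }

{ "type": "map", "mapping": {
    "title": { "type": "str" },
    "published": { "type": "bool" }
} }

From the schema, the editor knows to render a text box for "title" and a
checkbox for "published".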
Here are the demos:
http://robla.net/2005/jsonwidget/#demos
Note that this version has only been tested with Firefox 1.0.7. I plan
to fix many known bugs in IE and Opera in my next release, and would
welcome help from users of other browsers. There are still plenty of
rough edges even for Firefox users, but there should be enough there to
give you an idea of where it's heading.
For those who object to fancy Javascript interfaces on philosophical
grounds, you'll be pleased to know that it does have a failover mode.
If Javascript is turned off, you are presented with a simple web form to
edit the raw JSON. Not pretty, but functional in a pinch.
I'm not sure if something like this could be adapted for Wikidata; I
haven't had a chance to really dive in and see where that project is at.
But I'm throwing that out there as a possibility.
Let me know what you think.
Thanks
Rob
MediaWiki 1.5.3 is a security and bugfix maintenance release.
Validation of the user language option was broken by a code change in
May 2005, opening the possibility of remote code execution, as this
parameter is used to form a class name that is then instantiated via
eval().
The validation has been corrected in this version. All prior 1.5 release
and prerelease versions are affected; 1.4 and earlier are not affected.
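Schematically, the dangerous pattern and its fix look something like this
(an illustration of the bug class, not the actual MediaWiki code):

// Vulnerable: $code comes straight from a user-supplied option, so a
// crafted value can inject arbitrary PHP into the eval()'d string.
$class = 'Language' . ucfirst( $code );
eval( "\$lang = new $class;" );

// Fixed: validate the option against a known-safe pattern before use.
if ( !preg_match( '/^[a-z\-]+$/', $code ) ) {
    $code = 'en'; // fall back to a safe default (illustrative choice)
}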
Additionally, several bugs have been fixed; see the changelog in the
release notes for a complete list.
Release notes:
http://sourceforge.net/project/shownotes.php?release_id=375755
Download:
http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.5.3.tar.gz?download
MD5 checksum:
fc697787f04208d1842a2c646deca626 mediawiki-1.5.3.tar.gz
SHA-1 checksum:
070189e29ace2ef9ab0589db42ecf849f2b88ee5 mediawiki-1.5.3.tar.gz
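To verify a download against these values (using the standard md5sum and
sha1sum tools; some systems ship md5 or shasum instead):

md5sum mediawiki-1.5.3.tar.gz
sha1sum mediawiki-1.5.3.tar.gz

The output should match the checksums above exactly.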
Before asking for help, try the FAQ:
http://meta.wikimedia.org/wiki/MediaWiki_FAQ
Low-traffic release announcements mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-announce
Wiki admin help mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l
Bug report system:
http://bugzilla.wikimedia.org/
Play "stump the developers" live on IRC:
#mediawiki on irc.freenode.net
-- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
(was "Problem importing using importDump.php")
Having given up on importDump.php, I'm now trying to
import all Wikipedia articles using mwdumper.jar.
The command I typed into the terminal was:
/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Commands/java
-jar /Users/xed/Desktop/mwdumper.jar --format=sql:1.5
/Users/xed/Desktop/20051127_pages_articles.xml.bz2 |
/usr/local/mysql-standard-5.0.16-osx10.4-powerpc/bin/mysql
-u wikiuser -p wikidb
After entering the database password it came up with
this error:
ERROR 1146 (42S02) at line 31: Table 'wikidb.text'
doesn't exist
...and then immediately started doing this:
1,000 pages (49.717/sec), 1,000 revs (49.717/sec)
2,000 pages (75.106/sec), 2,000 revs (75.106/sec)
3,000 pages (92.308/sec), 3,000 revs (92.308/sec)
4,000 pages (97.611/sec), 4,000 revs (97.611/sec)
..etc..
now it's up to
408,000 pages (306.929/sec), 408,000 revs (306.929/sec)
...which is nice.
But what is it doing? Is it actually going into the
Wiki I set up? Looking at the "All pages" Special page
in my Wiki I just see the couple of pages that I had
already made. Are there any other steps I have to take
once mwdumper has done its job?
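One quick way I could check where the rows are going (same connection
details as above; page and text are standard MediaWiki 1.5 tables):

/usr/local/mysql-standard-5.0.16-osx10.4-powerpc/bin/mysql -u wikiuser -p wikidb \
  -e 'SELECT COUNT(*) FROM page; SELECT COUNT(*) FROM text;'

If those counts stay at zero while mwdumper reports progress, the rows
must be landing somewhere other than the database the wiki is reading.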
Thanks
X
Dear Wikipedians,
I'm currently working on a project for which we would like to use
content from de.wikipedia.org. We have set up our own MediaWiki, and if
it is searched and no article is found, we would like to query Wikipedia
on that subject.
Now my question: is this all right to do? We would fetch only the HTML
between the start-content and end-content markers, and I assumed that
would cause no more load than a user reading the page in a browser...
Please let me know whether it is all right or not (I don't want to cause
any trouble).
Thanks in advance,
Tobias Hoppenthaler
Hello Wikipedians,
I am currently working on a project in which we want to take over
content from Wikipedia as follows:
A request is made to our wiki (MediaWiki platform), and if nothing is
found there, it searches wikipedia.de and embeds the article (the HTML
code between <!-- start content --> and <!-- end content -->), of course
with the legal notices.
Now my question: is this permissible? I can't quite make sense of the
text on the page
http://de.wikipedia.org/wiki/Wikipedia:Wikipedia_anderswo_verwenden
(with regard to live embedding and so on). Really, this is nothing more
than a perfectly ordinary HTTP request, so it shouldn't be a problem,
should it?
But before I go any further, I would of course like to know whether it
is legitimate or not.
Many thanks for the help!
Regards,
Tobias Hoppenthaler
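For what it's worth, the extraction step I have in mind is roughly this
(a sketch only; the URL pattern and error handling are my assumptions,
and it is untested):

<?php
// Fetch an article from de.wikipedia.org and return only the fragment
// between the start/end content markers, or null on any failure.
function fetchWikipediaContent( $title ) {
    $url = 'http://de.wikipedia.org/wiki/' . urlencode( $title );
    $html = @file_get_contents( $url );
    if ( $html === false ) {
        return null; // fetch failed
    }
    $start = strpos( $html, '<!-- start content -->' );
    $end = strpos( $html, '<!-- end content -->' );
    if ( $start === false || $end === false || $end <= $start ) {
        return null; // markers not found
    }
    $start += strlen( '<!-- start content -->' );
    return substr( $html, $start, $end - $start );
}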
Hi Brion,
Yep, I've done that. I've modified DefaultSettings.php:
$wgUseSquid = true;
$wgUseESI = false;
$wgInternalServer = $wgServer;
$wgSquidMaxage = 18000;
$wgSquidServers = array('10.234.169.202');
$wgSquidServersNoPurge = array();
$wgMaxSquidPurgeTitles = 400;
$wgSquidFastPurge = true;
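(As I understand it, the key ones are $wgUseSquid, which turns on squid
mode, $wgSquidMaxage, which sets the s-maxage MediaWiki sends, and
$wgSquidServers, which lists the caches that should receive purge
requests.)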
Squid and Apache are running on different servers, but that shouldn't
matter, right? If you were referring to something further, can you
point me in the right direction? What do you mean by having Squid
rewrite that for downstream? Is there any more documentation on this
other than the page on meta.wikimedia.org?
Thanks,
travis
Travis Derouin wrote:
> HTTP/1.1 200 OK
> Date: Thu, 01 Dec 2005 19:31:03 GMT
> Server: Apache/2.0.46 (Red Hat)
> X-Powered-By: PHP/4.3.10
> Content-language: en
> Vary: Accept-Encoding,Cookie
> Cache-Control: s-maxage=18000, must-revalidate, max-age=0
You need to turn on squid mode, so that MediaWiki sends out a different max-age
for cacheable pages, and then have Squid rewrite that for downstream. Then squid
won't have to talk to the apache at all for cached hits for anonymous visitors
with no session cookies.
Otherwise Squid still has to hit Apache/MediaWiki for everything to check for
304s, which is a lot slower than being able to return directly from cache.
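(For concreteness: with squid mode on, MediaWiki's response to Squid
carries a header along the lines of

Cache-Control: s-maxage=18000, must-revalidate, max-age=0

where s-maxage is the lifetime Squid itself may cache for and max-age=0
tells everyone else not to cache; rewriting what downstream clients see
happens on the Squid side, and the exact mechanism depends on your Squid
setup.)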
> But by tailing the logs and looking at how many times a popular page
> has been requested today, Apache is returning 200 codes for the same
> page just as many times as it is being requested from Squid, so Squid
> isn't serving any cached versions of the page.
>
> If the page hasn't changed, shouldn't it be returning a 304 header
> response saying it hasn't been changed if Squid was working properly?
Probably should be; you might want to investigate that.
-- brion vibber (brion @ pobox.com)
Hi,
We recently added a Squid cache for our wiki, and for the most part it's
working out quite well.
However, it doesn't look like popular articles are being cached
properly. I've followed the steps listed here:
http://meta.wikimedia.org/wiki/Squid_caching, the only change being
that we have a redirect script set up to redirect hostnames, described
here: http://wiki.ehow.com/Implement-Redirects-in-Squid, although
removing this doesn't seem to help.
What we are seeing is that Squid isn't caching articles, but keeps
requesting the same article from Apache. It does look like the
headers are being set by MediaWiki:
HTTP/1.1 200 OK
Date: Thu, 01 Dec 2005 19:31:03 GMT
Server: Apache/2.0.46 (Red Hat)
X-Powered-By: PHP/4.3.10
Content-language: en
Vary: Accept-Encoding,Cookie
Cache-Control: s-maxage=18000, must-revalidate, max-age=0
Last-modified: Thu, 24 Nov 2005 18:32:18 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
But by tailing the logs and looking at how many times a popular page
has been requested today, Apache is returning 200 codes for the same
page just as many times as it is being requested from Squid, so Squid
isn't serving any cached versions of the page.
If the page hasn't changed, shouldn't it be returning a 304 header
response saying it hasn't been changed if Squid was working properly?
Should the Expires header be set as well?
Any ideas on how this can be fixed, or how to find out what's going on?
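For reference, I've been checking headers with curl (the hostname here
is a placeholder for one of our article pages):

curl -s -I http://your.wiki.example/wiki/Main_Page

I understand Squid adds an X-Cache header to responses: "X-Cache: HIT"
means the page was served from cache, while repeated "X-Cache: MISS" for
an unchanged page means every request is being passed through to Apache.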
Thanks,
Travis
> Date: Thu, 01 Dec 2005 12:49:40 +0000
> From: Neil Harris <neil(a)tonal.clara.co.uk>
> Subject: Re: [Wikitech-l] Re: Credit card processing for forthcoming
> To: Wikimedia developers <wikitech-l(a)wikimedia.org>
> Cc: board(a)wikimedia.org
>
> I don't think that anyone is advocating dropping PayPal or any other
> existing payment option, just adding direct donation by credit card as
> another option.
Yes, I see. In that case, here are the major credit card payment systems
I would recommend the Foundation contact for information about rates:
Processors:
- First Data: http://www.firstdata.com/
- Paymentech: http://www.paymentech.com/
- Citibank Merchant Services: http://www.citibank.com/us/cards/merchant/
Gateways:
- Concord EFS Net: http://www.concordefsnet.com
- Cybersource: http://www.cybersource.com/
Verisign also has a credit card payments division; however, that was
just sold to eBay, which owns PayPal, so I'm not sure whether they're
accepting new customers.
I think almost all of these payment systems support
international credit card payments, though some may
require having a deposit account either with a
particular bank, or in a particular currency.
Technically, Concord probably has the simplest integration, since their
architecture is XML over HTTP(S). Integrating with a processor would be
more difficult, but at least it helps that the Foundation need only
implement one type of operation, auth-capture, since it does not deliver
goods to its payers; delivering goods normally requires deferring the
actual transfer of funds (i.e. the capture) until the time of shipment.
Another nice benefit of using only the auth-capture operation is that
the Foundation would never have to store credit card numbers in a
database.
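For illustration only, an auth-capture call to such a gateway tends to
be a single small XML document sent over HTTPS, something like this (the
element names are entirely hypothetical, not Concord's actual schema;
the card number is the standard Visa test number):

<AuthCaptureRequest>
  <MerchantId>EXAMPLE-MERCHANT</MerchantId>
  <Amount currency="USD">25.00</Amount>
  <Card number="4111111111111111" expiry="2006-12"/>
</AuthCaptureRequest>

The gateway authorizes and captures in one step and returns an approval
code, so no card number ever needs to be stored on the Foundation's side.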
Anyway, I'd be happy to help with more information or expertise,
including with the technical implementation, assuming my employer would
not consider that a breach of contract... :/
Yeah, it appears to be broken now.

Anthony DiPierro wrote:
> Hmm...
>
> I go to http://en.wikisource.org/wiki/User:Tim_Starling/ScanSet_TIFF_demo
>
> That gives me the list of ranges. I click on "ANDROS-AUSTRIA" (number 2).
>
> That takes me to
> http://en.wikisource.org/w/index.php?title=User:Tim_Starling/ScanSet_TIFF_d…
>
> But that page is the same as the previous page (the same identical
> ranges). Are you getting the same results? If so, maybe the script
> is just broken.
>
> On 12/1/05, Brian <brian0918(a)gmail.com> wrote:
>> You select one of the name ranges, and then it will list all the pages
>> for that range. You then select one of those pages to get the actual
>> image for that page.
>>
>> Anthony DiPierro wrote:
>>> Hmm, I tried that out and all I got was a list of name ranges. Must
>>> be doing something wrong. I'll try it again when I get home from
>>> work. Thanks for the info.
>
> Anthony
>
> On 12/1/05, Brian <brian0918(a)gmail.com> wrote:
>> Tim Starling has a demo of it at the link below. I don't know if he's
>> still doing anything with it though. You might ask on wikitech-l.
>>
>> http://en.wikisource.org/wiki/User:Tim_Starling