Hi Heinz,
run from shell:
php maintenance/rebuildMessages.php --rebuild
Cheers, Jimmy
> -----Original Message-----
> From: Wikimedia developers <wikitech-l(a)wikimedia.org>
> Sent: 11.09.06 22:00:16
> To: wikitech-l(a)wikimedia.org
> Subject: [Wikitech-l] How to import languageXX.php-files into db / Spezial:Allmessages
> I would like to import languageXX.php files into the MediaWiki database
> (Spezial:Allmessages). Already existing messages should be overridden.
>
> Is there an existing solution?
>
> THX, HeinzJ
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
An automated run of parserTests.php showed the following failures:
Running test TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test TODO: Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test TODO: Template with thumb image (with link in description)... FAILED!
Running test Template infinite loop... FAILED!
Running test TODO: message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test TODO: message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test TODO: HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test TODO: HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test TODO: HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test TODO: Parsing optional HTML elements (Bug 6171)... FAILED!
Running test TODO: Inline HTML vs wiki block nesting... FAILED!
Running test TODO: Mixing markup for italics and bold... FAILED!
Running test TODO: 5 quotes, code coverage +1 line... FAILED!
Running test TODO: HTML Hex character encoding.... FAILED!
Running test TODO: dt/dd/dl test... FAILED!
Passed 412 of 429 tests (96.04%) FAILED!
>Date: Mon, 11 Sep 2006 21:14:34 +0200
>From: Brion Vibber <brion(a)pobox.com>
>Subject: Re: [Wikitech-l] importDump.php error, WikiRevision given a
> null title in import.
>To: Wikimedia developers <wikitech-l(a)wikimedia.org>
>Message-ID: <4505B59A.80400(a)pobox.com>
>Content-Type: text/plain; charset="iso-8859-1"
>
>Noah Spurrier wrote:
>> I am able to import over 700 articles using importDump.php, but
>> part way through the process I get an exception complaining that
>> a WikiRevision was given a null title. Any tips on how to resolve this?
>> I'm using MediaWiki 1.7.1 with a wiki dump current as of August 17th.
>
>From a Wikimedia site? Add '+' to $wgLegalTitleChars.
>
>-- brion vibber (brion @ pobox.com)
That worked perfectly, although it seems strange that the default
MediaWiki configuration is not compatible with a Wikipedia dump.
I edited includes/DefaultSettings.php and added a '+' to line 175:
$wgLegalTitleChars = " %!\"$&'()*,\\-.\\/0-9:;=?@A-Z\\\\^_`a-z~\\x80-\\xFF";
to get this line:
$wgLegalTitleChars = " +%!\"$&'()*,\\-.\\/0-9:;=?@A-Z\\\\^_`a-z~\\x80-\\xFF";
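(For what it's worth, the same override can also live in LocalSettings.php
instead of a patched DefaultSettings.php, assuming a standard install where
LocalSettings.php is loaded after the defaults; that way the change survives
upgrades. A minimal sketch:)

// LocalSettings.php: override the default rather than editing DefaultSettings.php
$wgLegalTitleChars = " +%!\"$&'()*,\\-.\\/0-9:;=?@A-Z\\\\^_`a-z~\\x80-\\xFF";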
Yours,
Noah
Hello,
Is it possible to make MediaWiki generate readable and easy-to-remember
passwords? I.e., avoiding confusion between 1 and l, and using
combinations of letters that are pronounceable?
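For illustration, here is a rough sketch of the kind of generator I have in
mind (plain PHP, not anything MediaWiki ships; mt_rand() keeps it simple but
is not cryptographically strong):

<?php
// Sketch of a pronounceable-password generator: alternate consonants and vowels,
// and leave ambiguous characters (l, 1, o, 0, etc.) out of the alphabets entirely.
function generateReadablePassword( $syllables = 4 ) {
    $consonants = 'bcdfghjkmnprstvwz';   // no 'l' or 'q'
    $vowels     = 'aeiu';                // no 'o' (looks like 0)
    $password   = '';
    for ( $i = 0; $i < $syllables; $i++ ) {
        $password .= $consonants[mt_rand( 0, strlen( $consonants ) - 1 )];
        $password .= $vowels[mt_rand( 0, strlen( $vowels ) - 1 )];
    }
    // Tack on two unambiguous digits so the result isn't a dictionary-like word.
    $password .= mt_rand( 2, 9 ) . mt_rand( 2, 9 );
    return $password;
}

echo generateReadablePassword() . "\n";   // e.g. "bakemiru47"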
Thanks
-John
Simetrical wrote:
> See http://bugs.wikimedia.org/show_bug.cgi?id=5244 and the various
> things duped to it. I'm pretty sure performance would be a major
> issue here; for instance, finding the first 200 pages in a category is
> limited to iterating over 200 members of the category, and likewise
> for all other operations currently supported by categories (as well as
> unions), but finding the first 200 pages in the intersection of two
> categories has no upper bound on the number of iterations required:
> you have to go through every page in each category in the event that
> they have fewer than 200 shared pages and neither is a subset of the
> other.
>
> Has anyone written code that can handle this efficiently? Is such
> code even possible?
I (and I'm sure many others) have been following this topic on and off
for a long time. It seems pretty clear that the majority of the
community (as represented by people who have voiced an opinion about
it) wants this functionality, albeit with a few strong dissenters, but
the remaining issues are 1) how to implement it and 2) whether it can
be implemented efficiently.
After reading Brion's and others' comments, it sounds to me like the
developer community is open to the possibility that it can be
implemented efficiently. I myself have written a version using SQL on
the existing schema, but it was rejected as too inefficient.
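(For concreteness, the general shape of such a query on the existing
categorylinks schema is a self-join like the one below; the category names are
placeholders, and this is only an illustration of the pattern under discussion,
not the exact code that was rejected:)

// Illustrative only: find up to 200 pages that belong to both categories by
// self-joining categorylinks. The concern raised above is that MySQL may have
// to walk every member of both categories before it can return even the first
// 200 shared pages.
$sql = "SELECT a.cl_from
        FROM categorylinks AS a
        JOIN categorylinks AS b ON a.cl_from = b.cl_from
        WHERE a.cl_to = 'Category_A'
          AND b.cl_to = 'Category_B'
        ORDER BY a.cl_sortkey
        LIMIT 200";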
I think the next possible steps are for the development community to
come up with different acceptable implementations, and then toss them
back to the Wikipedia community (the main "customer" for this
functionality).
For the purposes of evaluating possible solutions, I think one key
question recently brought up here has been under-discussed: how often
will this be used? If it will be used very frequently, then the
solution will have to be more streamlined and efficient than if it's
going to get less usage. There have been objections to using various
SQL methods (including mine) on the existing structure, but I think
these discussions must happen in the context of expected usage. We
should determine whether an SQL-based solution is possible
(specifically in MySQL; we really need a MySQL expert to comment on
the performance of the join, EXISTS, and GROUP BY/COUNT approaches,
since we are throwing around a lot of conjecture about MySQL's inner
workings), or whether something else (like Brion's Lucene suggestion)
will be necessary.
Regards,
Aerik
Hello,
At the English Wikipedia, I clicked edit on Talk:British Isles (terminology)
and received this message:
--
User is blocked
Your user name or IP address has been blocked from editing.
You were blocked by Pathoschild for the following reason (see our
blocking policy):
Autoblocked because your IP address has been recently used by
"Ilikesheeeeeeeeeeep". The reason given for Ilikesheeeeeeeeeeep's
block is: "Violation of the Username policy (too long, confusing,
Your IP address is 72.14.192.5.
...etc.
--
This was obviously some kind of database fart because it disappeared
when I tried again, and because that IP address isn't close to mine :)
Anyway, just thought I'd mention it.
Steve
I'm trying to set up a more regular search index update for the Wikimedia sites.
To summarize how it's to work:
A process on maurus (the search build master) runs through the list of all
wikis, dumping their text and piping it to the search index builder program.
As each wiki completes, the newly built index is moved from the build directory
into the complete directory.
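(To make the flow concrete, here is a rough sketch of the master-side loop;
every path and command name in it is invented and merely stands in for the
real dumper and index builder:)

<?php
// Hypothetical outline of the build loop: for each wiki, dump its text, pipe it
// into the index builder, and only publish indexes that built cleanly.
$wikis = file( 'wikis.list', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
foreach ( $wikis as $wiki ) {
    $buildIndex    = "/search/build/$wiki.idx";
    $completeIndex = "/search/complete/$wiki.idx";
    // 'dump-wiki-text' and 'build-search-index' are placeholder command names.
    passthru( "dump-wiki-text $wiki | build-search-index $buildIndex", $status );
    if ( $status === 0 ) {
        // The search servers copy from the complete directory before their
        // hourly restart, so moving the file here is what "publishes" it.
        rename( $buildIndex, $completeIndex );
    }
}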
The lucene search servers currently restart themselves hourly as a precaution
against memory leaks; additionally before restart they will now do an rsync to
copy over any complete new indexes from the master. [There may be some
refinements to make on this, such as keeping 'live' and 'update' copies and
swapping them out during the restart.]
This should keep search index updates happening within a day or two, rather
than on the extremely long and irregular schedule we had before.
The build process has been running on maurus since last night; it's
currently about 1/3 of the way through enwiki and doesn't appear to have
spewed any errors. I'll check in on it again this evening, and if things
look OK I'll set up the synchronization process.
-- brion vibber (brion @ pobox.com)
I would like to import languageXX.php files into the MediaWiki database
(Spezial:Allmessages). Already existing messages should be overridden.
Is there an existing solution?
THX, HeinzJ
I am able to import over 700 articles using importDump.php, but
part way through the process I get an exception complaining that
a WikiRevision was given a null title. Any tips on how to resolve this?
I'm using MediaWiki 1.7.1 with a wiki dump current as of August 17th.
Do I need to provide more information?
# bunzip2 -dc /root/enwiki-latest-pages-articles.xml.bz2 | /var/www/PHP
maintenance/importDump.php
100 (257.70138911503 pages/sec 257.70138911503 revs/sec)
200 (215.85242404671 pages/sec 215.85242404671 revs/sec)
300 (194.56211852307 pages/sec 194.56211852307 revs/sec)
400 (188.27240827463 pages/sec 188.27240827463 revs/sec)
500 (189.25815368724 pages/sec 189.25815368724 revs/sec)
600 (177.78198503318 pages/sec 177.78198503318 revs/sec)
700 (177.9798646001 pages/sec 177.9798646001 revs/sec)
WikiRevision given a null title in import.
Backtrace:
#0 /var/www/usr/local/apache2/vhosts/10.3.0.13/wiki/includes/SpecialImport.php(620): WikiRevision->setTitle(NULL)
#1 [internal function]: WikiImporter->in_page(Resource id #51, 'revision', Array)
#2 /var/www/usr/local/apache2/vhosts/10.3.0.13/wiki/includes/SpecialImport.php(418): xml_parse(Resource id #51, 'idsummer Night'...', 0)
#3 /var/www/usr/local/apache2/vhosts/10.3.0.13/wiki/maintenance/importDump.php(110): WikiImporter->doImport()
#4 /var/www/usr/local/apache2/vhosts/10.3.0.13/wiki/maintenance/importDump.php(97): BackupReader->importFromHandle(Resource id #50)
#5 /var/www/usr/local/apache2/vhosts/10.3.0.13/wiki/maintenance/importDump.php(132): BackupReader->importFromStdin()
#6 {main}