Cross-posted to wikipedia-l and wikitech-l, I suggest you reply to wikitech-l.
I've just committed a change to make special page names case-insensitive and
localisable. The default name for a special page can be changed, but a
redirect from the English name will always be kept. At present, there are no
local sets of names committed, although one has been proposed for German. I
have created a wiki page for discussion and coordination of this task:
http://www.mediawiki.org/wiki/Special_page_names
The following is for developers, everyone else can stop reading here.
There are a few changes in coding practice associated with this change, for
both core and extension developers.
Instead of
Title::makeTitle( NS_SPECIAL, 'Contributions' );
use
SpecialPage::getTitleFor( 'Contributions' );
Instead of
Title::makeTitleSafe( NS_SPECIAL, "Blockip/$ip" );
use
SpecialPage::getSafeTitleFor( 'Blockip', $ip );
Instead of
($title->getNamespace() == NS_SPECIAL &&
$title->getDBkey() == 'Userlogout')
use
$title->isSpecial('Userlogout')
Only the last of these three changes is compulsory for extensions;
recognition of core special page names must be migrated. Old titles will
continue to work, so the first two changes are optional for extensions and
can be done at your leisure. All three changes should be considered
compulsory for core code.
Extension special pages can provide local special page names using the
LanguageGetSpecialPageAliases hook.
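A minimal sketch of such a hook handler, with the extension name, function
name, and alias invented for illustration:

$wgHooks['LanguageGetSpecialPageAliases'][] = 'wfMyExtensionAliases';

// Provide a German name for a hypothetical Special:MyExtension page.
function wfMyExtensionAliases( &$aliases, $langCode ) {
    if ( $langCode == 'de' ) {
        $aliases['MyExtension'] = array( 'MeineErweiterung' );
    }
    return true;
}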
Hard-coded special page names in messages should be changed simultaneously
when the special page names themselves are changed. Special page names are
language-dependent, not site-dependent, so there should be no need for a
{{SPECIALNAME:xxx}} function.
-- Tim Starling
Dear Ladies and Gentlemen,
Good day. I am Mike Tian-Jian Jiang (
http://www.linkedin.com/in/barabbas ), a team member of Wikimania 2007.
I'm currently trying to arrange the Hacking Days; here is a rough plan
that needs your valuable suggestions.
We would like to invite experts like you to give talks and to encourage
hackers to get involved in MediaWiki/Wikimedia development.
Please let me know if you have any advice on these outlines. For
academic professionals, we will also hold a conference for oral and
poster paper presentations.
Please also forward this mail to anyone who may be interested.
Thank you very much!
Sincerely,
Mike
Hacking Days Agenda plan:
* Wikimania 2006:
o http://wikimania2006.wikimedia.org/wiki/Hacking_Days
(the schedule MindMap is missing...)
o http://wikimania2006.wikimedia.org/wiki/Hacking_Days_Extras
* Wikimania 2005
o http://meta.wikimedia.org/wiki/Wikimania_2005_hacking_days or
o http://meta.wikimedia.org/wiki/Wikimania_2005:Hacking_Days
* MediaWiki API introduction: following the web 2.0 trend
o Query API
o http://meta.wikimedia.org/wiki/API
o Python Bot Framework
* MediaWiki API application contest: a contest like Google's and
Yahoo!'s, to attract hackers.
o Wikipedia Gadget
o Wikipedia Yahoo! Widget
* MediaWiki enhancement
o Wikiwyg
+ Ingy's AJAX version
+ Flash version
o Collaboration editing/versioning
o Site searching (Is there any plan to make Lucene available
for other languages besides English?)
o Improving Simplified-Traditional Chinese conversion
o Audio/Video processing/streaming
o Community: from "Talk" pages to a forum/BBS with a more
flexible reputation system.
o Spam/Captcha
* MediaWiki system administration
o Large-scale data processing; see:
+ http://radar.oreilly.com/tag/database
+ http://labs.google.com/papers/bigtable.html
o Load balancing
o "The great wall" problem
* Wikipedia content applications
o (Crosslingual) search: especially translation/transliteration
of named entities from Wikimedia content.
o (Crosslingual) question-answering: CLEF 2006 already has a
pilot task WiQA.
Hi,
I have downloaded the latest frwiki dump, in particular:
http://download.wikimedia.org/frwiki/latest/frwiki-latest-pages-meta-histor…
I have checked the md5.
I have tried to import it with the following command:
/home/kelson/tools/jre1.5.0_06/bin/java -server -classpath
/home/kelson/tools/mysql-connector-java-5.0.4/mysql-connector-java-5.0.4-bin.jar:/home/kelson/tools/mwdumper.jar
org.mediawiki.dumper.Dumper
--output=mysql://localhost/frwiki?user=*******\&password=********
--format=sql:1.5 frwiki-latest-pages-meta-history.xml.bz2 >& titi
And this is what I got on stderr:
8 pages (1,26/sec), 1000 revs (157,456/sec)
10 pages (0,712/sec), 2000 revs (142,339/sec)
16 pages (0,712/sec), 3000 revs (133,583/sec)
20 pages (0,631/sec), 4000 revs (126,267/sec)
28 pages (0,759/sec), 5000 revs (135,45/sec)
32 pages (0,744/sec), 6000 revs (139,519/sec)
Exception in thread "main" java.io.IOException: java.sql.SQLException:
Not a valid escape sequence:
{[Mm]sg:/{{/',530,'Orthogaffe','20040603204946',...........
Any idea what went wrong?
Kelson
Hi,
We've been using mysqldump to do daily full database backups in case
the hardware on our DB server fails. This causes some problems: for a
short period of four minutes or so, the site is inaccessible because
mysqldump has the DB locked.
I'm not too familiar with the maintenance/dumpPages.xml script, but it
doesn't back up the whole DB, including user accounts, recent changes,
links, etc., does it? And if it does, it probably doesn't avoid the
problem of having to lock the DB for a few minutes, right?
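(For what it's worth, I assume the invocation would be something like
"php maintenance/dumpBackup.php --full > pages.xml", if that's even the
right script.)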
Is there any reason why Squid reports this error to anonymous users
during that window, even for pages that should be cached? Squid does
seem to be caching pages properly otherwise.
If mysqldump is still the answer (I'm using the --quick option), are
there any other ways we can avoid this brief downtime when capturing a
backup? How does Wikipedia do this?
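For example, if all our tables are InnoDB, would something like the
following give a consistent snapshot without locking the site? (The
database name here is just a placeholder.)

mysqldump --quick --single-transaction wikidb > backup.sql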
Thanks a lot,
Travis
The most recent dumps (at least) for the German Wikipedia are from
September 25. Unfortunately, the dump processes on October 14 and 31 failed.
I know there is a lot of work to do, but I'm still hoping for new data
for our mirror.
Thank you in advance!
Jochen
During a massive dump import process (with mwdumper) involving the top 10 Wikipedias, I've found some 'odd' errors, for example this one while restoring the latest itwiki dump (2006-10-21, reported as OK):
.....
478,834 pages (154.138/sec), 4,603,000 revs (1,481.718/sec)
ERROR 1062 at line 2955: Duplicate entry '103-Matematica' for key 2
479,091 pages (153.95/sec), 4,604,000 revs (1,479.44/sec)
.....
From that point on, the recovery process fails and no more data gets into the MySQL database.
I've made a little check to test whether the total number of pages and revisions matches the figures on the web page, but sometimes I don't know how to handle these errors other than by downloading the next-oldest dump, and so on until one of them works (because all the dumps report as OK).
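The check is essentially just comparing the page and revision totals that mwdumper prints against counts from the database, roughly like this (with the database name being whichever wiki was imported):

mysql itwiki -e 'SELECT COUNT(*) FROM page; SELECT COUNT(*) FROM revision;'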
If you're not very careful you'd think the import process did its job, because approximately 85% of the pages were in the DB... Any way to solve these non-config errors?
BTW, I'm not chasing Brion at all... he does his best with the dump process and mwdumper.
Thanks, all the best.
Felipe.
tstarling(a)svn.wikimedia.org wrote:
> Revision: 17300
> Author: tstarling
> Date: 2006-10-29 22:25:31 -0800 (Sun, 29 Oct 2006)
>
> Log Message:
> -----------
> * Used special page subpages in a few more places, instead of query parameters
Sorry for the stupid question, but what subpages are now possible?
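For example, does this mean that links like [[Special:Contributions/Example]]
are now used in those places, instead of passing something like
?target=Example as a query parameter? (I'm guessing at the parameter name.)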
Raymond.
Can templates include non-wiki code?
I am trying to add a link that calls a JavaScript function.
It works fine when added to the article itself, but the template does not
seem to work.
Thanks!
-- details:
wiki:
<html><a href="javascript:function('ID')">LINK</a>
<div id="ID">TEXT</div></html>
template try:
<html><nowiki><a href="javascript:showhide('{{{ID}}}')">{{{LINK}}}</a>
<div id="{{{ID}}}">{{{TEXT}}}</div></nowiki></html>
wiki using template:
{{TEMPLATE_NAME|ID=someid |LINK=somelink |TEXT=sometext}}
Currently most of the existing classes and IDs used in the interface
can conflict with user-specified classes and IDs. Unless there are
objections, I will be changing all of them to begin with the
reserved prefix "mw-", as is the current practice for adding new ones,
and adding a check to stop content IDs (particularly for headers) from
beginning with "mw-". I may also consolidate some IDs that differ
between skins and change some that are otherwise inconsistent or weird.
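To illustrate the kind of check I have in mind for content IDs, here is a
hypothetical sketch (not the actual patch):

// Keep content-generated IDs out of the reserved "mw-" namespace
// by munging any ID that starts with the prefix.
function wfEscapeReservedId( $id ) {
    if ( strncasecmp( $id, 'mw-', 3 ) == 0 ) {
        $id = 'c-' . $id; // the escaping scheme here is illustrative only
    }
    return $id;
}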
When I finalize the changes, I'll post a full list of them here and on
enwiki VP:T a few days to a week before committing them, so that
people have a chance to update their scripts and CSS. Okay?
Could we please get the feature alluded to here:
<http://bugzilla.wikimedia.org/show_bug.cgi?id=5370> turned on for the
English Wikipedia (and probably the rest of the projects too)? At least
one vandal seems to have specialised in spamming people with password
reminders (he must have a bot), and a growing number of people have been
complaining about it. After replying to some threads about it on the admin
noticeboard on enwiki, I seem to have incurred the "wrath" of this
individual as well, since I found 112 password reminders in my mailbox
today.
Sure, they are easy to filter out, but we can't let spammers have free
rein like this. If there is such a "throttle" feature it needs to be
activated ASAP (how many times a day does the average user forget his
password?). If nothing else, all these mails are an unnecessary strain on
our mail system. Plus, some users have "threatened" to report it to
SpamCop and such, and we don't want our mail server blacklisted now, do we?
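If the throttle in question is the $wgPasswordReminderResendTime setting,
then I assume turning it on is just a matter of something like the following
in the site configuration (I'm guessing at the variable name from the bug
report):

$wgPasswordReminderResendTime = 24; // hours to wait before re-sending a reminder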
--
[[:en:User:Sherool]]