Hi,
Are there still problems with the servers?
I get this message on the French Wikipedia:
"Sorry! Due to technical problems, it is impossible to connect to the
database at the moment.
This is a copy of the requested page and may not be up to date."
Thanks,
Yann
--
http://www.non-violence.org/ | Collaborative site on non-violence
http://www.forget-me.net/ | Alternatives on the Net
http://fr.wikipedia.org/ | The free encyclopedia
http://www.forget-me.net/pro/ | Linux training and services
Hi,
Is it possible to install MediaWiki on a server on which I am not root?
Currently, I get
Fatal error: Call to undefined function: getallheaders() in
/home/ajh65/public_html/w/wiki.phtml on line 22
at http://www.srcf.ucam.org/~ajh65/w/wiki.phtml
Interestingly, the function "getallheaders" is indeed nowhere defined
in the entire source code of MediaWiki.
Any help?
Also, your install script has a little bug: passwords with an apostrophe
in them lead to invalid SQL syntax.
Thanks,
Timwi
Two recent posts about incremental backups:
http://mail.wikipedia.org/pipermail/wikitech-l/2003-September/006036.html
http://mail.wikipedia.org/pipermail/wikitech-l/2003-December/007134.html
Is anyone still working on it? Incremental backups would be very useful
for people like me who have a monthly transfer limit.
xdelta seems to be good for small files, but it has a 2 GB file size
limit. As I didn't manage to install xdelta2, I can't tell whether that
version can handle larger files. With two uncompressed files of 1.6 GB
and 1.9 GB, xdelta starts, but it needs more than 500 MB of main memory
and is very slow (10 MB of output after approx. 2 hours, when I stopped
it; my computer has 1 GB of main memory, so swapping is not the problem).
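For reference, the xdelta 1.x invocations look roughly like this (quoted
from memory, so the exact syntax may differ between versions; the file
names are only examples):
# create a delta between two uncompressed dumps
xdelta delta old_dump.sql new_dump.sql dump.xdelta
# reconstruct the newer dump from the older one plus the delta
xdelta patch dump.xdelta old_dump.sql new_dump_copy.sql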
There may also be other suitable programs like rdiff, but the users
have to install them (and may fail as I did with xdelta2). Some of these
programs may not be available for non-Unix operating systems.
It's also possible to extract the difference information for the old
table from the binary log, but for cur this is impractical because there
are often multiple updates of the same article within a short period of
time.
In my opinion, the most natural file format is a sequence of SQL
statements that can be sent directly to the database to update it,
whether extracted from the binary log or from the dumps.
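Such a diff file could be fed straight to MySQL, for example (just a
sketch; the database name, user and file name are placeholders):
# apply an SQL diff to a local copy of the database
bzcat diff.sql.bz2 | mysql -u wikiuser -p wikidb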
I've written two simple programs that work on the dumps (see attachment).
Only Perl is needed to run them. "wsqldiff" compares two dumps and writes
the difference as SQL statements; "wsqlpatch" applies such a diff and is
meant for people whose programs read the dumps directly: it produces an
exact copy of the newer dump that wsqldiff was run against, so users can
compare the MD5 sums to check whether the update was successful.
Here are the results of some tests:
Test files (German database):
20040109_cur_table.sql.bz2: 33.7 MB
20040117_cur_table.sql.bz2: 34.9 MB
20040109_old_table.sql.bz2: 429.9 MB
20040117_old_table.sql.bz2: 450.1 MB
export LANG=C # otherwise the umlauts get converted :-(
wsqldiff 20040109_cur_table.sql.bz2 20040117_cur_table.sql.bz2 |
bzip2 -c9 > cur_diff.sql.bz2
(17 min, Athlon XP 2400+)
wsqldiff 20040109_old_table.sql.bz2 20040117_old_table.sql.bz2 |
bzip2 -c9 > old_diff.sql.bz2
(117 min)
cur_diff.sql.bz2: 7.8 MB
old_diff.sql.bz2: 22.1 MB
wsqlpatch 20040109_cur_table.sql.bz2 cur_diff.sql.bz2 |
bzip2 -c9 > 20040117_cur_table2.sql.bz2
(10 min)
wsqlpatch 20040109_old_table.sql.bz2 old_diff.sql.bz2 |
bzip2 -c9 > 20040117_old_table2.sql.bz2
(80 min)
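The result of the two patch runs can be checked by comparing the MD5 sums
of the decompressed dumps, e.g.:
bzcat 20040117_cur_table.sql.bz2 | md5sum
bzcat 20040117_cur_table2.sql.bz2 | md5sum
# both sums should be identical if wsqlpatch reproduced the dump exactly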
Memory usage of wsqldiff is moderate (64 MB for the German old table,
so probably about 300 MB for the English old table). wsqlpatch needs
much less memory (about 10 MB, largely independent of the file sizes).
The Perl scripts are quite slow, but maybe fast enough for the cur
tables. I'd like to know what is planned for the old tables. Will
they be dumped periodically in the future, or is that only a temporary
solution? If I understand it correctly, dumping means that the
database must be locked, which is bad unless the dump is made on a
replicated server.
As an alternative, the binary log could be archived. But in that
case someone with access to it on the Wikipedia server would have to
write a program that extracts the statements that alter the old table,
which probably involves nothing more than selecting the lines that start
with "INSERT INTO old", "DELETE FROM old" or "UPDATE old"
(using mysqlbinlog, of course).
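Roughly like this, as a sketch (the binlog file name is made up, and
statements that span several lines would need a little more care):
# extract only the statements that touch the old table
mysqlbinlog binlog.001 |
grep -E '^(INSERT INTO old|DELETE FROM old|UPDATE old)' > old_updates.sql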
BTW, when I wrote the programs I noticed that in
maintenance/tables.sql the last two fields of the cur table
are cur_touched and inverse_timestamp in that order, whereas in the
dumps the order is reversed.
de:Benutzer:El
Here is an alternative solution.
We could split the x GB dump file into 50 MB chunks.
As long as the database format does not change, all but the last chunk
will be unaltered on subsequent runs.
For example:
Dump and split in week 10 produces Chunk_1 up to Chunk_39 (all but the
last are 50 MB in size).
Dump and split in week 11 produces Chunk_1 up to Chunk_40 (only 39 has
changed, 40 is new).
So only 2 chunks need to be downloaded since the last run.
Then a join operation on all files produces an up-to-date dump file.
A small script to manage this process would come in handy.
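Something along these lines might do as a rough first sketch (the chunk
size and file names are made up; only the standard split, md5sum and cat
tools are assumed):
# after each dump: split into 50 MB chunks and publish their checksums
split -b 50m 20040117_cur_table.sql chunk_
md5sum chunk_* > chunk_sums.txt
# on the user's side: compare chunk_sums.txt with the previous week's copy,
# download only the chunks whose sums changed (plus any new ones), then rejoin
cat chunk_* > 20040117_cur_table.sql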
Erik Zachte
There has been a lot of talk on WikiEN-l about the need for citations in
articles. I tend to agree with that. But our current system of wiki refs only
encourages the creation of footnote-like references to external websites
(which is less than ideal).
I have put together a proposal that, if enacted, would create a more
word-processor-like footnote system that could be used for all types of
footnotes (web, ISBN, journal articles, and written-out dead-tree citations).
See and respond on:
http://meta.wikipedia.org/wiki/Footnotes
-- Daniel Mayer (aka mav)
There seems to be a serious technical problem with the mailing list. I,
and at least one other member, didn't get the following mail. I also
can't find it in the archive. Please, could someone look into that? And
is there a way to find out which other mails I might have missed?
Kurt
On Saturday, January 24, 2004 8:38 PM,
Jimmy Wales <jwales(a)bomis.com> wrote:
> In force. However, do keep in mind that they can be amended, so I'm
> eager to get feedback. I've already got a collection of notes from
> a variety of people who have proposed changes, and I'm very receptive
> to all the changes proposed to me so far.
>
> Arne Klempert wrote:
>
>> On Tuesday, January 20, 2004 12:37 AM,
>> Jimmy Wales <jwales(a)bomis.com> wrote:
>>
>>> Accessible now from
>>> http://www.wikimediafoundation.org/
>>
>> Short question from Germany:
>>
>> Is this a proposal or are these bylaws in force?
>>
>> Arne
>> [[de:Benutzer:Akl]]
On Sat, 24 Jan 2004 08:21:41 +0100, Jens Frank <JeLuF(a)gmx.de> wrote:
> You're sure their machines are in Europe?
Jens,
it appears that even though they have an egs.edu domain hosted in
California, they also operate out of Switzerland, specifically from
egsuniversity.ch (the phone number given at the bottom of egs.edu matches
the one at http://www.egsuniversity.ch/index.php?option=contact&Itemid=6).
egsuniversity.ch is definitely in Switzerland, and my packets from Europe
route from GEANT/DANTE over gblx, then Tiscali, and end up at
server-r001.hostpoint.ch after 17 hops.
Cheers,
Ivan