It's interesting to look at the HTTP request charts at
http://wmperf.mine.nu:8043/wmperf/index.org.wikimedia.all-squids.html
For example, last night at 22:00 and 3:00 UTC (afternoon to evening in the
US) there were two big traffic spikes, easily handled by the squids'
caches. Does anyone have ideas about what may cause them? Big sites linking
to us?
Alfio
> After nearly a year of not upgrading the MediaWiki software for my
> Disinfopedia (www.disinfopedia.org), I finally got around to it today.
> I got everything working at a temporary URL except that passwords
> aren't validating properly. When I try to log in, I get a message that
> says:
>
>> Login error:
>> The password you entered is incorrect. Please try again.
Sheldon, add this line to your LocalSettings.php:
$wgPasswordSalt = false;
This will turn on compatibility with old passwords.
-- brion vibber (brion @ pobox.com)
After nearly a year of not upgrading the MediaWiki software for my
Disinfopedia (www.disinfopedia.org), I finally got around to it
today. I got everything working at a temporary URL except that
passwords aren't validating properly. When I try to log in, I get a
message that says:
>Login error:
>The password you entered is incorrect. Please try again.
If I create a new user account, the password works properly, but
existing user accounts all seem to give me the error message.
Fortunately, for the time being everything is still working properly
at my usual URL, but can someone help me figure out how to fix this?
I searched the Wikitech archives and came across the following
message from March 2003, which may have some bearing on my problem:
>Message: 7
>From: "Tim Starling" <ts4294967296@hotmail.com>
>To: wikitech-l@wikipedia.org
>Subject: Re: [Wikitech-l] What, no salt?
>Date: Mon, 31 Mar 2003 09:24:03 +1000
>Reply-To: wikitech-l@wikipedia.org
>
>
>>Obviously we'd have to add a note explaining that everyone has to reset
>>their password. Not everyone has an e-mail address attached to their
>>account, so we'd need to add a web form for doing this. That obviously
>>would require first validating the person with their current password
>>with the current hashing code; so we'd probably need a marker to
>>indicate that each user's password field is upgraded.
>
>No-one will have to reset their password. I'll just use md5(md5(password) +
>salt) for the new hash. The only thing users will notice is that their
>stored cookies will stop working and they'll have to log in again.
>
>-- Tim Starling.
Apparently the password validation scheme was modified around that
time to add "salt" to the password hash. Is it possible that this is
the cause of my problem?
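If I understand Tim's message correctly, the two schemes would look
something like this in PHP (my own sketch, not MediaWiki's actual code;
the function names and exact salt value are my assumptions):

  // Old scheme: unsalted md5 of the password.
  function oldHash( $password ) {
      return md5( $password );
  }

  // New scheme per Tim's description: md5( md5( password ) . salt ).
  // Because the new hash wraps the old one, stored unsalted hashes can
  // be upgraded in place without anyone resetting their password.
  function newHash( $password, $salt ) {
      return md5( md5( $password ) . $salt );
  }

  // The compatibility path that $wgPasswordSalt = false; presumably
  // selects: compare against the old, unsalted form.
  function checkOldPassword( $password, $storedHash ) {
      return oldHash( $password ) === $storedHash;
  }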
--
--------------------------------
| Sheldon Rampton
| Editor, PR Watch (www.prwatch.org)
| Author of books including:
| Friends In Deed: The Story of US-Nicaragua Sister Cities
| Toxic Sludge Is Good For You
| Mad Cow USA
| Trust Us, We're Experts
| Weapons of Mass Deception
--------------------------------
Alfio wrote:
> The table+calendar for each year is around 18Kbytes and >100 links,
> so if it has been added to 2000+ years, that's easily enough to
> account for the increase in database size, and I suspect in words and
> links.
Right on the mark. I checked a random year:
http://pl.wikipedia.org/wiki/1880
The 7 weekday links each occur 12 times on the page, and other duplicates
occur as well. A bit wasteful in my view, but with the new server, who
cares about performance :)
Perhaps some other bot will tidy things up sometime.
Anyway, I adapted the stats script: each link will only be counted once per
article.
pl: the number of internal links dropped from 598K to 373K (62% of the old count)
For comparison:
nl: dropped from 312K to 292K (94%)
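The change amounts to collecting each article's link targets into a set
before counting; roughly like this in PHP (the actual scripts are Perl,
and these names are just for illustration):

  // Count each link target at most once per article, by using
  // array keys as a set rather than counting every occurrence.
  function countUniqueLinks( array $articles ) {
      $total = 0;
      foreach ( $articles as $title => $linkTargets ) {
          $seen = array();
          foreach ( $linkTargets as $target ) {
              $seen[$target] = true; // duplicates collapse onto one key
          }
          $total += count( $seen );
      }
      return $total;
  }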
New Perl scripts will be ready for upload probably tomorrow.
They will also add some data on most active contributors:
edits in last 30 days, ranking now and 30 days ago.
---------
Camille/Shaihulud,
Did you produce the stats on your own PC, from downloaded dumps?
Brion used to do just that until a few months ago (I have trouble
downloading the largest dumps intact myself), but in recent months he ran
them directly on the server (which, by the way, is why they were not
updated in recent weeks, with all that server shuffling).
Downloading all dumps each week seems quite a hassle. But if Brion gave you
server access, that's fine with me; one less monkey on his back. I just
need to know whom to address for occasional updates.
Erik Zachte
Erik placed the following notice on the IP block page:
<< This IP address is blocked for editing because it belongs to an
anonymizing proxy. If you really need anonymization, please contact Ed
Poor (Edmund.W.Poor at abc dot com), and he'll set up something safe for
you through the underground. >>
I don't know if this was meant as a joke, or to make a point. But it's
not a public service I can provide. What I meant was that people could contact
me privately; note that writing to me at my work address is NOT PRIVATE.
Employers have the right to read e-mail, court orders can force them to
divulge contents, etc.
Please take my name and e-mail address out of the IP block notice
quickly.
Thank you.
Ed Poor
Hello,
I would like to write a little utility that, given a text, will automatically substitute all the words or expressions in it that match titles of Wikipedia articles with links to the corresponding Wikipedia article.
In order to do this efficiently, and without putting too much strain on the Wikipedia search function, I would need to get an up to date list of the titles of the existing Wikipedia articles.
I could download the SQL dump and extract it from there but, as such a list would probably be of general interest, wouldn't it be possible to generate it together with the SQL dumps and make it available from download.wikipedia.org?
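To give an idea of what I have in mind, here is a rough PHP sketch (all
names are made up, and this naive version only handles single-word,
exact-match titles):

  // Load the list of article titles (one per line) into a set for fast lookup.
  function loadTitles( $filename ) {
      $titles = array();
      foreach ( file( $filename, FILE_IGNORE_NEW_LINES ) as $line ) {
          $titles[$line] = true;
      }
      return $titles;
  }

  // Replace every word that exactly matches an article title with a wiki
  // link. Multi-word titles would need longest-match scanning instead.
  function linkify( $text, array $titles ) {
      $parts = preg_split( '/(\W+)/u', $text, -1, PREG_SPLIT_DELIM_CAPTURE );
      foreach ( $parts as $i => $part ) {
          if ( isset( $titles[$part] ) ) {
              $parts[$i] = "[[$part]]";
          }
      }
      return implode( '', $parts );
  }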
Thanks in advance for your attention
titto assini
We have investigated the reason for Wikipedia's intermittent lockups a bit
by following vmstat output on the different machines. All machines,
including the DB, seem to lock up at the same time, with bi/bo and in
(blocks in/out and interrupts) dropping to 0 or near-0.
The obvious conclusion is that this is related to NFS. As a first measure I
would propose moving /apache/common onto the local hd on the Apaches. At
the moment, all PHP scripts are validated against the NFS server for
changes on every execution; this should be the major source of load.
Additionally, zwinger is a massive single point of failure right now. If it
goes down, we're hosed. I'm currently trying to set up Coda
(http://www.coda.cs.cmu.edu/) and Heartbeat on my LAN. These are part of
the LVS project and should provide a solution for our needs
(http://www.linuxvirtualserver.org/HighAvailability.html), both
performance- and HA-wise.
--
Gabriel Wicke