I was thinking about a printed wikipedia.
Encyclopedias are usually printed in alphabetical
order. As there is no absolute order in Wikipedia, we
have given each article the best possible
title, but not the one under which it would be correctly filed
in an ordered list. For example, biographies are listed
with the given name first, while
in a printed work we would look them up by last name...
detailed articles about history, economy, etc. of
countries would be all printed together, and far away
from the country itself, and so on.
Would it be possible to add a tag to each article
that says where it should be placed in an
alphabetically ordered list? For example, in Bill
Clinton the tag would
be something like <alpha>Clinton, Bill</alpha>.
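A hypothetical sketch of how such a key could drive an alphabetical listing (the per-article sort key mirrors the proposed <alpha> tag; the dict layout and function name here are purely illustrative, nothing like this exists in the software):

```python
# Each article carries an explicit sort key, as the proposed
# <alpha>...</alpha> tag would supply; titles stay as-is for display.
articles = [
    {"title": "Bill Clinton", "alpha": "Clinton, Bill"},
    {"title": "History of France", "alpha": "France, history of"},
    {"title": "Aardvark", "alpha": "Aardvark"},
]

def print_order(articles):
    """Return titles in the order a printed encyclopedia would file them."""
    return [a["title"] for a in sorted(articles, key=lambda a: a["alpha"])]
```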
What do you think?
AstroNomer
__________________________________________________
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com
I apparently got this email from the advertisement on
[[en:Wikipedia:Administrators]].
I don't have time to look into it right now.
>Date: Fri, 24 Jan 2003 14:54:16 +0000
>From: George Richard Russell <George.Russell(a)cis.strath.ac.uk>
>To: toby+wikipedia(a)math.ucr.edu
>Subject: Access
>It seems there is a ban on the University of Strathclyde's web proxy,
>so no user can edit a page. This affects about ~15000 web users.
>George
Since my plans for a nice, calm evening of backup & language setup were
spoiled by slashdot, I've gotten a little tired of the performance
guessing game. I've started adding some profiling code to actually show
where the bottlenecks are; some preliminary stuff is put into CVS and
running on meta.wikipedia.org.
Set $wgProfiling = true; in LocalSettings.php to enable profiling.
Functions to be profiled (or sections of functions) should be wrapped in
wfProfileIn( "functionnamegoeshere-nospacesplease" ) and wfProfileOut()
calls (before all return points!). The output goes into the debug log
file. An example:
20030123172827 0000.587 /wiki/Meta.wikipedia.org_technical_issues
at 0000.043 in 0000.001 ( 0.1%) - main-misc-setup
at 0000.043 in 0000.002 ( 0.4%) - * Article::getContent
at 0000.046 in 0000.000 ( 0.1%) - *** OutputPage::removeHTMLtags
at 0000.047 in 0000.000 ( 0.1%) - *** OutputPage:replaceVariables
at 0000.048 in 0000.003 ( 0.5%) - *** OutputPage::doBlockLevels
at 0000.051 in 0000.016 ( 2.7%) - *** OutputPage::replaceExternalLinks
at 0000.067 in 0000.050 ( 8.5%) - *** OutputPage::replaceInternalLinks
at 0000.046 in 0000.071 (12.1%) - ** OutputPage::doWikiPass2
at 0000.046 in 0000.071 (12.1%) - * OutputPage::addWikiText
at 0000.043 in 0000.074 (12.6%) - Article::view
at 0000.117 in 0000.000 ( 0.0%) - * OutputPage::output-headers
at 0000.117 in 0000.000 ( 0.0%) - ** Skin::initPage
at 0000.118 in 0000.005 ( 0.8%) - ** Skin::doBeforeContent
at 0000.117 in 0000.006 ( 1.0%) - * OutputPage::output-middle
at 0000.123 in 0000.448 (76.3%) - * OutputPage::output-bodytext
at 0000.578 in 0000.009 ( 1.5%) - *** Skin::quickBar
at 0000.571 in 0000.016 ( 2.8%) - ** Skin::doAfterContent
at 0000.571 in 0000.016 ( 2.8%) - * OutputPage::output-after
at 0000.117 in 0000.470 (80.0%) - OutputPage::output
Note that subcalls appear above their callers, just to make things more
interesting. ;)
Most of our time is often spent in OutputPage::output; the precise
section where the time gets spent seems to vary, and I suspect it
depends on output buffering. It may make sense to move the profile-
summing bit to before the output actually starts, to make it easier to
see what the rendering is really doing.
In rendering land, the winner is OutputPage::replaceInternalLinks(). The
more links in the page, the longer (and longer) this takes. Through the
link cache, it makes a database query for every individual linked page
to check whether it exists. This should be profiled in more detail to see how
much of that time is spent in db queries; but I suspect that pre-filling
the link cache from the links and brokenlinks tables would speed this
up.
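The pre-filling idea amounts to replacing N per-link queries with one batched query. A rough sketch (using sqlite3 and a made-up one-column cur table purely for illustration; the real schema and link cache are MediaWiki's own):

```python
import sqlite3

# Toy stand-in for the article table; real MediaWiki has much more.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cur (cur_title TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO cur VALUES (?)", [("Foo",), ("Bar",)])

def prefill_link_cache(titles):
    """One batched query instead of one query per linked page."""
    if not titles:
        return {}
    qmarks = ",".join("?" * len(titles))
    rows = conn.execute(
        f"SELECT cur_title FROM cur WHERE cur_title IN ({qmarks})", titles)
    existing = {r[0] for r in rows}
    return {t: (t in existing) for t in titles}
```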
-- brion vibber (brion @ pobox.com)
Hi,
it has been frequently requested that there should be a way to contact
anonymous users. In the past few days, I've hacked together a solution
for this. It requires that you run patch-usernewtalk.sql, and there are
also a couple of new texts to translate.
It works as follows:
1) In the recent changes list (both normal and extended), there's now a
"Talk" link, both for logged in and anonymous users. (This was necessary
because the link for anons goes to their contribution list, which is
useful, but from there you can't get to the talk page.)
2) Anonymous users also get the "Talk" link in the upper right corner.
They don't get a user page link (let's not give them too much).
3) When the Talk page of a user is changed, a row is inserted into the
newly created user_newtalk table. If it's a signed-in user, the user_id
column is filled with their ID; if it's an anon, the user_ip column is
filled with their IP address.
4) When the Talk page is changed, instead of the "*", there's now a "You
have new messages" link next to the "Printable version" link. (Our top
bar is cluttered on low res, I agree, but I'd rather put "Printable" on
the bottom and omit the top "Older versions" link.)
5) When the user visits the Talk page, the row is removed from
user_newtalk.
6) A text is automatically appended to talk pages of anonymous users
(which are identified using the \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} regex
pattern) explaining that the IP address may refer to several users, not
just one, and encouraging users to create an account to avoid confusion.
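The flow in points 3) through 6) can be sketched like this (the function names and the in-memory set standing in for the user_newtalk table are illustrative assumptions, not Erik's actual PHP):

```python
import re

# Anons are identified by the dotted-quad pattern from point 6
# (dots escaped so they match literal periods).
ANON_RE = re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")

new_talk = set()  # stand-in for the user_newtalk table

def on_talk_page_changed(user):
    """Point 3: record that this user (ID or IP) has new messages."""
    new_talk.add(user)

def on_talk_page_visited(user):
    """Point 5: clear the flag; returns whether a notice was pending."""
    pending = user in new_talk
    new_talk.discard(user)
    return pending
```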
- - -
Numerous changes had to be made in particular to User.php;
patch-usernewtalk.sql creates the new table and removes the user_newtalk
column from the user table.
I tested as much as I could, but I would appreciate if this could be put
on test.wikipedia.org for some testing, preferably before the next
accidental update.
All best,
Erik
--
FOKUS - Fraunhofer Institute for Open Communication Systems
Project BerliOS - http://www.berlios.de
Is it my imagination, or have Wikipedia articles lost
almost all their Google page rank? Check out and
test pages listed on:
http://www.wikipedia.org/wiki/Top_10_Google_hits
Every one I tested was no longer in the top 10. For
example, searching for <"Cartesian product"> doesn't
seem to find our Wikipedia article (which was the #1
hit) and more shocking is that < "Cartesian product"
Wikipedia> doesn't bring up the current .org article
either! See
http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&as_qdr=all&q=%22Ca…
It is as if everything in www.wikipedia.org/wiki/ is
being totally ignored by Googlebot for some reason.
-- Daniel Mayer (aka mav)
I've tweaked the random page selection to drop pages from the queue as
they are picked, then refill the queue once it's empty.
Compared to the previous behavior, this should:
a) prevent duplicate selections for people who keep sitting there and
clicking the damn button over and over
b) avoid the "random only shows the main page" bug on smaller, newer
wikis that haven't had enough traffic to trigger a refill of the random
queue.
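A sketch of that queue behavior (the function names and list-based queue are mine; the real logic lives in the PHP random-page code):

```python
import random

def refill(page_ids):
    """Build a freshly shuffled queue of all page ids."""
    queue = list(page_ids)
    random.shuffle(queue)
    return queue

def random_page(queue, page_ids):
    """Pop one page, refilling the queue only once it runs dry.

    Pages are dropped from the queue as they are picked, so repeated
    clicking can't return duplicates until a full cycle completes.
    """
    if not queue:
        queue.extend(refill(page_ids))
    return queue.pop()
```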
-- brion vibber (brion @ pobox.com)
For those who may not be aware, the Wikipedia mailing list addresses at
nupedia.com *no longer work*. They were maintained as forwarding
addresses for a while after the lists were moved to the wikipedia.org
server, but have been removed recently (see forwarded message below).
If you're having trouble sending e-mail to the Wikipedia mailing lists,
check the send-to address and if necessary replace nupedia.com with
wikipedia.org.
-- brion vibber (brion @ pobox.com)
-----Forwarded Message-----
From: Jimmy Wales <jwales(a)bomis.com>
To: wikitech-l(a)wikipedia.org
Subject: Re: [Wikitech-l] canceling @nupedia.com forward to the @wikipedia.org lists
Date: 13 Jan 2003 08:11:36 -0800
I have no opposition to removing these forwards now. People have
had enough time to adjust to the new addresses.
Giskart wrote:
> For several months I have received spam almost every day on the lists
> Intlwiki-l and Wikitech-l.
>
> see also
> http://www.wikipedia.org/pipermail/wikitech-l/2002-December/001719.html
>
> Those spams always come via the old posting addresses of the lists,
> wikitech-l(a)nupedia.com and intlwiki-l(a)nupedia.com
>
> Can the forward be removed?
>
> --
Hello,
I just read about Wikipedia and the Wiktionary for the first time this
morning on www.heise.de, and I'm impressed! Good work.
Getting on-topic again: I could not find any information on setting up
mirrors (the How to become a Wikipedia hacker page is a bit incomplete
:) Is there any best-practice way to do it that avoids transferring
SQL dumps of the whole database every day?
Do MySQL or PostgreSQL have +working+ replication, transmitting just
the changes/additions, and has anybody ever used them to replicate Wikipedia?
cheers,
buraq
On Tuesday 21 January 2003 02:57 am, Brion Vibber wrote:
> You clearly need to get a better text editor! That should be a single
> search-and-replace operation; change double line breaks to single line
> breaks.
Just as I thought - it /is/ a technical problem with a technical solution.
Thank you for giving me the information needed to relatively easily fix these
types of broken pages. Brion to the rescue again! :-)
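The search-and-replace Brion describes amounts to something like this (a sketch; any editor's find-and-replace that collapses doubled line breaks works equally well):

```python
import re

def fix_line_breaks(text):
    # Collapse double line breaks into single ones, turning
    # one-sentence-per-paragraph pages back into normal paragraphs.
    return re.sub(r"\n\n", "\n", text)
```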
Sorry you got caught, eh? ;)
Ha ha - no. I felt perfectly justified in what I did since, in my view, it was
the best thing for the page. I wasn't trying to cover anything up (if that
even is possible in a wiki). The only thing I regret is that I did this in
what can only be described as a cold and possibly heartless manner: I should
have explained what I did and why, in order to prevent Anthere from thinking
that I thought the page was vandalized. For that I am sorry.
--mav (If I post any more often to this list, I'm going to have to work on
Wikikarma for Wikitech-l too. Unfortunately I'm code-illiterate, so I'd
better just reduce the number of posts I make to this list ;-)