Hi people.
This is my first post to wikitech-l, and it's a biggun. I've
been mulling over this idea for a while now and have finally
gotten the electrons moving...
I propose a number of changes to the Wikipedia software to
enable "software assisted context resolution", or if you
prefer "software assisted disambiguation". The primary purpose
of these changes is to allow users to more easily resolve
ambiguous links at the time they are created, or if necessary,
at some later stage. I stress that this is not "automatic"
link resolution, although the process will be invoked
automatically in many cases.
The process, detailed below, is invoked in two ways. The
first would be manually and explicitly by the user. The
second would be automatic, when an article with new or
modified links is saved.
To enable this process to be invoked manually I propose that
a new meta-link be added to the footer of every page that
contains at least one link (most pages). IMHO, an appropriate
position would be between "Edit this page" and "Discuss this
page". It would read "Resolve links", and invoke a page titled
"Resolving links from (real title)". This new page would look
identical to the original except that the destination of each
link is changed to a "context selection" page as detailed below.
Note that it is entirely possible (and probable) that many links
will point to UNambiguous pages. "Context selection" pages will
still be generated for these links as the user may have found
the first instance of ambiguity and will need to deal with it.
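To illustrate the mechanics (a rough PHP sketch of mine, with an invented
special-page name -- not a finished design): on the "Resolving links" view,
every link's destination would simply be routed through a context-selection
URL.

  # Point a link at the "context selection" page for its target,
  # rather than at the target article itself.
  function contextSelectionHref( $target )
  {
      global $wgServer;
      return $wgServer . "/wiki/Special:ContextSelection?target=" .
          urlencode( $target );
  }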
The "context resolution" process would also be invoked when an
article is saved, and the article contains new or modified
links. In this case the "context resolution" page would not be
a mimic of the real page. Rather, a short list of new or modified
links would be generated in the form of an alphabetized list.
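Extracting that list could be as simple as diffing the link targets of the
old and new text (a sketch only; it ignores piped link text and treats
modified links as new):

  # Collect link targets present in the new text but not the old,
  # alphabetized.
  function newOrModifiedLinks( $oldText, $newText )
  {
      preg_match_all( "/\[\[([^|\]]+)/", $oldText, $old );
      preg_match_all( "/\[\[([^|\]]+)/", $newText, $new );
      $links = array_diff( $new[1], $old[1] );
      sort( $links );
      return $links;
  }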
The "context selection" pages are generated from the articles
currently known as "disambiguation pages". The bulleted list
found in these articles is transformed into a set of radio
buttons. In addition, a radio button is generated that basically
means "unresolved". At the bottom of this list is a small form
to allow new links and associated context descriptions to be
added. Whichever option is selected from this page, the link in
the calling page is adjusted to point to the selected destination
article, with the original text preserved by using the pipe trick.
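The adjustment itself would be mechanical; roughly (a sketch handling only
bare, unpiped links):

  # Rewrite [[Perth]] as, say, [[Perth, Australia|Perth]]: the link now
  # points at the chosen article while the displayed text is unchanged.
  function adjustLink( $text, $oldTarget, $newTarget )
  {
      return preg_replace(
          "/\[\[" . preg_quote( $oldTarget, "/" ) . "\]\]/",
          "[[" . $newTarget . "|" . $oldTarget . "]]",
          $text );
  }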
A by-product of these changes will be that the "context selection"
pages will, in the main, be updated by the wiki software (as
opposed to hand-edited). This should make it possible to more
tightly control the layout of these pages, perhaps with the
addition of subheadings like "People", "Places", "Things".
Further, when the "Edit this page" link is clicked on a "context
selection" page, the normal edit page is replaced by a purpose-
built form for editing such pages.
I understand that there are probably a million reasons why some
aspect(s) of the above will be difficult or impracticable. I hope
that the general concept is possible and feasible.
Gary Curtis
[[User:Gaz]] on Wiki
<wikiman.at.freemail.dot.com.dot.au> for all Wiki email
This mail is quite long: I've 'included' some PHP code, and a small
'article' at the end. I did it that way rather than attaching files...
> -----Original Message-----
> From: Toby Bartels [SMTP:toby+wikipedia@math.ucr.edu]
> Date: Monday, 31 March 2003 07:02
> To: wikitech-l(a)wikipedia.org
> Subject: Re: [Wikitech-l] Using TeX
>
> Michel Mouly wrote:
>
> >However, when I looked at the details, I found (to my taste) too many
> >limitations in the approach followed. Then I wrote an extension to
> >outputpage.php to replace the call to texvc by calls to pdflatex then
> >imagemagick convert. This has reached the stage of minimal functionality
> >(translation and caching). This allows me to have full maths, without
> >having to remember what is supported or not, or modified, compared to
> >LaTeX. In addition, we intend to use it for small music scores (using
> >MusixTeX or possibly PMX), and I have in mind some other uses of LaTeX.
>
> Is your version safe against DoS attacks with long scripts?
I confess my ignorance on the topic. But I'm ready to learn. Maybe it is
relevant to mention that, unlike texvc, the text to compile is not passed
on the command line: the script writes files, and the DOS command lines
are pretty standard.
> Is it safe against running TeX commands that access files?
Safety is a real problem, I agree. I have not looked at the question in
any detail with LaTeX. The small application I'm trying to set up with
some friends is (or will be; I still have this problem with a blank page
returned after submit), I hope, sufficiently safe for reasons independent
of the PHP scripts. Maybe that's naive...
> OTOH, does it allow inclusion of additional TeX packages (like Xypic)
> with a simple modification to the code opening up the package?
Well, this can be done already just by modifying the 'header.tex' file
(included). That is what I will do for music. My idea (see the article) is
that different markups would choose between different header files.
BTW, using drawing packages like Xypic is also on my agenda; see the article.
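A minimal sketch of what I have in mind (the tag and file names are
invented, except header.tex):

  # Each markup tag selects its own LaTeX header file.
  $wgTeXHeaders = array(
      "math"  => "header.tex",
      "music" => "header-music.tex"
  );

  function headerFileFor( $markup )
  {
      global $wgTeXHeaders, $wgMathDirectory;
      return "$wgMathDirectory/" . $wgTeXHeaders[$markup];
  }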
> If so, then some of us (me and AxelBoldt, I guess)
> might well prefer your code to the current texvc --
> at least when producing PNG output instead of HTML.
>
> >Please understand that I'm not trying to (re)open a debate. I've read the
> >"math markup" page on meta.wikipedia (and music markup as well), and I'm
> >aware of (some of) the drawbacks of the approach I followed. I mentioned
> >what I did just in conformance with the GPL: if there is any interest in
> >this small piece of code (which I doubt, it's rather trivial!), just say it.
> I'd like to see the diff to see just what you took away from texvc.
>
I include the relevant part of outputpage.php. It is basically scratch
code, to check whether the idea is viable. Error handling, at least,
requires further work. As you will see, I just 'mimicked' texvc (same
input format, same output format) and kept the rest of the code.
I include header.tex (the very basic and trivial one) for completeness.
I also include a text I prepared with the possibility in mind of putting
it somewhere on wikipedia or meta.wikipedia; I'm too new in the business
to decide whether this is valuable, or where exactly to put it. Consider
it background information. It deals with 'source' for images or sounds:
one of the problems in my small project is music, and allowing others to
modify scores is important. LaTeX provides those tools as well.
An important point, hinted at in the text, is that compiling the 'source'
on the wikipedia site is not really necessary (though definitely useful).
Then the security aspects would be less of a problem, as would the
computing load (going through pdflatex is quite slow on my machine).
>
> -- Toby
This is the beginning of outputpage.php; the rest is exactly as in the
normal script. The modifications are:
1) first step, encapsulating the call to texvc;
2) second step, a function 'fullTeX' with the same input format and output
format as the function encapsulating texvc. The functionality is very
slightly different: the '$' are included in the call to fullTeX, so that
it can be used for text in non-math mode.
First comes the 4-line 'header.tex'.
<header.tex>
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
\LARGE
</header.tex>
<code>
# See design.doc
function linkToMathImage( $tex, $outputhash )
{
    global $wgMathPath;
    return "<img src=\"" . $wgMathPath . "/" . $outputhash . ".png\" " .
        "alt=\"" . wfEscapeHTML( $tex ) . "\">";
}

function texvc( $tex )
{
    global $wgMathDirectory, $wgTmpDirectory, $wgInputEncoding;
    $cmd = "./math/texvc " . escapeshellarg( $wgTmpDirectory ) . " " .
        escapeshellarg( $wgMathDirectory ) . " " .
        escapeshellarg( $tex ) . " " . escapeshellarg( $wgInputEncoding );
    return `$cmd`;
}

# Same input and output syntax as texvc; generation is done via pdflatex
# and ImageMagick convert.
function fullTeX( $tex )
{
    global $wgMathDirectory, $wgTmpDirectory, $wgInputEncoding;
    global $wgPdflatex, $wgConvert;
    # $wgInputEncoding is not taken into account; it is assumed to be
    # compatible with pdflatex.
    # chdir() below works, while -output-directory leads to problems
    # (pdflatex can't find its own .aux!!)
    if ( !isset( $wgPdflatex ) )
        $wgPdflatex = "pdflatex -quiet -halt-on-error -interaction batchmode";
    if ( !isset( $wgConvert ) )
        $wgConvert = 'C:\Programs\ImageMagick-5.5.6-Q16\convert';

    # Write header + formula + \end{document} to a .tex file named after
    # the md5 of the formula.
    $headerfilename = "$wgMathDirectory/header.tex";
    $header = fopen( $headerfilename, 'r' );
    $md5 = md5( $tex );
    $filename = "$wgTmpDirectory/$md5";
    $fp = fopen( $filename . ".tex", 'w+' );
    fwrite( $fp, fread( $header, filesize( $headerfilename ) ) );
    fwrite( $fp, "$tex\n" );
    fwrite( $fp, "\\end{document}" );
    fclose( $fp );

    $backupcwd = getcwd();
    chdir( $wgTmpDirectory );
    $cmd = "$wgPdflatex $filename.tex";
    $res = `$cmd`;
    # todo: test for errors; output is empty on success (thanks to -quiet)
    # alternative: "$wgConvert $filename.pdf -trim -bordercolor white
    # -border 5x5 $wgMathDirectory/$md5.png"
    $cmd = "$wgConvert $filename.pdf -trim $wgMathDirectory/$md5.png";
    $res = `$cmd`;
    # todo: test for errors; output is empty on success
    # todo: delete the temporary files (kept for now for debugging)
    chdir( $backupcwd ); # don't know if needed, certainly cleaner
    return "+$md5";
}

function renderMath( $tex )
{
    global $wgUser, $wgMathDirectory, $wgTmpDirectory, $wgInputEncoding;
    $mf = wfMsg( "math_failure" );
    $munk = wfMsg( "math_unknown_error" );
    $fname = "renderMath";
    $math = $wgUser->getOption( "math" );
    if ( $math == 3 )
        return '$ ' . wfEscapeHTML( $tex ) . ' $';
    $md5 = md5( $tex );
    $md5_sql = mysql_escape_string( pack( "H32", $md5 ) );
    if ( $math == 0 )
        $sql = "SELECT math_outputhash FROM math " .
            "WHERE math_inputhash = '" . $md5_sql . "'";
    else
        $sql = "SELECT math_outputhash,math_html_conservativeness,math_html " .
            "FROM math WHERE math_inputhash = '" . $md5_sql . "'";
    $res = wfQuery( $sql, $fname );
    if ( wfNumRows( $res ) == 0 )
    {
        # was: $contents = texvc( $tex );
        $contents = fullTeX( "\$$tex\$" );
        if ( strlen( $contents ) == 0 )
            return "<b>" . $mf . " (" . $munk . "): " . wfEscapeHTML( $tex ) . "</b>";
        $retval = substr( $contents, 0, 1 );
        if ( ( $retval == "C" ) || ( $retval == "M" ) || ( $retval == "L" ) ) {
            if ( $retval == "C" )
                # ... (the rest of the function is unchanged from the
                # standard outputpage.php)
</code>
<article Non-text elements in Wikipedia>
This discusses how to handle non-text elements in wikipedia pages,
such as images, sounds, or math formulae. More precisely, it
advocates making the 'source code' of such elements available, so
that they can be modified (almost!) as easily as the text can.
Similar ideas have been discussed in the past (math markup, SVG
support, the chess talk page, ...). I have not looked everywhere
(far from it!), so the ideas propounded herein are likely not
original. If they are, the key aspect is that the proposed scheme
is general, not specific to one domain, whether that be math
formulae, chessboards or vectorised images.
The present state
Documents can already include different types of material, namely
text, images and sounds.
For text, a 'source file' written in a special syntax is uploaded
and 'compiled' (i.e., translated into HTML) by the wikipedia site.
Images and sounds are simply uploaded. They are either included in
the text (images) or available for links (images and sounds).
There is an intermediate case, that of mathematical formulae. They
are included in the visible page as images, but the 'source' is
uploaded and compiled by the site. Another peculiarity is that the
'source' is embedded in the text 'source'. And still another is
that a special syntax must be used (derived from TeX, but not TeX).
That images and sounds are uploaded 'as is' is, IMHO, in
contradiction with the general goal of wikipedia, in particular
its ease of modification.
In many cases, sounds and images have been, or could be, generated
from a 'source'. Making this 'source' available would have many
advantages:
* it would allow free modification, in keeping with the general
spirit;
* it would make an eventual change to another format (e.g.,
extensions of HTML) more or less automatic;
* it would provide ready-to-use examples for other images/sounds/maths.
Let us take an example: chess positions. These are done at present
with png images. They are quite nice, I agree, but how does one
modify them? How does one add new ones in the same style as the
existing ones? Simply because the images are difficult to reproduce,
a set of pages becomes difficult to extend. Either a different style
of drawing is used and the result looks unprofessional, or somebody
becomes an unavoidable intermediary! The talk page of the chess
article shows such concerns.
Imagine now a simple source code for drawing chess positions (this
exists in LaTeX). The recipe for creating new drawings is obvious
and the style is consistent. Nothing blocks new contributions.
(To complete the example, the source code for a chess position in
LaTeX could be (taken from the LaTeX Graphics Companion):
\usepackage{chess}
\board{B* * * KR}
{*r* * *R*}
{* b p p}
{ *P*k*P*}
{*p* P *p}
{ P *P* P}
{* *N*N* }
Ok, this looks a bit esoteric, but it is a simple matrix, with
uppercase for white and lowercase for black: p for pawn, k for
king, n for knight, and so on. The result is a very nice,
professional-looking drawing. Don't tell me the source is more
esoteric or difficult to use than, say, HTML.)
How to upload the source?
The case of math formulae provides one approach: embed the source
in the page text, with a special markup.
This then raises the issue of generating the 'compiled' version.
In the case of math, this is done by the site. This offers users
the advantage that they don't have to install anything. On the
other hand, it requires that the generation software be installed
on the site, thus limiting freedom, and it consumes site resources
(who considers the response time short enough??), in particular in
the case of successive corrections, e.g., to fix syntax errors.
The other possibility is to ask the user to upload both the source
and the result. This is more complex for the user, mainly because
it requires the software, but it allows checking prior to upload
(less load on the site and possibly, all things considered, fewer
operations for the user).
In practice (for the user), this consists of extending the upload
page to include:
* the result;
* (optionally) the source;
* when not obvious, a description of the 'compiling' method (e.g.,
texvc, pdflatex with such-and-such a header then imagemagick convert,
povray 3.5).
Conversely, clicking on a drawing (for instance) would open a page
much like the present one, extended with the source and the
compiling instructions, plus the possibility of editing the source
code (exactly as for a text page).
Embedding in the page text can remain a possibility (better for
math than for images, for instance), but it either has to be
limited to what the site can compile, or has to be coupled with
the upload of the result.
Which formats are acceptable?
Ideally, the source format should be such that:
* it is plain text;
* it is public, free of copyright or other constraints;
* it is already in use;
* at least one free version of a 'compiler' is easily available,
easy to install, and easy to use on as many platforms as possible;
* it is as secure as possible (to prevent carrying nasty code).
IMHO, texvc does not meet all of these conditions.
Examples (to my limited knowledge) that do meet them include:
* music (scores): lilypond, musixtex;
* music (sound): midi;
* math: LaTeX;
* images: povray (security??), drawing packages in LaTeX.
Browsing through the LaTeX drawing packages, one can see the
potential richness of such a scheme. One could mention, in no
particular order, board games, card games, graphs, Feynman
diagrams, chemical diagrams, electrical diagrams, ...
Should the list of formats be explicitly prescribed?
IMO, no. Wikipedia is assumed to be self-regulating. If a format
is considered wrong, somebody can transcribe it into something
more appropriate.
</article>
I've already briefly discussed this with Gary on his user talk page. A few
observations:
In his proposal, Gary does not explicitly deal with what happens when
a previously unambiguous name is given a second meaning. Here's what I
suggest:
* The original page is moved to a disambiguated name. This name is selected
by the user who creates the second page.
* All existing links are updated via the pipe trick to point to the
newly-disambiguated primary article.
Gary's "resolve links" page, in my opinion, could be made easier to use by
making it a list of links, each one followed by a combo box, rather than a
clone of the full article.
As I said on [[User talk:Gaz]], this kind of software disambiguation, if
implemented properly, could satisfy both camps in the city names preemptive
disambiguation debate (see wikiEN-L). Pre-emptive disambiguation would be
(IMHO) unnecessary, and since manual links to [[Perth]] instead of [[Perth,
Australia]] are encouraged, I would see no reason to push primary-topic
disambiguation.
Gary, are you offering to code this?
-- Tim Starling.
Hi!
If you remember, I have set up a small, very local wiki using the wikipedia
scripts. Thanks for the comments I received last time. They were very
valuable.
I have not solved my problem with page editing; I intended to reload all
the scripts. I got an account on sourceforge, but did not (yet) manage to
download. I have a password problem. Do I need to be registered in the
project itself to download?
One of the reasons why I looked at wikipedia was because of maths and the
use of TeX.
However, when I looked at the details, I found (to my taste) too many
limitations in the approach followed. Then I wrote an extension to
outputpage.php to replace the call to texvc by calls to pdflatex then
imagemagick convert. This has reached the stage of minimal functionality
(translation and caching). This allows me to have full maths, without
having to remember what is supported or not, or modified, compared to
LaTeX. In addition, we intend to use it for small music scores (using
MusixTeX or possibly PMX), and I have in mind some other uses of LaTeX.
Please understand that I'm not trying to (re)open a debate. I've read the
"math markup" page on meta.wikipedia (and music markup as well), and I'm
aware of (some of) the drawbacks of the approach I followed. I mentioned
what I did just in conformance with the GPL: if there is any interest in
this small piece of code (which I doubt, it's rather trivial!), just say it.
Michel Mouly
Gee, the interesting things you find when browsing the wikipedia codebase.
Don't you people know what salt is? I'll give you a clue. Here's how an
attacker with access to Wikipedia's hashed passwords would currently
inverse-MD5 the passwords:
sort user table by hashed password;
foreach (possible password) {
    x = md5(password_guess);
    binary search table for match;
}
And here's how it would work with salt:
for (userNum = 0; userNum < numUsers; userNum++) {
    foreach (possible password) {
        x = md5("wikipedia" + userNum + password_guess);
        check for match;
    }
}
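In PHP terms, the salted scheme is essentially just this (a sketch; not
necessarily the exact code I wrote):

  # Prefix a per-user salt, so that identical passwords produce
  # different hashes and the sorted-table attack above is useless.
  function saltedHash( $userNum, $password )
  {
      return md5( "wikipedia" . $userNum . $password );
  }

  function checkPassword( $userNum, $guess, $storedHash )
  {
      return saltedHash( $userNum, $guess ) == $storedHash;
  }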
Some numbers: my password is 9 essentially random lower case letters
(26^9, about 5.4 x 10^12 guesses). By brute force, it would take a hacker
about a week to inverse-MD5 it with one computer. With the current scheme,
if all 10000 users of Wikipedia used the same kind of password, the hacker
would successfully inverse-MD5 one roughly once every 10 minutes. He could
then check those username/password combinations against other sites --
say, Internet banking, unix accounts on various servers, email, etc.
Don't worry, I fixed it. What do I do with the rectified code (once I've
read over it a couple more times)?
-- Tim Starling.
>Is it just for Brisbane, Queensland?
>
>An easier thing to do might be to request the change be made from the
>backend instead.
No, it's not just for Brisbane, Queensland, it's for whatever I need it for
in the future. In the near future, there's a few other cities and towns I
want to move. And it wouldn't be easier in terms of human effort, since the
bot's already written and ready to go. It's only ~70 lines -- it's rather
scary how easy these things are, especially since it's one of the first
things I've ever written in perl and it only took me a few hours. Of course
there's the issue of swamping RC -- if someone is happy to do it from the
backend to reduce the annoyance level, then that's alright with me.
-- Tim Starling
>>what does that mean "you are not in a position to
>>contribute to the approval process" ???
>
>I was suggesting that it's not Fred who makes the decisions, it's people
>like Brion and Lee. It doesn't make much sense to me either, now that I've
>calmed down. Please forget I said it.
Sorry, that quote (>>what...) was from Anthere. I didn't properly attribute
it.
What we want to be able to do is:
1) Change a set of links pointing to a redirect, so that they're pointing to
the real article
2) Change a set of links pointing to an incorrectly named image, so that
they point to a new, correctly named image.
The second one was suggested by Tarquin on [[User talk:Timbot]]. Now the
problem with this feature is that it has the potential to create excessive
server load, especially if an edit war breaks out utilising it. My scheme
below is intended to do the following things:
* Make it appear weighty and time-consuming, so that users won't do it
frivolously.
* Make the smallest possible impact on the server while not wasting people's
time.
* Make edit wars utilising the feature take up a minimum of server load, and
favour a conservative (changeless) outcome.
As a tentative short name, I suggest the "backlink redirect". It's a
mouthful, it doesn't make much sense, but it's better than anything else
I've come up with.
Here's my current vision for how it will operate:
On Special:Movepage, you now get an OPTION group looking like this:
(*) Move page only
( ) Make the page a redirect, and update all links so that they point to the
new article
( ) Move page and update links
Any logged-in user has access to these options. If the user selects the
second or third option, a new thread is created on the server, set to low
priority -- low enough that it might take an hour or more during peak times
to fix a large set of articles. This new thread does the following:
* Updates Wikipedia:BacklinkRedirects (or related DB table) to indicate that
a backlink redirect has started. This appears on RC.
* Starts updating the links, one at a time. Changes do not appear on RC.
* After it finishes updating each article, it checks to see if someone has
clicked on the "cancel" link in Wikipedia:BacklinkRedirects. If so, it
reverts its changes and stops, indicating this on RC and
Wikipedia:BacklinkRedirects.
* Once it has finished, it updates the table related to
Wikipedia:BacklinkRedirects to indicate that the job is now over. This does
not appear on RC.
The job stays there on the lower half of the page for all time, with some
method of accessing multiple pages of them. Anyone can revert such completed
jobs. Reversions of completed jobs are handled at the usual thread priority
(arguably; I could be wrong). Articles which have changed since the initial
update are, of course, not reverted.
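In pseudo-PHP, the worker thread amounts to something like this (all
function names are illustrative, not existing code):

  function doBacklinkRedirect( $jobId, $oldTitle, $newTitle )
  {
      $done = array();
      foreach ( getBacklinks( $oldTitle ) as $article ) {
          updateLinksIn( $article, $oldTitle, $newTitle ); # not shown on RC
          $done[] = $article;
          # After each article, honour a "cancel" click on
          # Wikipedia:BacklinkRedirects by reverting everything so far.
          if ( jobCancelled( $jobId ) ) {
              foreach ( $done as $a )
                  revertLinksIn( $a, $oldTitle, $newTitle );
              markJobCancelled( $jobId ); # shown on RC
              return;
          }
      }
      markJobFinished( $jobId ); # not shown on RC
  }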
As you can see, with this scheme, even an edit war over a huge set of links
will create little server load in peak times, as long as both sides of the
fray watch Wikipedia:BacklinkRedirects vigilantly.
-- Tim Starling.
Brion said:
> > >We'd be much better off if we add the needed functionality on the
> > >serverside, I believe.
> >
> > Perhaps, but I believe in the motto "if you want something done, do it
> > yourself." Who's going to write this server-side code -- you're obviously
> > not interested.
>
>Meanwhile, you obviously _are_ interested in making this ability
>available. If you'd like to try your hand, I'd be delighted.
Alright, but in an attempt to prevent Wikipedia from completely taking over
my life, I'm going to stop making edits. I'm not sure if I can go cold
turkey, so if anyone sees me doing anything, please block me.
Now on to Eric Moeller's post:
>If you are basically telling Fred "I don't like you, you have
>opinions I disagree with, you won't get to use my code", that is certainly
>your decision to make.
Yeah, that's pretty much it. And from Eclecticology:
>I've had some serious concerns about this bot. On the surface it has to do
>with changing [[Brisbane, Queensland]] to simply [[Brisbane]] with the
>possibility that it could also be used for other city names. This is a
>questionable use of a bot to impose a naming convention that may not have
>unanimous support. If it is used in the course of an edit war, it makes
>for an unequal fight between those who use a bot and those who don't.
That's a very good point. Okay, forget the bot. Like I said above, I'll
write it on the server side. I'll start a new thread on how it should be
implemented.
>what does that mean "you are not in a position to
>contribute to the approval process" ???
I was suggesting that it's not Fred who makes the decisions, it's people
like Brion and Lee. It doesn't make much sense to me either, now that I've
calmed down. Please forget I said it.
-- Tim Starling.
>We'd be much better off if we add the needed functionality on the
>serverside, I believe.
Perhaps, but I believe in the motto "if you want something done, do it
yourself." Who's going to write this server-side code -- you're obviously
not interested.
>If you could describe exactly what the problem is that you wrote the
>code to solve, that would be very helpful.
>
>I'm not saying "don't ever run this"; I'm saying that we should think
>about the consequences of this.
>
>I really think we should err on the side of not worrying so much about
>"fixing" links, if the redirects go to the right place. Redirects don't
>affect the functionality for the user, and because of that, there should
>be a degree of intransigence about adding complexity/running bots to fix
>a problem that isn't much of a problem.
I think you basically understand why I wrote it -- to fix links after a page
move. There are a few hundred national parks articles that have been created
with a bot/template -- they form the bulk of the work. I understand your
point of view -- no, it doesn't really matter when there are working
redirects. I'm just a perfectionist, that's all.
Please understand that this bot will make much less of an impact on RC than
if I did it by hand. Like I said, I did about 100 of them manually. I was
able to get them out quite quickly, doing them in batches of 15 or so, and I
absolutely swamped RC for about 20 minutes. No-one complained about that.
>Please send me the code.
Done, and I sent a copy to Lee as well.