From hashar at altern.org Tue Nov 1 01:32:16 2005 From: hashar at altern.org (Ashar Voultoiz) Date: Tue, 01 Nov 2005 02:32:16 +0100 Subject: [Wikitech-l] Re: PHP Security Information: PHP File-Upload $GLOBALS Overwrite Vulnerability In-Reply-To: <43668463.1050502@tgries.de> References: <43668463.1050502@tgries.de> Message-ID: Thomas Gries wrote: > To whom it may concern: > > > PHP File-Upload $GLOBALS Overwrite Vulnerability > http://www.hardened-php.net/advisory_202005.79.html > > > $GLOBAL Overwrite and it's Consequences: > http://www.hardened-php.net/index.76.html Hello, Thanks for the notice about the exploit: "overwriting the GLOBALS array when register_globals is turned on" We don't use register_globals on Wikimedia websites; I think most PHP packages now ship with register_globals off, and anyone still using it should recode their scripts :) cheers, -- Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From cgranade at greens.org Tue Nov 1 02:08:52 2005 From: cgranade at greens.org (Christopher E. Granade) Date: Mon, 31 Oct 2005 17:08:52 -0900 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Gregory Maxwell wrote: > On 10/31/05, Tels wrote: >> OTOH, the current wikipages are not really suited to discussions at all - >> I'd rather not have someone edit my "posts" - whether it be for fixing >> the speling or whatever. >> >> When all you have is a hammer, everything looks like a nail - but I think >> new discussion support would be very good to have, even if it is a lot >> of work. The longer the old style pages are the only possibility, the >> longer we will get more and more of them. :) > > I don't know about that, I often get into discussions where we > alternate between wikimode and thread mode. I think the fear of > editing comments comes from a lack of trust in the system and the > other editors. Often in a discussion I'll post a list or a fragment > of text, and multiple people will work on it. I don't see what the > harm is in leaving open the possibility of editing other people's words; if > someone does it in an objectionable way they will be caught and hung > like the evil creatures they are. ;) > > Really, the only two problems I've had are tracking changes across > multiple parts of a page, but viewing diffs from my last edit fixes > that, and getting notified, which watchlists mostly fix. > > Too bad there isn't a way to watchlist sections. :) I think what you allude to in wikimode discussion forums could be solved by establishing a Scratch: namespace that mirrors the global namespace (e.g.: Scratch:Template:NPOV) that is not considered to be "finished." During development of new formats, or extensive editing, provisional changes could be placed under Scratch, and such suggestions could be referenced from LiquidThreads.
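A rough sketch of the wiring this might need, assuming MediaWiki's custom-namespace support (the index numbers below are arbitrary examples; indexes from 100 upwards are free for site-specific use):

# LocalSettings.php fragment (sketch): declare a hypothetical Scratch
# namespace plus its matching talk namespace.
$wgExtraNamespaces[100] = "Scratch";
$wgExtraNamespaces[101] = "Scratch_talk";

Pages such as Scratch:Template:NPOV would then be ordinary wiki pages, so histories, diffs and watchlists would apply to them unchanged.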
- --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZs4y0dXuuZr00J4RAjdTAKDn7j4M8/04ZnHHuOAMVqjtmu6GsACfR5F7 j4b2zv16i4JxanrGpDPbeqk= =dXay -----END PGP SIGNATURE-----

From the.gray at gmx.net Tue Nov 1 03:02:22 2005 From: the.gray at gmx.net (Daniel Wunsch) Date: Tue, 1 Nov 2005 04:02:22 +0100 Subject: [Wikitech-l] LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: <200511010004.37975@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: <200511010402.23116.the.gray@gmx.net> On Tuesday 01 November 2005 00:04, Tels wrote: > When all you have is a hammer, everything looks like a nail - but I think > new discussion support would be very good to have, even if it is a lot > of work. The longer the old style pages are the only possibility, the > longer we will get more and more of them. :) just do not forget the one thing: in a wiki, you can refactor a discussion, in a usual forum this is impossible. i know this is not a common practice on wikipedia, but it is on other wikis, and it makes it much easier to read a discussion later. daniel

From oxusnet at gmail.com Tue Nov 1 04:31:55 2005 From: oxusnet at gmail.com (Kerim Friedman) Date: Mon, 31 Oct 2005 23:31:55 -0500 Subject: [Wikitech-l] Re: Bibtex In-Reply-To: References: Message-ID: You're talking about the PMWiki extension? I was asking about something for MediaWiki. There is nothing insecure about bibtex itself... kerim On 10/31/05, Christopher E. Granade wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Kerim Friedman wrote: > > Does anyone know of any efforts to integrate bibtex citation data into > > MediaWiki? Something like this plugin for PMWiki? > > > > > > > > Thanks! > > > > kerim > Looking at it, I'd be very concerned about security; the bibtexquery > action accepts PHP code as a parameter. Takes the hard work out of code > injection/XSS attacks. > > - --Chris > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.2 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org > > iD8DBQFDZmZF0dXuuZr00J4RAiBiAJoC9VfHuk7fr3X0rOL4hOCPvqXCxACgm5h6 > pfWow5Plb4ku2Zn7/MxzcBc= > =1emk > -----END PGP SIGNATURE----- > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- _______________________________________ P. KERIM FRIEDMAN http://kerim.oxus.net DO NOT hit "reply" to send me personal messages! This address is for BULK mail only. My private address is: kerim (dot) my last name (at) oxus (dot) net. _______________________________________

From mail at tgries.de Tue Nov 1 06:30:52 2005 From: mail at tgries.de (Thomas Gries) Date: Tue, 01 Nov 2005 07:30:52 +0100 Subject: [Wikitech-l] E-mail notifications for user talk pages Message-ID: <43670B9C.9070103@tgries.de> Hello, I just wanted to ask what stands against enabling $wgEnotifUserTalk in all wikis? As you know, the number of changes on user talk pages per day is quite low, in the range of about 2,000 per day. As Enotifs are only sent for the first "foreign" change (not for changes by the user/owner of that page), the number of sent e-mails is even lower. I'd like to have this enabled on the wikis, so that users can opt in to receive a notification when someone changes their talk page.
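For reference, the switch itself would be a one-line default, roughly (a sketch; whether a given user actually receives mail would remain an opt-in in their preferences):

# Shared configuration fragment (sketch): enable the user-talk
# notification feature; users still opt in individually.
$wgEnotifUserTalk = true;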
Tom

From mail at tgries.de Tue Nov 1 06:37:46 2005 From: mail at tgries.de (Thomas Gries) Date: Tue, 01 Nov 2005 07:37:46 +0100 Subject: [Wikitech-l] Re: PHP Security Information: PHP File-Upload $GLOBALS Overwrite Vulnerability Message-ID: <43670D3A.1050106@tgries.de> Ashar Voultoiz wrote: Thomas Gries wrote: > To whom it may concern: > PHP File-Upload $GLOBALS Overwrite Vulnerability > http://www.hardened-php.net/advisory_202005.79.html > $GLOBAL Overwrite and it's Consequences: > http://www.hardened-php.net/index.76.html We don't use register_globals on Wikimedia websites; I think most PHP packages now ship with register_globals off, and anyone still using it should recode their scripts :) Ashar, thank you for the quick reply. However, the above references describe a severe problem even for the case that register_globals _is_ off. The UPLOAD function has the flaw (please carefully study both resources), which can cause a glitch in the PHP-internal setting for register_globals. I recommend the MediaWiki developers study both references for the consequences. Tom

From magnus.manske at web.de Tue Nov 1 10:11:27 2005 From: magnus.manske at web.de (Magnus Manske) Date: Tue, 01 Nov 2005 11:11:27 +0100 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again In-Reply-To: <4366790B.8070003@telus.net> References: <435DFD26.7070009@pobox.com> <435E0FE8.7000605@tonal.clara.co.uk> <4360A7D0.6040505@web.de> <4366790B.8070003@telus.net> Message-ID: <43673F4F.5010403@web.de> Ray Saintonge wrote: > Magnus Manske wrote: > >> Neil Harris wrote: >> >> >>> Am I being naive here, or would a super-dumb implementation with a >>> single table with the columns shown below be enough to work in the >>> short term? >>> >>> Page_ID >>> Revision_ID >>> User_ID >>> Rating_ID >>> Rating value >>> Timestamp >>> >> This is what I did; no timestamp, but a varchar for comments. Topics to >> rate and their range (e.g., 1-5) are encoded here as well for user #0. >> That's about as dumb as it gets ;-) >> > I still prefer a 0-10 range of ratings. I think a decimal > normalization would be easier to work with in any subsequent analysis > of results. One can set the range for each topic individually. BTW, with values 0-10, you'd have eleven values... Magnus

From dgerard at gmail.com Tue Nov 1 10:24:40 2005 From: dgerard at gmail.com (David Gerard) Date: Tue, 1 Nov 2005 10:24:40 +0000 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again Message-ID: Ray Saintonge wrote: > Magnus Manske wrote: >> I still prefer a 0-10 range of ratings. I think a decimal >> normalization would be easier to work with in any subsequent analysis >> of results. >One can set the range for each topic individually. Mmm. See discussion at [[m:En validation topics]] and its archive - too many choices of rating is probably a bad thing, because it's hard to agree what a given value means. The test plan so far includes probably far more variables than we'd want in any case ... - d.

From dgerard at gmail.com Tue Nov 1 12:14:06 2005 From: dgerard at gmail.com (David Gerard) Date: Tue, 1 Nov 2005 12:14:06 +0000 Subject: [Wikitech-l] Spam filters on Wikimedia lists? Message-ID: I still see what looks like breathtaking quantities of obvious spam coming to the wikien-l queue. Is the spam filtering and greylisting still on? Do we have numbers on what's not even making it to the queue? - d.
From rowan.collins at gmail.com Tue Nov 1 12:41:09 2005 From: rowan.collins at gmail.com (Rowan Collins) Date: Tue, 1 Nov 2005 12:41:09 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: <9f02ca4c0511010441p1a413adfg@mail.gmail.com> On 01/11/05, Christopher E. Granade wrote: > I think what you allude to in wikimode discussion forums could be solved > by establishing a Scratch: namespace that mirrors the global namespace > (e.g.: Scratch:Template:NPOV) that is not considered to be "finished." > During development of new formats, or extensive editing, provisional > changes could be placed under Scratch, and such suggestions could be > referenced from LiquidThreads. Well, my initial reaction to this was "Yuk!" - the whole wiki is supposed to be subject to "extensive editing". Even by having a separate discussion namespace, we've rather removed the incentive to refactor discussions into content or summaries, to our own detriment. Splitting it even further into mostly-stable articles, hard-structured discussions, and unstructured "scratch buffers" would be like having an open-access CMS-cum-wiki, a forum system, and a traditional wiki operating in parallel, which all seems rather stifling. I know that's not what you were suggesting, but it's a kind of worst-case scenario of going down that path. OTOH, I guess there is some kind of logic to saying that if we're going to force discussion pages to act like classical discussions, we need somewhere else to put their other purposes - and it's true that people already use them as "scratch". I just think we need to be careful about making things too structured, lest we lose the immediacy that is the whole point of being a wiki in the first place. -- Rowan Collins BSc [IMSoP]

From cgranade at greens.org Tue Nov 1 15:11:35 2005 From: cgranade at greens.org (Christopher E. Granade) Date: Tue, 01 Nov 2005 06:11:35 -0900 Subject: [Wikitech-l] Re: Bibtex In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Kerim Friedman wrote: > You're talking about the PMWiki extension? I was asking about something > for MediaWiki. There is nothing insecure about bibtex itself... > > kerim > > On 10/31/05, Christopher E. Granade wrote: > Kerim Friedman wrote: >>>> Does anyone know of any efforts to integrate bibtex citation data into >>>> MediaWiki? Something like this plugin for PMWiki? >>>> >>>> >>>> >>>> Thanks! >>>> >>>> kerim > Looking at it, I'd be very concerned about security; the bibtexquery > action accepts PHP code as a parameter. Takes the hard work out of code > injection/XSS attacks. > > --Chris >> Sorry... I didn't mean that BibTeX is insecure. I was just making an off-hand comment about the PMWiki extension, as I was floored that the syntax used by the extension allowed for code injection /as a feature/. Once again, sorry to clutter inboxes. - --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ4Wl0dXuuZr00J4RAl1jAJ9XNu7+y4D10odSjFGncFS0jxDTfQCffZdP lTl3U14h0IR0+gWTJFLl7Us= =RGcv -----END PGP SIGNATURE-----

From cgranade at greens.org Tue Nov 1 15:14:20 2005 From: cgranade at greens.org (Christopher E.
Granade) Date: Tue, 01 Nov 2005 06:14:20 -0900 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again In-Reply-To: <43673F4F.5010403@web.de> References: <435DFD26.7070009@pobox.com> <435E0FE8.7000605@tonal.clara.co.uk> <4360A7D0.6040505@web.de> <4366790B.8070003@telus.net> <43673F4F.5010403@web.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Magnus Manske wrote: > Ray Saintonge wrote: >> Magnus Manske wrote: >> >>> Neil Harris wrote: >>> >>> >>>> Am I being naive here, or would a super-dumb implementation with a >>>> single table with the columns shown below be enough to work in the >>>> short term? >>>> >>>> Page_ID >>>> Revision_ID >>>> User_ID >>>> Rating_ID >>>> Rating value >>>> Timestamp >>>> >>> This is what I did; no timestamp, but a varchar for comments. Topics to >>> rate and their range (e.g., 1-5) are encoded here as well for user #0. >>> That's about as dumb as it gets ;-) >>> >> I still prefer a 0-10 range of ratings. I think a decimal >> normalization would be easier to work with in any subsequent analysis >> of results. > One can set the range for each topic individually. > > BTW, with values 0-10, you'd have eleven values... > > Magnus 11 values, 10 histogram bins. It'd be dreadfully easy to make a page like Special:Pages_Rated_0_to_1 with that kind of an approach, and to make a table of links to each of these ten pages. - --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ4ZK0dXuuZr00J4RAsjXAJ43EzUDPkYKlK50hhn/A1nGqU7uEwCeOG3d 2edq3UasVPTz49OKuRm0Fno= =Ligb -----END PGP SIGNATURE-----

From timwi at gmx.net Tue Nov 1 15:36:36 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 15:36:36 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: <200511010004.37975@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: > OTOH, the current wikipages are not really suited to discussions at all - > I'd rather not have someone edit my "posts" - whether it be for fixing > the speling or whatever. Well, I for one would object to any system that doesn't allow me to edit other people's comments. It's too useful to let traditionalism and conservatism ruin it. It's not just about fixing atrocious spellings, it's also about removing objectionable parts of comments without removing the entire comment, or about summarising an unnecessarily long piece of prose. I don't see any point in listing the advantages here since wikis have shown time and again that they work, and Wikipedia wasn't the first. Yes, it defies the well-established and widely loved web forum paradigm where everyone "owns" their own comments, but we're not a web forum, we're a wiki, and wiki is our paradigm. Timwi

From cgranade at greens.org Tue Nov 1 16:01:38 2005 From: cgranade at greens.org (Christopher E. Granade) Date: Tue, 01 Nov 2005 07:01:38 -0900 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timwi wrote: > >> OTOH, the current wikipages are not really suited to discussions at >> all - I'd rather not have someone edit my "posts" - whether it be for >> fixing the speling or whatever.
> > Well, I for one would object to any system that doesn't allow me to edit > other people's comments. It's too useful to let traditionalism and > conservatism ruin it. It's not just about fixing atrocious spellings, > it's also about removing objectionable parts of comments without > removing the entire comment, or about summarising an unnecessarily long > piece of prose. I don't see any point in listing the advantages here > since wikis have shown time and again that they work, and Wikipedia > wasn't the first. Yes, it defies the well-established and widely loved > web forum paradigm where everyone "owns" their own comments, but we're > not a web forum, we're a wiki, and wiki is our paradigm. > > Timwi Any model, if over-applied, is harmful. Making things into discussion fora that do not lend themselves to such a model can only result in imposing restrictions upon content which are harmful to the capability of the model. Thus, in applying a model, it is important to recognize what it was intended for, and what it is good at doing. A wiki model is good for quasi-static documents (content depends on the time of access, but not on a query), whereas a forum is good for an ongoing discussion. What about a discussion that is itself a document? I see two approaches to this problem: 1) Implement the new LiquidThreads model, which combines the two models. 2) Add discussion-specific metadata syntax to the wiki syntax to allow for specialized handling of discussions. Expanding on this second point, consider something like this: Current format: ==NPOV Complaint== This page is not NPOV! --Someuser :Why not? --Author ::Because of... :We need more justification than that. --Otheruser ::Well, there's... Proposed format: @topic ==NPOV Complaint== @comment This page is not NPOV! --Someuser @/comment @c:Why not? --Author @/c @c::Because of... @/c @c:We need more justification than that. --Otheruser @/c @c::Well, there's... @/c @/topic where @c is shorthand for @comment, and the colons following the @c tell MW how nested it is. If you find this markup ugly, suggest something else; I thought of this off the top of my head. - --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ5Fg0dXuuZr00J4RAjYJAKCofNcuwcYDLQQ5Lvy2647AWhobWwCfUN7d gnYTuSN2b0tO7NHwf03oJ9E= =1DbF -----END PGP SIGNATURE-----

From lars at aronsson.se Tue Nov 1 15:49:52 2005 From: lars at aronsson.se (Lars Aronsson) Date: Tue, 1 Nov 2005 16:49:52 +0100 (CET) Subject: [Wikitech-l] Indexing Wikisource Message-ID: As I have reported earlier on wikide-l and wikipedia-l, I digitized two small encyclopedias in September and early October, one in German and one in English, and made them available on Wikisource for proofreading and reference. Every book page has a wiki subpage of its own, presenting the scanned image and the OCR text. Since then I have monitored how fast Google has been to index these titles. More than half of the German pages were indexed within a few weeks, which is in line with my experience of how fast Google can be. But to my surprise, only very few of the English pages have yet been indexed. It was only this summer that wikisource.org was split into language subdomains like de.wikisource.org and en.wikisource.org, so it is understandable that pages in the new subdomains still have a low Google rank. However, this is the same for all languages, and doesn't explain the difference that I see between the German and the English language subdomain.
Today de.wikisource.org reports having 2,311 articles and 4,638 pages. A word that occurs on every page is "letzte" ("recent" in Recent Changes) and Google gives 770 hits for the search http://www.google.com/search?q=site%3Ade.wikisource.org+letzte Of the 4,638 pages, 443 are subpages to http://de.wikisource.org/wiki/Meyers_Blitz-Lexikon and Google finds 418 of them, http://www.google.com/search?q=site%3Ade.wikisource.org+%22Meyers+Blitz-Lexikon%22 This means 17% of the de.wikisource pages are indexed, but 94% of the pages from the book I scanned. En.wikisource.org today has 19,006 articles and 23,780 total pages. A word that occurs on every page is "recent" and Google gives 12,500 hits for the query http://www.google.com/search?q=site%3Aen.wikisource.org+recent Of the 23,780 pages, 2,791 are subpages to http://en.wikisource.org/wiki/The_New_Student%27s_Reference_Work but Google only finds 4 of them, http://www.google.com/search?q=site%3Aen.wikisource.org+%22The+New+Student%27s+Reference+Work%22 This means 53% of en.wikisource.org articles are indexed, but only 0.14% of the book pages I scanned. I can understand that some enthusiastic Germans have linked to "Meyers Blitz-Lexikon" and increased its Google rank. But it also seems that there is a negative Google rank for "The New Student's Reference Work". Has it been trapped in some spam filter? -- Lars Aronsson (lars at aronsson.se) Aronsson Datateknik - http://aronsson.se

From timwi at gmx.net Tue Nov 1 16:36:51 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 16:36:51 +0000 Subject: [Wikitech-l] Re: Indexing Wikisource In-Reply-To: References: Message-ID: > It was only this summer that wikisource.org was split into > language subdomains like de.wikisource.org and en.wikisource.org, > so it is understandable that pages in the new subdomains still > have a low Google rank. However, this is the same for all > languages, and doesn't explain the difference that I see between > the German and the English language subdomain. This is wrong; it is not the same for all languages. The English one is privileged because wikisource.org/wiki/Anything redirects you to it. Therefore, from Google's point of view, de.wikisource.org is "new", but en.wikisource.org is just a new name of a site that is "old". Hence, I'm not surprised that the Google spider gives more priority to de.wikisource.org. In fact, it may still have been in the process of spidering the site when you put the stuff up, and then the spider may have come across a link to it, while Google might be thinking that it has finished an indexing run of the English one, thus taking a break before giving it another go.
Timwi

From nospam-abuse at bloodgate.com Tue Nov 1 16:56:01 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Tue, 1 Nov 2005 17:56:01 +0100 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> Message-ID: <200511011756.19482@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Tuesday 01 November 2005 17:01, Christopher E. Granade wrote: > Timwi wrote: > >> OTOH, the current wikipages are not really suited to discussions at > >> all - I'd rather not have someone edit my "posts" - whether it be > >> for fixing the speling or whatever. > > > > Well, I for one would object to any system that doesn't allow me to > > edit other people's comments. It's too useful to let traditionalism > > and conservatism ruin it. It's not just about fixing atrocious > > spellings, it's also about removing objectionable parts of comments > > without removing the entire comment, or about summarising an > > unnecessarily long piece of prose. I don't see any point in listing > > the advantages here since wikis have shown time and again that they > > work, and Wikipedia wasn't the first. Yes, it defies the > > well-established and widely loved web forum paradigm where everyone > > "owns" their own comments, but we're not a web forum, we're a wiki, > > and wiki is our paradigm. > > > > Timwi > > Any model, if over-applied, is harmful. Making things into discussion > fora that do not lend themselves to such a model can only result in > imposing restrictions upon content which are harmful to the > capability of the model. > > Thus, in applying a model, it is important to recognize what it was > intended for, and what it is good at doing. A wiki model is good for > quasi-static documents (content depends on the time of access, but not on a query), > whereas a forum is good for an ongoing discussion. I have to strongly agree. Part of the problems with the current discussions are: * people editing other people's comments, making it appear that person X said Y, while they actually said "!Y". (I can see lawsuits happen already... ) * people forgetting to sign their name/date, re-arrangement of comments and any other things that make it virtually impossible to reconstruct a discussion (remember, a discussion not only needs to track who said what, but also when, i.e. in which order things were said) Yes, all changes are theoretically in the history, but good luck in reconstructing the page after some amount of edits. * you cannot collapse parts of a discussion thread, because there is no structure/tree/thread etc - it is all a flat text. * likewise, seeing what changed last on the discussion page involves the cumbersome history - you cannot simply read the last entries on a long page after some time because they are all merged into the same flat text. A discussion thread is simply not the same as an article, and I think the wiki principle cannot work well for it. However, see below: > What about a > discussion that is itself a document? I see two approaches to this > problem: 1) Implement the new LiquidThreads model, which combines the > two models. 2) Add discussion-specific metadata syntax to the wiki > syntax to allow for specialized handling of discussions. What I had in mind is that the discussion page could be constructed from a series of "posts". Each post would be a wiki-mini-article in itself.
Thus you get the tree structure (you can collapse threads, sort them, etc.), plus the "show me the latest posts", and you still have the "edit other people's text" feature. You need no new markup; it probably suffices to have a front-end that can collect the posts (all articles in the namespace "MyArticle::Discussion::Post"?) and display them on one page, with edit buttons etc. per post. You could even make it so that when the original author requests this, only admins can edit/delete this post (in case of trouble). If the author does not request it, anybody can edit the text. You could even have a "this post was last edited by XYZ on ABC" - thus showing immediately that the original author wasn't the last one to touch the text and thus giving you a hint to look at the history. Currently you would always need to check the history to spot malicious modifications. Threading by subject, time, etc. are all possible because these are just rearrangements of the post articles. Btw, even if the main Wikipedia does not use this new discussion style, a lot of small wikis could benefit greatly from an improved discussion page. The current model ends in chaos after a while. > Expanding on this second point, consider something like this: > > Current format: > ==NPOV Complaint== > This page is not NPOV! --Someuser [snip] > Proposed format: > @topic ==NPOV Complaint== > @comment This page is not NPOV! --Someuser @/comment > @c:Why not? --Author @/c > @c::Because of... @/c > @c:We need more justification than that. --Otheruser @/c > @c::Well, there's... @/c > @/topic > > where @c is shorthand for @comment, and the colons following the @c > tell MW how nested it is. If you find this markup ugly, suggest > something else; I thought of this off the top of my head. Oh, please not more markup to remember, parse and translate. I think a real one-article-per-post model would solve the problem more elegantly. ;) Btw, I do not know what LiquidThreads is, but if it works like what I proposed, just count me in favour of it :) Best wishes, Tels - -- Signed on Tue Nov 1 17:43:13 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "You know the world is going crazy when the best rapper is a white guy, the best golfer is a black guy, the tallest guy in the NBA is Chinese, the Swiss hold the America's Cup, France is accusing the U.S. of arrogance, Germany doesn't want to go to war, and the three most powerful men in America are named 'Bush', 'Dick', and 'Colon'. Need I say more?" -Chris Rock -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ2eeMncLPEOTuEwVAQHH0gf8CquO3EAcTgozFVZCdLmHFzUBQKBWmdLB G+S8YHWQXGo8Luq8ZbSvPof5zZvS6kpMnbUnDAOe7iMKFmnkazG051/h15Wo6led gS+tB0Mts8TnMlDixKWDC7CV6MkDUhqDcp06+OMbmuOp11DFW7IlaAGhQOUP/AFF IQIq5ORrnYMEKPGMJl5eCaZowYg49ynNhr24Nmsr3tzVGlKaMbUzVQX63RL4OLIm 08PnCWRpH1y3voio4L0pMmPzCrxaLhflR2RPIJ6lYab6wvziELj94RUEbm6X2axn MNEmDnL4/s9thIUUYlivqHn4e25d0aEyRXmI0WdumsfoFdIOL7m/wg== =U/ex -----END PGP SIGNATURE-----

From cgranade at greens.org Tue Nov 1 17:11:20 2005 From: cgranade at greens.org (Christopher E. Granade) Date: Tue, 01 Nov 2005 08:11:20 -0900 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?]
In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timwi wrote: > Discussion threads and wiki are not mutually exclusive models. Not in general, no, but they disagree on some specific points, such as content ownership. In such cases, we need to specify how to resolve the conflict. The "right" answer may be to allow specific sites to configure options. I can easily imagine a set of config options that specifies LiquidThreads content-ownership model behaviors. $ltAuthorOwnsComment = false; $ltAdminOwnsComment = true; or perhaps $ltOwnsComment = $ltAdmin; where $ltAdmin is a class that LiquidThreads instantiates before processing the config file, and can recognize as a criterion. This latter approach would allow for finer-grained access control by letting new classes be created as combinations of provided classes. $ltOwnsComment = ltAnd($ltAdmin, $ltAuthor); - --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ6G20dXuuZr00J4RAiSHAJ9T4secpzdiqD4wtAi4WWv6N7p6OwCffwHo TcnZ5jnBp89tHoLix3XxtF4= =kLyw -----END PGP SIGNATURE-----

From cgranade at greens.org Tue Nov 1 17:16:10 2005 From: cgranade at greens.org (Christopher E. Granade) Date: Tue, 01 Nov 2005 08:16:10 -0900 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: <200511011756.19482@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511011756.19482@bloodgate.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Tels wrote: > Oh, please not more markup to remember, parse and translate. I think a > real one-article-per-post model would solve the problem more elegantly. ;) > > Btw, I do not know what LiquidThreads is, but if it works like what I > proposed, just count me in favour of it :) > > Best wishes, > > Tels > I only said it was a solution - not a good one. BTW, LiquidThreads is documented at: http://meta.wikimedia.org/wiki/LiquidThreads - --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ6LZ0dXuuZr00J4RAu8VAKDxKwmikeGwH16UoLf4RkVvT/Z4rACfVs8L niC5jCxwcjqxMqaokD60QiY= =fkGt -----END PGP SIGNATURE-----

From nospam-abuse at bloodgate.com Tue Nov 1 17:26:39 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Tue, 1 Nov 2005 18:26:39 +0100 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> Message-ID: <200511011826.56652@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Tuesday 01 November 2005 17:36, Timwi wrote: > > Any model, if over-applied, is harmful. > > Agree. But you didn't elaborate on this fully. Discussion threads and > wiki are not mutually exclusive models. > > I am strongly in favour of LiquidThreads as long as it doesn't let > users claim ownership over "their" comments (which the current proposal > proposes to do). I do think these are two separate points: * how to improve the discussion pages on a wiki * whether each author owns his/her comment or not. Thinking about this, a wiki article is "owned" by multiple authors, none of whom are readily apparent (you have to check the history and even then it is hard to track who "owns" what).
OTOH, it is written in NPOV, and checked by a lot of people. Discussions, OTOH, also involve personal opinions. Danger lies ahead when the opinion can be changed, but is still labeled (or signed, if you wish) with the original author's name. The current model doesn't even have a way to handle this, let alone to prevent it (if you wanted to prevent it). Just imagine that this discussion we have is on a wiki, this is the latest edition (you would need to check the history, aka the mailing list archives, to see the full revisions) and it contained: On Tuesday 01 November 2005 17:36, Timwi wrote: > > Any model, if over-applied, is harmful. > Agree. > I am strongly in favour of LiquidThreads. See the danger? (for the record, the above quote of three lines was written/shortened by me, not Timwi). The current model relies only on "Prinzip Hoffnung" (the principle of hope), that is, you *hope* that the reader checks the history, or that someone checks and edits the discussion page back. If we can improve the discussion page itself, *and* prevent misrepresentation at the same time, well, that would be great :) Best wishes, Tels - -- Signed on Tue Nov 1 18:20:44 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "What is fair use? Fair use is not a law. There's nothing in law. Right now, any professor can show a complete movie in his classroom without paying a dime - that's fair use. What is not fair use is making a copy of an encrypted DVD, because once you're able to break the encryption, you've undermined the encryption itself." - Jack Valenti -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ2elV3cLPEOTuEwVAQEitgf/ViZeKTUwUSkrYfrFTDC5CMAfLi6xiJXJ Y31eACazA5pvKfsWa/8DcLgO6zM+5b33VCMADBDqrGRkgWZiy8uakTGKWPJaSdTy 03m5Vwfk7MkC3iieRaOR4vEisanzvOJrYLd0rhI3BDJYd07EL0i8cZaFgqKvW/+8 b068qYl0YisHDHug1HjPv/O9Q9upw1ouW/sAsWnWbdm57Kflv57RVRC9E0oDb8ay hIoVwpeqEnwsjdy6FJZzulPGRXxfQNLBrr/vAMp6SyhcIzsSSk0wHPhJlccJ4vEb Z9ZKRMush0eH2EaWZUP5XpsqzIp9HA0arJkU+SuqHR3o3c7XD/Tmgg== =2yAj -----END PGP SIGNATURE-----

From nospam-abuse at bloodgate.com Tue Nov 1 17:31:47 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Tue, 1 Nov 2005 18:31:47 +0100 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <200511011756.19482@bloodgate.com> Message-ID: <200511011831.48269@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Tuesday 01 November 2005 18:16, Christopher E. Granade wrote: > Tels wrote: > > Oh, please not more markup to remember, parse and translate. I think > > a real one-article-per-post model would solve the problem more > > elegantly. ;) > > > > Btw, I do not know what LiquidThreads is, but if it works like what I > > proposed, just count me in favour of it :) > > > > Best wishes, > > > > Tels > > I only said it was a solution - not a good one. :) > BTW, LiquidThreads is documented at: > http://meta.wikimedia.org/wiki/LiquidThreads So far, it seems a sensible proposition. The technical details are, of course, technical details. Best wishes, Tels - -- Signed on Tue Nov 1 18:31:10 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. I am dissatisfied with the overall situation!
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ2emg3cLPEOTuEwVAQHmsQf8DoTMwQ3OGHxJt8Ukq64yYKZ8Gx0I5lJw 4OdQgfXDcbhTMkFh84n2GlhGe2tJW5l30k7b3pVgyiEMnfH0MMOh3kAcnILHl8uU BlMydqyQ6ifId5ctJIlwr0uw+1mY8GmDj8EXpOWLK2mvRr8GNdZO6rUziv8aEivj zCoozct5SLo5hPueEYBecpjifpKVboLnvf1u67kZ/S739FMUuo11xYiA+X6IIZGr 2MKC9OUPDx3gmXXVghHG0O3EZz24F7+mG5vejRzv9eRmLt2ALWIpTSZk89tqtWvC IMkHIXFP75rsJkoakfBn+q2ggShD2C4kXIfEitagehXy43g7H1Q0yw== =Meri -----END PGP SIGNATURE-----

From evan at wikitravel.org Tue Nov 1 17:23:35 2005 From: evan at wikitravel.org (Evan Prodromou) Date: Tue, 01 Nov 2005 12:23:35 -0500 Subject: [Wikitech-l] Edit bailouts Message-ID: <1130865815.9004.22.camel@zhora.1481ruerachel.net> So, I got interested a few days ago in the question of how many Wikitravel contributors start editing a page but bail out before completing their edit. I started combing the Apache logs for some answers. wikitravel.org's robots.txt hides editing pages, but there are some non-compliant spiders that still follow edit links; some even mask their identity with a fake User-Agent header. Still, I can say with some confidence that of non-spider hits on our edit pages, only about 25% result in a "submit" post afterwards. Although I think there are a lot of people who click "edit" without the intention of seriously contributing (the aforementioned spiders; people who are curious to see what will happen; people who hit edit by mistake), this still seems quite high. I'm wondering if anyone has similar statistics for other MediaWiki sites, other wiki-engine sites, and specifically for Wikimedia sites. I'd like to get a comparison to see what we can do on Wikitravel to cut down on these bailouts and help people finish their contributions; a rough idea of the bailout rate other sites see would be helpful for that. Thanks, ~Evan -- Evan Prodromou Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and reliable world-wide travel guide

From cormaggio at gmail.com Tue Nov 1 19:13:36 2005 From: cormaggio at gmail.com (Cormac Lawler) Date: Tue, 1 Nov 2005 19:13:36 +0000 Subject: [Wikitech-l] Re: [Foundation-l] Vote to create Wikiversity Vote In-Reply-To: <43679B57.4040208@netzero.net> References: <43679B57.4040208@netzero.net> Message-ID: On 11/1/05, Robert Scott Horning wrote: > This is a reminder/formal notice that the voting period for the creation > of Wikiversity is now over, and that the proposal to create Wikiversity > as a new Wikimedia sister project is now being submitted to the > Wikimedia Foundation board for a formal review. [snip] I'd just like to thank you Robert for all the work you've put into this proposal, timetable, vote etc. I've been meaning to write up a proposal to the community on this, as Wikiversity is something that really energises me at the moment (mentally at least). (My lack of) time at the moment forbids me from going into everything I'd like to say about Wikiversity, but I'd like to make some brief points. It is clear that many people are afraid that the idea is too half-baked or not ready enough to be started. This is currently true - Wikiversity exists as many different ideas in many people's heads, with plenty of enthusiasm but not much to actually show for it. But my counter argument to this is: *every* wiki project has developed from a similar position.
Every wiki is an idea which is generated and created through the combined energy of its participants - you only have to look at the various listings of people at the vote or on the proposed projects page or on the meta Wikiversity talk page amongst others, to see that there is so much energy there waiting to be tapped, and raring to go. That, surely, is the main thing. I think the crucial point is that Wikiversity, if created now, will not (in the main) be ready to actually go live as a learning centre *just yet*. It needs to have a creation period, it needs to be widely known about to generate a learner base - and *then* it can flourish. Just don't expect results yet (though some courses could be created quite quickly - and who's to say that we need to construct whole courses in the first place? What about single lesson plans? What about collections of flash cards? etc.) Another major concern is resources - both human and electronic (i.e. financial). I don't know about the latter part (and I'd really like someone to tell us how much it would cost in server/work hours terms to set up a new project in comparison to what it would cost to set up a new language project on any other WMF project - hence crossposting to wikitech). But on the human resources side - I think, on the contrary, this will be an excellent opportunity for the projects to cross-pollinate (Wikipedia and Wikibooks especially) and draw in a huge sector of people that may otherwise have remained on the periphery of especially Wikipedia. I have a hunch that not only could we get a whole lot more people involved in setting up this project, but we could also get some major funding. UNESCO's Education for All campaign comes to mind - and I'd appreciate any other suggestions. That's all time permits me to say for now, but suffice it to say that I have been reading around Wikiversity for a few months now and, for one, am highly motivated to get this off the ground. I know of quite a few more and I know that it could really take off. So if we could have some very clear technical issues that need to be dealt with, I think that's where we should be focussing our attention, rather than the fact that it isn't ready/finished yet - because it just needs a bit of time, space, and a steady stream of energy that I know exists. I look forward to hearing from you. Cormac / Cormaggio

From timwi at gmx.net Tue Nov 1 19:42:17 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 19:42:17 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: <200511011826.56652@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> Message-ID: > I do think these are two separate points: > > * how to improve the discussion pages on a wiki > * whether each author owns his/her comment or not.
> Just imagine that this discussion we have is on a wiki, this is the latest > edition (you would need to check the history, aka mailing list archives > to see the full revisions) and it contained: > > On Tuesday 01 November 2005 17:36, Timwi wrote: >>>Any model, if over applied, is harmful. >>Agree. >>I am strongly in favour of LiquidThreads. > > See the danger? A fallacious argument by false dilemma, or by lack of imagination, or whatever you wanna call it. You almost provided the answer to this one yourself: > (for the record, the above quote of three lines was > written/shortened by me, not Timwi). And that is what it should say. COMMENT #328645 by [[User:Timwi]] Agree. I am strongly in favour of LiquidThreads. (This comment was last edited by [[User:Tels]] .) If is a minute ago, I better check the diff. If it was an hour ago, I can probably assume that your edit was harmless. Therefore, again, your "danger" is not an argument against the ability to edit comments. > If we can improve the discussion page itself, *and* prevent > misrepresentation at the same time, well, that would be great :) It's really easy. Timwi From timwi at gmx.net Tue Nov 1 19:47:54 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 19:47:54 +0000 Subject: [Wikitech-l] Re: Edit bailouts In-Reply-To: <1130865815.9004.22.camel@zhora.1481ruerachel.net> References: <1130865815.9004.22.camel@zhora.1481ruerachel.net> Message-ID: > Still, I can say with some confidence that of non-spider hits on our > edit pages, only about 25% result in a "submit" post afterwards. > Although I think there are a lot of people who click "edit" without the > idea of seriously contributing (the aforementioned spiders; people who > are curious to see what will happen; people who hit edit by mistake), > this still seems quite high. No, I think it's quite plausible. I wouldn't be surprised if most people click "edit" to see what happens, and then bail out because the edit box looks completely different from the rendered article, and they think they don't have the technical knowledge to make an edit (meaning: they don't know the syntax). Timwi From laner at navo.navy.mil Tue Nov 1 20:12:12 2005 From: laner at navo.navy.mil (Ryan Lane) Date: Tue, 1 Nov 2005 20:12:12 +0000 (UTC) Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: Timwi writes: > > > > OTOH, the currenty wikipages are not really suited to discussions at all - > > I'd rather not have someone edit my "posts" - whether it be for fixing > > the speling or whatever. > > Well, I for one would object to any system that doesn't allow me to edit > other people's comments. It's too useful to let traditionalism and > conservatism ruin it. It's not just about fixing atrocious spellings, > it's also about removing objectionable parts of comments without > removing the entire comment, or about summarising an unnecessarily long > piece of prose. I don't see any point in listing the advantages here > since wikis have shown time and again that they work, and Wikipedia > wasn't the first. Yes, it defies the well-established and widely loved > web forum paradigm where everyone "owns" their own comments, but we're > not a web forum, we're a wiki, and wiki is our paradigm. > > Timwi > Who are you to decide what is objectionable, or unnecessarily long, especially in someone's opinion based comment? 
This is EXACTLY what a lot of people are talking about when they dislike the idea of someone editing their comments. Just because something is the status quo doesn't mean that it is working. The discussion pages are probably one of the worst features of MediaWiki IMO. I hate the idea of someone editing my comments. 99% of the time, people who are reading the discussion section will NOT check the history to see if what someone said is really what someone said. I believe the proposed idea is quite a good balance between the current model and a traditional forum/thread-style model. It still allows editing (when users allow it), and refactoring. But it could also allow an easier model to track threads, reply to comments, get notification when a person responds to your posts/replies, etc, etc... I think your objection to a new system is conservatism at its best. The new model could offer quite a bit of benefit. Ryan Lane

From mark at geekhive.net Tue Nov 1 20:40:34 2005 From: mark at geekhive.net (Mark Jaroski) Date: Tue, 1 Nov 2005 21:40:34 +0100 Subject: [Wikitech-l] Re: Edit bailouts In-Reply-To: References: <1130865815.9004.22.camel@zhora.1481ruerachel.net> Message-ID: <20051101204034.GJ11436@who.int> Timwi wrote: > > >Still, I can say with some confidence that of non-spider hits on our > >edit pages, only about 25% result in a "submit" post afterwards. > >Although I think there are a lot of people who click "edit" without the > >intention of seriously contributing (the aforementioned spiders; people who > >are curious to see what will happen; people who hit edit by mistake), > >this still seems quite high. > > No, I think it's quite plausible. I wouldn't be surprised if most people > click "edit" to see what happens, and then bail out because the edit box > looks completely different from the rendered article, and they think > they don't have the technical knowledge to make an edit (meaning: they > don't know the syntax). At least some of the edit hits which don't result in submits are due to work-arounds for Cache404. Sometimes if there's something wrong with a cached page I'll click edit, and then edit the URL instead of the article, e.g., clicking edit for Florence puts this in the location bar: http://wikitravel.org/wiki/en/index.php?title=Florence&action=edit but what I really want to do is re-cache the page, so I do this: http://wikitravel.org/wiki/en/index.php?title=Florence -mark -- -- ================================================================= -- mark at geekhive dot net --

From laner at NAVO.NAVY.MIL Tue Nov 1 20:40:20 2005 From: laner at NAVO.NAVY.MIL (Lane, Ryan) Date: Tue, 1 Nov 2005 14:40:20 -0600 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] Message-ID: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Timwi writes: > > > > I do think these are two separate points: > > > > * how to improve the discussion pages on a wiki > > * whether each author owns his/her comment or not. > > But the point is that the answer to the second influences whether the > solution proposed for the first is seen as an "improvement". I feel that > if the ability to edit other people's comments is taken away from me, I > can't label it an "improvement". > You may not label it an improvement, but there are others who definately would. > > Discussions, OTOH, also involve personal opinions.
Danger lies ahead when > > the opinion can be changed, but is still labeled (or signed, if you > > wish) with the original author's name. > > We already have this "danger", and we've had it since the beginning of > Phase II, and it has not turned out to be a great problem, so this is > not an argument. > I've had people complain to me about moving their comments around on my LDAP patch's page on meta. I erased one person's edit because it was a non-working solution, and had a complaint about that. Just because you don't think this is a problem doesn't mean it isn't a problem. I can definately see lawsuits based upon this. This is definately a valid argument. > > Just imagine that this discussion we have is on a wiki, this is the latest > > edition (you would need to check the history, aka the mailing list archives, > > to see the full revisions) and it contained: > > > > On Tuesday 01 November 2005 17:36, Timwi wrote: > >>>Any model, if over-applied, is harmful. > >>Agree. > >>I am strongly in favour of LiquidThreads. > > > > See the danger? > > A fallacious argument by false dilemma, or by lack of imagination, or > whatever you wanna call it. You almost provided the answer to this one > yourself: > > > (for the record, the above quote of three lines was > > written/shortened by me, not Timwi). > > And that is what it should say. > > COMMENT #328645 by [[User:Timwi]] > > Agree. I am strongly in favour of LiquidThreads. > (This comment was last edited by [[User:Tels]] <time>.) > > If <time> is a minute ago, I better check the diff. If it was an > hour ago, I can probably assume that your edit was harmless. > > Therefore, again, your "danger" is not an argument against the ability > to edit comments. > Why can you assume that the edit was harmless? During Katrina, I had no internet access for weeks. If someone maliciously edited some of my comments during that time, would you assume that what was there is actually what I wrote? Ignoring catastrophes like a large blackout, or a hurricane: say someone goes on vacation, or simply hasn't checked his discussions recently, or if an article's discussion page hasn't been updated in a long while, and someone stops checking it as often; in these cases, vandalism may go unnoticed for QUITE a while, where readers may be seeing the vandalised version for the entire time. In this aspect, there is "danger" in others editing comments. > > If we can improve the discussion page itself, *and* prevent > > misrepresentation at the same time, well, that would be great :) > > It's really easy. > > Timwi > I think the original idea of LiquidThreads is a good solution for the problem. I don't believe the implementation would be easy though ;). Ryan Lane

From timwi at gmx.net Tue Nov 1 21:16:31 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 21:16:31 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the threads, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <87d5llaiex.fsf@mat.ucm.es> <200511010004.37975@bloodgate.com> Message-ID: > Who are you to decide what is objectionable, or unnecessarily long, > especially in someone's opinion-based comment? Strawman argument: I didn't say I "decide" anything. If anyone can edit, anyone can revert; the "decision" is ultimately that of the community, not of any individual. The purpose of the comment is (and will always be) to reflect a given user's opinion, and the community will ensure this is met.
On the contrary, who are you to decide that nobody should be allowed to edit something, ESPECIALLY in a system where you are perfectly free to simply NOT SUBMIT whatever it is you don't want edited? We already have mailing lists where you can publish stuff that you don't want edited. > This is EXACTLY what a lot of people are talking about when they > dislike the idea of someone editing their comments. And the fact that it is a fallacy is EXACTLY what I'm talking about when I dislike the idea of giving in to it. > Just because something is the status quo doesn't mean that it is working. Nor does it mean that it is _not_ working. I see very little of the problems you have described, and even the problems that you have described are false dilemmas, i.e. they can perfectly well be addressed without preventing anyone from editing something. Example: > 99% of the time, people who are reading the discussion section will > NOT check the history to see if what someone said is really what > someone said. Right, so it needs to be made easier to check this, and/or there needs to be some sort of visual indication that there is a chance that it might not be what someone said. Both are addressed by my suggestion of adding a little "This comment was last edited by <user>" with a link to a diff between the comment's original author's latest revision and the current revision. It surely does not mean it's necessary to make editing of any comment impossible. > I believe the proposed idea is quite a good balance between the > current model and a traditional forum/thread-style model. The proposed idea *is* a traditional forum/thread-style model, with only two additional features (users can allow other users to edit their comments, and the concept of "channels"). It is to the wiki philosophy what Everything2 is to Wikipedia. Ideologically, it's kind of like allowing people to submit articles and then keep them protected so only they can edit them. You're going to tell me that it's not the same thing because comments express a person's opinion, but this is why I added the word "ideologically". If it was just about giving people's paranoia free rein, then we could as well just use a bog-standard web forum. In fact, it would already be possible _theoretically_ to move all discussions to a mailing list and use the Talk namespace only for the summaries/refactorings. Why don't we do this? Because IT'S NOT WIKI. > I think your objection to a new system is conservatism at its best. It would be conservatism if I was an avid Talk-page participant. However, you will notice that I am actually quite a bit more active on the mailing lists than the Talk pages. > The new model could offer quite a bit of benefit. You have not convinced me of such a "benefit" of making it completely impossible to edit some comments. You have shown some drawbacks of the current system, but as I said, concluding from it that editing needs to be made impossible is a false-dilemma fallacy. Timwi

From magnus.manske at web.de Tue Nov 1 20:52:16 2005 From: magnus.manske at web.de (Magnus Manske) Date: Tue, 01 Nov 2005 21:52:16 +0100 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again In-Reply-To: References: Message-ID: <4367D580.1040907@web.de> David Gerard wrote: > Magnus Manske wrote: > > >> P.S.: I think I actually found a *real* bug; it seems anyone can change >> the list of topics. I'll have to restrict that to sysops. >> > > > I'd expect on Wikipedia that'd be a steward-level thing. Remember that
Remember that > after the testing phase, it's unlikely to change very often if at all, > and then only with the consent of the wiki's community. > > As the selection of topics/ranges is a per-wikipedia thing, wouldn't that be more suitable for a bureaucrat? OK, the real reason I'm asking is that I can easily check for bureaucrat status, but didn't find the steward-check yet ;-) Magnus From magnus.manske at web.de Tue Nov 1 20:53:49 2005 From: magnus.manske at web.de (Magnus Manske) Date: Tue, 01 Nov 2005 21:53:49 +0100 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again In-Reply-To: References: Message-ID: <4367D5DD.2060002@web.de> David Gerard wrote: > Ray Saintonge wrote: >> Magnus Manske wrote: >> > > >>> I still prefer a 0-10 range of ratings. I think a decimal >>> normalization would be easier to work with in any subsequent analysis >>> of results. >>> > > >> One can set the range for each topic individually. >> > > > Mmm. See discussion at [[m:En validation topics]] and its archive - > too many choices of rating is probably a bad thing, because it's hard > to agree what a given value means. The test plan so far includes > probably far more variables than we'd want in any case ... > I already tried to cut down the number of topics by merging them some time ago. But, for the initial test scenario, better too many topics than too few, IMHO. Magnus From timwi at gmx.net Tue Nov 1 21:36:17 2005 From: timwi at gmx.net (Timwi) Date: Tue, 01 Nov 2005 21:36:17 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: Lane, Ryan wrote: > Timwi writes: > >>>I do think these are two seperate points: >>> >>>* how to improve the discussion pages on a wiki >>>* whether each author own his/her comment or not. >> >>But the point is that the answer to the second influences whether the >>solution proposed for the first is seen as an "improvement". I feel that >>if the ability to edit other people's comments is taken away from me, I >>can't label it an "improvement". > > You may not label it an improvement, but there are others who definately > would. I know that. I was only pointing out that the two points mentioned above (can't check who they were from because you haven't replied to the thread properly and have instead started a new one) aren't separate. >>We already have this "danger", and we've had it since the beginning of >>Phase II, and it has not turned out to be a great problem, so this is >>not an argument. > > I've had people complain to me about moving their comments around on > my LDAP patch's page on meta. There are also complaints about edits to articles that are accurate and NPOV. That doesn't mean they (the ones complaining) have a right to claim ownership over something. Ignore them and get over it. > Just because you don't think this is a problem, doesn't mean it isn't > a problem. So far I have addressed only the "problem" of people not knowing whether what they are reading is really what the author wrote. People complaining about edits is a whole other matter. To address that problem, we must think about why they are complaining.
I am convinced that the large majority of such complaints are solely out of irrational, thoughtless behaviour: people just assume that they "own" their comments by default, and complain about any form of "tinkering" even when it's perfectly legitimate if they thought about it for only a second. Surely you can't agree to let this kind of stupidity take precedence over our wiki philosophy. > I can definately see lawsuits based upon this. This is > definately a valid argument. I'm finding your "lawsuits" claim highly dubitable, and your repeated misspelling of "definitely" quite irritating. Are you a lawyer? (You're clearly not an English teacher, so the chances of you being a lawyer are somewhat higher.) > Why can you assume that the edit was harmless? During Katrina, I had > no internet access for weeks. If someone maliciously edited some of > my comments during that time, would you assume that what was there is > actually what I wrote? And you think you're the only reasonable person in the world and everyone else only makes bad-faith edits and vandalises your comments. Get real. People already _do_ malicious editing, and other (well-meaning) people revert it. It's already happening, on all wikis. It's one of our very own Replies to Common Criticisms?! > In this aspect, there is "danger" in others editing comments. You haven't shown any, except for the possible "lawsuits" claim. Do you have anything substantial to back that up? > I think the original idea of LiquidThreads is a good solution for the > problem. I don't believe the implementation would be easy though ;). I believe a rudimentary implementation would be relatively easy, but it would be laborious, and so, few people will be willing to work it through until the end, and so, it will likely not get done very soon. A _good_ implementation (UI-wise as well as performance-wise) is quite a bit more challenging, so it will likely not get done at all. Timwi From brion at pobox.com Tue Nov 1 21:25:21 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 01 Nov 2005 13:25:21 -0800 Subject: [Wikitech-l] Re: Indexing Wikisource In-Reply-To: References: Message-ID: <4367DD41.1000405@pobox.com> Timwi wrote: >> It was only this summer that wikisource.org was split into language >> subdomains like de.wikisource.org and en.wikisource.org, so it is >> understandable that pages in the new subdomains still have a low >> Google rank. However, this is the same for all languages, and doesn't >> explain the difference that I see between the German and the English >> language subdomain. > > > This is wrong; it is not the same for all languages. The English one is > privileged because wikisource.org/wiki/Anything redirects you to it. No it doesn't. See for yourself: http://wikisource.org/wiki/Asdflkj Note that there is a *multilingual* Wikisource at wikisource.org, and a number of *monolingual* Wikisources at XX.wikisource.org. This is a horrible ugly situation, but apparently people preferred that. *shrug* -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 253 bytes Desc: OpenPGP digital signature URL: From oub at gmx.net Tue Nov 1 22:29:54 2005 From: oub at gmx.net (Uwe Brauer) Date: Tue, 01 Nov 2005 23:29:54 +0100 Subject: [Wikitech-l] Re: E-mail notifications for user talk pages References: <43670B9C.9070103@tgries.de> Message-ID: <873bmgngrh.fsf@gmx.net> >>>>> "Thomas" == Thomas Gries >>>>> writes: Thomas> Hello, I justed wanted to ask what stands against enabling Thomas> $wgEnotifUserTalk in all wikis ? Hello I would support that proposal, it seems very convenient to me. Uwe Brauer From laner at navo.navy.mil Tue Nov 1 23:09:25 2005 From: laner at navo.navy.mil (Ryan Lane) Date: Tue, 1 Nov 2005 23:09:25 +0000 (UTC) Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: Timwi writes: > > Just because you don't think this is a problem, doesn't mean it isn't > > a problem. > > So far I have addressed only the "problem" of people not knowing whether > what they are reading is really what the author wrote. People > complaining about edits is a whole other matter. To address that > problem, we must think about why they are complaining. I am convinced > that the large majority of such complaints are solely out of irrational, > thoughtless behaviour: people just assume that they "own" their comments > by default, and complain about any form of "tinkering" even when it's > perfectly legitimate if they thought about it for only a second. Surely > you can't agree to let this kind of stupidity take precedence over our > wiki philosophy. > It isn't that people wish to "own" their comments, but that people believe their opinion should be represented as they wrote it. I believe there is a fundamental difference between an article, and a discussion. I don't understand why you should need to edit someone's comment, other than possibly spellchecking it. When you edit someone's comment, you are changing the discussion as a whole. I've heard many reasons for maliciously changing a person's comment, but can you give me some examples of non-malicious changes? > > I can definately see lawsuits based upon this. This is > > definately a valid argument. > > I'm finding your "lawsuits" claim highly dubitable, and your repeated > misspelling of "definitely" quite irritating. Are you a lawyer? (You're > clearly not an English teacher, so the chances of you being a lawyer are > somewhat higher.) > Oh shit, I mispelled a word, I must be an idiot. Do you always resort to flame-style tactics? How about we get back to the topic at hand? I'm positive some time in the future we will see a lawsuit generated from userA against userB because userB changed what userA stated in a discussion page. I do agree that this is mostly paranoia, so this argument by itself is not a good enough reason to change the discussions. > > Why can you assume that the edit was harmless? During Katrina, I had > > no internet access for weeks. If someone maliciously edited some of > > my comments during that time, would you assume that what was there is > > actually what I wrote? > > And you think you're the only reasonable person in the world and > everyone else only makes bad-faith edits and vandalises your comments. > Get real. People already _do_ malicious editing, and other > (well-meaning) people revert it. It's already happening, on all wikis.
> It's one of our very own Replies to Common Criticisms?! > There is a difference between an article, and someone's comments on an article. The article is a community written piece. Someone's comment is not a community written piece, it is an individual's written piece. The discussion as a whole is a community written piece. > > In this aspect, there is "danger" in others editing comments. > > You haven't shown any, except for the possible "lawsuits" claim. Do you > have anything substantial to back that up? > I think there is a danger in changing the meaning of a discussion by changing the meaning of the comments contained within. Whether this is done maliciously or not, it can be harmful. > > I think the original idea of LiquidThreads is a good solution for the > > problem. I don't believe the implementation would be easy though ;). > > I believe a rudimentary implementation would be relatively easy, but it > would be laborious, and so, few people will be willing to work it > through until the end, and so, it will likely not get done very soon. A > _good_ implementation (UI-wise as well as performance-wise) is quite a > bit more challenging, so it will likely not get done at all. > I don't disagree entirely with you here; it probably won't get done. However, every large piece of software is laborious, and there is quite a bit of software out there that has been written (take mediawiki for example). Ryan Lane From cgranade at greens.org Tue Nov 1 23:43:49 2005 From: cgranade at greens.org (Christopher Granade) Date: Tue, 01 Nov 2005 14:43:49 -0900 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timwi wrote: > So far I have addressed only the "problem" of people not knowing > whether what they are reading is really what the author wrote. People > complaining about edits is a whole other matter. To address that > problem, we must think about why they are complaining. I am convinced > that the large majority of such complaints are solely out of > irrational, thoughtless behaviour: people just assume that they "own" > their comments by default, and complain about any form of "tinkering" > even when it's perfectly legitimate if they thought about it for only > a second. Surely you can't agree to let this kind of stupidity take > precedence over our wiki philosophy. I have two major issues with this comment. First, your tone is very confrontational, and is far more aggressive than the situation calls for. Please, let's keep civil here. Second, I think we can let "this kind of stupidity" take precedence. This is exactly the kind of thing I meant earlier when I warned about over applying models. The Wiki model applies to collaborative documents. No one works with me to create a "better comment," but rather I write a comment so as to communicate what *I* am saying to someone else. In that sense, I *should* own comments, or there should be a large, loud disclaimer on every single page in the Talk namespace that comments may not belong to their professed authors. If you think this is too obvious, remember that irons often come with warning labels these days stating that one shouldn't iron clothes that they are currently wearing. 
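A minimal sketch of the kind of Talk-namespace disclaimer described above, wired through MediaWiki's ArticleViewHeader hook; the hook name is real, but the wiring below is illustrative and untested, not taken from any deployed configuration:

    // LocalSettings.php fragment: show a notice at the top of every
    // Talk-namespace page.  Function name and message text are invented.
    $wgHooks['ArticleViewHeader'][] = 'wfTalkPageDisclaimer';

    function wfTalkPageDisclaimer( &$article ) {
        global $wgOut;
        if ( $article->getTitle()->isTalkPage() ) {
            $wgOut->addHTML( '<div class="talk-disclaimer">Comments on ' .
                'this page may have been edited by users other than ' .
                'their professed authors; check the page history.</div>' );
        }
        return true; // allow other hooks to run
    }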
- --Chris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFDZ/200dXuuZr00J4RAnuCAKC9eRbS3A4eFPoEQL/m5sfdq3TZggCfc7kQ 1gfFBhHvtGo6LJ+jVUKUNiQ= =vLvH -----END PGP SIGNATURE----- From gmaxwell at gmail.com Wed Nov 2 00:16:07 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Tue, 1 Nov 2005 19:16:07 -0500 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: On 11/1/05, Lane, Ryan wrote: > Just because you don't think this is a problem, doesn't mean it isn't a > problem. > I can definately see lawsuits based upon this. This is definately a valid > argument. Hm. Whatever credibility this thread had previously has now been lost, at least in my opinion. If you're so worried about such activity, encourage people to read using the history tab, or better: find another project so you don't taint this one with mistrust and litigiousness. It's very hard for me to see through arguments like what I quoted above and stay focused on the real technical merits of more software facilitation on our discussion pages, so I'm just going to ignore the thread... but as I go here is a potentially useful idea: There is a lot of thought and work going into how we can merge Wikipedia with a (semi-)structured backend database for the sort of material that fits that model well. It seems to me that a sufficiently generalized system for such applications would also be useful to facilitate greater mediation with threaded discussion, if it had the right UI sugar added on top. Perhaps a middle ground between pure-wiki and threaded discussion is possible. From t.starling at physics.unimelb.edu.au Wed Nov 2 00:24:56 2005 From: t.starling at physics.unimelb.edu.au (Tim Starling) Date: Wed, 02 Nov 2005 11:24:56 +1100 Subject: [Wikitech-l] Re: Edit bailouts In-Reply-To: <1130865815.9004.22.camel@zhora.1481ruerachel.net> References: <1130865815.9004.22.camel@zhora.1481ruerachel.net> Message-ID: Evan Prodromou wrote: > So, I got interested a few days ago in the question of how many > Wikitravel contributors start editing a page but bail out before > completing their edit. I started combing the Apache logs for some > answers. > > wikitravel.org's robots.txt hides editing pages, but there are some > non-compliant spiders that still follow edit links; some even mask their > identity with a fake User-Agent header. > > Still, I can say with some confidence that of non-spider hits on our > edit pages, only about 25% result in a "submit" post afterwards. > Although I think there are a lot of people who click "edit" without the > idea of seriously contributing (the aforementioned spiders; people who > are curious to see what will happen; people who hit edit by mistake), > this still seems quite high. > > I'm wondering if anyone has similar statistics for other Mediawiki > sites, other wiki-engine sites, and specifically for Wikimedia sites. > I'd like to get a comparison to see what we can do on Wikitravel to cut > down on these bailouts and help people finish their contributions; a > rough idea of what rate of bailing out other sites get would be helpful > for that. I suspect a lot of the hits on edit pages are due to readers following red links, rather than people clicking the edit tab. 
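A rough sketch of the measurement Evan describes — counting edit-form fetches against subsequent save attempts in an Apache access log. The log path is hypothetical, and the URL patterns assume a default MediaWiki setup (GET ...action=edit serves the form, POST ...action=submit saves it); real numbers would also need the spider filtering mentioned above:

    <?php
    // Tally edit-form requests vs. save attempts from an access log.
    $edits = 0;
    $saves = 0;
    $log = fopen( '/var/log/apache2/access.log', 'r' );
    while ( ( $line = fgets( $log ) ) !== false ) {
        if ( preg_match( '/"GET [^"]*action=edit/', $line ) ) {
            $edits++;
        } elseif ( preg_match( '/"POST [^"]*action=submit/', $line ) ) {
            $saves++;
        }
    }
    fclose( $log );
    printf( "%d edit forms, %d save attempts (%.1f%%)\n",
        $edits, $saves, $edits ? 100 * $saves / $edits : 0 );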
I'm not sure how many hits per second Wikimedia gets, but from the profiling data I can tell you that about 8% of our backend requests (i.e. squid cache misses) are edit form requests. About 16% of edit form requests result in a save attempt. Here's an amusing story about this phenomenon: Jerome, on first seeing the huge number of edit requests in the logs, thought we were under a DoS attack from "many IPs", and made moves to start blocking them. Luckily we set him straight before he did any damage :) -- Tim Starling From 2.718281828 at gmail.com Wed Nov 2 03:57:19 2005 From: 2.718281828 at gmail.com (SJ) Date: Tue, 1 Nov 2005 22:57:19 -0500 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: <200511011756.19482@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511011756.19482@bloodgate.com> Message-ID: <742dfd060511011957s5abec50dke7794441b4dfbfee@mail.gmail.com> On 11/1/05, Tels wrote: > > What I had in mind is that the discussion page could be constructed from a > series of "posts". Each post would be a wiki-mini-article in itself. Thus > you get the tree structure (can collapse threads, sort them etc), plus the > "show me the latest posts", and you still have the "edit other people's > text" feature. You need no new markup, it probably suffices to have a > front-end that can collect the posts (all articles in name-space > "MyArticle::Discussion::Post?) and display them on one page, with edit > buttons etc per post. < > You could even have a "this post was last edited by XYZ on ABC" - thus > showing immediately that the original author wasn't the last one to touch > the text and thus giving you a hint to look at the history. < > Threading by subject, time etc are all possible because these are just > rearrangements of the post articles. Yes and yes. I would dearly like to see the above implemented. A system optimized for threaded discussions, rather than what one could hack up with the current transclusion implementation. SJ From brion at pobox.com Wed Nov 2 07:19:17 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 01 Nov 2005 23:19:17 -0800 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: <742dfd060511011957s5abec50dke7794441b4dfbfee@mail.gmail.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511011756.19482@bloodgate.com> <742dfd060511011957s5abec50dke7794441b4dfbfee@mail.gmail.com> Message-ID: <43686875.4070608@pobox.com> SJ wrote: > On 11/1/05, Tels wrote: >>Threading by subject, time etc are all possible because these are just >>rearrangements of the post articles. > > Yes and yes. I would dearly like to see the above implemented. A > system optimized for threaded discussions, rather than what one could > hack up with the current transclusion implementation. Transclusion hacks make the Baby Cthulhu cry. And trust me, you *don't* want to see the Baby Cthulhu cry! -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 253 bytes Desc: OpenPGP digital signature URL: From timwi at gmx.net Wed Nov 2 10:38:04 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 10:38:04 +0000 Subject: [Wikitech-l] Re: Indexing Wikisource In-Reply-To: <4367DD41.1000405@pobox.com> References: <4367DD41.1000405@pobox.com> Message-ID: Brion Vibber wrote: > Timwi wrote: > >>> It was only this summer that wikisource.org was split into language >>> subdomains like de.wikisource.org and en.wikisource.org, so it is >>> understandable that pages in the new subdomains still have a low >>> Google rank. However, this is the same for all languages, and doesn't >>> explain the difference that I see between the German and the English >>> language subdomain. >> >> This is wrong; it is not the same for all languages. The English one is >> privileged because wikisource.org/wiki/Anything redirects you to it. > > No it doesn't. See for yourself: http://wikisource.org/wiki/Asdflkj I did see for myself, and I did notice it doesn't actually forward you to a URL with "en." in front, but I just assumed that it's the English one nonetheless... :) Thanks for the clarification, Timwi From dgerard at gmail.com Wed Nov 2 10:57:03 2005 From: dgerard at gmail.com (David Gerard) Date: Wed, 2 Nov 2005 10:57:03 +0000 Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again In-Reply-To: References: Message-ID: Magnus Manske wrote: >David Gerard wrote: >> Magnus Manske wrote: >>> P.S.: I think I actually found a *real* bug; it seems anyone can change >>> the list of topics. I'll have to restrict that to sysops. >> I'd expect on Wikipedia that'd be a steward-level thing. Remember that >> after the testing phase, it's unlikely to change very often if at all, >> and then only with the consent of the wiki's community. >As the selection of topics/ranges is a per-wikipedia thing, wouldn't >that be more suitable for a bureaucrat? Someone rare and high-up, anyway ;-) Certainly not every admin. - d. From timwi at gmx.net Wed Nov 2 11:12:06 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 11:12:06 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: Ryan Lane wrote: > > It isn't that people wish to "own" their comments, but that people believe their > opinion should be represented as they wrote it. That's the same thing. Or it comes down to the same thing, namely a strong objection to anyone editing one's comments. As you said, it's a belief -- not an argument. > I believe there is a fundamental difference between an article, and a > discussion. I don't understand why you should need to edit someone's > comment, other than possibly spellchecking it. When you edit > someone's comment, you are changing the discussion as a whole. Primarily I want for people to be able to edit out offensive remarks and personal attacks. The only people who would object to this being done would be the people who would make such remarks and then scream blue murder when they're removed. Clearly we can live without them. Secondarily I want to be able to fix spellings, in the faint hope that it will help some people learn better spelling. Again, the only people who would object to this would be people who can't spell and are therefore unsuitable for writing an encyclopedia anyway. > I've heard many reasons for maliciously changing a person's comment, > but can you give me some examples of non-malicious changes?
I've heard many reasons for maliciously changing an article. Yet articles on Wikipedia tend to get better. Interesting, innit? >>>I can definately see lawsuits based upon this. This is >>>definately a valid argument. >> >>I'm finding your "lawsuits" claim highly dubitable, and your repeated >>misspelling of "definitely" quite irritating. Are you a lawyer? (You're >>clearly not an English teacher, so the chances of you being a lawyer are >>somewhat higher.) > > Oh shit, I mispelled a word, I must be an idiot. Do you always resort to > flame-style tactics? No, the purpose of this was to test your reaction. You fell right for it exactly the way I expected: you picked up only on the emotional side of the paragraph (taking it as an insult) and responded only to that. You didn't address any of what it actually *says*; in particular, you haven't answered my question. Are you a lawyer? And this highlights what I mean: you (and many other people) only object to being able to edit comments because it somehow "feels" wrong. You can't really say why it *is* wrong. You just think that it will "somehow" change the comment "and hence" the entire discussion, "and therefore" it must not be done. Same with the malice argument: people could edit comments maliciously OMG that's bad "and thus" the whole idea is really bad!... Get my drift? > There is a difference between an article, and someone's comments on an article. > The article is a community written piece. Someone's comment is not a community > written piece, it is an individual's written piece. The discussion as a whole is > a community written piece. In order for the discussion to be a community-written piece, the community must be able to intervene when something (or someone) in the discussion is (or becomes) disruptive. Not letting people edit comments paves the way to flamewars, trolling and (dare I say it) just another clone of Usenet. >>>In this aspect, there is "danger" in others editing comments. >> >>You haven't shown any, except for the possible "lawsuits" claim. Do you >>have anything substantial to back that up? > > I think there is a danger in changing the meaning of a discussion by > changing the meaning of the comments contained within. Whether this > is done maliciously or not, it can be harmful. Notice how you haven't answered my question again? >>I believe a rudimentary implementation would be relatively easy, but it >>would be laborious, and so, few people will be willing to work it >>through until the end, and so, it will likely not get done very soon. A >>_good_ implementation (UI-wise as well as performance-wise) is quite a >>bit more challenging, so it will likely not get done at all. > > I don't disagree entirely with you here; it probably won't get done. However, > every large piece of software is laborious, and there is quite a bit of software > out there that has been written (take mediawiki for example). What I meant was, a _good implementation_ won't get done. MediaWiki has been written, yes, but it's a rather bad implementation. 
Timwi From oub at mat.ucm.es Wed Nov 2 11:20:31 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Wed, 02 Nov 2005 12:20:31 +0100 Subject: [Wikitech-l] prbls with external editor (was: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?]) References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> Message-ID: <874q6vgutc.fsf_-_@mat.ucm.es> >>>>> "Timwi" == Timwi writes: Hello Timwi> We already have this "danger", and we've had it since the Timwi> beginning of Phase II, and it has not turned out to be a Timwi> great problem, so this is not an argument. >> Just imagine that this discussion we have is on a wiki, this is >> the latest edition (you would need to check the history, aka >> mailing list archives to see the full revisions) and it >> contained: >> On Tuesday 01 November 2005 17:36, Timwi wrote: >>>> Any model, if over applied, is harmful. >>> Agree. >>> I am strongly in favour of LiquidThreads. >> See the danger? Timwi> A fallacious argument by false dilemma, or by lack of Timwi> imagination, or whatever you wanna call it. You almost Timwi> provided the answer to this one yourself: >> (for the record, the above quote of three lines was >> written/shortened by me, not Timwi). Timwi> And that is what it should say. Timwi> COMMENT #328645 by [[User:Timwi]] Timwi> Agree. I am strongly in favour of LiquidThreads. Timwi> (This comment was last edited by [[User:Tels]] Timwi> <link to diff>.) Timwi> If <datetime> is a minute ago, I better check the diff. If Timwi> it was an hour ago, I can probably assume that your edit was Timwi> harmless. Maybe I am missing something important here (or you are talking about a feature yet to be implemented), but I just did that evil thing we are discussing. I went to a discussion page, which contained 2 comments, mine and one from another author. I "bravely" deleted the final dot of his last sentence, using an external editor. And now, when I later looked at it via "show different versions", I could see the change, but the time stamp you mentioned I could not see, neither in that display nor in the source file. Another important point. Recently I wrote a comment in another discussion page (that one quite huge), again using an external editor. By accident it seems that I changed the encoding of the page, so some non-ASCII symbols changed. Now you might say that this is the danger which may occur also when I edit an article, however I consider it more annoying in a discussion, where my intention was just to add a single comment. Conclusion: in a discussion page, every comment should be protected somehow. Uwe Brauer From servien at gmail.com Wed Nov 2 11:23:42 2005 From: servien at gmail.com (Servien Ilaino) Date: Wed, 2 Nov 2005 13:23:42 +0200 Subject: [Wikitech-l] How to change time? Message-ID: Does anyone know how you can change the time settings (NL wiki)? I've tried to go to Preferences and then to Daten and time settings, I entered "add from browser"... it doesn't work, I entered it by hand +2 hours... doesn't work... I entered +8 hours, still doesn't change... c'est très irritant! Please hellup mii!! Serv From timwi at gmx.net Wed Nov 2 11:50:35 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 11:50:35 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: Christopher Granade wrote: > Timwi wrote: > >>So far I have addressed only the "problem" of people not knowing >>whether what they are reading is really what the author wrote.
People >> complaining about edits is a whole other matter. To address that >>problem, we must think about why they are complaining. I am convinced >> that the large majority of such complaints are solely out of >>irrational, thoughtless behaviour: people just assume that they "own" >>their comments by default, and complain about any form of "tinkering" >>even when it's perfectly legitimate if they thought about it for only >>a second. Surely you can't agree to let this kind of stupidity take >>precedence over our wiki philosophy. > > I have two major issues with this comment. First, your tone is very > confrontational, and is far more aggressive than the situation calls > for. Please, let's keep civil here. I've read it again and again and I don't see anything confrontational or aggressive about it. Either way, I never mean to be confrontational or aggressive, but I make a point of arguing logically and rationally. It tends to yield more meaningful conclusions than "hmm this doesn't feel right let's not do it", much less "hmm this doesn't feel right so let's prohibit it for everyone". If you misinterpret that as aggression or confrontationalism, then I can only advise you to get to know me better. If you felt attacked because you count yourself as one of the people whose attitude I referred to as "stupidity", then I apologise, but this must mean that you agree that you "just assume that you "own" your comments by default, and complain about any form of "tinkering" even when it's perfectly legitimate if you thought about it for only a second". Surely you can't agree with me that any intelligent thinking being would find that kind of behaviour reasonable? > No one works with me to create a "better comment," but rather I write > a comment so as to communicate what *I* am saying to someone else. This sounds like you think that all of your comments are perfect and never need any form of improvement. This is hardly plausible: everyone sometimes makes a comment which inadvertently ticks someone off (like I did, apparently; I would have liked for someone to be able to replace the word "stupidity" with "behaviour" or anything that runs a lower risk of offending someone); and everyone sometimes makes lesser mistakes (e.g. half of a comment is off-topic; a "not" was forgotten, or inserted where it was obviously not intended; accidentally referring to the wrong user or linking to the wrong diff when making an accusation, etc. etc. etc.). There needs to be a way for other people to correct these problems, or else (1) threads will too easily drift off-topic; (2) misunderstandings will escalate and get out of control; (3) flamewars and trolling will thrive. > In that sense, I *should* own comments, or there should be a large, > loud disclaimer on every single page in the Talk namespace that > comments may not belong to their professed authors. No, but every single comment that was edited by someone else should bear a small but noticeable disclaimer to that effect. (I was going to insert some extra polemics here about something being obvious, but I've realised that this kind of bickering is not going to get us anywhere.) Timwi From dgerard at gmail.com Wed Nov 2 12:38:02 2005 From: dgerard at gmail.com (David Gerard) Date: Wed, 2 Nov 2005 12:38:02 +0000 Subject: [Wikitech-l] How actually usable are audio captchas? Message-ID: With various discussions around Wikipedia of captchas to impede vandalbots, I was wondering how usable audio captchas actually are for those who can't see images.
Most sites with visual captchas offer an audio option ... but what's the actual usability of these? Are they a minor impediment, as a visual captcha is, or a major usability problem? Is there data on this? - d. From timwi at gmx.net Wed Nov 2 13:02:41 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 13:02:41 +0000 Subject: [Wikitech-l] Re: How to change time? In-Reply-To: References: Message-ID: Servien Ilaino wrote: > Does anyone know how you can change the time settings (NL wiki)? I've > tried to go to Preferences and then to Daten and time settings, I > entered "add from browser"... it doesn't work, I entered it by hand +2 > hours... doesn't work... I entered +8 hours, still doesn't change... > c'est très irritant! Please hellup mii!! Type in "02:00". From timwi at gmx.net Wed Nov 2 13:04:26 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 13:04:26 +0000 Subject: [Wikitech-l] Re: prbls with external editor In-Reply-To: <874q6vgutc.fsf_-_@mat.ucm.es> References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> <874q6vgutc.fsf_-_@mat.ucm.es> Message-ID: > (or you are talking about a feature yet to be implemented) Yes, we are. From krstic at fas.harvard.edu Wed Nov 2 13:40:39 2005 From: krstic at fas.harvard.edu (Ivan Krstic) Date: Wed, 02 Nov 2005 08:40:39 -0500 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> Message-ID: <4368C1D7.3040904@fas.harvard.edu> Timwi wrote: > Secondarily I want to be able to fix spellings, in the faint hope that > it will help some people learn better spelling. Again, the only people > who would object to this would be people who can't spell and are > therefore unsuitable for writing an encyclopedia anyway. Totally broken reasoning. > I've heard many reasons for maliciously changing an article. Yet > articles on Wikipedia tend to get better. Interesting, innit? That's an irrelevant non-answer to Ryan's question. > No, the purpose of this was to test your reaction. You fell right for it > exactly the way I expected: you picked up only on the emotional side of > the paragraph (taking it as an insult) Do you understand how conceited this makes you sound? > And this highlights what I mean: you (and many other people) only object > to being able to edit comments because it somehow "feels" wrong. You > can't really say why it *is* wrong. You need to relax, and start spending less time writing borderline-offensive e-mail to people who are trying to reason constructively, and more time thinking about what they're saying. I can tell you exactly why it *is* wrong: comments are not Wikipedia articles, even if you seem to be constantly confounding the two. A Wikipedia article isn't signed by a single person's name. It doesn't represent the views of an individual, but tries to become an objective reflection of its topic. As Brion puts it, a wiki is a place where you let wackos edit your site, and with luck, the good wackos outnumber the bad. The iterative editing process is a good way to ensure eventual NPOV conformance. Comments are absolutely different. They are written and signed by a single person, represent only that person's views, have no requirement of adherence to a NPOV, and that means that essentially none of the reasons that Wikipedia articles are editable by everyone apply to them.
If allowing comment cross-editing was in any way beneficial, the popular web-based discussion forums with tens of millions of posts would have, without a doubt, adopted such a model quite a while ago. There's a reason they haven't done it. I am not interested in continuing this discussion further, so please refrain from writing a snide reply that questions my intelligence so as to "test my reaction". -- Ivan Krstic | 0x147C722D From servien at gmail.com Wed Nov 2 14:10:37 2005 From: servien at gmail.com (Servien Ilaino) Date: Wed, 2 Nov 2005 16:10:37 +0200 Subject: [Wikitech-l] Re: How to change time? In-Reply-To: References: Message-ID: Was already like that but doesn't work. In date and time it displays my time correct but not in my signature, then it just displays the normal CET. Serv 2005/11/2, Timwi : > Servien Ilaino wrote: > > Does anyone know how you can change the time settings (NL wiki)? I've > > tried to go to Preferences and then to Daten and time settings, I > > entered "add from browser"... it doesn't work, I entered it by hand +2 > > hours... doesn't work... I entered +8 hours, still doesn't change... > > c'est très irritant! Please hellup mii!! > > Type in "02:00". > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > From node.ue at gmail.com Wed Nov 2 14:25:36 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 2 Nov 2005 07:25:36 -0700 Subject: [Wikitech-l] How to change time? In-Reply-To: References: Message-ID: <849f98ed0511020625x258bd3e8s@mail.gmail.com> To change time settings, do as Timwi instructed. To change _time_, on the other hand, is a great deal more complex, and I don't think anybody on this mailinglist wants to explain it. Mark On 02/11/05, Servien Ilaino wrote: > Does anyone know how you can change the time settings (NL wiki)? I've > tried to go to Preferences and then to Daten and time settings, I > entered "add from browser"... it doesn't work, I entered it by hand +2 > hours... doesn't work... I entered +8 hours, still doesn't change... > c'est très irritant! Please hellup mii!! > > Serv > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- If you would like a gmail invite, please send me an e-mail. Si ud. quiere que le envíe una invitación para ingresar gmail, envíeme un mensaje. Si vous voulez que je vous envoie une invitation à joindre gmail, envoyez-moi s.v.p un message. Se vce. gostaria que lhe envie um convite para juntar gmail, favor de envie-me uma mensagem. Se vuleti chi vi manu 'n invitu a uniri gmail, mandatimi n messaggiu. From csieber at math.byu.edu Wed Nov 2 15:49:24 2005 From: csieber at math.byu.edu (Christian Sieber) Date: Wed, 02 Nov 2005 08:49:24 -0700 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <4368C1D7.3040904@fas.harvard.edu> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: Thank you Ivan for a voice of reason. Your statements seem perfectly accurate. The gains made by editable comments are minor: The community can collaboratively prevent flamewars and edit spelling.
The losses are major: The community could prevent speech that they deem offensive, which is a violation of free speech in general (especially since people often consider differing viewpoints "offensive" and would edit them out) and people would not have their comments represented as written, which makes the discussion difficult to follow at best and pointless at worst. To say exactly why it IS wrong that I edit other people's comments: This has as much to do with free speech as the concept of wikis in general. (Go read "Free Software, Free Society" for a good chunk of Stallman on the issue of opinion writing.) The purpose of an encyclopedia is to be informative about specific topics, hopefully in a NPOV manner as far as this is possible. Because of this goal, the wiki structure is amazing--the wiki structure fulfills the goal through collaboration. A message board, comment thread, or whatever you want to call it, does NOT have this purpose. The purpose is to discuss something. More fundamentally, the purpose of *my comment* is to represent *my opinion*, which is not subject to the bounds of an encyclopedia article. Therefore, my comment as posted fulfills this goal in a way directly proportional to my ability to express myself. Assuming I can express myself sufficiently to reflect my thoughts, then my comment *as posted* fulfills the purpose of a comment 100%. Editing of any fashion only harms the effectiveness of my comment--it makes it fulfill less of its purpose, whereas editing an encyclopedia article improves the effectiveness thereof. That is why comments DO "belong" to their authors, not in a sense of ownership per se (they belong to the community for reading and to inform the community about the poster's opinion) but because any changes to those comments by others defeat the philosophical purpose of a comment/threaded discussion. Thanks for reading. No invective responses, please. Take care, -- Christian Sieber Brigham Young University Ivan Krstic wrote: > Timwi wrote: > >>Secondarily I want to be able to fix spellings, in the faint hope that >>it will help some people learn better spelling. Again, the only people >>who would object to this would be people who can't spell and are >>therefore unsuitable for writing an encyclopedia anyway. > > > Totally broken reasoning. > > >>I've heard many reasons for maliciously changing an article. Yet >>articles on Wikipedia tend to get better. Interesting, innit? > > > That's an irrelevant non-answer to Ryan's question. > > >>No, the purpose of this was to test your reaction. You fell right for it >>exactly the way I expected: you picked up only on the emotional side of >>the paragraph (taking it as an insult) > > > Do you understand how conceited this makes you sound? > > >>And this highlights what I mean: you (and many other people) only object >>to being able to edit comments because it somehow "feels" wrong. You >>can't really say why it *is* wrong. > > > You need to relax, and start spending less time writing > borderline-offensive e-mail to people who are trying to reason > constructively, and more time thinking about what they're saying. I can > tell you exactly why it *is* wrong: comments are not Wikipedia articles, > even if you seem to be constantly confounding the two. > > A Wikipedia article isn't signed by a single person's name. It doesn't > represent the views of an individual, but tries to become an objective > reflection of its topic.
As Brion puts it, a wiki is a place where you > let wackos edit your site, and with luck, the good wackos outnumber the > bad. The iterative editing process is a good way to ensure eventual NPOV > conformance. > > Comments are absolutely different. They are written and signed by a > single person, represent only that person's views, have no requirement > of adherence to a NPOV, and that means that essentially none of the > reasons that Wikipedia articles are editable by everyone apply to them. > If allowing comment cross-editing was in any way beneficial, the popular > web-based discussion forums with tens of millions of posts would have, > without a doubt, adopted such a model quite a while ago. There's a > reason they haven't done it. > > I am not interested in continuing this discussion further, so please > refrain from writing a snide reply that questions my intelligence so as > to "test my reaction". > From mail at tgries.de Wed Nov 2 16:01:48 2005 From: mail at tgries.de (Thomas Gries) Date: Wed, 02 Nov 2005 17:01:48 +0100 Subject: [Wikitech-l] Re: How to change time? In-Reply-To: References: Message-ID: <4368E2EC.1040005@tgries.de> Set $wgLocalTZoffset = 1; for Netherlands time. See http://cvs.defau.lt/cvsweb.cgi/phase3/includes/DefaultSettings.php?annotate=1.387 for explanation of this setting. Remark: Users must have their personal settings (Timezone offset) empty ! Tom Servien Ilaino schrieb: >Was already like that but doesn't work. In date and time it displays >my time correct but not in my signature, then it just displays the >normal CET. > >Serv > > >2005/11/2, Timwi : > > >>Servien Ilaino wrote: >> >> >>>Does anyone know how you can change the time settings (NL wiki)? I've >>>tried to go to Preferences and then to Daten and time settings, I >>>entered "add from browser"... it doesn't work, I entered it by hand +2 >>>hours... doesn't work... I entered +8 hours, still doesn't change... >>>c'est très irritant! Please hellup mii!! >>> >>> >>Type in "02:00". >> >>_______________________________________________ >>Wikitech-l mailing list >>Wikitech-l at wikimedia.org >>http://mail.wikipedia.org/mailman/listinfo/wikitech-l >> >> >> >_______________________________________________ >Wikitech-l mailing list >Wikitech-l at wikimedia.org >http://mail.wikipedia.org/mailman/listinfo/wikitech-l > > > > From node.ue at gmail.com Wed Nov 2 14:24:18 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 2 Nov 2005 07:24:18 -0700 Subject: [Wikitech-l] How actually usable are audio captchas? In-Reply-To: References: Message-ID: <849f98ed0511020624r6c64d0cfr@mail.gmail.com> Well, I for one tend to avoid media like the plague (although quite obviously I don't do the same for cliches). Video, audio, and PDFs are things I don't enjoy on a website. Now, that aside, there's another major problem: accent. Who's to say I could understand what you were saying, if you said "star"? There is no unambiguous way to represent words through audio. Mark On 02/11/05, David Gerard wrote: > With various discussions around Wikipedia of captchas to impede > vandalbots, I was wondering how usable audio captchas actually are for > those who can't see images. Most sites with visual captchas offer an > audio option ... but what's the actual usability of these? Are they a > minor impediment, as a visual captcha is, or a major usability > problem? Is there data on this? > > > - d.
> _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- If you would like a gmail invite, please send me an e-mail. Si ud. quiere que le envíe una invitación para ingresar gmail, envíeme un mensaje. Si vous voulez que je vous envoie une invitation à joindre gmail, envoyez-moi s.v.p un message. Se vce. gostaria que lhe envie um convite para juntar gmail, favor de envie-me uma mensagem. Se vuleti chi vi manu 'n invitu a uniri gmail, mandatimi n messaggiu. From erik_moeller at gmx.de Wed Nov 2 16:33:53 2005 From: erik_moeller at gmx.de (Erik Moeller) Date: Wed, 02 Nov 2005 17:33:53 +0100 Subject: [Wikitech-l] prbls with external editor In-Reply-To: <874q6vgutc.fsf_-_@mat.ucm.es> References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> <874q6vgutc.fsf_-_@mat.ucm.es> Message-ID: <4368EA71.2030803@gmx.de> Uwe Brauer: > Another important point. Recently I wrote a comment in another > discussion page (that one quite huge), again using an external > editor. By accident it seems that I changed the encoding of the page, so > some non-ASCII symbols changed. This happens if you use an editor which is not capable of editing UTF-8. You can set Transcode UTF-8=true in ee.ini under [Settings], but your editor will still mangle characters that are not part of the iso8859-1 character-set. The best thing to do is to use a UTF-8 editor (or, if you already do, set the encoding to UTF-8 when editing wiki pages). I personally use Kate for KDE, which I have found to be an excellent editor for both text and code. Best, Erik From nospam-abuse at bloodgate.com Wed Nov 2 16:31:39 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Wed, 2 Nov 2005 17:31:39 +0100 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] In-Reply-To: References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> Message-ID: <200511021731.48383@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Tuesday 01 November 2005 20:42, Timwi wrote: > > I do think these are two seperate points: > > > > * how to improve the discussion pages on a wiki > > * whether each author own his/her comment or not. > > But the point is that the answer to the second influences whether the > solution proposed for the first is seen as an "improvement". I feel > that if the ability to edit other people's comments is taken away from > me, I can't label it an "improvement". I have to disagree (because I am not opposed to locking comments). > > Discussions, OTOH, also involve personal opinions. Danger lies ahead > > when the opionion can be changed, but is still labeled (or signed, if > > you wish) with the original authors name. > > We already have this "danger", and we've had it since the beginning of > Phase II, and it has not turned out to be a great problem, so this is > not an argument. I consider it a (current) problem, maybe not at wikipedia, but on other wikis. > > Just imagine that this discussion we have is on a wiki, this is the > > latest edition (you would need to check the history, aka mailing list > > archives to see the full revisions) and it contained: > > > > On Tuesday 01 November 2005 17:36, Timwi wrote: > >>>Any model, if over applied, is harmful. > >> > >>Agree. > >>I am strongly in favour of LiquidThreads. > > > > See the danger? > > A fallacious argument by false dilemma, or by lack of imagination, or > whatever you wanna call it.
You almost provided the answer to this one Yes I did. That was intentional. I never said that comment locking is the only solution, but also that comment labeling (like "last edited by") is a method. > yourself: > > (for the record, the above quote of three lines was > > written/shortened by me, not Timwi). > > And that is what it should say. > > COMMENT #328645 by [[User:Timwi]] > > Agree. I am strongly in favour of LiquidThreads. > (This comment was last edited by [[User:Tels]] <link to diff>.) > > If <datetime> is a minute ago, I better check the diff. If it was an > hour ago, I can probably assume that your edit was harmless. > > Therefore, again, your "danger" is not an argument against the ability > to edit comments. I think you misunderstood me. I said that we should improve the discussion page:
#1 with threads etc
#2 doing something against falsely labeled comments
#2a by locking them
#2b OR by labeling them with the last change
#2c whatever else we can come up with
You say that you cannot accept improvements on #1 when 2a is implemented and therefore would rather do nothing. I disagree, for me either 2a OR 2b would be good. But my point is, that no matter what we choose, we should do SOMETHING. E.g. either 2a, 2b or something else should be done, preferably in conjunction with #1 (doesn't make much sense, technically, anyway). > > If we can improve the discussion page itself, *and* prevent > > misrepresentation at the same time, well, that would be great :) > It's really easy. Then why wasn't it done already? :-P (Yes, that was a joke, laugh, it is funny.) Best wishes, Tels - -- Signed on Wed Nov 2 17:24:39 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "Any sufficiently rigged demo is indistinguishable from an advanced technology." -- Don Quixote, slashdot guy -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ2jp8ncLPEOTuEwVAQF+Pgf8Dpzdkk/ZU3mSTOpSljpewQPVFHcuxr36 giVeDMDHNZUGmgNUXyfPmuNV19VBA0peYpTk+qAhWcWWMpHgFu+IYsQjGStSYOj/ Hj86iQN+2GdC1uhhAT24z/sRE0sLI2+uZHJf8WRnqG3JaM5qeDfhaBuI/4Bwlsm5 YPB5OzCewEr2DRNRdPHBA9Sd1aQuVqslIWOO/EBD0npIAMxnkrjHJNkXB9eEbG+p yK4JtXuQ3ir24S1KdcGBd+OKQh/d6d073sUdHlxG0L1+ARJxYGm0TVeYjgweiDdl 1hz7fkTtjm0sAFt/Vb5Q1AjWA29JU4bd2uoyE0aVJRetqLEmJEjVKA== =6gCj -----END PGP SIGNATURE----- From dgerard at gmail.com Wed Nov 2 16:44:03 2005 From: dgerard at gmail.com (David Gerard) Date: Wed, 2 Nov 2005 16:44:03 +0000 Subject: [Wikitech-l] How actually usable are audio captchas? Message-ID: Mark Williamson wrote: >Now, that aside, there's another major problem: accent. Who's to say I >could understand what you were saying, if you said "star"? >There is no unambiguous way to represent words through audio. That's a theoretically possible problem rather than actual data (which is why that's what I asked for) or even anecdote. I haven't found data yet, but here's a page of theory with anecdote: http://www.standards-schmandards.com/index.php?2005/01/01/11-captcha Note they get around your problem by using numbers. This apparently worked on three casual test subjects. Though that, of course, is anecdote, not data. The W3 paper just provides possible approaches with no words on effectiveness: http://www.w3.org/TR/turingtest/ Evidently the audio option was frequently unusable a couple of years ago: http://news.com.com/2100-1032-1022814.html - I would *presume* there's been improvement since then.
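For concreteness, a sketch of the digits-only approach described in the standards-schmandards piece above: pick random digits, keep only a hash of the answer server-side, and concatenate per-digit recordings into one clip. The digit files, paths and the sox(1) call are all assumptions here, not taken from any existing captcha package:

    <?php
    // Hypothetical numeric audio captcha: speak five random digits by
    // concatenating pre-recorded clips 0.wav .. 9.wav with sox(1).
    session_start();
    $clipDir = '/var/captcha/digits';

    $digits = '';
    for ( $i = 0; $i < 5; $i++ ) {
        $digits .= mt_rand( 0, 9 );
    }
    $_SESSION['captcha_hash'] = md5( $digits ); // store a hash, not the answer

    $parts = array();
    foreach ( str_split( $digits ) as $d ) {
        $parts[] = escapeshellarg( "$clipDir/$d.wav" );
    }
    $out = tempnam( sys_get_temp_dir(), 'cap' ) . '.wav';
    exec( 'sox ' . implode( ' ', $parts ) . ' ' . escapeshellarg( $out ) );

    header( 'Content-Type: audio/x-wav' );
    readfile( $out );
    unlink( $out );

Per-digit clips sidestep the accent and vocabulary problem Mark raises, since listeners only need to recognise the ten digits; distortion or background noise would still be needed to resist automatic speech recognition.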
Does anyone have or know of *actual data* (rather than hypothesis or anecdote) on whether non-visual captchas are any good as yet? - d. From Jan-Paul at jpkoester.de Wed Nov 2 16:45:07 2005 From: Jan-Paul at jpkoester.de (=?UTF-8?B?SmFuLVBhdWwgS8O2c3Rlcg==?=) Date: Wed, 02 Nov 2005 17:45:07 +0100 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: <4368ED13.9010205@jpkoester.de> Hello everyone, I think the core question in this discussion is not "Can people change other people's comments so that they contradict the original poster's point of view?" but the question is "Do people actually change other people's comments so that they misstate the original poster's point of view, and are they successful or are those changes quickly reverted wiki-style?" The answer to the first question is clearly "yes" which sufficiently explains why many people are afraid of it happening (one side of the posters here). The answer to the second question cannot be given very easily but from my experience the wiki system generally works also in discussions. So can the wiki system be abused? Yes! Is it actually being abused? I don't see it. The fear of abuse is the prime argument that critics of wikis in general bring forward and the general answer to it is to look at what actually happens and be amazed that it does in fact work by just trusting people and assuming good faith. I feel the same arguments apply to the discussion pages. So what's my opinion? I am in favor of keeping the discussion pages wiki style, refactorable, editable by all but as Timwi proposed adding a "This comment was last edited on <datetime> by <user>" which will allow other users to easily spot any abuse while keeping all wiki-advantages. Just my 2c, JP From dgerard at gmail.com Wed Nov 2 16:56:12 2005 From: dgerard at gmail.com (David Gerard) Date: Wed, 2 Nov 2005 16:56:12 +0000 Subject: [Wikitech-l] Re: LiquidThreads Message-ID: Jan-Paul Köster wrote: >I think the core question in this discussion is not "Can people change >other people's comments so that they contradict the original poster's >point of view?" but the question is "Do people actually change other >people's comments so that they misstate the original poster's point of >view, and are they successful or are those changes quickly reverted >wiki-style?" >The answer to the first question is clearly "yes" which sufficiently >explains why many people are afraid of it happening (one side of the >posters here). The answer to the second question cannot be given very >easily but from my experience the wiki system generally works also in >discussions. So can the wiki system be abused? Yes! Is it actually being >abused? I don't see it. It's rare enough on en: that people are outraged if refactoring is abused. Changing others' comments tends to just not happen - not because of technical constraints, but because it's considered rude to change others' signed comments. So yeah, assuming good faith works here. I'm really not convinced this whole forum idea isn't a solution in search of a problem. Every web forum I've ever seen is a bad recreation of Usenet. And for Usenet, we already have the mailing lists through gmane. - d.
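For what it's worth, the "last edited by <user>" disclaimer proposed repeatedly in this thread could be rendered along these lines, given a per-comment revision history. The Comment object and its methods below are invented purely for illustration — no such MediaWiki API exists:

    <?php
    // Render a disclaimer if anyone other than the original author has
    // touched the comment, with a diff from the author's own last
    // revision so readers can see exactly what others changed.
    function commentDisclaimer( $comment ) {
        $last = $comment->latestRevision();
        if ( $last->userName === $comment->originalAuthor ) {
            return ''; // untouched by others: no disclaimer needed
        }
        $base = $comment->latestRevisionBy( $comment->originalAuthor );
        return '<small>This comment was last edited by [[User:' .
            htmlspecialchars( $last->userName ) . ']] ' .
            '(<a href="?diff=' . $last->id . '&oldid=' . $base->id .
            '">changes</a>).</small>';
    }

Note that this presumes the software can "see" individual comments and their authors, which is precisely the storage question the next message raises.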
From rowan.collins at gmail.com Wed Nov 2 17:31:12 2005 From: rowan.collins at gmail.com (Rowan Collins) Date: Wed, 2 Nov 2005 17:31:12 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <4368ED13.9010205@jpkoester.de> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> <4368ED13.9010205@jpkoester.de> Message-ID: <9f02ca4c0511020931i69adc4ddj@mail.gmail.com> On 02/11/05, Jan-Paul K?ster wrote: > So what's my opinion? I am in favor of keeping the discussion pages wiki > style, refactorable, editable by all but as Timwi proposed adding a > "This comment was last edited on by which will > allow other users to easily spot any abuse while keeping all > wiki-advantages. Well, the problem is, you can't have a page which is both "completely refactorable" and consists of comments which can be labelled automatically by the software. Either, as currently, pages are essentially atomic to the software (it can't "see" individual comments, just lots of versions of the whole page), or the concept of "a comment" is given significance to the software, and edits to it can be tracked, locked, or whatever. The challenge, surely, is to come up with a compromise between these two positions - just making individual comments into wiki pages is not enough, because you can't, in general, refactor one comment. -- Rowan Collins BSc [IMSoP] From oub at mat.ucm.es Wed Nov 2 17:51:59 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Wed, 02 Nov 2005 18:51:59 +0100 Subject: [Wikitech-l] Re: prbls with external editor References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> <874q6vgutc.fsf_-_@mat.ucm.es> <4368EA71.2030803@gmx.de> Message-ID: <874q6vc4zk.fsf@mat.ucm.es> >>>>> "Erik" == Erik Moeller >>>>> writes: Erik> Uwe Brauer: >> Another important point. Recently I wrote a comment in another >> discussion page (that one quite huge), again using an external >> editor. By accident it seems that I changed the coding of the >> page so, some non ASCII symbols changed. Erik> This happens if you use an editor which is not capable of Erik> editing UTF-8. You can set Erik> Transcode UTF-8=true I know, the problem is (X)emacs has no full UTF-8 support, however it supports iso8859-1. I think my error was not to restrict that set to the german symbols and thats why the error occurred. Erik> in ee.ini under [Settings], but your editor will still mangle Erik> characters that are not part of the iso8859-1 Erik> character-set. The best thing to do is to use a UTF-8 editor Erik> (or, if you already do, set the encoding to UTF-8 when Erik> editing wiki pages). I personally use Kate for KDE, which I Erik> have found to be an excellent editor for both text and code. Another option would be to use recode. I might try that. From evanm at google.com Wed Nov 2 18:05:16 2005 From: evanm at google.com (Evan Martin) Date: Wed, 2 Nov 2005 10:05:16 -0800 Subject: [Wikitech-l] How actually usable are audio captchas? In-Reply-To: References: Message-ID: <9f43d19d0511021005j73c38aeey7d97cdc6852202c5@mail.google.com> On 11/2/05, David Gerard wrote: > Does anyone have or know of *actual data* (rather than hypothesis or > anecdote) on whether non-visual captchas are any good as yet? Anecdote, again, but it is a success story: when we launched CAPTCHAs on LiveJournal which included audio support, we got a surprising response of thank-you letters from blind users (maybe two or three, which is pretty significant in comparison to the feedback about other features...). 
One way to look at it is: with a visual CAPTCHA, a blind user is completely out of luck. So even with awful usability, it's better than nothing. From gerard.meijssen at gmail.com Wed Nov 2 18:57:01 2005 From: gerard.meijssen at gmail.com (Gerard Meijssen) Date: Wed, 02 Nov 2005 19:57:01 +0100 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <4368ED13.9010205@jpkoester.de> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> <4368ED13.9010205@jpkoester.de> Message-ID: <43690BFD.5090001@gmail.com> Jan-Paul K?ster wrote: > Hello everyone, > > I think the core question in this discussion is not "Can people change > other peoples comments so that they contradict the original poster's > point of view?" but the question is "Do people actually change other > peoples comments so that they misstate the original poster's point of > view, and are they successful or are those changes quickly reverted > wiki-style?" > > The answer to the first question is clearly "yes" which sufficiently > explains why many people are afraid of it happening (one side of the > posters here). The answer to the second question cannot be given very > easily but from my experience the wiki system generally works also in > discussions. So can the wiki system be abused? Yes! Is it actually > being abused? I don't see it. I am sorry to say that I have seen misrepresentations occur on several occasions. Problematic is that when you start adding comments like I do now in this post, you are likely to have follow up reactions and as a consequence you may not appreciate what was said later in the Wiki format. Certainly when a comment is over several paragraphs this is often seen. The worst case of splitting is when paragraphs are split up resulting in something that that can really be badly read. Asking people to refrain from doing this resulted in the statement that it is allowed by the GFDL and that it is old style Internet working method (as if this is an excuse). > > The fear of abuse is the prime argument that critics of wikis in > general bring forward and the general answer to it is to look at what > actually happens and be amazed that it does in fact work by just > trusting people and assuming good faith. I feel the same arguments > apply to the discussion pages. > > So what's my opinion? I am in favor of keeping the discussion pages > wiki style, refactorable, editable by all but as Timwi proposed adding > a "This comment was last edited on by which > will allow other users to easily spot any abuse while keeping all > wiki-advantages. > I do not have opinion as to what to do differently. With technical solutions you do not take away the core problem. Behaviour acceptable to some is not acceptable to others. I think we do a reasonable job at keeping things reasonable. By raising the threshold for nuisance you only get worse nuisance.. I think we are fortunate that things are as good as they are. So to express my opinion.. let's be conservative :) Thanks, GerardM From timwi at gmx.net Wed Nov 2 19:22:12 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:22:12 +0000 Subject: [Wikitech-l] Re: LiquidThreads [WAS: Re: the discussion page: summary and better display of the treats, mbox format?] 
In-Reply-To: <200511021731.48383@bloodgate.com> References: <877jbu2avo.fsf@mat.ucm.es> <200511011826.56652@bloodgate.com> <200511021731.48383@bloodgate.com> Message-ID: Note this: >>>* how to improve the discussion pages on a wiki >>>* whether each author own his/her comment or not. and this: > #1 with threads etc > #2 doing something against falsely labeled comments is not the same thing. That's where the misunderstanding came from. > You say that you cannot accept improvements on #1 when 2a is implemented > and therefore would rather do nothing. No, I said that I wouldn't consider #2a ("locking comments") an improvement. I would consider #1 (threads) an improvement. > for me either 2a OR 2b would be good. There doesn't seem to be any disagreement between us then. It was only a misunderstanding. Timwi From timwi at gmx.net Wed Nov 2 19:34:57 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:34:57 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <4368C1D7.3040904@fas.harvard.edu> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: Ivan Krstic wrote: > >>No, the purpose of this was to test your reaction. You fell right for it >>exactly the way I expected: you picked up only on the emotional side of >>the paragraph (taking it as an insult) > > Do you understand how conceited this makes you sound? No, apparently not. But your reply shows the same thing about you that I was trying to demonstrate about Ryan. You, too, have not replied to what I said, and instead digressed into a treatise on the emotional/social effects of my tone. > You need to relax, and start spending less time writing > borderline-offensive e-mail to people who are trying to reason > constructively, I'm disappointed that you (and others) find that I am not trying to reason constructively, because I am. I would rather consider your accusation of my lack of constructiveness "borderline offensive", but as I said before, this kind of bickering is not going to get us anywhere. > and more time thinking about what they're saying. I can tell you > exactly why it *is* wrong: comments are not Wikipedia articles, even > if you seem to be constantly confounding the two. Everything you say in the rest of your posting was already said elsewhere in the thread. You seem to think that I don't understand it, but in fact I do, and you don't seem to understand my refutation of it. I am not contradicting any of the things you say (comments aren't like articles, comments are signed by a particular person, etc.etc.). All I'm saying is that this is a fallacy: > Comments [...] represent only that person's views, have no requirement > of adherence to a NPOV, and that means that essentially none of the > reasons that Wikipedia articles are editable by everyone apply to them. It is the "and that means" that is wrong. It DOESN'T mean that. Or, more precisely, it doesn't mean that comments shouldn't be editable. This is a fallacy. It is also a fallacy to state that we should do it like all other web-forums. It is not true that they would have changed if there was reason to do so, in the same way that encyclopedias haven't gone wiki long before Wikipedia. 
Timwi From timwi at gmx.net Wed Nov 2 19:43:31 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:43:31 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <9f02ca4c0511020931i69adc4ddj@mail.gmail.com> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> <4368ED13.9010205@jpkoester.de> <9f02ca4c0511020931i69adc4ddj@mail.gmail.com> Message-ID: Rowan Collins wrote: > On 02/11/05, Jan-Paul K?ster wrote: > >>So what's my opinion? I am in favor of keeping the discussion pages wiki >>style, refactorable, editable by all but as Timwi proposed adding a >>"This comment was last edited on by which will >>allow other users to easily spot any abuse while keeping all >>wiki-advantages. > > Well, the problem is, you can't have a page which is both "completely > refactorable" and consists of comments which can be labelled > automatically by the software. I've thought about this, and my idea here is to keep the Talk namespace the way it is (as a wiki-editable article-like page), but re-define its purpose as containing a summary (refactoring) of past and on-going discussions (which, of course, would be taking place in the threads). From timwi at gmx.net Wed Nov 2 19:46:25 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:46:25 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: > The losses are major: The community could prevent speech that they deem > offensive, which is an violation of free speech in general, (especially > since people often consider differing viewpoints "offensive" and would > edit them out) and people would not have their comments represented as > written, which makes the discussion difficult to follow at best and > pointless at worst. Jan-Paul K?ster already pointed out to you that you have not shown that this would actually happen enough to be a significant problem. Moreover, I would like to add that you can go to any Talk page right now and verify (pardon me, falsify) your theory. Timwi From timwi at gmx.net Wed Nov 2 19:56:14 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:56:14 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: Message-ID: > I'm really not convinced this whole forum idea isn't a solution in > search of a problem. Every web forum I've ever seen is a bad > recreation of Usenet. And for Usenet, we already have the mailing > lists through gmane. My opinion exactly; except that I believe that making comments editable is the major difference that makes it not be like Usenet. I think most people acknowledge that the lack of technical structure to the discussion threads /is/ a problem. Timwi From timwi at gmx.net Wed Nov 2 19:52:43 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 19:52:43 +0000 Subject: [Wikitech-l] Re: How actually usable are audio captchas? In-Reply-To: <9f43d19d0511021005j73c38aeey7d97cdc6852202c5@mail.google.com> References: <9f43d19d0511021005j73c38aeey7d97cdc6852202c5@mail.google.com> Message-ID: Evan Martin wrote: > On 11/2/05, David Gerard wrote: > >>Does anyone have or know of *actual data* (rather than hypothesis or >>anecdote) on whether non-visual captchas are any good as yet? 
> > Anecdote, again, but it is a success story: when we launched CAPTCHAs > on LiveJournal which included audio support, we got a surprising > response of thank-you letters from blind users (maybe two or three, > which is pretty significant in comparison to the feedback about other > features...). I was going to mention LiveJournal. :-) I have personally seen one of those "thank you" support requests from a blind user. But I was going to add: David, you can try it yourself. Just go to http://www.livejournal.com/create.bml and take the audio test. Personally I actually found it easier to pass than the visual one - the letters in the visual captchas are sometimes warped enough to make them ambiguous even for humans (e.g. g vs. q). Timwi From gmaxwell at gmail.com Wed Nov 2 20:19:34 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Wed, 2 Nov 2005 15:19:34 -0500 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: <43690BFD.5090001@gmail.com> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> <4368ED13.9010205@jpkoester.de> <43690BFD.5090001@gmail.com> Message-ID: On 11/2/05, Gerard Meijssen wrote: > I am sorry to say that I have seen misrepresentations occur on several > occasions. Problematic is that when you start adding comments like I do > now in this post, you are likely to have follow up reactions and as a > consequence you may not appreciate what was said later in the Wiki > format. Certainly when a comment is over several paragraphs this is > often seen. The worst case of splitting is when paragraphs are split up > resulting in something that that can really be badly read. Asking people > to refrain from doing this resulted in the statement that it is allowed > by the GFDL and that it is old style Internet working method (as if this > is an excuse). [snip] You do realize that you are complaining about inline responses while responding inline yourself, no? :) I like it when people respond inline in the wiki and break up my long posts. Yes, sometimes it gets out of hand. However, since I assume good faith it doesn't worry me. Might I suggest an additional feature for mediawiki? How about [{here}] which becomes a difflink to the edit where that tag was inserted? People could add that to their signatures and thus every post of their would be equipped to a handy difflink to an original version. From mail at tgries.de Wed Nov 2 20:27:17 2005 From: mail at tgries.de (Thomas Gries) Date: Wed, 02 Nov 2005 21:27:17 +0100 Subject: [Wikitech-l] Question re. MediaWiki_FAQ Message-ID: <43692125.10607@tgries.de> The entry on http://meta.wikimedia.org/wiki/MediaWiki_FAQ#How_do_I_delete_a_user_from_my_list_of_users.3F says "MediaWiki does not support the deletion of user accounts. To prevent an account from being used, either scramble the password or set up an indefinite block on the account. Do not remove users from the user table in the mySQL database; this causes problems with other parts of the wiki due to the relational structure of the database." My question: is the last sentence "Do no remove ..." still valid and important ? From laner at navo.navy.mil Wed Nov 2 20:42:44 2005 From: laner at navo.navy.mil (Ryan Lane) Date: Wed, 2 Nov 2005 20:42:44 +0000 (UTC) Subject: [Wikitech-l] Re: LiquidThreads References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: Timwi writes: > It is also a fallacy to state that we should do it like all other > web-forums. 
It is not true that they would have changed if there was > reason to do so, in the same way that encyclopedias haven't gone wiki > long before Wikipedia. > > Timwi > You know... I don't even understand why we are arguing over such a mundane thing. There isn't any reason why we can't have an option in the code allowing admins to choose whether a wiki allows editing of comments or not. Shouldn't it be the decision of the users, or community leaders, whether they would want comments edited or not? Put it up to a vote if this piece of software is ever written. Anyway, not everyone using this software is wikipedia. I know the software is aimed at wikipedia, but there are many pieces of the software that were added by those who use the software internally. I use mediawiki internally, and I (and my users) would prefer if their comments were not editable. Disallowing the feature just because it goes against your views is holding back functionality from those who could truly use it. Ryan Lane From datrio at gmail.com Wed Nov 2 20:57:14 2005 From: datrio at gmail.com (Dariusz Siedlecki) Date: Wed, 2 Nov 2005 21:57:14 +0100 Subject: [Wikitech-l] Requests for bot status Message-ID: <283332d20511021257pdaadc2an@mail.gmail.com> Just wanted to notify the Wikimedia community, that [[Requests for bot status]] on Meta was created a few minutes ago. Of course, [[Requests for permissions]] is still active - please put all the requests for sysop/bureaucrat status on that page. Nothing has changed with the current bot policy besides that. Please update the local request pages on your Wikis to reflect the page change on Meta. -- Pozdrawiam, Dariusz "Datrio" Siedlecki From timwi at gmx.net Wed Nov 2 21:54:29 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 21:54:29 +0000 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> Message-ID: > Shouldn't it be the decision of the users, or community leaders, whether they > would want comments edited or not? Yes. And at least some of those users are reading this mailing list :) Timwi From timwi at gmx.net Wed Nov 2 21:57:53 2005 From: timwi at gmx.net (Timwi) Date: Wed, 02 Nov 2005 21:57:53 +0000 Subject: [Wikitech-l] Re: Question re. MediaWiki_FAQ In-Reply-To: <43692125.10607@tgries.de> References: <43692125.10607@tgries.de> Message-ID: Thomas Gries wrote: > The entry on > http://meta.wikimedia.org/wiki/MediaWiki_FAQ#How_do_I_delete_a_user_from_my_list_of_users.3F > > says > "MediaWiki does not support the deletion of user accounts. To prevent an > account from being used, either scramble the password or set up an > indefinite block on the account. > Do not remove users from the user table in the mySQL database; this > causes problems with other parts of the wiki due to the relational > structure of the database." > > My question: > is the last sentence "Do no remove ..." still valid and important ? Yes. Loads of database tables contain references to users, including (but not limited to) article revisions (i.e. edits). If you feel *reeeaaally* confident, you can track down all of those references and remove them (or change them to point to another user), and then delete the user row. But seriously, it would be easier to follow the suggestions you quoted from the FAQ. Don't do it the complicated way _just_ because you think it will make your database "cleaner". 
:-) Timwi From avenier at venier.net Wed Nov 2 22:47:12 2005 From: avenier at venier.net (Andrew Venier) Date: Wed, 02 Nov 2005 16:47:12 -0600 Subject: [Wikitech-l] How actually usable are audio captchas? In-Reply-To: References: Message-ID: <436941F0.6070003@venier.net> David Gerard wrote: >Does anyone have or know of *actual data* (rather than hypothesis or >anecdote) on whether non-visual captchas are any good as yet? > > To address one particular type: a conference paper reports experiments that suggest using speech synthesis degraded by noise is not an effective basis for audio CAPTCHA: "...although there seems to be a gap in the ability of understanding synthesized speech with background noise between humans and computers, our results discourage using this gap to build an audio-based CAPTCHA" Only the abstract is available online for free. The full paper would require an online purchase or a library visit. Tsz-Yan Chan. "Using a Text-to-Speech Synthesizer to Generate a Reverse Turing Test," /ictai/, p. 226, 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'03), 2003. **http://doi.ieeecomputersociety.org/10.1109/TAI.2003.1250195 Though the paper is from 2003, one would guess that particular CAPTCHA schemes get less, not more, effective over time as automated recognition technologies improve. From 2.718281828 at gmail.com Wed Nov 2 23:18:26 2005 From: 2.718281828 at gmail.com (SJ) Date: Wed, 2 Nov 2005 18:18:26 -0500 Subject: [Wikitech-l] Re: [Foundation-l] Requests for bot status In-Reply-To: <283332d20511021257pdaadc2an@mail.gmail.com> References: <283332d20511021257pdaadc2an@mail.gmail.com> Message-ID: <742dfd060511021518k2b3f83f0ncda932473cf5c2d6@mail.gmail.com> Good move, Dariusz. Bots are getting awfully popular... On 11/2/05, Dariusz Siedlecki wrote: > Just wanted to notify the Wikimedia community, that [[Requests for bot > status]] on Meta was created a few minutes ago. > > Of course, [[Requests for permissions]] is still active - please put > all the requests for sysop/bureaucrat status on that page. From mindspillage at gmail.com Wed Nov 2 23:20:26 2005 From: mindspillage at gmail.com (Kat Walsh) Date: Wed, 2 Nov 2005 18:20:26 -0500 Subject: [Wikitech-l] Re: LiquidThreads In-Reply-To: References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <4368C1D7.3040904@fas.harvard.edu> <4368ED13.9010205@jpkoester.de> <43690BFD.5090001@gmail.com> Message-ID: <8e253f560511021520i3e5480fs72de360a79ac2c5b@mail.gmail.com> On 11/2/05, Gregory Maxwell wrote: > Might I suggest an additional feature for mediawiki? How about > [{here}] which becomes a difflink to the edit where that tag was > inserted? People could add that to their signatures and thus every > post of their would be equipped to a handy difflink to an original > version. I wouldn't mind seeing the timestamp on signatures be a diff link , actually: makes it easy to see what was originally posted and doesn't add more to the standard sig. -Kat [[en:User:Mindspillage]] -- "There was a point to this story, but it has temporarily escaped the chronicler's mind." --Douglas Adams From 2.718281828 at gmail.com Thu Nov 3 01:51:05 2005 From: 2.718281828 at gmail.com (SJ) Date: Wed, 2 Nov 2005 20:51:05 -0500 Subject: [Wikitech-l] Fwd:Shock site bot In-Reply-To: References: Message-ID: <742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com> Forwarded from Wikien-l. 
I wonder if it would be useful to have an index of external URLs that supports throttling of the # of times per day a given base URL can be added to WP sites... SJ PS - this is my favorite part -- "I'm a regular wikipedia user although i don't have an account here" I wonder how many people consider themselves 'regular uers' -- and how many of those actually read user and policy pages... ---------- Forwarded message ---------- From: Brett Gustafson Date: Nov 2, 2005 7:22 PM Subject: [WikiEN-l] Shock site bot To: wikien-l at wikipedia.org I recently recieved this message from a user: "I'm a regular wikipedia user although i don't have an account here. I think this site is great and it really helps me with my college work. But I recently heard of these people that were talking about wikipedia that they were all programming a hack for it. So after a little while I found it was a spider to hunt down all the pages links and change them to shocks site links or something along those lines. I didn't know who to tell so I just thought I'd tell an administrator as they might know who to tell or what to do. Just giving an advanced warning so you might be able to do something to protect this wonderful resourse. Apparently they permenantly change their ip address using some thing (a bit beyond me). Something like that. I just didn't know what to do. I hope I didn't embaress myself here. Thanks for your time." Brett _______________________________________________ WikiEN-l mailing list WikiEN-l at Wikipedia.org To unsubscribe from this mailing list, visit: http://mail.wikipedia.org/mailman/listinfo/wikien-l -- ++SJ From evan at wikitravel.org Thu Nov 3 02:41:55 2005 From: evan at wikitravel.org (Evan Prodromou) Date: Wed, 02 Nov 2005 21:41:55 -0500 Subject: [Wikitech-l] Re: Edit bailouts In-Reply-To: References: <1130865815.9004.22.camel@zhora.1481ruerachel.net> Message-ID: <1130985715.11729.66.camel@zhora.1481ruerachel.net> On Wed, 2005-02-11 at 11:24 +1100, Tim Starling wrote: > I suspect a lot of the hits on edit pages are due to readers following red > links, rather than people clicking the edit tab. That's a very good point. > About 16% of edit form requests result in a save attempt. That's the number I was looking for. I guess there's not an easy way to find out how many people are bailing out of edits that they really want to make... except maybe to ask them. Perhaps a pop-up poll for folks who navigate away from an edit page in some way besides saving? Kind of intrusive but it might have interesting results. ~Evan -- Evan Prodromou Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and reliable world-wide travel guide From node.ue at gmail.com Thu Nov 3 03:15:13 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 2 Nov 2005 20:15:13 -0700 Subject: [Wikitech-l] Fwd:Shock site bot In-Reply-To: <742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com> References: <742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com> Message-ID: <849f98ed0511021915g7eb09138g@mail.gmail.com> Using Wikipedia, to me at least, means reading, learning from. Editing, on the other hand, means making changes. You'd say the same for any other encyclopaedia -- I have used EB, but certainly never _edited_ it. Mark On 02/11/05, SJ <2.718281828 at gmail.com> wrote: > Forwarded from Wikien-l. > > I wonder if it would be useful to have an index of external URLs that > supports throttling of the # of times per day a given base URL can be > added to WP sites... 
> > SJ > > > PS - this is my favorite part -- "I'm a regular wikipedia user > although i don't have an account here" I wonder how many people > consider themselves 'regular uers' -- and how many of those actually > read user and policy pages... > > ---------- Forwarded message ---------- > From: Brett Gustafson > Date: Nov 2, 2005 7:22 PM > Subject: [WikiEN-l] Shock site bot > To: wikien-l at wikipedia.org > > > I recently recieved this message from a user: > "I'm a regular wikipedia user although i don't have an account here. I think > this site is great and it really helps me with my college work. But I > recently heard of these people that were talking about wikipedia that they > were all programming a hack for it. So after a little while I found it was a > spider to hunt down all the pages links and change them to shocks site links > or something along those lines. I didn't know who to tell so I just thought > I'd tell an administrator as they might know who to tell or what to do. Just > giving an advanced warning so you might be able to do something to protect > this wonderful resourse. Apparently they permenantly change their ip address > using some thing (a bit beyond me). Something like that. I just didn't know > what to do. I hope I didn't embaress myself here. Thanks for your time." > > Brett > _______________________________________________ > WikiEN-l mailing list > WikiEN-l at Wikipedia.org > To unsubscribe from this mailing list, visit: > http://mail.wikipedia.org/mailman/listinfo/wikien-l > > > -- > ++SJ > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- If you would like a gmail invite, please send me an e-mail. Si ud. quiere que le env?e una invitaci?n para juntar gmail, env?eme un mensaje. Si vous voulez que je vous envoie une invitation ? joindre gmail, envoyez-moi s.v.p un message. Se vce. gostaria que lhe envie um convite para juntar gmail, favor de envie-me uma mensagem. Se vuleti chi vi manu 'n invitu a uniri gmail, mandatimi n messaggiu. From node.ue at gmail.com Thu Nov 3 03:13:27 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 2 Nov 2005 20:13:27 -0700 Subject: [Wikitech-l] Re: Edit bailouts In-Reply-To: <1130985715.11729.66.camel@zhora.1481ruerachel.net> References: <1130865815.9004.22.camel@zhora.1481ruerachel.net> <1130985715.11729.66.camel@zhora.1481ruerachel.net> Message-ID: <849f98ed0511021913g2644d2b9i@mail.gmail.com> Don't think that would happen at Wikipedia. You could certainly try it at Wikitravel, though... perhaps it would be better-accepted if it only popped up for every 3rd bailout? (not per person, but total). Mark On 02/11/05, Evan Prodromou wrote: > On Wed, 2005-02-11 at 11:24 +1100, Tim Starling wrote: > > > I suspect a lot of the hits on edit pages are due to readers following red > > links, rather than people clicking the edit tab. > > That's a very good point. > > > About 16% of edit form requests result in a save attempt. > > That's the number I was looking for. > > I guess there's not an easy way to find out how many people are bailing > out of edits that they really want to make... except maybe to ask them. > Perhaps a pop-up poll for folks who navigate away from an edit page in > some way besides saving? Kind of intrusive but it might have interesting > results. 
> > ~Evan > > -- > Evan Prodromou > Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date > and reliable world-wide travel guide > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- If you would like a gmail invite, please send me an e-mail. Si ud. quiere que le env?e una invitaci?n para juntar gmail, env?eme un mensaje. Si vous voulez que je vous envoie une invitation ? joindre gmail, envoyez-moi s.v.p un message. Se vce. gostaria que lhe envie um convite para juntar gmail, favor de envie-me uma mensagem. Se vuleti chi vi manu 'n invitu a uniri gmail, mandatimi n messaggiu. From brion at pobox.com Thu Nov 3 03:40:09 2005 From: brion at pobox.com (Brion Vibber) Date: Wed, 02 Nov 2005 19:40:09 -0800 Subject: [Wikitech-l] MediaWiki 1.5.2, 1.4.12, 1.3.18 released Message-ID: <43698699.3000208@pobox.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 MediaWiki 1.5.2 is a bugfix maintenance release. A change in PHP 4.4.1 and PHP 5.1.0RC broke handling of extension and
<nowiki> sections, causing garbage data to be inserted in output and saved
edits. This version works around the change.

Several other glitches with MySQL 5.0 and PHP 5.0.5 were also fixed;
see the change log below for a complete list.


1.4.12 and 1.3.18 include the PHP 4.4.1 fix as well as the Internet
Explorer CSS JavaScript injection fixes from 1.5.1.


Release notes:
1.5.2 http://sourceforge.net/project/shownotes.php?release_id=368103
1.4.12 http://sourceforge.net/project/shownotes.php?release_id=368102
1.3.18 http://sourceforge.net/project/shownotes.php?release_id=368101

Download:
http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.5.2.tar.gz?download
http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.4.12.tar.gz?download
http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.3.18.tar.gz?download

MD5 checksums:
3aedde46937494bc21a5e493ea6a8ffd mediawiki-1.5.2.tar.gz
246aa2a830b63f5be48fc7ad499fc3ca mediawiki-1.4.12.tar.gz
7fb4100839ff5ab2ea9e2e86c0bd9aa0 mediawiki-1.3.18.tar.gz

SHA-1 checksums:
6fd9519d102e1954997774e50bd95d490f03778f mediawiki-1.5.2.tar.gz
9ff1222d168241f2dc06ef4e91420d8cb4ca348e mediawiki-1.4.12.tar.gz
75258724080311ac1938468c0a9bfd39128c2b1a mediawiki-1.3.18.tar.gz


Before asking for help, try the FAQ:
http://meta.wikimedia.org/wiki/MediaWiki_FAQ

Low-traffic release announcements mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-announce

Wiki admin help mailing list:
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l

Bug report system:
http://bugzilla.wikimedia.org/

Play "stump the developers" live on IRC:
#mediawiki on irc.freenode.net

- -- brion vibber (brion @ pobox.com / brion @ wikimedia.org)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (Darwin)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFDaYaZwRnhpk1wk44RAk1/AKDJNzPwKtRRqUwH0xKD/uVOt4ts3ACg0Ppv
v/IjZdihGSWwfp+wYa2py0A=
=e7o6
-----END PGP SIGNATURE-----


From brian0918 at gmail.com  Thu Nov  3 05:18:33 2005
From: brian0918 at gmail.com (Brian)
Date: Thu, 03 Nov 2005 00:18:33 -0500
Subject: [Wikitech-l] Fwd:Shock site bot
In-Reply-To: <742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com>
References: 
	<742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com>
Message-ID: <43699DA9.1040205@gmail.com>

I do know of one anon who has refused to start an account and yet has 
contributed extensively to evolution/species-related articles. He has 
turned his user page into a detailed watchlist (linked below; each of 
those dots is a watched page). He also appears to grasp policy and 
guidelines better than most Wikipedians. Check out an old copyedit of 
his:
http://en.wikipedia.org/w/index.php?title=Great_Lakes_Storm_of_1913&diff=10227257&oldid=10225881


http://en.wikipedia.org/wiki/User:68.81.231.127

Several people have pleaded on his talk page for him to start an account.


SJ wrote:

>Forwarded from Wikien-l.
>
>I wonder if it would be useful to have an index of external URLs that
>supports throttling of the # of times per day a given base URL can be
>added to WP sites...
>
>SJ
>
>
>PS - this is my favorite part --  "I'm a regular wikipedia user
>although i don't have an account here"  I wonder how many people
>consider themselves 'regular uers' -- and how many of those actually
>read user and policy pages...
>
>---------- Forwarded message ----------
>From: Brett Gustafson 
>Date: Nov 2, 2005 7:22 PM
>Subject: [WikiEN-l] Shock site bot
>To: wikien-l at wikipedia.org
>
>
>I recently recieved this message from a user:
>"I'm a regular wikipedia user although i don't have an account here. I think
>this site is great and it really helps me with my college work. But I
>recently heard of these people that were talking about wikipedia that they
>were all programming a hack for it. So after a little while I found it was a
>spider to hunt down all the pages links and change them to shocks site links
>or something along those lines. I didn't know who to tell so I just thought
>I'd tell an administrator as they might know who to tell or what to do. Just
>giving an advanced warning so you might be able to do something to protect
>this wonderful resourse. Apparently they permenantly change their ip address
>using some thing (a bit beyond me). Something like that. I just didn't know
>what to do. I hope I didn't embaress myself here. Thanks for your time."
>
>Brett
>_______________________________________________
>WikiEN-l mailing list
>WikiEN-l at Wikipedia.org
>To unsubscribe from this mailing list, visit:
>http://mail.wikipedia.org/mailman/listinfo/wikien-l
>
>
>--
>++SJ
>_______________________________________________
>Wikitech-l mailing list
>Wikitech-l at wikimedia.org
>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>
>  
>


From oub at mat.ucm.es  Thu Nov  3 10:27:07 2005
From: oub at mat.ucm.es (Uwe Brauer)
Date: Thu, 03 Nov 2005 11:27:07 +0100
Subject: [Wikitech-l] Probl with no UTF-8 in ee-helper (was: prbls with
	external editor)
References: <877jbu2avo.fsf@mat.ucm.es> 
	 <200511011826.56652@bloodgate.com>
	 <874q6vgutc.fsf_-_@mat.ucm.es>
	<4368EA71.2030803@gmx.de>
Message-ID: <87wtjqxc04.fsf_-_@mat.ucm.es>

>>>>> "Erik" == Erik Moeller
>>>>>  writes:


   Erik> This happens if you use an editor which is not capable of
   Erik> editing UTF-8.  You can set

   Erik> 	Transcode UTF-8=true

I have that line; however, when I "download" an article (that is, use the
external editor) _before_ I edit anything, I notice that, for example, the
German quotation marks like ,,this" are displayed as ?this?, so it
seems *my editor* is not the culprit but ee-helper.

Am I correct, or what is my mistake?

Uwe 





From smolensk at eunet.yu  Thu Nov  3 06:07:10 2005
From: smolensk at eunet.yu (Nikola Smolenski)
Date: Thu, 3 Nov 2005 07:07:10 +0100
Subject: [Wikitech-l] How actually usable are audio captchas?
In-Reply-To: 
References: 
Message-ID: <200511030707.10403.smolensk@eunet.yu>

On Wednesday 02 November 2005 13:38, David Gerard wrote:
> With various discussions around Wikipedia of captchas to impede
> vandalbots, I was wondering how usable audio captchas actually are for
> those who can't see images. Most sites with visual captchas offer an
> audio option ... but what's the actual usability of these? Are they a
> minor impediment, as a visual captcha is, or a major usability
> problem? Is there data on this?

How actually needed are audio captchas? Regardless of usability, you may have 
deaf users with Lynx, or non-impaired users with Lynx and no sound, so there 
has to be an e-mail fallback. If there has to be an e-mail fallback, it could be 
used by blind users as well, as they are a relatively small percentage.


From nospam-abuse at bloodgate.com  Thu Nov  3 17:45:16 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Thu, 3 Nov 2005 18:45:16 +0100
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: 
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>
	
	
Message-ID: <200511031845.17000@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Wednesday 02 November 2005 22:54, Timwi wrote:
> > Shouldn't it be the decision of the users, or community leaders,
> > whether they would want comments edited or not?
>
> Yes. And at least some of those users are reading this mailing list :)

But I still get the idea that we will not even have the option to lock 
comments because some don't want this at all. :-(

Best wishes,

Tels

- --
 Signed on Thu Nov  3 18:40:02 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "Eat, eat, eat, eat the delicious sandwich!" -- Elan the Bard (Order of
 the Stick)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ2pMrHcLPEOTuEwVAQEBbAf9Gd6k7hyJyc/B9oshT1PfCU3FljilU0c8
JQqny93CQUaryaOAtocTEevvzE2RTLNGNuLOoMoLkUYbNDj+vmbyXV3OWOmhAk7W
x6GDKGSGvfQF7wTaqRSgCFsHu/3y3oovE8XRu1TrPWoNY7H1NQjUre7tCgL206i2
fW91li/BOB4A0BCh2TuGtuAXaE+vIuxNNj3hHkHs9FSniqlFAvgEuvya9MrXi+SP
wkAvZDPlHXu9SztDYvmwJDbddrJJs86pNqtWIUZFIzr1ZM5GIqDEyM0sw4jmt1Fp
7vm4PXTHr5uEkj+cmBvrE/5YPs1J5Wj1A76wxTdrJDeqIFr6GePvCw==
=WO4/
-----END PGP SIGNATURE-----


From nospam-abuse at bloodgate.com  Thu Nov  3 17:44:49 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Thu, 3 Nov 2005 18:44:49 +0100
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: 
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>
	 
Message-ID: <200511031844.50693@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Wednesday 02 November 2005 20:46, Timwi wrote:
> > The losses are major: The community could prevent speech that they
> > deem offensive, which is an violation of free speech in general,
> > (especially since people often consider differing viewpoints
> > "offensive" and would edit them out) and people would not have their
> > comments represented as written, which makes the discussion difficult
> > to follow at best and pointless at worst.
>
> Jan-Paul K?ster already pointed out to you that you have not shown that
> this would actually happen enough to be a significant problem.

If this (misrepresenting what I wrote) happens only once to *my* comments,
it is a significant problem for me. Up to the point where I would stop
posting comments *at all* until the website makes sure that my comments
are not edited and still show my signature.

There is a reason I digitally sign all emails :-)

Best wishes,

Tels

- --
 Signed on Thu Nov  3 18:37:25 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "I intend to live forever, or die trying." -- Groucho Marx

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ2pMkXcLPEOTuEwVAQFV+Qf+PeDBYZf14zuZFU5VEcHZ2Djfwnw2PZEi
0F7B+cCfqne4QOSniftuvwlhyHKeNkDxFDiKJj0MgtKs7mDt2WpVSfDFpAKvuZYB
/BulKpsBDd3KdhIwKRLessamwxRH0YjMTexEcaigXRoyuCdFlmUVytD3hDLoSblz
dH+5vYwMSeE52q+HbbYAlsCBHVyX+bgurovxDiP+x3DRQbIiNtq09UhwSLwJLj6l
mANlm50apBt2bPXca5cQJwnx34hE6SyxqdE/7YEypcNRf+s16HVefYzFgePWJEor
vx/AwQRisC8OCSPNpq2eeKYLpXH+8HqRW2QX7SLtLeJyMdG6SHBb/g==
=aiKJ
-----END PGP SIGNATURE-----


From shizhao at gmail.com  Thu Nov  3 14:26:48 2005
From: shizhao at gmail.com (shizhao)
Date: Thu, 3 Nov 2005 14:26:48 +0000 (UTC)
Subject: [Wikitech-l] Template:Itn on zh.wp can't protected
Message-ID: 

Template:Itn on zh.wp, http://zh.wikipedia.org/wiki/Template:Itn

It is already protected, but anonymous users can still edit it; other protected 
pages do not have this problem. 

Also, blocked anonymous users can still edit Template:Itn. 

[[zh:user:shizhao]]



From hacker at timsprint.com  Tue Nov  1 10:05:36 2005
From: hacker at timsprint.com (John Ky)
Date: Tue, 01 Nov 2005 21:05:36 +1100
Subject: [Wikitech-l] One wiki, multiple websites
Message-ID: <43673DF0.1040803@timsprint.com>

Hello,

I want to set up a number of websites which share a single
wiki underneath.  The only difference between the websites
will be:

* The URI used to access the website
* The main page
* The navigation links

Is it possible, and how much work is involved in configuring
such a wiki?

Thanks

-John



From howells at kde.org  Wed Nov  2 23:21:01 2005
From: howells at kde.org (Chris Howells)
Date: Wed, 2 Nov 2005 23:21:01 +0000
Subject: [Wikitech-l] Question re. MediaWiki_FAQ
In-Reply-To: <43692125.10607@tgries.de>
References: <43692125.10607@tgries.de>
Message-ID: <200511022321.01992.howells@kde.org>

On Wednesday 02 November 2005 20:27, Thomas Gries wrote:
> My question:
> is the last sentence "Do no remove ..." still valid and important ?

I'd imagine so, see 
http://www.google.com/search?q=sql+deletion+anomaly&ie=UTF-8&oe=UTF-8

-- 
Cheers, Chris Howells -- chris at chrishowells.co.uk, howells at kde.org
Web: http://chrishowells.co.uk, PGP ID: 0x33795A2C
KDE/Qt/C++/PHP Developer: http://www.kde.org



From evanm at google.com  Thu Nov  3 18:56:25 2005
From: evanm at google.com (Evan Martin)
Date: Thu, 3 Nov 2005 10:56:25 -0800
Subject: [Wikitech-l] Indexing Wikisource
In-Reply-To: 
References: 
Message-ID: <9f43d19d0511031056r30261844p53551bd9ed43b1ca@mail.google.com>

On 11/1/05, Lars Aronsson  wrote:
> Since then I have monitored how fast Google has been to index
> these titles.  More than half of the German pages were indexed
> within a few weeks, which is in line with my experience of how
> fast Google can be.  But to my surprise, only very few of the
> English pages have yet been indexed.
[lots snipped]

Thanks for the detailed mail.
I just wanted to let you know that the right people have been
notified, and they're looking into it.


From gmaxwell at gmail.com  Thu Nov  3 21:03:49 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Thu, 3 Nov 2005 16:03:49 -0500
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: <200511031844.50693@bloodgate.com>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>
	 
	<200511031844.50693@bloodgate.com>
Message-ID: 

On 11/3/05, Tels  wrote:
> If this (misrepresenting what I wrote) happens only once to *my* comments,
> it is a significant problem for me. Up to the point where I would stop
> posting comments *at all* and whine until the website makes sure that my rants
> are not edited and still show my signature.
> There is a reason I digitally sign all emails :-)

The version in history is always authoritative. This is just like on a
mailing list, where there is a risk of people changing your words, but
the versions in prior posts/archives are authoritative...  Tell people
to read the authoritative version and yell at the people who edit the
front version.

That said, you could digitally sign your wikitext posts, just please
wrap the sigs in html comments. :)
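
For instance, a signed comment might look something like this in the
saved wikitext (a rough sketch; the comment text and signature block
below are placeholders, not a proposed format):

    I support making this configurable. ~~~~
    <!--
    -----BEGIN PGP SIGNATURE-----
    (ASCII-armored signature over the comment above)
    -----END PGP SIGNATURE-----
    -->

The HTML comment keeps the signature out of the rendered page while
leaving it in the wikitext for anyone who wants to verify the comment.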


From joshua.l.bass at lmco.com  Thu Nov  3 21:21:10 2005
From: joshua.l.bass at lmco.com (Bass, Joshua L)
Date: Thu, 03 Nov 2005 15:21:10 -0600
Subject: [Wikitech-l] One wiki, multiple websites
Message-ID: <94AC76452370C6409AC567DF26AFEC5A0CC3F628@emss07m11.us.lmco.com>

It is possible:

http://www.gtr-tech.com/gtrwiki/Main_Page

http://www.350z-tech.com/ztrwiki/Main_Page

Just point them to the same database, but you will have to have the
correct URL in your LocalSettings.php for each site. 

As for the different navigation links, those could be done manually in the
skin.
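
A minimal sketch of what I mean (these are the standard LocalSettings.php
settings; the site name, URL, path and database credentials below are
only placeholders):

    <?php
    # Site-specific part: differs for each website
    $wgSitename   = "GTR Wiki";
    $wgServer     = "http://www.gtr-tech.com";
    $wgScriptPath = "/gtrwiki";

    # Shared part: identical in every LocalSettings.php, so all the
    # sites read and write the same wiki database
    $wgDBserver   = "localhost";
    $wgDBname     = "shared_wiki";
    $wgDBuser     = "wikiuser";
    $wgDBpassword = "secret";

The second site's LocalSettings.php would repeat the shared block and
change only the three site-specific lines.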



-----Original Message-----
From: wikitech-l-bounces at wikimedia.org
[mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of John Ky
Sent: Tuesday, November 01, 2005 4:06 AM
To: wikitech-l at wikimedia.org
Subject: [Wikitech-l] One wiki, multiple websites

Hello,

I want to set up a number of websites which share a single
wiki underneath.  The only difference between the websites
will be:

* The URI used to access the website
* The main page
* The navigation links

Is it possible, and how much work is involved in configuring
such a wiki?

Thanks

-John

_______________________________________________
Wikitech-l mailing list
Wikitech-l at wikimedia.org
http://mail.wikipedia.org/mailman/listinfo/wikitech-l




From beesley at gmail.com  Thu Nov  3 21:05:34 2005
From: beesley at gmail.com (Angela)
Date: Thu, 3 Nov 2005 22:05:34 +0100
Subject: [Wikitech-l] Template:Itn on zh.wp can't protected
In-Reply-To: 
References: 
Message-ID: <8b722b800511031305m3263ca29ybc513d55ddb61b50@mail.gmail.com>

On 11/3/05, shizhao  wrote:
> Template:Itn on zh.wp, http://zh.wikipedia.org/wiki/Template:Itn
>
> it already protected, but anonymous user however can edit. other protected
> pages without this problem.

Perhaps it was previously only protected against page moves. It
doesn't seem to be editable now. The protection log will only show
protections - it doesn't state whether the protection was against
editing or against moving.

Angela.


From dgerard at gmail.com  Thu Nov  3 10:55:03 2005
From: dgerard at gmail.com (David Gerard)
Date: Thu, 3 Nov 2005 10:55:03 +0000
Subject: [Wikitech-l] rcid in default sig? (was LiquidThreads)
Message-ID: 

Mindspillage wrote:
>On 11/2/05, Gregory Maxwell  wrote:

>> Might I suggest an additional feature for mediawiki? How about
>> [{here}] which becomes a difflink to the edit where that tag was
>> inserted? People could add that to their signatures and thus every
>> post of their would be equipped to a handy difflink to an original
>> version.

>I wouldn't mind seeing the timestamp on signatures be a diff link ,
>actually: makes it easy to see what was originally posted and doesn't
>add more to the standard sig.


Thoroughly excellent idea! (Does an edit know what its revision ID is
going to be while it's saving?)


- d.


From beesley at gmail.com  Thu Nov  3 22:38:52 2005
From: beesley at gmail.com (Angela)
Date: Thu, 3 Nov 2005 23:38:52 +0100
Subject: [Wikitech-l] rcid in default sig? (was LiquidThreads)
In-Reply-To: 
References: 
Message-ID: <8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>

> >I wouldn't mind seeing the timestamp on signatures be a diff link ,
> >actually: makes it easy to see what was originally posted and doesn't
> >add more to the standard sig.
>
>
> Thoroughly excellent idea! (Does an edit know what its revision ID is
> going to be while it's saving?)

It only sounds a good idea if these links are not going to show up in
the edit text or in diffs. Signatures are getting bad enough now with
all the s and   junk in
them. I wouldn't want to read an edit page where full URLs appeared
after every comment.

Angela.


From gmaxwell at gmail.com  Thu Nov  3 23:01:07 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Thu, 3 Nov 2005 18:01:07 -0500
Subject: [Wikitech-l] rcid in default sig? (was LiquidThreads)
In-Reply-To: <8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>
References: 
	<8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>
Message-ID: 

On 11/3/05, Angela  wrote:
> > >I wouldn't mind seeing the timestamp on signatures be a diff link ,
> > >actually: makes it easy to see what was originally posted and doesn't
> > >add more to the standard sig.
> > Thoroughly excellent idea! (Does an edit know what its revision ID is
> > going to be while it's saving?)
>
> It only sounds a good idea if these links are not going to show up in
> the edit text or in diffs. Signatures are getting bad enough now with
> all the s and   junk in
> them. I wouldn't want to read an edit page where full URLs appeared
> after every comment.

That's a technical detail that we shouldn't let block achieving a
desired outcome.
I agree that difflinks are long and ugly. Perhaps it's come time for a
special difflink syntax which is nice and compact?


From saintonge at telus.net  Thu Nov  3 19:49:09 2005
From: saintonge at telus.net (Ray Saintonge)
Date: Thu, 03 Nov 2005 11:49:09 -0800
Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again
In-Reply-To: <43673F4F.5010403@web.de>
References: 	<435DFD26.7070009@pobox.com>	<435E0FE8.7000605@tonal.clara.co.uk>	<4360A7D0.6040505@web.de>	<4366790B.8070003@telus.net>
	<43673F4F.5010403@web.de>
Message-ID: <436A69B5.4080802@telus.net>

Magnus Manske wrote:

> Ray Saintonge wrote:
>
>> Magnus Manske wrote:
>>
>>> Neil Harris wrote:
>>>
>>>> Am I being naive here, or would a super-dumb implementation with a
>>>> single table with the columns shown below be enough to work in the
>>>> short term?
>>>>
>>>> Page_ID
>>>> Revision_ID
>>>> User_ID
>>>> Rating_ID
>>>> Rating value
>>>> Timestamp  
>>>
>>> This is what I did; no timestamp, but a varchar for comments. Topics to
>>> rate and their range (e.g, 1-5) are encoded here as well for user #0.
>>> That's about as dumb as it gets ;-)
>>
>> I still prefer a 0-10 range of ratings.  I think a decimal 
>> normalization would be easier to work with in any subsequent analysis 
>> of results.
>
> One can set the range for each topic individually.
>
> BTW, with values 0-10, you'd have eleven values...

Yes.  That's a problem??

Probabilities are based on a continuum between 0.0 and 1.0, but I think 
we want to limit people to strictly integral input with an apparent 
middle value of 5.  The net averaged results will be probabilities times 
ten.

I think there are algorithms that can be applied to simple votes that 
will give us a single result.  (Since I'm not a developer/programmer, 
I try to avoid being a thorn in the side of that lot through persistent 
POV pushing. :-) ).  The net rating of an article can be a simple 
weighted average of individuals' ratings over some number of edits.  A 
completely unrated article would have a rating of 0.0 over 0 edits.

The weighting of the average would depend on the age of the edit.  
Ratings since the last edit would receive full weight, those before the 
last edit would have a 0.9 weight, those before the second last edit 
would have a 0.8 weight, and so on.  The net rating would be 
recalculated whenever an edit is made or a new rating added.  Safeguards 
can be added to ensure that only a user's most recent rating is considered.
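
To make the weighting concrete, here is a small sketch in PHP (my
reading of the scheme above, not an agreed design; treat it as
pseudocode from a non-programmer, where each rating carries its value
and how many edits ago it was given):

    <?php
    # Weighted average: full weight for ratings since the last edit,
    # 0.9 for ratings one edit older, 0.8 for two edits older, etc.
    function weightedRating( $ratings ) {
        $sum = 0.0;
        $weightTotal = 0.0;
        foreach ( $ratings as $r ) {
            # $r['age'] = edits made since the rating was given;
            # $r['value'] = the integral 0..10 rating
            $weight = max( 0.0, 1.0 - 0.1 * $r['age'] );
            $sum += $weight * $r['value'];
            $weightTotal += $weight;
        }
        # A completely unrated article reports 0.0
        return $weightTotal > 0 ? $sum / $weightTotal : 0.0;
    }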

Ec





From saintonge at telus.net  Thu Nov  3 20:05:15 2005
From: saintonge at telus.net (Ray Saintonge)
Date: Thu, 03 Nov 2005 12:05:15 -0800
Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again
In-Reply-To: 
References: 
Message-ID: <436A6D7B.4010609@telus.net>

David Gerard wrote:

>Ray Saintonge wrote:
>  
>
>>Magnus Manske wrote:
>>    
>>
>>>I still prefer a 0-10 range of ratings.  I think a decimal
>>>normalization would be easier to work with in any subsequent analysis
>>>of results.
>>>      
>>>
>>One can set the range for each topic individually.
>>    
>>
>Mmm. See discussion at [[m:En validation topics]] and its archive -
>too many choices of rating is probably a bad thing, because it's hard
>to agree what a given value means. The test plan so far includes
>probably far more variables than we'd want in any case ...
>  
>
The only variable is the individual subjective valuation of the 
article.  The points in the range are not different variables, 
but different values of the same variable.  To a large extent the size 
of the range is arbitrary because one range can be changed to another by 
a simple multiplication with a constant.  What's the difference between 
a value of 3 on a 1-5 scale and a value of 5 on a 0-10 range?

Since the value which an individual gives to an article is essentially 
subjective, there is no reliable way to define the ratings that can be 
given.  Statistical results tend to be self-normalizing.

Ec



From wikiwatcher at yahoo.co.jp  Thu Nov  3 07:08:45 2005
From: wikiwatcher at yahoo.co.jp (=?ISO-2022-JP?B?GyRCRURDZhsoQiAbJEJLfEJATzobKEI=?=)
Date: Thu, 3 Nov 2005 16:08:45 +0900 (JST)
Subject: [Wikitech-l] still active: www.tocatch.info
Message-ID: <20051103070845.15521.qmail@web3108.mail.bbt.yahoo.co.jp>

Hello again,

I just wanted to say the site "www.tocatch.info" is still
active and is STEALING the Wikipedia content as a live
mirror (proxy).

Please, if this is the wrong place to report this, where
can I report it?

On Sunday 30 October 2005 17:00, TANAKA wrote:
> Hello
>
> a www.tocatch.info appears to be mirroring various
> Wikipedias live, examples:
>
> http://www.tocatch.info/en/Special:Recentchanges.htm
> (ENGLISH)
> http://www.tocatch.info/ja/Special:Recentchanges.htm
> (JAPANESE)
>
> Thankyou

TANAKA


--------------------------------------
Yahoo! Mail - supported by 10million people
http://pr.mail.yahoo.co.jp/10m/



From rowan.collins at gmail.com  Thu Nov  3 23:23:13 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Thu, 3 Nov 2005 23:23:13 +0000
Subject: [Wikitech-l] rcid in default sig? (was LiquidThreads)
In-Reply-To: 
References: 
Message-ID: <9f02ca4c0511031523x1ba42fcev@mail.gmail.com>

On 03/11/05, David Gerard  wrote:
> Mindspillage wrote:
> >On 11/2/05, Gregory Maxwell  wrote:
>
> >> Might I suggest an additional feature for mediawiki? How about
> >> [{here}] which becomes a difflink to the edit where that tag was
> >> inserted? People could add that to their signatures and thus every
> >> post of their would be equipped to a handy difflink to an original
> >> version.
>
> >I wouldn't mind seeing the timestamp on signatures be a diff link ,
> >actually: makes it easy to see what was originally posted and doesn't
> >add more to the standard sig.
>
> Thoroughly excellent idea! (Does an edit know what its revision ID is
> going to be while it's saving?)

I'd been pondering this myself recently, but it looks like it doesn't
- and probably can't - know its ID soon enough. Not only does the
Revision object not get an ID until the insert function (obviously too
late for text manipulation) but it has to actually be saved in MySQL
for the autoincrement field to autoincrement.

But having spent ages looking at that, I remember that we have
functions for doing diffs based on "next revision after this". So,
it's not pretty, but presumably a pre-save transform (e.g. a
signature) could embed a link to a diff between the revision *before*
itself and "whatever revision comes next"  - which in all but the
oddest cases will be the revision you're in the middle of saving...
So, sort of like "/index.php?oldid=$lastrevision&direction=next" where
$lastrevision is the latest revision actually *saved* of that page.
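
To illustrate, the transform could emit something like the following (a
sketch only, not actual MediaWiki code; the function name is invented,
and diff=next is the relative-diff parameter mentioned below):

<?php
# Build a "diff against whatever comes next" link for a page, given the
# ID of the latest revision actually saved.
function nextDiffUrl( $title, $lastRevisionId ) {
    return '/index.php?title=' . urlencode( $title )
        . '&diff=next&oldid=' . intval( $lastRevisionId );
}
# nextDiffUrl( 'Anarchism', 21221469 )
#   => /index.php?title=Anarchism&diff=next&oldid=21221469
?>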

--
Rowan Collins BSc
[IMSoP]


From rowan.collins at gmail.com  Thu Nov  3 23:36:42 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Thu, 3 Nov 2005 23:36:42 +0000
Subject: [Wikitech-l] rcid in default sig? (was LiquidThreads)
In-Reply-To: 
References: 
	<8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>
	
Message-ID: <9f02ca4c0511031536h3601a1e5x@mail.gmail.com>

On 03/11/05, Gregory Maxwell  wrote:
> On 11/3/05, Angela  wrote:
> > It only sounds a good idea if these links are not going to show up in
> > the edit text or in diffs. Signatures are getting bad enough now with
> > all the s and   junk in
> > them. I wouldn't want to read an edit page where full URLs appeared
> > after every comment.

Hm, I admit I hadn't thought of that...

> I agree that difflinks are long and ugly. Perhaps it's come time for a
> special difflink syntax which is nice and compact?

Well, they could be made pretty compact just using templates (or, more
sanely, a built-in "variable"). It seems you need the article name
when using relative diffs (see my previous message), so a template
"nextdiff" of the form:

[{{fullurl:{{PAGENAME}}|diff=next|oldid={{{1}}}}} {{{2}}}]

(first parameter is the pre-change revision, the second the text to
display as the link) could appear in the wikisource as:

{{nextdiff|21221469|Diff}}

Make this an inbuilt variable for portability and performance, and
combine it with the idea someone mentioned of linking the date in the
signature (or part of it?) and it ought not to be too intrusive, I
think.

--
Rowan Collins BSc
[IMSoP]


From t.starling at physics.unimelb.edu.au  Fri Nov  4 00:37:20 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Fri, 04 Nov 2005 11:37:20 +1100
Subject: [Wikitech-l] Re: Edit bailouts
In-Reply-To: <1130985715.11729.66.camel@zhora.1481ruerachel.net>
References: <1130865815.9004.22.camel@zhora.1481ruerachel.net>	
	<1130985715.11729.66.camel@zhora.1481ruerachel.net>
Message-ID: 

Evan Prodromou wrote:
> I guess there's not an easy way to find out how many people are bailing
> out of edits that they really want to make... except maybe to ask them.
> Perhaps a pop-up poll for folks who navigate away from an edit page in
> some way besides saving? Kind of intrusive but it might have interesting
> results.

You could tell the difference between red link clicks and edit tab clicks by
checking the referrer, or by adding an extra parameter to the URL. Detecting
when the user starts typing with javascript could be another measure,
although some might consider that sort of thing to be an invasion of
privacy. It would certainly be an invasion of privacy to send text from the
edit box to the server before the user clicks "save", unless they are warned
in advance.
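
(As a sketch of the extra-parameter idea -- the 'source' parameter and
the function name here are invented for illustration:)

<?php
# Tag edit links with their origin so that red link clicks and edit tab
# clicks can be told apart from the web server logs alone.
function editUrl( $title, $pageExists ) {
    return '/index.php?title=' . urlencode( $title )
        . '&action=edit&source=' . ( $pageExists ? 'tab' : 'redlink' );
}
?>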

-- Tim Starling



From brian0918 at gmail.com  Thu Nov  3 05:18:33 2005
From: brian0918 at gmail.com (Brian)
Date: Thu, 03 Nov 2005 00:18:33 -0500
Subject: [Wikitech-l] Fwd:Shock site bot
In-Reply-To: <742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com>
References: 
	<742dfd060511021751x7fe445fag84c84f58cea9a602@mail.gmail.com>
Message-ID: <43699DA9.1040205@gmail.com>

I do know of one anon that has refused to start an account and has 
contributed extensively to evolution/species-related articles. He has 
turned his user page into a detailed watchlist (linked below; each of 
those dots is a watched page). He also appears to grasp policy and 
guidelines more than most Wikipedians. Check out an old copyedit of 
his:  
http://en.wikipedia.org/w/index.php?title=Great_Lakes_Storm_of_1913&diff=10227257&oldid=10225881


http://en.wikipedia.org/wiki/User:68.81.231.127

Several people have pleaded on his talk page for him to start an account.


SJ wrote:

>Forwarded from Wikien-l.
>
>I wonder if it would be useful to have an index of external URLs that
>supports throttling of the # of times per day a given base URL can be
>added to WP sites...
>
>SJ
>
>
>PS - this is my favorite part --  "I'm a regular wikipedia user
>although i don't have an account here"  I wonder how many people
>consider themselves 'regular users' -- and how many of those actually
>read user and policy pages...
>
>---------- Forwarded message ----------
>From: Brett Gustafson 
>Date: Nov 2, 2005 7:22 PM
>Subject: [WikiEN-l] Shock site bot
>To: wikien-l at wikipedia.org
>
>
>I recently received this message from a user:
>"I'm a regular wikipedia user although i don't have an account here. I think
>this site is great and it really helps me with my college work. But I
>recently heard of these people that were talking about wikipedia that they
>were all programming a hack for it. So after a little while I found it was a
>spider to hunt down all the pages links and change them to shocks site links
>or something along those lines. I didn't know who to tell so I just thought
>I'd tell an administrator as they might know who to tell or what to do. Just
>giving an advanced warning so you might be able to do something to protect
>this wonderful resourse. Apparently they permenantly change their ip address
>using some thing (a bit beyond me). Something like that. I just didn't know
>what to do. I hope I didn't embaress myself here. Thanks for your time."
>
>Brett
>_______________________________________________
>WikiEN-l mailing list
>WikiEN-l at Wikipedia.org
>To unsubscribe from this mailing list, visit:
>http://mail.wikipedia.org/mailman/listinfo/wikien-l
>
>
>--
>++SJ
>_______________________________________________
>Wikitech-l mailing list
>Wikitech-l at wikimedia.org
>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>
>  
>


From naeblis at kc.rr.com  Fri Nov  4 00:21:06 2005
From: naeblis at kc.rr.com (David Reynolds)
Date: Thu, 03 Nov 2005 18:21:06 -0600
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: <4368C1D7.3040904@fas.harvard.edu>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>
	
	
	 <4368C1D7.3040904@fas.harvard.edu>
Message-ID: <436AA972.6080100@kc.rr.com>

Ivan Krstic wrote:
> Timwi wrote:
> 
>>Secondarily I want to be able to fix spellings, in the faint hope that
>>it will help some people learn better spelling. Again, the only people
>>who would object to this would be people who can't spell and are
>>therefore unsuitable for writing an encyclopedia anyway.
> 
> 
> Totally broken reasoning.
> 
> 
>>I've heard many reasons for maliciously changing an article. Yet
>>articles on Wikipedia tend to get better. Interesting, innit?
> 
> 
> That's an irrelevant non-answer to Ryan's question.
> 
> 
>>No, the purpose of this was to test your reaction. You fell right for it
>>exactly the way I expected: you picked up only on the emotional side of
>>the paragraph (taking it as an insult)
> 
> 
> Do you understand how conceited this makes you sound?
> 
> 
>>And this highlights what I mean: you (and many other people) only object
>>to being able to edit comments because it somehow "feels" wrong. You
>>can't really say why it *is* wrong. 
> 
> 
> You need to relax, and start spending less time writing
> borderline-offensive e-mail to people who are trying to reason
> constructively, and more time thinking about what they're saying. I can
> tell you exactly why it *is* wrong: comments are not Wikipedia articles,
> even if you seem to be constantly confounding the two.
> 
> A Wikipedia article isn't signed by a single person's name. It doesn't
> represent the views of an individual, but tries to become an objective
> reflection of its topic. As Brion puts it, a wiki is a place where you
> let wackos edit your site, and with luck, the good wackos outnumber the
> bad. The iterative editing process is a good way to ensure eventual NPOV
> conformance.
> 
> Comments are absolutely different. They are written and signed by a
> single person, represent only that person's views, have no requirement
> of adherence to a NPOV, and that means that essentially none of the
> reasons that Wikipedia articles are editable by everyone apply to them.
> If allowing comment cross-editing was in any way beneficial, the popular
> web-based discussion forums with tens of millions of posts would have,
> without a doubt, adopted such a model quite a while ago. There's a
> reason they haven't done it.
> 
> I am not interested in continuing this discussion further, so please
> refrain from writing a snide reply that questions my intelligence so as
> to "test my reaction".
> 

Agreed. This discussion has degenerated, on Timwi's side at least, to ad 
hominem attacks and straw man arguments. Why do you persist in saying 
that opinions on talk pages have the same ownership/POV claims as 
articles in mainspace?

David, new but dismayed



From brion at pobox.com  Fri Nov  4 01:09:32 2005
From: brion at pobox.com (Brion Vibber)
Date: Thu, 03 Nov 2005 17:09:32 -0800
Subject: [Wikitech-l] still active: www.tocatch.info
In-Reply-To: <20051103070845.15521.qmail@web3108.mail.bbt.yahoo.co.jp>
References: <20051103070845.15521.qmail@web3108.mail.bbt.yahoo.co.jp>
Message-ID: <436AB4CC.1020004@pobox.com>

TANAKA wrote:
> Hello again,
>
> I just wanted to say the site "www.tocatch.info" is still
> active and is STEALING the Wikipedia content as a live
> mirror (proxy).

Killed it.

-- brion vibber (brion @ pobox.com)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 253 bytes
Desc: OpenPGP digital signature
URL: 

From beesley at gmail.com  Fri Nov  4 01:50:34 2005
From: beesley at gmail.com (Angela)
Date: Fri, 4 Nov 2005 02:50:34 +0100
Subject: [Wikitech-l] Re: Edit bailouts
In-Reply-To: 
References: <1130865815.9004.22.camel@zhora.1481ruerachel.net>
	
	<1130985715.11729.66.camel@zhora.1481ruerachel.net>
	
Message-ID: <8b722b800511031750p764fa226r2c289b6dc352edc7@mail.gmail.com>

On 11/4/05, Tim Starling  wrote:

> It would certainly be an invasion of privacy to send text from the
> edit box to the server before the user clicks "save", unless they are warned
> in advance.

If that's true, users might need to be warned about previewing
timelines and equations, which are saved even when the page is not.

http://upload.wikimedia.org/wikipedia/en/timeline/d2153e614be88c9ddb74c92670f68003.png

Angela


From mail at tgries.de  Thu Nov  3 06:51:00 2005
From: mail at tgries.de (Thomas Gries)
Date: Thu, 03 Nov 2005 07:51:00 +0100
Subject: [Wikitech-l] Re: Question re. MediaWiki_FAQ
In-Reply-To: 
References: <43692125.10607@tgries.de> 
Message-ID: <4369B354.9050806@tgries.de>


Timwi wrote:

> Thomas Gries wrote:
>
>> The entry on 
>> http://meta.wikimedia.org/wiki/MediaWiki_FAQ#How_do_I_delete_a_user_from_my_list_of_users.3F 
>>
>> says
>> "MediaWiki does not support the deletion of user accounts. To prevent 
>> an account from being used, either scramble the password or set up an 
>> indefinite block on the account.
>> Do not remove users from the user table in the mySQL database; this 
>> causes problems with other parts of the wiki due to the relational 
>> structure of the database."
>>
>> My question:
>> is the last sentence "Do not remove ..." still valid and important?
>
>
> Yes. Loads of database tables contain references to users, including 
> (but not limited to) article revisions (i.e. edits).
>
> If you feel *reeeaaally* confident, you can track down all of those 
> references and remove them (or change them to point to another user), 
> and then delete the user row. But seriously, it would be easier to 
> follow the suggestions you quoted from the FAQ. Don't do it the 
> complicated way _just_ because you think it will make your database 
> "cleaner". :-)
>
> Timwi 

Thanks.
A last question:
It appears to be safe to delete entries from the user table for users 
who haven't made any edits (can you confirm this?)
T.



From smolensk at eunet.yu  Fri Nov  4 06:21:39 2005
From: smolensk at eunet.yu (Nikola Smolenski)
Date: Fri, 4 Nov 2005 07:21:39 +0100
Subject: [Wikitech-l] Re: Edit bailouts
In-Reply-To: 
References: <1130865815.9004.22.camel@zhora.1481ruerachel.net>
	<1130985715.11729.66.camel@zhora.1481ruerachel.net>
	
Message-ID: <200511040721.40044.smolensk@eunet.yu>

On Friday 04 November 2005 01:37, Tim Starling wrote:
> Evan Prodromou wrote:
> > I guess there's not an easy way to find out how many people are bailing
> > out of edits that they really want to make... except maybe to ask them.
> > Perhaps a pop-up poll for folks who navigate away from an edit page in
> > some way besides saving? Kind of intrusive but it might have interesting
> > results.
>
> You could tell the difference between red link clicks and edit tab clicks
> by checking the referrer, or by adding an extra parameter to the URL.

An even simpler solution: if the article existed when the URL was accessed, it 
is certainly an edit; otherwise it's not (it might have been an attempt to 
create a new article, but we can't know that anyway).


From wikiwatcher at yahoo.co.jp  Fri Nov  4 06:27:55 2005
From: wikiwatcher at yahoo.co.jp (=?ISO-2022-JP?B?GyRCRURDZhsoQiAbJEJLfEJATzobKEI=?=)
Date: Fri, 4 Nov 2005 15:27:55 +0900 (JST)
Subject: [Wikitech-l] still active: www.tocatch.info
In-Reply-To: <436AB4CC.1020004@pobox.com>
Message-ID: <20051104062755.33950.qmail@web3114.mail.bbt.yahoo.co.jp>

--- Brion Vibber wrote:
> TANAKA wrote:
> > Hello again,
> >
> > I just wanted to say the site "www.tocatch.info"
> is still
> > active and is STEALING the Wikipedia content as a
> live
> > mirror (proxy).
> 
> Killed it.

Thank you!

TANAKA

--------------------------------------
Yahoo! Mail - supported by 10million people
http://pr.mail.yahoo.co.jp/10m/



From magnus.manske at web.de  Fri Nov  4 09:12:50 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Fri, 04 Nov 2005 10:12:50 +0100
Subject: [Wikitech-l] Re: [WikiEN-l] Ratings again
In-Reply-To: <436A69B5.4080802@telus.net>
References: 	<435DFD26.7070009@pobox.com>	<435E0FE8.7000605@tonal.clara.co.uk>	<4360A7D0.6040505@web.de>	<4366790B.8070003@telus.net>	<43673F4F.5010403@web.de>
	<436A69B5.4080802@telus.net>
Message-ID: <436B2612.4010700@web.de>

Ray Saintonge wrote:

> Magnus Manske wrote:
>
>> Ray Saintonge wrote:
>>
>>> Magnus Manske wrote:
>>>
>>>> Neil Harris wrote:
>>>>
>>>>> Am I being naive here, or would a super-dumb implementation with a
>>>>> single table with the columns shown below be enough to work in the
>>>>> short term?
>>>>>
>>>>> Page_ID
>>>>> Revision_ID
>>>>> User_ID
>>>>> Rating_ID
>>>>> Rating value
>>>>> Timestamp  
>>>>
>>>>
>>>> This is what I did; no timestamp, but a varchar for comments.
>>>> Topics to
>>>> rate and their range (e.g, 1-5) are encoded here as well for user #0.
>>>> That's about as dumb as it gets ;-)
>>>
>>>
>>> I still prefer a 0-10 range of ratings.  I think a decimal
>>> normalization would be easier to work with in any subsequent
>>> analysis of results.
>>
>>
>> One can set the range for each topic individually.
>>
>> BTW, with values 0-10, you'd have eleven values...
>
>
> Yes.  That's a problem??

Not at all. I was just wondering, eleven values vs. "decimal
normalization" (which seems to be based on ten). But, what do I know
about statistics?

Anyway, the range will not be determined by the software. Bureaucrats
can set a range of their choice when they create a topic.

Magnus



From t.starling at physics.unimelb.edu.au  Fri Nov  4 10:51:05 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Fri, 04 Nov 2005 21:51:05 +1100
Subject: [Wikitech-l] Re: Edit bailouts
In-Reply-To: <200511040721.40044.smolensk@eunet.yu>
References: <1130865815.9004.22.camel@zhora.1481ruerachel.net>	<1130985715.11729.66.camel@zhora.1481ruerachel.net>	
	<200511040721.40044.smolensk@eunet.yu>
Message-ID: 

Nikola Smolenski wrote:
> On Friday 04 November 2005 01:37, Tim Starling wrote:
> 
>>Evan Prodromou wrote:
>>
>>>I guess there's not an easy way to find out how many people are bailing
>>>out of edits that they really want to make... except maybe to ask them.
>>>Perhaps a pop-up poll for folks who navigate away from an edit page in
>>>some way besides saving? Kind of intrusive but it might have interesting
>>>results.
>>
>>You could tell the difference between red link clicks and edit tab clicks
>>by checking the referrer, or by adding an extra parameter to the URL.
> 
> 
> An even simpler solution: if the article existed when the URL was accessed, it 
> is certainly an edit; otherwise it's not (it might have been an attempt to 
> create a new article, but we can't know that anyway).

You can't tell that from apache logs. Checking the referrer can be done
solely with the apache logs, changing the URL is a one-line change to
MediaWiki. Checking whether a title exists in the database is more
complicated, especially if you want to do it efficiently.

-- Tim Starling



From bob at jones-cliffe.freeserve.co.uk  Fri Nov  4 12:35:01 2005
From: bob at jones-cliffe.freeserve.co.uk (Robert Jones)
Date: Fri, 4 Nov 2005 12:35:01 -0000
Subject: [Wikitech-l] Whatlinkshere
Message-ID: <000201c5e13c$2ef51c30$0132a8c0@WILLLAPTOP>

Hi

 

I am using MediaWiki 1.5.1, and I imported my entire collection of pages in
XML. Now, Special:Whatlinkshere does not list any pages for many of those
I've tested. Is this a bug? Can the whatlinkshere database be flushed
somehow?

 

Many thanks.



From anysomefile at gmail.com  Fri Nov  4 14:06:48 2005
From: anysomefile at gmail.com (Any File)
Date: Fri, 4 Nov 2005 15:06:48 +0100
Subject: [Wikitech-l] Re: Edit bailouts
Message-ID: 

On 11/4/05, Angela wrote:
> On 11/4/05, Tim Starling  wrote:
>
> > It would certainly be an invasion of privacy to send text from the
> > edit box to the server before the user clicks "save", unless they are warned
> > in advance.
>
> If that's true, users might need to be warned about previewing
> timelines and equations, which are saved even when the page is not.
>
I believe that everything edited is sent when the Preview button is
pressed. A user who presses the preview button does so of his/her own
will, and can reasonably understand that the data are sent (well,
actually, maybe somebody cannot understand that).

If I understand correctly, what you are saying is that a minor trace of
what was entered in the text box and previewed remains on the server
even if the changes are discarded and the page is not saved.

Nevertheless, there is no direct way to see these images (as far as I
know), nor to link them to their editors.

In my humble opinion, if someone agrees to send all the data (and is
aware of this), he/she also agrees to send the math code. In any case,
these images are not publicly displayed (as the text of a previewed
page is).

If a server log with all the POST form parameters were kept, much more
information would be available than that.

In my opinion this is not a great problem as far as privacy is
concerned. It could, however, become a bigger problem if someone (a
user of the system) developed a way to misuse this information. For
instance, two people could use it to create an image that another user
can check for the existence of and read, using it as a sort of covert
communication channel. However, this does not seem a serious concern as
long as everybody is free to write anything publicly anyway.

How long do such images survive on the server?

AnyFile


From rowan.collins at gmail.com  Fri Nov  4 16:15:04 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Fri, 4 Nov 2005 16:15:04 +0000
Subject: [Wikitech-l] Whatlinkshere
In-Reply-To: <000201c5e13c$2ef51c30$0132a8c0@WILLLAPTOP>
References: <000201c5e13c$2ef51c30$0132a8c0@WILLLAPTOP>
Message-ID: <9f02ca4c0511040815y7fff8680i@mail.gmail.com>

On 04/11/05, Robert Jones  wrote:
> I am using MediaWiki 1.5.1, and I imported my entire collection of pages in
> XML. Now, Special:Whatlinkshere does not list any pages for many of those
> I've tested.

I think the import tool may not post-process the new pages it
creates the way a normal page save would; that processing includes
recording the links and updating the search index.

> Is this a bug?

Presumably ;)

> Can the whatlinkshere database be flushed somehow?

If you have shell (ssh/telnet/etc) access to your host, you can use
the appropriate PHP script in the maintenance sub-directory; it's
called something like "rebuildLinks.php" IIRC.
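
(For what it's worth, I believe the script that ships in the 1.5
maintenance directory is refreshLinks.php; run from the wiki's base
directory, something like

  php maintenance/refreshLinks.php

should repopulate the link tables.)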

--
Rowan Collins BSc
[IMSoP]


From timwi at gmx.net  Fri Nov  4 18:32:40 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 18:32:40 +0000
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: <200511031844.50693@bloodgate.com>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>	
	 <200511031844.50693@bloodgate.com>
Message-ID: 


> If this (misrepresenting what I wrote) happens only once to *my* comments,
> it is a significant problem for me. Up to the point where I would stop
> posting comments *at all* until the website makes sure that my comments
> are not edited and still show my signature.

Of course you are always free to not participate in any discussion. I 
don't see any problem with that.



From timwi at gmx.net  Fri Nov  4 18:38:41 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 18:38:41 +0000
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: <200511031845.17000@bloodgate.com>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>		
	<200511031845.17000@bloodgate.com>
Message-ID: 


>>>Shouldn't it be the decision of the users, or community leaders,
>>>whether they would want comments edited or not?
>>
>>Yes. And at least some of those users are reading this mailing list :)
> 
> But I still get the idea that we will not even have the option to lock 
> comments because some don't want this at all. :-(

It is entirely up to the implementor what and how they will implement 
it. If I were to implement it, I would make all comments editable except 
by blocked users, so existing Wikipedia policies against disruption and 
vandalism can continue to function. Someone else might make comments 
editable only by their author. It is then up to Wikimedia to decide 
whether to enable the feature as implemented, request a change to the 
implementation and hope that someone goes for it, or keep the feature 
disabled forever without telling anyone why. Given that the latter is 
currently happening to the article-validation feature, and that its 
author (Magnus Manske) is getting visibly frustrated and agitated about 
it, I fear the same will happen to this feature, and as a result, I have 
discarded my willingness to implement this.

Timwi



From timwi at gmx.net  Fri Nov  4 18:44:55 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 18:44:55 +0000
Subject: [Wikitech-l] Re: LiquidThreads
In-Reply-To: <436AA972.6080100@kc.rr.com>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>			
	<4368C1D7.3040904@fas.harvard.edu> <436AA972.6080100@kc.rr.com>
Message-ID: 


> Agreed. This discussion has degenerated, on Timwi's side at least, to
> ad hominem attacks and straw man arguments. Why do you persist in
> saying that opinions on talk pages have the same ownership/POV claims
> as articles in mainspace?

Firstly, I am not aware of any ad hominem attacks from me. Since the 
definition of ad hominem attack is vague, though, it is easily possible 
that people mistake my argumentation for such.

As for strawman arguments, I know they are notoriously difficult to 
spot, so I cannot claim I haven't made any. On the other hand, no-one 
has pointed out any to me.

As for the second part, who is the "you" you are referring to? If it's 
me, the answer is: I don't, and I never did. I feel misunderstood by 
about half of the participants in this discussion.

Timwi



From timwi at gmx.net  Fri Nov  4 18:54:14 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 18:54:14 +0000
Subject: [Wikitech-l] Re: rcid in default sig? (was LiquidThreads)
In-Reply-To: <9f02ca4c0511031523x1ba42fcev@mail.gmail.com>
References: 
	<9f02ca4c0511031523x1ba42fcev@mail.gmail.com>
Message-ID: 

Rowan Collins wrote:
> 
> I'd been pondering this myself recently, but it looks like it doesn't
> - and probably can't - know its ID soon enough. Not only does the
> Revision object not get an ID until the insert function (obviously too
> late for text manipulation) but it has to actually be saved in MySQL
> for the autoincrement field to autoincrement. [...etc.etc.etc.]

All throughout this you're assuming that the diff link, or indeed 
anything that isn't the comment proper, would have to be part of the 
comment text. I understand this assumption may seem self-evident because 
this is the way signatures are done on Talk pages now, but from a 
development point of view, it's a dangerous assumption to make because 
it will clearly lead to an extremely bad implementation.

Timwi



From timwi at gmx.net  Fri Nov  4 18:55:19 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 18:55:19 +0000
Subject: [Wikitech-l] Re: rcid in default sig? (was LiquidThreads)
In-Reply-To: <8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>
References: 
	<8b722b800511031438i587805b3q955f3b7a969e70f4@mail.gmail.com>
Message-ID: 

Angela wrote:
>>>I wouldn't mind seeing the timestamp on signatures be a diff link ,
>>>actually: makes it easy to see what was originally posted and doesn't
>>>add more to the standard sig.
>>
>>Thoroughly excellent idea! (Does an edit know what its revision ID is
>>going to be while it's saving?)
> 
> It only sounds a good idea if these links are not going to show up in
> the edit text or in diffs.

Obviously, they wouldn't. They aren't part of the comment text.

Timwi



From timwi at gmx.net  Fri Nov  4 19:00:53 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 19:00:53 +0000
Subject: [Wikitech-l] Re: Question re. MediaWiki_FAQ
In-Reply-To: <4369B354.9050806@tgries.de>
References: <43692125.10607@tgries.de> 
	<4369B354.9050806@tgries.de>
Message-ID: 


> A last question:
> It appears to be safe to delete user entries from table user for users 
> who haven't made edits (can you confirm this ?)

No. Even if they haven't made edits, moves, deletes, protections, and 
don't have any watchlist entries, even then there could be entries for 
them in 'user_groups' and/or 'user_newtalk'. You're safe only if you 
remove all of these, and I'm not making any guarantees that I haven't 
forgotten any ;-).
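
As a sketch only -- the credentials and user ID are placeholders, the
table/column names are from the 1.5 schema, and this list of
referencing tables is almost certainly incomplete:

<?php
# Check the obvious referencing tables before removing a user row.
mysql_connect( 'localhost', 'wikiuser', 'password' );
mysql_select_db( 'wikidb' );
$uid = 123;
$refs = array(
    'revision'     => 'rev_user',
    'user_groups'  => 'ug_user',
    'user_newtalk' => 'user_id',
    'watchlist'    => 'wl_user',
);
foreach ( $refs as $table => $column ) {
    $row = mysql_fetch_row( mysql_query(
        "SELECT COUNT(*) FROM $table WHERE $column = $uid" ) );
    if ( $row[0] > 0 ) {
        die( "User $uid is referenced in $table; don't delete.\n" );
    }
}
mysql_query( "DELETE FROM user WHERE user_id = $uid" );
?>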

Timwi



From gmaxwell at gmail.com  Fri Nov  4 19:18:19 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Fri, 4 Nov 2005 14:18:19 -0500
Subject: [Wikitech-l] Re: rcid in default sig? (was LiquidThreads)
In-Reply-To: 
References: 
	<9f02ca4c0511031523x1ba42fcev@mail.gmail.com>
	
Message-ID: 

On 11/4/05, Timwi  wrote:
> Rowan Collins wrote:
> >
> > I'd been pondering this myself recently, but it looks like it doesn't
> > - and probably can't - know its ID soon enough. Not only does the
> > Revision object not get an ID until the insert function (obviously too
> > late for text manipulation) but it has to actually be saved in MySQL
> > for the autoincrement field to autoincrement. [...etc.etc.etc.]
>
> All throughout this you're assuming that the diff link, or indeed
> anything that isn't the comment proper, would have to be part of the
> comment text. I understand this assumption may seem self-evident because
> this is the way signatures are done on Talk pages now, but from a
> development point of view, it's a dangerous assumption to make because
> it will clearly lead to an extremely bad implementation.

I don't agree. We're talking about a new feature which would be useful
for a number of things and is somewhat orthogonal to the threads
discussion here. However, it would also be useful for threads, so long
as people make a difflink-containing signature part of their
comment. I don't think that's an unreasonable request... What it
might get us is a 90% solution which reduces the cost of change
detection... which mostly kills the arguments about comment
alteration without really changing what we are doing.


From node.ue at gmail.com  Fri Nov  4 20:26:45 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Fri, 4 Nov 2005 13:26:45 -0700
Subject: [Wikitech-l] Parameters in RSS feeds
Message-ID: <849f98ed0511041226o7ccaa6bay@mail.gmail.com>

Hi all,

Is there a way to add parameters to the _URL_ of an RSS feed for
recent changes pages to exclude certain types of edits, such as
additions to logs (creation of user acc'ts), or to hide minor edits,
for example?

I'd greatly appreciate this.

Mark
--
If you would like a gmail invite, please send me an e-mail.
Si ud. quiere que le envíe una invitación para ingresar gmail, envíeme
un mensaje.
Si vous voulez que je vous envoie une invitation à joindre gmail,
envoyez-moi s.v.p un message.
Se vce. gostaria que lhe envie um convite para juntar gmail, favor de
envie-me uma mensagem.
Se vuleti chi vi manu 'n invitu a uniri gmail, mandatimi n messaggiu.


From timwi at gmx.net  Fri Nov  4 20:16:35 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 04 Nov 2005 20:16:35 +0000
Subject: [Wikitech-l] Re: rcid in default sig? (was LiquidThreads)
In-Reply-To: 
References: 	<9f02ca4c0511031523x1ba42fcev@mail.gmail.com>	
	
Message-ID: 

Gregory Maxwell wrote:
> 
> We're talking about a new feature which [...]

Oh, I see. Looks like I didn't quite follow the change of topic. I 
apologise.



From maillist at roomity.com  Fri Nov  4 22:26:45 2005
From: maillist at roomity.com (shenanigans)
Date: Fri,  4 Nov 2005 14:26:45 -0800 (PST)
Subject: [Wikitech-l] [OTAnn] Groups:New Developments at Roomity
Message-ID: <6693276.221131143205499.JavaMail.tomcat5@slave1.roomity.com>

I was interested in getting feedback from the current communities of Roomity.com and letting you know about the recent improvements we are working on for a better interface.

Roomity.com v 1.5 is a web 2.0/RIA poster-child community webapp. This new version adds broadcast video, social networking such as favorite authors, and an HTML editor.

It likely already has the groups and content you are already using, but aggregated and safer, including technology, Java, etc.; it only works on broadband, though.

S.

*This is not spam! I work for Roomity and am trying to find better ways to enhance our members' experience.


----------------------------------------------------------------------------------------------------------------------------------------------
Broadband interface (RIA) + mail box saftey = Wikimedia_Developers_List.roomity.com
*Your* clubs, no sign up to read, ad supported; try broadband internet. ~~1131143205496~~
----------------------------------------------------------------------------------------------------------------------------------------------


From brion at pobox.com  Fri Nov  4 23:19:29 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 04 Nov 2005 15:19:29 -0800
Subject: [Wikitech-l] [OTAnn] Groups:New Developments at Roomity
In-Reply-To: <6693276.221131143205499.JavaMail.tomcat5@slave1.roomity.com>
References: <6693276.221131143205499.JavaMail.tomcat5@slave1.roomity.com>
Message-ID: <436BEC81.3070401@pobox.com>

I've removed the spammer from the list and blocked the address from
resubscribing.

-- brion vibber (brion @ pobox.com)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 253 bytes
Desc: OpenPGP digital signature
URL: 

From newhoggy at gmail.com  Fri Nov  4 23:33:16 2005
From: newhoggy at gmail.com (John Ky)
Date: Sat, 5 Nov 2005 10:33:16 +1100
Subject: [Wikitech-l] One wiki, multiple websites
Message-ID: 

Hello,

I want to set up a number of websites which share a single
wiki underneath.  The only difference between the websites
will be:

* The URI used to access the website
* The main page
* The navigation links

For instance:

I want to be able to have two websites with the following
navigation links:

marrickville.community.org:
  Main page -> Marrickville community
  News -> News at Marrickville
  Events -> Events at Marrickville

bankstown.community.org:
  Main page -> Bankstown community
  News -> News at Bankstown
  Events -> Events at Bankstown

But, I want both websites to use the same wiki, so that they can
share pages, and have the same sign on.

Is it possible, and how much work is involved in configuring
such a wiki?
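
(A sketch of the kind of thing I have in mind, assuming one shared
database and switching the cosmetic settings on the requested host in
LocalSettings.php; the values here are invented:)

<?php
# LocalSettings.php fragment: one wiki database, per-host branding.
# Both hosts share the same $wgDBname, so pages and sign-on are common.
switch ( $_SERVER['SERVER_NAME'] ) {
    case 'marrickville.community.org':
        $wgSitename = 'Marrickville Community';
        break;
    case 'bankstown.community.org':
        $wgSitename = 'Bankstown Community';
        break;
}
?>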

Thanks

-John


From gmaxwell at gmail.com  Sat Nov  5 07:48:40 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Sat, 5 Nov 2005 02:48:40 -0500
Subject: [Wikitech-l] Thoughts and measurments related to a new storage
	framework for large wikis.
Message-ID: 

In many modern wikis (such as MediaWiki), every version of a page
through its life is made accessible. There are many options for storing
this information, the simplest being to store a complete copy of
every version. Another popular option is grouping old versions
into one record and gzipping the stack together: block compression.
Versions can be gzipped by themselves, but this isn't a huge win for
most articles. Duplicate versions can be eliminated, diffs can be
computed... there are many options.

Every one of these options has a time/space tradeoff. Different
options are better for different access patterns.  For Wikipedia we
mostly use whole versions, with a little bit of block compression
thrown in every once in a while for good measure.  This makes deletion
fairly easy, although the requirement to truly support deletion is not
that clear.  It is believed that this has reasonable performance
characteristics, but I will explain at the end that it might not be
ideal even from a performance perspective. It is obviously not ideal
from a space perspective.

I'd like to propose a new framework for storage of archive versions. I
will then back up this concept with some measurements.

When a page is moved into the archive (or sometime before), we compute
its cryptographic hash. If the archive already has that hash, we are
done. This eliminates the bit-identical duplicates that come from
reverting.
If the hash is new, we compute a binary delta between the new version
and the previous version of the page. The previous version is obtained
from version_cache (discussed below). If it is the first version we
diff against the empty string. We store an archive row:
new_page_hash,old_page_hash,delta_blob.
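
(As a sketch, that row could be a MySQL table along these lines; the
names are invented:)

CREATE TABLE archive_delta (
  ad_new_hash CHAR(40)   NOT NULL PRIMARY KEY, -- hash of this version
  ad_old_hash CHAR(40)   NOT NULL,             -- hash of the delta base
  ad_delta    MEDIUMBLOB NOT NULL              -- binary delta (e.g. xdelta)
);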

We then insert the new page into version_cache (keyed on the hash). If
the page is large (100K?) it is gzipped before insertion, if the storage
backend doesn't do this for us (some DBs, like PostgreSQL, do...). It
is flagged to mark that it is 'second from top' in its record. We
update the entry for the previous version to indicate that it is no
longer second from the top.

When someone requests a non-current version, we check the
version_cache. If the version is there, we return it. We might keep a
hit counter or a last-used date; if we do, we update that. If it is
not in the cache, we find its delta and check whether the delta's base
version is in the cache. We walk back through deltas until we find the
most recent version that is in the cache, then apply the diffs forward
to generate all versions up to the desired one. All generated versions
are inserted into the version cache, although the hitcount/lastused on
the middle versions should not be set as high as the desired version's,
if we maintain those.
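
(In rough PHP, the read path might look like this; version_cache is
modelled as an array, and apply_delta() is a stub standing in for
xdelta/bsdiff patch application:)

<?php
# $cache maps hash => full text, and must hold '' => '' so that the
# walk can bottom out at the empty first-version base.
# $deltas maps hash => array( base_hash, delta ).
function apply_delta( $base, $delta ) {
    return $delta; # stub: here a "delta" is simply the full new text
}

function fetch_version( $hash, &$cache, $deltas ) {
    $chain = array();
    while ( !isset( $cache[$hash] ) ) {  # walk back to a cached version
        $chain[] = $hash;
        $pair = $deltas[$hash];
        $hash = $pair[0];
    }
    $text = $cache[$hash];
    foreach ( array_reverse( $chain ) as $h ) {  # roll forward, caching
        $pair = $deltas[$h];
        $text = apply_delta( $text, $pair[1] );
        $cache[$h] = $text;
    }
    return $text;
}
?>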

Periodically, or based on fixed storage pressure, objects in the
version cache are purged. 'Second from the top' objects are never
purged; objects used less often and less recently are dropped first.
However, we try to maintain a full version for every 50 (a tunable?)
revisions of a page or so.

So that's basically the idea. Now I want to show you some data which
makes this proposal compelling.

Let us consider the article "Anarchism" on enwiki: it's reasonably
large (averaging 57k over its entire life, 66k currently) but not huge,
and it's had a reasonably large number of revisions (4370 in the 10/29
dump)... it's also near the top of the dump, which makes it quick to
extract. :) It's not subject to an abnormal amount of edit warring or
vandalism... pretty typical of what we probably want articles to be.

The concatenation of all versions of the article is 242MB in size.
This is how much it would take to store them ideally without any
compression. In reality the storage size for this article would be
much larger due to non-ideal packing.

If we gzip -9 the concatenation, it is reduced to 80MB. This
represents the savings we could get with completely ideal block
compression. If we group the revisions into blocks of 5, we
find the storage to require 85MB, which is a little more realistic.

If we store based on the content hash, we eliminate 673 versions.
Without gzipping the size is now 206MB. Gzip -9ing each version alone
reduces it to 69MB. (I didn't measure block compression for this one.)

If we compute a diff -du for each version with a new hash against the
previous version, we find the concatenation of the diffs is 15MB.
Gzipping the diffs one at a time gives us a 5.2MB file, while gzipping
in blocks of 5 gives us 3MB.

If we use bsdiff (http://www.daemonology.net/bsdiff/ -- fast and
efficient, if you ignore the anti-social license) rather than diff -du,
and work on the non-duplicate files, we get a total output of 1.4MB.
Plus, bsdiff is much faster than diff -du, and much faster to apply
than patch.

If we use xdelta (1.1.3 tested) the total is 1.5MB. Xdelta is also
much faster than diff/patch. If we disable gzip in xdelta and block
compress in groups of 5, the total is 968K. We can get that down to
500K in blocks of 100 with LZMA, and to 378K by LZMA-compressing all
the deltas together. It takes my system about 4 seconds to apply all
3697 deltas to recover the most recent version from the empty string...
and I suspect most of that time is spent in fork(), since I'm starting
xdelta once per diff. If our cache tends to keep one full version
around for every 50, that will only add a couple of megs of storage.

So frankly, given the size of the current dumps and these results, I
believe it is likely that we could get the entire working set of
Wikipedia article text into RAM on the sort of hardware we could easily
afford (a couple of 1U AMD64 boxes running some imaginary non-sucky
version of memcached). Based on these numbers, the growth we've seen,
and the cost curves for DRAM, I think we could continue to keep our
working set in RAM for the foreseeable future.

The computational costs of my proposed model can be substantially
reduced by smart cache management, and are in any case infinitesimal
compared to the reduction in I/O cost from getting the 160:1
compression and the entire working set into RAM.

As the wiki grows the gains will only be greater, and there will be
less and less interest in most old page versions.

Thoughts?


From nospam-abuse at bloodgate.com  Sat Nov  5 09:47:43 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Sat, 5 Nov 2005 10:47:43 +0100
Subject: [Wikitech-l] Thoughts and measurments related to a new storage
	framework for large wikis.
In-Reply-To: 
References: 
Message-ID: <200511051047.54206@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Saturday 05 November 2005 08:48, Gregory Maxwell wrote:
> In many modern wikis (such as mediawiki) every version of a page
[snip]
> Thoughts?

Sounds great - now someone has to write code to experimentally implement 
it :)

Even if we don't manage to keep the data in RAM, reducing the database 
size by a factor of 100 would be really great.

Best wishes,

Tels

- -- 
 Signed on Sat Nov  5 10:46:47 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "The UAC is making safer worlds through superior firepower."

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ2x/yHcLPEOTuEwVAQFQdwf8Cc9qd+4snb8jRrMw226lCDn/i1Tl/r2S
WTerP7+hvM0awZWIOCBIu1XG9qaSabTwxZ2CMTvtNe0oY9//Yuf8dXGnVWKjNLko
bO8DLIluI01I4t4yncFs4CtE63mt3as1fev8uZairaA7HhYqDl1bo/BdN94xR4tT
BwcTQt2izijOfr8BCizLaqvmgmpbEETiiObBLRPxotYQAySB1+IOHSIDBvMZZthL
XKmPTK/ZP8M0eeG4dCaUyQD1zFKmR9Pu9R+uJdpOJF7L5DDds+6CF6X2rPLLhL7x
SvfdYZXgz0Bsco0kl0phuZikY2ZotmdG8YD3kWAu5S8eQc4REpRIng==
=5joF
-----END PGP SIGNATURE-----


From JeLuF at gmx.de  Sat Nov  5 13:12:50 2005
From: JeLuF at gmx.de (Jens Frank)
Date: Sat, 5 Nov 2005 14:12:50 +0100 (MET)
Subject: [Wikitech-l] (no subject)
Message-ID: <30595.1131196370@www46.gmx.net>

We now provide sitemaps for all Wikimedia wikis. The sitemaps
can be found at http://sitemap.wikimedia.org/sitemap/

Sitemaps are XML lists of all the pages in a website. We also
provide the last-modified timestamp for each page and a
"priority". We use a high priority for articles and lower
priorities for the other namespaces.
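
A single entry looks roughly like this (illustrative values):

<url>
  <loc>http://en.wikipedia.org/wiki/Main_Page</loc>
  <lastmod>2005-11-04</lastmod>
  <priority>1.0</priority>
</url>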

In mapping.txt there's a  mapping from website name to the URL
of the sitemap. The sitemaps are split by namespace, so if
you're only interested in one namespace, you can get it without
downloading the rest of the files.

A description of the XML schema is available at
http://www.google.com/webmasters/sitemaps/login?sourceid=gsm&subid=us-et-about2

If your application uses our sitemaps we'd be happy to hear about
it.

Regards,

JeLuF



From t.starling at physics.unimelb.edu.au  Sat Nov  5 14:29:42 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Sun, 06 Nov 2005 01:29:42 +1100
Subject: [Wikitech-l] Re: Thoughts and measurements related to a new storage
 framework for large wikis.
In-Reply-To: 
References: 
Message-ID: 

Gregory Maxwell wrote:
> I'd like to propose a new framework for storage of archive versions. I
> will then back up this concept with some measurements.

Sounds good. If you're serious about coding this, then there are some
implementation details I'd like to discuss. Just stylistic, nothing
fundamental. Otherwise, I'd say go for it.

-- Tim Starling



From evan at wikitravel.org  Sat Nov  5 18:37:14 2005
From: evan at wikitravel.org (Evan Prodromou)
Date: Sat, 05 Nov 2005 13:37:14 -0500
Subject: [Wikitech-l] Re: Edit bailouts
In-Reply-To: <849f98ed0511021913g2644d2b9i@mail.gmail.com>
References: <1130865815.9004.22.camel@zhora.1481ruerachel.net>
	
	<1130985715.11729.66.camel@zhora.1481ruerachel.net>
	<849f98ed0511021913g2644d2b9i@mail.gmail.com>
Message-ID: <1131215834.8975.65.camel@zhora.1481ruerachel.net>

On Wed, 2005-02-11 at 20:13 -0700, Mark Williamson wrote:

> Don't think that would happen at Wikipedia. You could certainly try it
> at Wikitravel, though... perhaps it would be better-accepted if it
> only popped up for every 3rd bailout? (not per person, but total).

My guess was about once in every 10 bailouts, and only ask the same IP
address once, and only ask if the user's never had a successful saved
edit.
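
In pseudo-PHP, that sampling rule would look something like this (all
of the helper names are hypothetical):

    // Hypothetical sketch of the rule above: survey roughly 1 in 10
    // bailouts, never ask the same IP twice, and skip anyone who has
    // ever saved an edit successfully.
    if ( mt_rand( 1, 10 ) == 1
        && !wasAlreadyAsked( $ip )         // hypothetical helper
        && !hasEverSavedEdit( $user ) ) {  // hypothetical helper
        showBailoutSurvey();               // hypothetical helper
    }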

I think it would make a good academic paper for someone -- analyzing
barriers to entry for first-time Wiki contributors. It'd be interesting
to compare across large vs. small wikis, different wiki engines,
different kinds of edits (red link vs. edit tab). A clever grad student
could probably get some money from e.g. SocialText or JotSpot to sponsor
the research.

Anywho. Might be something I pick up in my plentiful spare time.

~ESP

-- 
Evan Prodromou 
Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date
and reliable world-wide travel guide


From gmaxwell at gmail.com  Sat Nov  5 19:45:51 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Sat, 5 Nov 2005 14:45:51 -0500
Subject: [Wikitech-l] Re: Thoughts and measurements related to a new
	storage framework for large wikis.
In-Reply-To: 
References: 
	
Message-ID: 

On 11/5/05, Tim Starling  wrote:
> Sounds good. If you're serious about coding this, then there are some
> implementation details I'd like to discuss. Just stylistic, nothing
> fundamental. Otherwise, I'd say go for it.

Sure, I'll take a crack at it...
Either email me your thoughts or grab me on IRC.


From glimmer_phoenix at yahoo.es  Sat Nov  5 19:53:53 2005
From: glimmer_phoenix at yahoo.es (Felipe Ortega)
Date: Sat, 5 Nov 2005 20:53:53 +0100 (CET)
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: 
Message-ID: <20051105195353.33842.qmail@web26611.mail.ukl.yahoo.com>

 --- Gregory Maxwell  escribió:

> So frankly, given the size of the current dumps and
> these results, I believe that it is likely that it
> would be possible to get the entire working set of
> Wikipedia article text into RAM on the sort of
> hardware we could easily afford
>
> As the wiki grows the gains will only be greater,
> and there will be less and less interest in most old
> page versions.
>
> Thoughts?

=== message truncated ===

Sounds good; in fact, I think it's the logical
evolution for the storage framework. Diffs can be
managed and compressed much more efficiently.

Actually, I have another idea (possibly incremental to
this one): how about starting to experiment with a
version control system (for example, Subversion)? I
mean, moving *gradually* from a database to a
filesystem framework.

Obviously, for the short term, the idea of being
able to store the entire database in RAM, if
possible, is better and faster for recovering
wikitext, but with a filesystem you could even think
about content distribution...




		


From gmaxwell at gmail.com  Sat Nov  5 23:05:40 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Sat, 5 Nov 2005 18:05:40 -0500
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: <20051105195353.33842.qmail@web26611.mail.ukl.yahoo.com>
References: 
	<20051105195353.33842.qmail@web26611.mail.ukl.yahoo.com>
Message-ID: 

On 11/5/05, Felipe Ortega  wrote:
> Actually, I have another idea (possibly incremental to
> this one): how about starting to experiment with a
> version control system (for example, Subversion)? I
> mean, moving *gradually* from a database to a
> filesystem framework.
>
> Obviously, for the short term, the idea of being
> able to store the entire database in RAM, if
> possible, is better and faster for recovering
> wikitext, but with a filesystem you could even think
> about content distribution...

Well, part of the challenge there is that the output files are quite
small... and most filesystems stink at efficiently storing small files.

Actually, what I might implement at first is a read-only archive
retriever that latches onto an mmapped file: you ask it for an object,
and it builds it from the archive and hands it back... It's the sort
of thing we could drop in to take care of serving up old versions
without really redesigning anything.
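
As a rough PHP 5 interface sketch (the names are hypothetical),
callers would never need to know the archive layout at all:

    // Hypothetical interface for the drop-in retriever: callers ask
    // for one old revision and get the reconstructed text back.
    interface ArchiveRetriever {
        /**
         * Rebuild and return the text of revision $revId of page
         * $pageId, or false if it is not in the archive.
         */
        function fetchRevision( $pageId, $revId );
    }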


From newhoggy at gmail.com  Sun Nov  6 02:01:05 2005
From: newhoggy at gmail.com (John Ky)
Date: Sun, 6 Nov 2005 13:01:05 +1100
Subject: [Wikitech-l] Re: One wiki, multiple websites
In-Reply-To: 
References: 
Message-ID: 

Hello again,

I sort of got this working, except I am having a terrible cache problem:

Pages saved from one website are visible from the other website, which
is fine - however, the links on those pages will go to the website from
which the page was saved rather than the website from which the page
was viewed.

Can I turn page caching off altogether?

-John

On 11/5/05, John Ky  wrote:
> Hello,
>
> I want to set up a number of websites which share a single
> wiki underneath.  The only difference between the websites
> will be:
>
> * The URI used to access the website
> * The main page
> * The navigation links
>
> For instance:
>
> I want to be able to have two websites with the following
> navigation links:
>
> marrickville.community.org:
>   Main page -> Marrickville community
>   News -> News at Marrickville
>   Events -> Events at Marrickville
>
> bankstown.community.org:
>   Main page -> Bankstown community
>   News -> News at Bankstown
>   Events -> Events at Bankstown
>
> But, I want both websites to use the same wiki, so that they can
> share pages, and have the same sign on.
>
> Is it possible, and how much work is involved in configuring
> such a wiki?
>
> Thanks
>
> -John
>


From newhoggy at gmail.com  Sun Nov  6 02:12:16 2005
From: newhoggy at gmail.com (John Ky)
Date: Sun, 6 Nov 2005 13:12:16 +1100
Subject: [Wikitech-l] Re: One wiki, multiple websites
In-Reply-To: 
References: 
	
Message-ID: 

Hi again,

I think I worked out how to turn off page caching. Everything seems
to be working now.

Would it be possible for me to submit a patch with my changes?

It involves replacing some occurrences of 'mainpage' with $wgMainpage
and 'sidebar' with $wgSidebar, and then adding sensible defaults
to DefaultSettings.php.

This will allow me to reconfigure $wgMainpage to 'Bankstown community'
on one website and 'Marrickville community' on another website, while
reusing the same wiki.

The change would have no effect on the behaviour of MediaWiki unless the
user explicitly changes it in LocalSettings.php.
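
For example, under this proposed patch ($wgMainpage and $wgSidebar do
not exist in stock MediaWiki), the two sites' LocalSettings.php files
would differ only in something like:

    // LocalSettings.php on the Bankstown site (values illustrative):
    $wgMainpage = 'Bankstown community';
    $wgSidebar  = 'Sidebar-bankstown';  // hypothetical sidebar message page

    // ...and on the Marrickville site:
    // $wgMainpage = 'Marrickville community';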

Thanks

-John

On 11/6/05, John Ky  wrote:
> I sort of got this working, except I am having a terrible cache problem:
[snip]
> -John


From gtg808u at mail.gatech.edu  Sun Nov  6 16:14:21 2005
From: gtg808u at mail.gatech.edu (Amruta Lonkar)
Date: Sun,  6 Nov 2005 11:14:21 -0500
Subject: [Wikitech-l] Re: Which PHP file is called
In-Reply-To: 
References: <1130334628.435f89a4cb189@webmail.mail.gatech.edu>
	
Message-ID: <1131293661.436e2bdd0e553@webmail.mail.gatech.edu>

I have a few questions about the flow of code once the user clicks the
save/preview/show diff buttons.

I have made changes to our locally installed wiki:
1) Added a reference button to the edit toolbar which, when clicked,
pulls up a pop-up where the user can enter reference information. Once
the user clicks submit in this form, the information gets entered into
the reference table in wikidb.

What I need to understand is how to proceed when the user clicks either
the save or preview button. My current thinking is that only after the
user clicks save page should the information be entered into the
referencelinks table, which is a replica of the imagelinks table.

I cannot figure out at what point and where in the code I can add
information to the referencelinks table. Or should this be done when
the user clicks the preview page button?
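
My current plan, as a rough sketch (the table and field names are
hypothetical, modeled on imagelinks), would be a write like this after
a successful save:

    // Hypothetical sketch: record one reference link after a save,
    // by analogy with how imagelinks rows are written.
    $dbw =& wfGetDB( DB_MASTER );
    $dbw->insert( 'referencelinks',
        array(
            'rl_from' => $article->getID(),  // the page that was saved
            'rl_ref'  => $refKey             // hypothetical reference key
        ),
        'RefLinksUpdate' );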

I appreciate your time and help.

Amruta



Quoting Tim Starling :

> Amruta Lonkar wrote:
> > Hi,
> >
> > I am trying to find which function is called when the user hits the save button
> on
> > the edit page and in which file the button click for save is checked.
> >
> > Any help appreciated.
> >
> > Thanks,
> > --
> > Amruta
>
> Article::save(), in Article.php. Submitting the form generates a POST
> request to index.php with action=save in the query string. This action is
> dispatched to the Article object.
>
> -- Tim Starling
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>


--
Amruta


From glimmer_phoenix at yahoo.es  Sun Nov  6 17:15:17 2005
From: glimmer_phoenix at yahoo.es (Felipe Ortega)
Date: Sun, 6 Nov 2005 18:15:17 +0100 (CET)
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: 
Message-ID: <20051106171517.18017.qmail@web26607.mail.ukl.yahoo.com>


 --- Gregory Maxwell  escribió:

> Well, part of the challenge there is that the output
> files are quite small... and most filesystems stink
> at efficiently storing small files.
[snip]

Well, to some extent I agree with you that small
files put filesystems in a squeeze. My reasoning was
made at a parallel level: version reconstruction.

If I understood well, when somebody requests a previous
version of a certain article, you have to go back
through the tree of deltas until you reach that precise
version, then go forward again to generate the
requested page (and, by the way, all the "middle"
versions too).

As far as I know, that's the precise task of version
control systems (isn't Xdelta also a version control
tool?). I only wonder whether, in some way, we would be
reinventing the wheel, doing the same job but in a
DB+cache framework...

Finally, a couple of questions about two points I'm
afraid I didn't quite get:

1. What's the meaning of marking the current version
of the article as 'second from top' in the DB?

2. Why does the system store all the "middle
versions" in cache when it resolves a request for a
non-current version? I can't see the utility of this
behaviour.

PS: Nevertheless, I find your idea quite interesting.
I would be happy to collaborate in some way to
implement it...



	
	
		


From list_ob at gmx.net  Sun Nov  6 16:44:50 2005
From: list_ob at gmx.net (list_ob at gmx.net)
Date: Sun, 06 Nov 2005 17:44:50 +0100
Subject: [Wikitech-l] PATH_INFO can also work with PHP as CGI
Message-ID: <436f30a9.1219383620@z1.oliverbetz.de>

Hi,

the hosting service I'm using has PHP as CGI, and supports PATH_INFO.

But there are several places in MediaWiki saying/assuming PATH_INFO
does not work in CGI-based configurations.

Did I miss anything?

If not, the detection of the server's capabilities should be improved.

At least, there should be a $wgUsePathInfo entry in LocalSettings.php
along with some explanation.
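
If the automatic detection can't be fixed quickly, even a documented
override would help, e.g. (assuming a $wgUsePathInfo setting; the
value is illustrative):

    // In LocalSettings.php, for a CGI setup where PATH_INFO is known
    // to work, override the conservative default:
    $wgUsePathInfo = true;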

Oliver
-- 
Oliver Betz, Muenchen (oliverbetz.de)



From brion at pobox.com  Mon Nov  7 06:22:46 2005
From: brion at pobox.com (Brion Vibber)
Date: Sun, 06 Nov 2005 22:22:46 -0800
Subject: [Wikitech-l] Search results changed
Message-ID: <436EF2B6.80208@pobox.com>

I've changed the Lucene-based search to require all given terms to match 
by default. This gives more focused results and is more consistent with 
typical search engine practice and with our behavior under the old MySQL search.

You can still make terms optional by explicitly using the 'OR' operator 
such as: "civil OR war", "blueberry OR pie". (Note that 'OR' must be 
capitalized.)

-- brion vibber (brion @ pobox.com)


From magnus.manske at web.de  Mon Nov  7 08:11:46 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Mon, 07 Nov 2005 09:11:46 +0100
Subject: [Wikitech-l] Search results changed
In-Reply-To: <436EF2B6.80208@pobox.com>
References: <436EF2B6.80208@pobox.com>
Message-ID: <436F0C42.90402@web.de>

Brion Vibber wrote:

> I've changed the lucene-based search to require all given terms to
> match by default. This gives more focused results and is more
> consistent with typical search engine practice and with our behavior under
> the old MySQL search.
>
> You can still make terms optional by explicitly using the 'OR'
> operator such as: "civil OR war", "blueberry OR pie". (Note that 'OR'
> must be capitalized.)

Why not eregi_replace it? The only situation where you'd actually search
for the word "or" is a book/movie/song title, where it can be omitted.

BTW, do we support searches like
* stuff AND (foo OR bar)
* stuff "foo bar"
?

Magnus
(too lazy to try)


From ccanddyy at yahoo.com  Sun Nov  6 23:18:24 2005
From: ccanddyy at yahoo.com (candy)
Date: Sun, 06 Nov 2005 15:18:24 -0800
Subject: [Wikitech-l] Adding a new tab
Message-ID: 

Hi ,

  Does anyone have any idea how to add a new tab to the existing 
tabs (article, discussion, edit and history) on the Wikipedia page? I have 
MediaWiki installed and the database dump imported. Now I want to tweak 
and make additions to Wikipedia. I have very little knowledge of PHP but am 
ready to learn the advanced concepts as the need arises.

candy



From migdejong at gmail.com  Mon Nov  7 12:41:18 2005
From: migdejong at gmail.com (Mig de Jong)
Date: Mon, 7 Nov 2005 13:41:18 +0100
Subject: [Wikitech-l] {{CURRENTSECOND}}
Message-ID: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com>

Hi developers,
 would it be possible to make a function {{CURRENTSECOND}}, in order to be
able to have a sort of random function? What I want to do is the following: I
want to give the portal I've built a random logo. I'll make several logos
and have one pop up randomly. With a function {{CURRENTSECOND}} this
would be possible.
 Another request, much less important, is whether one could refresh
Speciaal:Wantedpages on the Dutch wiki (nl). Thanks for listening.
 Best regards,
Mig de Jong

--

Migdejong at gmail.com
Rode Kruislaan 1121 A
1111 XA Diemen
The Netherlands
00-31-6-14800370


From brion at pobox.com  Mon Nov  7 20:17:12 2005
From: brion at pobox.com (Brion Vibber)
Date: Mon, 07 Nov 2005 12:17:12 -0800
Subject: [Wikitech-l] Search results changed
In-Reply-To: <436F0C42.90402@web.de>
References: <436EF2B6.80208@pobox.com> <436F0C42.90402@web.de>
Message-ID: <436FB648.4070109@pobox.com>

Magnus Manske wrote:
> Brion Vibber wrote:
>> You can still make terms optional by explicitly using the 'OR'
>> operator such as: "civil OR war", "blueberry OR pie". (Note that 'OR'
>> must be capitalized.)
> 
> Why not eregi_replace it? The only situation where you'd actually search
> for the word "or" is a book/movie/song title, where it can be omitted.

Or if you speak French and are searching for gold. :)

> BTW, do we support searches like
> * stuff AND (foo OR bar)
> * stuff "foo bar"

Should work...

-- brion vibber (brion @ pobox.com)


From phil.boswell at gmail.com  Mon Nov  7 16:05:49 2005
From: phil.boswell at gmail.com (Phil Boswell)
Date: Mon, 7 Nov 2005 16:05:49 -0000
Subject: [Wikitech-l] Search very intermittent on :en:
Message-ID: 

Is it just me who keeps getting some sort of strange error on searches on 
:en:?

The symptoms so far include showing the Google/Yahoo boxes and an odd error 
message indicating some sort of bad return from an IP in the [10.*.*.*] 
range. My IP knowledge is rusty: is that a private range?

Sorry I can't be more precise: the error sometimes goes away when I try 
again, and did this time :-(
-- 
Phil
[[en:User:Phil Boswell]] 





From oub at gmx.net  Mon Nov  7 16:45:34 2005
From: oub at gmx.net (Uwe Brauer)
Date: Mon, 07 Nov 2005 17:45:34 +0100
Subject: [Wikitech-l] Re: prbls with external editor
References: <877jbu2avo.fsf@mat.ucm.es> 
	 <200511011826.56652@bloodgate.com>
	 <874q6vgutc.fsf_-_@mat.ucm.es>
	<4368EA71.2030803@gmx.de>
Message-ID: <87y840h0ep.fsf@gmx.net>

>>>>> "Erik" == Erik Moeller
>>>>>  writes:

   Erik> 	Transcode UTF-8=true

   Erik> in ee.ini under [Settings], but your editor will still mangle
   Erik> characters that are not part of the iso8859-1
   Erik> character-set. The best thing to do is to use a UTF-8 editor
   Erik> (or, if you already do, set the encoding to UTF-8 when
   Erik> editing wiki pages). I personally use Kate for KDE, which I
   Erik> have found to be an excellent editor for both text and code.


May I suggest something different in addition? I recently tried 
latex --> html --> wiki(pedia).

Since I did not use UTF-8, the non-ASCII symbols got encoded as SGML
entities. That text was well understood by the wiki, it seems.

As far as I can see, when I set
Transcode UTF-8=false
then your script tries to use some iso8859-x setting. Could it use an
SGML-entity setting instead?

Thanks

Uwe Brauer 



From arbeo_m at yahoo.de  Mon Nov  7 16:41:24 2005
From: arbeo_m at yahoo.de (Arbeo M)
Date: Mon, 7 Nov 2005 17:41:24 +0100 (CET)
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
Message-ID: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>

Hello Wikitechnicians!

Hereby I would like to request the creation of the
Nedersaksisch Wikipedia (request filed 2005-06-12) as
soon as possible. 
Domain code: "nds-nl" (ISO language code + ISO country
code, because no individual language code exists
here).

Thank you very much in advance, also on behalf of the
speakers of Low Saxon living in the Netherlands who will
now be able to contribute to Wikipedia.

Arbeo


	

	
		


From frando at xcite-online.de  Mon Nov  7 21:33:27 2005
From: frando at xcite-online.de (Frando)
Date: Mon, 07 Nov 2005 22:33:27 +0100
Subject: [Wikitech-l] Thoughts on wikidata
Message-ID: 

Hey guys,

I've read a lot on the whole wikidata thing in the past few
months. The whole project will IMO be an enormous advantage not only to
MediaWiki and Wikipedia but also to the whole idea of free information.

I think, however, that most of you concur with me on this point.

There are loads of amazing ideas in the minds of all of us and on 
meta.wikimedia.org, so this is not the point ;)

What the project lacks at the moment is better coordination and 
organisation.

I think that we should link the idea of wikidata with some other 
improvements that MediaWiki needs, especially better semantic web 
abilities, in other words an XML input and output implementation, the
possibility to tag articles against a standard (as for Jimbo's
"1.0"-proposal) and maybe also an improved discussion system
(LiquidThreads).

So I propose to take all this together and call it MediaWiki 2.0, or 
phase4.
I know, the idea of MediaWiki 2.0 has existed all along as a "dream for 
the future" with no actual plans for the moment.

But why not start working in a more target-aimed way?
And when to start, if not now?

Even if you think that this is too much for one task, IMO we should do 
this for wikidata alone at least.

So I propose to do the following:

  - create a newsgroup "gmane.org.wikimedia.mediawiki.wikidata"
    or "gmane.org.wikimedia.mediawiki.2-0" to discuss the stuff
    there instead of on [[talk:wikidata]]
  - and, which is very important, decide ASAP how to implement
    wikidata in the database; there are quite a few models floating
    around on meta.wikimedia.org
    (e.g. after a last review of the pros and cons of the different
     models in the newly created newsgroup ;) )
  - decide ASAP in which areas of the current MediaWiki code major
    changes are needed
    (e.g. whether we want to rewrite the parser to support different
    output formats natively, or whether we just include an xml/rdf
    output ability in the existing code)

when all this and further discussion is done, we should

  - create a new CVS branch 'mediawiki20' (or use the wikidata tag that
    already exists) or a new module 'phase4'
  - create a roadmap which shows the *concrete* steps towards
    realisation

I think we have enough guys out there who are more than willing to help 
with coding, but the problem is that no one really knows where to start...

Concerning myself, I think I will not be able to write too much code 
myself, because my little free time is already rather occupied, but I'd 
love to contribute to the project as much as possible.

best regards,
Frando



From gmaxwell at gmail.com  Mon Nov  7 21:57:37 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Mon, 7 Nov 2005 16:57:37 -0500
Subject: [Wikitech-l] Thoughts on wikidata
In-Reply-To: 
References: 
Message-ID: 

On 11/7/05, Frando  wrote:
> I've read a lot on the whole wikidata thing in the past few
> months. The whole project will IMO be an enormous advantage not only to
> MediaWiki and Wikipedia but also to the whole idea of free information.
>
> I think, however, that most of you concur with me on this point.
>
> There are loads of amazing ideas in the minds of all of us and on
> meta.wikimedia.org, so this is not the point ;)
[snip]

This is a complex subject. If we try to discuss everything out we'll
just be in discussion forever.
It's pretty much come to the point where we should say "Show me the code".

If it's going to move forward it will either be via incremental
improvement, or via a crazed rewrite (if you do this, please write the
parser in a C-callable language and make it versatile enough to use
outside of MediaWiki!).

Either approach is valid; it's just a question of who is motivated to
do what work.


From lars at aronsson.se  Mon Nov  7 22:13:38 2005
From: lars at aronsson.se (Lars Aronsson)
Date: Mon, 7 Nov 2005 23:13:38 +0100 (CET)
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: 
References: 
Message-ID: 

Gregory Maxwell wrote:
> I'd like to propose a new framework for storage of archive versions. I
> will then back up this concept with some measurements.

Can your new approach replace RCS, CVS, Subversion, BitKeeper and 
Linus Torvalds' new "git" as a software development versioning 
backend?  If not, perhaps you should include a comparison with 
some of these?  I think a standard for all would be great, if this 
could be achieved.  Git operates on directories of files (as 
opposed to RCS, which uses a long linear textfile), and a 
filesystem can be a virtual thing (like /proc) that actually has 
MySQL storage and your code below the surface.  Think of it.  You 
could call it WikiVersion, WikiKeeper, or Wit.


-- 
  Lars Aronsson (lars at aronsson.se)
  Aronsson Datateknik - http://aronsson.se


From walter at wikipedia.be  Mon Nov  7 22:29:18 2005
From: walter at wikipedia.be (Walter Vermeir)
Date: Mon, 07 Nov 2005 23:29:18 +0100
Subject: [Wikitech-l] Re: Request for the creation of a Nedersaksisch
	Wikipedia
In-Reply-To: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>
References: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>
Message-ID: 

Arbeo M schreef:
> Hello Wikitechnicians!
> 
> Hereby I would like to request the creation of the
> Nedersaksisch Wikipedia (request filed 2005-06-12) as
> soon as possible. 


There is already a wikipedia that says it is "neddersassisch". Very
confusing.

http://nds.wikipedia.org

-- 
Ook een artikeltje schrijven? WikipediaNL, de vrije GNU/FDL encyclopedie
http://www.wikipedia.be



From nospam-abuse at bloodgate.com  Mon Nov  7 23:02:17 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Tue, 8 Nov 2005 00:02:17 +0100
Subject: [Wikitech-l] Re: Request for the creation of a Nedersaksisch
	Wikipedia
In-Reply-To: 
References: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>
	
Message-ID: <200511080002.18809@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Monday 07 November 2005 23:29, Walter Vermeir wrote:
> Arbeo M schreef:
> > Hello Wikitechnicians!
> >
> > Hereby I would like to request the creation of the
> > Nedersaksisch Wikipedia (request filed 2005-06-12) as
> > soon as possible.
>
> There is already a wikipedia that says it is "neddersassisch". Very
> confusing.

I am also confused. Does this mean we will have wikipedias for every 
German dialect (Saxonian, Bavarian, etc.), too? *confused*

How does "nedersaksisch" relate to "niedersächsisch"?

Best wishes,

Tels

>
> http://nds.wikipedia.org

- -- 
 Signed on Tue Nov  8 00:00:37 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "If Duke Nukem Forever is not out in 2001, something's very wrong." -
 George Broussard, 2001 (http://tinyurl.com/6m8nh)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ2/c+XcLPEOTuEwVAQErdgf+PqZWpbRNHz9sLuZF21rteisrzjK+rSBy
YcH3iFRF6xrD+Kr0BgjTnUXkFow9qJZ1x3nFIC9j/pnijJX9aCoROxDz3Mm2i2FU
GQyCXP5Oztc9unzmAjV0SPIN407a0JEj+hGohJz831Jb/tySbJ9I+pO3afHEwmFB
RMzJXgwKhQdVmNZy1YJeZ03vo/EEA+8JzWXr7jD5pcT4QVn2wYCGeXD/iPlNSoxX
c66G7cVDPYkurD6oMfUeGVA1kxa4OhR0numNysB9aktDGId+RX00znijJwHzGQvE
EWITEYGCy6MsU4FAVzqFhUG8Au3EHR8XTRHX0y1CuMmWIPmTs5lCEg==
=Q2wP
-----END PGP SIGNATURE-----


From node.ue at gmail.com  Tue Nov  8 00:28:15 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Mon, 7 Nov 2005 17:28:15 -0700
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
In-Reply-To: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>
References: <20051107164124.32403.qmail@web25805.mail.ukl.yahoo.com>
Message-ID: <849f98ed0511071628q64a3cb38j@mail.gmail.com>

Consensus regarding this controversial addition to the Wikipedia
family has not yet been reached.

Mark

On 07/11/05, Arbeo M  wrote:
> Hello Wikitechnicians!
[snip]
> Arbeo


--
"Take away their language, destroy their souls." -- Joseph Stalin


From beesley at gmail.com  Tue Nov  8 01:05:15 2005
From: beesley at gmail.com (Angela)
Date: Tue, 8 Nov 2005 02:05:15 +0100
Subject: [Wikitech-l] {{CURRENTSECOND}}
In-Reply-To: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com>
References: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com>
Message-ID: <8b722b800511071705y5a72604dp469bd0dc5c913e9d@mail.gmail.com>

On 11/7/05, Mig de Jong  wrote:
> Hi developers,
>  would it be possible to make a function {{CURRENTSECOND}}, in order to be
> able to have a sort of random function?
[snip]

It might make more sense to install the random selection extension
() so
you can do this without having to make lots of new pages. See
http://games.wikicities.com/wiki/Dice and
http://uncyclopedia.org/wiki/Template:Stub for examples of the
extension in use.

Angela.


From arbeo_m at yahoo.de  Tue Nov  8 01:32:23 2005
From: arbeo_m at yahoo.de (Arbeo M)
Date: Tue, 8 Nov 2005 02:32:23 +0100 (CET)
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch
	Wikipedia 
Message-ID: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>

Hi Tels!

Thanks for your inquiry. This can be a bit confusing
indeed. That's part of the reason why discussion took
so long here.

> I am also confused. Does this mean we will have 
> wikipedias for every
> German dialect (saxionian, bavarian etc), too? 
> *confused*

> How does "nedersaksisch" relate to
> "niedersächsisch"?

1. I believe and hope there will be no Wikipedias for
German or any other dialects. I am quite strictly
against creating Wikipedias for individual dialects.
That's why I voted against WPs for Bavarian and
Ripuarian (please cf. 'Requests for new Languages'
page on Meta).

2. However, Low Saxon (=Nedersaksisch, Niedersächsisch
or Plattdüütsch) is almost unanimously considered a
separate language and not regarded as belonging to
the German language. As a German, I can confirm and
assure that the two languages are not mutually
intelligible.

3. Low Saxon consists, like most languages, of various
dialects. They are sometimes even considered separate
languages (but for the most part mutually
intelligible).

4. Low Saxon is spoken in Germany as well in the
Netherlands.

5. Due to historical reasons, the dialects of Low
Saxon used in Germany are highly influenced (loan
words, technical expressions, fixed expressions and
especially spelling) by the German language while
those spoken in the Netherlands show many Dutch
characteristics because of a century-long influx
coming from the national, official language.

6. There is a Low Saxon Wikipedia already (nds).
However, this Wikipedia solely comprises content
written in Low Saxon from Germany (where the clear
majority of Low Saxon speakers lives). The possibility
of including content written in "Dutch" Low Saxon was
discussed widely earlier this year at Wikipedia-l (see
archives) but considered not possible by the vast
majority of participants, especially by all
participants coming from Germany and from the
Netherlands. This was not due to any nationalistic
reasons or the like but solely due to practical
reasons (intelligibility).

7. Thus, the varieties of Low Saxon used in the
Netherlands are currently de facto excluded from
Wikipedia. That is why numerous Dutch "Low Saxons"
have requested a new Wikipedia.

8. "Nedersaksisch" (=Low Saxon) is the most common way
of referring to the language as a whole in the
Netherlands. That is why is has been agreed upon as a
designation for the new Wikipedia. Btw: it has been
suggested that it can be referred to as "Low Saxon
(NL)" or the like in other languages where a
translation is needed.

9. "Plattd??tsch" (=Low [or, literally 'flat'] German)
is the most common way speakers refer to this language
in Germany. It is also the self-designation of the
existing "nds"-Wikipedia. This designation reflects
the fact that its speakers have been part of the
German nation for many centuries. It rather alludes to
geographical (it is spoken in the low-laying, coastal
areas of Germany) than to linguistic facts. Including
the component "German", this name is not used in the
Netherlands, of course.

10. "Nieders?chsisch" is simply the German
(linguistic) term for the language. ("Plattdeutsch" is
another, less formal name).

Hope this helps a little bit to understand the
background of this request.

Thanks for your consideration!

Arbeo





	

	
		


From node.ue at gmail.com  Tue Nov  8 01:53:57 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Mon, 7 Nov 2005 18:53:57 -0700
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
In-Reply-To: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
References: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
Message-ID: <849f98ed0511071753q4d4866efq@mail.gmail.com>

Arbeo left out the part where there was lots of fighting over the request.

There are alternative proposals for Veluws, Gronings, and Stellingwarfs.

The proposal for Veluws got 7 support votes and 2 oppose votes; one of
the opposers is Arbeo himself, and the other is conditional and might
be convinced to change.

The Veluws proposal has been around for about 1 month now, and in this
time it has quickly gained a large margin of support.

The "Nedersaksisch" proposal, on the other hand, has 17 support votes
after a period of over 5 months, and there is still significant
opposition.

It seems that, in the long term, it will be better to let the
mutually unintelligible Low Saxon languages in the Netherlands have
their own Wikipedias. Arbeo doesn't consider this and instead says he
opposes them because they are dialects, without considering the
practical point that mutual intelligibility is difficult, especially
with marginal varieties such as Veluws and Gronings.

Mark


On 07/11/05, Arbeo M  wrote:
> Hi Tels!
[snip]
> Thanks for your consideration!
>
> Arbeo


--
"Take away their language, destroy their souls." -- Joseph Stalin


From arbeo_m at yahoo.de  Tue Nov  8 02:52:54 2005
From: arbeo_m at yahoo.de (Arbeo M)
Date: Tue, 8 Nov 2005 03:52:54 +0100 (CET)
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
Message-ID: <20051108025254.12998.qmail@web25810.mail.ukl.yahoo.com>

Mark Williamson wrote:

> Arbeo left out the part where there was lots of 
> fighting over the request.

I left it out because the "fighting" is over now.

> The proposal for Veluws got 7 support votes and 2
> oppose votes; one of the opposers is Arbeo himself,
> and the other is conditional and might be convinced
> to change.

Correct. And the proposal for Nedersaksisch got 17
support votes and 2 oppose (Mark + one anonymous).

> The "Nedersaksisch" proposal, on the other hand, has
> 17 support votes after a period of over 5 months,
and > there is still significant opposition.

Incorrect. There is no significant opposition. There
is only one user trying to block something the rest of
the community accepts.

> It seems that, in the long term, it will be better
> to let the mutually unintelligible Low Saxon
> languages in the Netherlands have their own
> Wikipedias.

Wikitech-l is actually not the right place to discuss
such details. Only the result matters here. The fact of
the matter is that nobody from the area concerned shares
this individual opinion (actually, nobody from Europe
at all).

A.


	

	
		


From node.ue at gmail.com  Tue Nov  8 03:42:05 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Mon, 7 Nov 2005 20:42:05 -0700
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
In-Reply-To: <20051108025254.12998.qmail@web25810.mail.ukl.yahoo.com>
References: <20051108025254.12998.qmail@web25810.mail.ukl.yahoo.com>
Message-ID: <849f98ed0511071942s74fc8f6cw@mail.gmail.com>

Arbeo, I think that an opposing proposal getting 7 support votes after
not nearly as long a time is reason enough to wait.

Mark

On 07/11/05, Arbeo M  wrote:
> Mark Williamson wrote:
[snip]
> A.


--
"Take away their language, destroy their souls." -- Joseph Stalin


From t.starling at physics.unimelb.edu.au  Tue Nov  8 04:02:34 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Tue, 08 Nov 2005 15:02:34 +1100
Subject: [Wikitech-l] Re: Search very intermittent on :en:
In-Reply-To: 
References: 
Message-ID: 

Phil Boswell wrote:
> Is it just me who keeps getting some sort of strange error on searches on 
> :en:?
> 
> The symptoms so far include showing the Google/Yahoo boxes and an odd error 
> message indicating some sort of bad return from an IP in the [10.*.*.*] 
> range. My IP knowledge is rusty: is that a private range?
> 
> Sorry I can't be more precise: the error sometimes goes away when I try 
> again, and did this time :-(

At the moment, we have three search servers: maurus (10.0.0.16), vincent
(10.0.0.17) and coronelli (10.0.0.230). For reasons not quite understood,
they sometimes hang, accepting connections but closing them immediately.
Yesterday, Brion set up an hourly restart of the daemons to mitigate this.
Most of the search errors before then were probably due to this problem. The
server you get for any given query is random, which is why it often goes away
when you try again.

The other potential problem is a timeout: if the search server takes more
than 3 seconds to respond, then an error will be returned. This could happen
due to high load, or due to random temporary events. Looking at the
profiling data I gathered yesterday, it's likely that this has been
happening a fair bit during peak time.

-- Tim Starling



From t.starling at physics.unimelb.edu.au  Tue Nov  8 08:50:39 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Tue, 08 Nov 2005 19:50:39 +1100
Subject: [Wikitech-l] Re: Special:Ipblocklist doesn't function well in
	zh.wikipedia
In-Reply-To: 
References: 
Message-ID: 

Moses wrote:
> Hi guys,
> 
> Chinese wikipedia (zh.wikipedia) admins haven't been able to block vandals
> since Oct. 18, 2005. Please compare
> http://zh.wikipedia.org/wiki/Special:Ipblocklist and
> http://zh.wikipedia.org/w/index.php?title=Special%3ALog&type=block for
> more details.
> 
> As you see in the Ipblocklist, the last user successfully blocked used a
> very long ID; after the upgrade of the MediaWiki system, the user name
> was truncated. That may be why this list doesn't work any more. So,
> could any developer fix the problem? Or, if it's a bug, please fix it in
> the next upgrade.

No, the problem isn't the long ID, the problem is this:

http://zh.wikipedia.org/w/index.php?title=MediaWiki:Ipboptions&diff=1104262&oldid=861199

The Chinese and English versions are around the wrong way, you need to swap
them. See

http://fr.wikipedia.org/wiki/MediaWiki:Ipboptions

for an example of how it's meant to be done.

-- Tim Starling



From magnus.manske at web.de  Tue Nov  8 09:04:00 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Tue, 08 Nov 2005 10:04:00 +0100
Subject: [Wikitech-l] Thoughts on wikidata
In-Reply-To: 
References: 
Message-ID: <43706A00.8080505@web.de>

Frando wrote:

> What the project lacks at the moment is better coordination and
> organisation.

Erik Moeller is working on it; he *is* the project, and I am confident
that he can coordinate himself :-)

>
> I think that we should link the idea of wikidata with some other
> improvements that MediaWiki needs, especially better semantic web
> abilities, in other words an XML input and output implementation, 

My initial but mostly working XML export implementation:

http://magnusmanske.de/wikipedia/wiki2xml.php

Paste wikitext in, get XML out. I've been meaning to work on a further
conversion to OpenOffice XML, but didn't really get around to it.

> the possibility to
> tag articles against a standard (as for Jimbo's "1.0"-proposal) 

It's waiting in the wings, for Brion to copy SpecialValidate.php back
from CVS HEAD and turn it on.

> and maybe also a improved discussion system (liquidthreads).

Didn't write that yet. Surprise! ;-)

>
> So I propose to take all this together and call it MediaWiki 2.0 resp.
> phase4.

Why? They're not dependent on each other in any way. WikiData is
developing, validation is ready for testing; XML export(/import) is in
early stages, and LiquidThreads is in consensus limbo. None of them
needs any of the others to become reality.

We can say "together, we'll call them MediaWiki 2.0", but that's a
marketing thing, not a technical necessity IMHO.

>
> I think we have enough guys out there who're more than willingly to
> help coding, but the problem is that no one really knows where to
> start ..

* WikiData should stay with Erik for the time being; in such an early
phase, a single developer can be more effective than a dozen guys who
have to be told every detail first. We'll grab it once it has reached a
beta or whatever
* Validation waits for field testing, though I'd be happy if you'd have
a look at the code
* XML import/export would be something where you can jump right in.
Speed up/bugfix my parser, add an ODT converter, write an importer (easy
enough), whatever
* LiquidThreads seems to be stuck between better order and security vs.
"everyone can edit this comment".

>
> Concerning myself, I think I will not be able to write too much of
> code myself, because my little freetime is already rather occupied,
> but I'd love to contribute to the project as much as possible.

Welcome to the club of people-who-don't-have-time-but-still-make-some :-)

Magnus


From dgerard at gmail.com  Tue Nov  8 10:12:49 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 8 Nov 2005 10:12:49 +0000
Subject: [Wikitech-l] [Foundation-l] checkUser live
Message-ID: 

Anthere wrote:

>* on a project with no arbcom, the community will have to vote for its
>editors with checkuser access. A limit on the number of votes has been set on
>purpose. I recommend avoiding using sockpuppets for voting. A wiki
>community with 10 editors and 30 voters is likely to be frowned upon.


And next, we'll be voting for root, database access and CVS access.
Get your votes in now! Brion, Tim or Lir for Mediawiki lead? It's a
hot contest!


[cc: to wikitech-l]


- d.


From avarab at gmail.com  Tue Nov  8 10:34:59 2005
From: avarab at gmail.com (Ævar Arnfjörð Bjarmason)
Date: Tue, 8 Nov 2005 10:34:59 +0000
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: 
References: 
Message-ID: <51dd1af80511080234r4b335cb6o53192db2484fa008@mail.gmail.com>

On 11/5/05, Gregory Maxwell  wrote:
> If we use bsdiff (http://www.daemonology.net/bsdiff/ fast and
> efficient, if you ignore the anti-social license) rather than diff -du,

It uses the BSD license without an advertising clause; what's
anti-social about it?


From timwi at gmx.net  Tue Nov  8 11:24:15 2005
From: timwi at gmx.net (Timwi)
Date: Tue, 08 Nov 2005 11:24:15 +0000
Subject: [Wikitech-l] Re: Adding a new tab
In-Reply-To: 
References: 
Message-ID: 


> 
>  Does anyone have any idea how to add a new tab to the existing 
> tabs (article, discussion, edit and history) on the Wikipedia page? I have 
> MediaWiki installed and the database dump imported. Now I want to tweak 
> and make additions to Wikipedia. I have very little knowledge of PHP but am 
> ready to learn the advanced concepts as the need arises.

You make additions to Wikipedia at http://wikipedia.org. Don't call your 
site Wikipedia, or you would be violating trademark law.

That said, have a look at Magnus Manske's validation feature; it adds a 
new tab, 'validate', to every page.

Timwi



From timwi at gmx.net  Tue Nov  8 11:27:18 2005
From: timwi at gmx.net (Timwi)
Date: Tue, 08 Nov 2005 11:27:18 +0000
Subject: [Wikitech-l] Re: Search very intermittent on :en:
In-Reply-To: 
References:  
Message-ID: 


>>The symptoms so far include showing the Google/Yahoo boxes and an odd error 
>>message indicating some sort of bad return from an IP in the [10.*.*.*] 
>>range. My IP knowledge is rusty: is that a private range?
> 
> At the moment, we have three search servers: maurus (10.0.0.16), vincent
> (10.0.0.17) and coronelli (10.0.0.230).

Do we have to expose this detail to non-technical users? The above user 
clearly knew a fair bit about IPs, and the message was still meaningless 
to them; imagine what an average non-technical user might think.

Timwi



From timwi at gmx.net  Tue Nov  8 11:32:48 2005
From: timwi at gmx.net (Timwi)
Date: Tue, 08 Nov 2005 11:32:48 +0000
Subject: [Wikitech-l] Re: Request for the creation of a Nedersaksisch
	Wikipedia
In-Reply-To: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
References: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
Message-ID: 


> 9. "Plattd??tsch" [...] is the most common way speakers refer to this
> language in Germany. [...] Including the component "German", this
> name is not used in the Netherlands, of course.

You're afraid that referring to it as "German" would create association 
with Germany, so you call it "Nieders?chsisch" instead which clearly 
refers to Niedersachsen, which is part of Germany?

Timwi



From dgerard at gmail.com  Tue Nov  8 11:55:41 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 8 Nov 2005 11:55:41 +0000
Subject: [Wikitech-l] [Wikipedia-l] CheckUser policy (was Status of
	Wikimedia)
Message-ID: 

>But right now, we do NOT have this log. And people are ASKING for the
>check user status to go live !


I would really like to know who thought voting for checkuser was a
good idea and why.


- d.


From gmaxwell at gmail.com  Tue Nov  8 12:17:26 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Tue, 8 Nov 2005 07:17:26 -0500
Subject: [Wikitech-l] Thoughts and measurements related to a new storage
	framework for large wikis.
In-Reply-To: <51dd1af80511080234r4b335cb6o53192db2484fa008@mail.gmail.com>
References: 
	<51dd1af80511080234r4b335cb6o53192db2484fa008@mail.gmail.com>
Message-ID: 

On 11/8/05, Ævar Arnfjörð Bjarmason  wrote:
> On 11/5/05, Gregory Maxwell  wrote:
> > If we use bsdiff (http://www.daemonology.net/bsdiff/ fast and
> > efficient, if you ignore the anti-social license) rather than diff -du,
>
> It uses the BSD license without an advertising clause, what's
> anti-social about it?

Ha, if only.  It uses the "BSD Protection License", which looks a lot
like the BSD License unless you actually look at it; then you notice
that it's free for any use *except* use with a non-X11-equivalent open
source license. Microsoft could use it in Windows, yet you couldn't
legally use it inside SVN. It's a very odd license.  I suppose it's no
more 'anti-social' than any other license that makes you jump through
hoops, except that it doesn't look like it does at a quick glance...


From dgerard at gmail.com  Tue Nov  8 14:10:55 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 8 Nov 2005 14:10:55 +0000
Subject: [Wikitech-l] [Wikipedia-l] Re: CheckUser policy
Message-ID: 

[cc to wikitech-l]

Anthere wrote:
>David Gerard wrote:
>>Anthere wrote:

>>>But right now, we do NOT have this log. And people are ASKING for the
>>>check user status to go live !

>> I would really like to know who thought voting for checkuser was a
>> good idea and why.


>The polish wikipedia has Taw with checkuser status.
>The english wikipedia has David.
>How did that happen ? (correct me if I am wrong on a detail)
>Initially, the developers were doing that job upon request (I myself
>asked twice for information in three years if I remember, to Tim or to
>Brion).
>When the requests started being too numerous, Tim made the checkUser
>tool, in order to hand out to the community the role of doing checks,
>rather than to let it to the developers.
>Two people were given access. David, probably per agreement with Tim and
>support from Jimbo. Taw, because he had developer access, but his only
>activity (if I understood well) was to check on users.


I don't know about Taw, but that's about right for me. Also
(presumably) because I'd been through some detailed investigations
with Tim so he had some idea of how well I understood what the process
involves. He also refers people to me, so he can get on with things
like the software and the servers.


>Then, requests kept pouring in on the developers, who answered that there
>was a tool now to do this. So, editors asked to have access or asked for
>other people to do the job for them.
>This is when the policy started to be discussed.
>   [...]
>Second option : people get checkuser access through an approval system
>(with a community vote or an arbitrator vote)
>   [...]
>That leaves the second option... I think any large community can be fully
>trusted to give that status to good people who will not abuse it.


Voting for access to the user database still seems a
fundamentally defective idea, precisely analogous to voting for root
or voting for CVS access. What do the devs with access think?

It also notably doesn't solve some of the bad examples you gave
before, e.g. the Wikipedia where they wanted to routinely use it on
all votes.

I am entirely unconvinced this is any better than no access at all.


- d.


From anthere9 at yahoo.com  Tue Nov  8 14:36:05 2005
From: anthere9 at yahoo.com (Anthere)
Date: Tue, 08 Nov 2005 15:36:05 +0100
Subject: [Wikitech-l] Re: [Foundation-l] checkUser live
In-Reply-To: 
References: 
Message-ID: 

David Gerard wrote:

> Anthere wrote:
> 
> 
>>* on a project with no arbcom, the community will have to vote for its
>>editors with checkuser access. A limit on the number of votes has been set on
>>purpose. I recommend avoiding using sockpuppets for voting. A wiki
>>community with 10 editors and 30 voters is likely to be frowned upon.
> 
> 
> 
> And next, we'll be voting for root, database access and CVS access.
> Get your votes in now! Brion, Tim or Lir for Mediawiki lead? It's a
> hot contest!
> 
> 
> [cc: to wikitech-l]
> 
> 
> - d.


I think it should be possible to discuss without using fallacious 
arguments, David. There is no comparison between checkuser access and 
root access.

The main problem I see here is that you seem to consider that checkuser 
access should only be given to sysadmins. I do not think the majority 
of editors would agree with you.

I see your argument as aiming only at restricting the use of this tool 
to a very limited number of editors, approved by Jimbo or Tim. Right 
now, Jimbo has approved access for half a dozen English editors, none 
of whom are actually sysadmins.

What is your feeling toward these nominations?

Ant



From dgerard at gmail.com  Tue Nov  8 15:15:51 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 8 Nov 2005 15:15:51 +0000
Subject: [Wikitech-l] Re: CheckUser policy
Message-ID: 

Anthere wrote:
>David Gerard wrote:
>> Anthere wrote:

>>>* on a project with no arbcom, the community will have to vote for its
>>>editors with checkuser access. A limit on the number of votes has been set
>>>on purpose. I recommend avoiding using sockpuppets for voting. A wiki
>>>community with 10 editors and 30 voters is likely to be frowned upon.

>> And next, we'll be voting for root, database access and CVS access.
>> Get your votes in now! Brion, Tim or Lir for Mediawiki lead? It's a
>> hot contest!


>I think it should be possible to discuss without using fallacious
>arguments, David. There is no comparison between checkuser access and
>root access.


There is, really: neither is a voting matter. I raised this before,
but you appear to regard the objection as (to quote you) "no real
opposition". Not to mention Tim's quote when voting for checkuser was
floated: "Users would vote themselves root if they could."

What I said was that users need:
- the technical knowledge to know what they're seeing (which a network
admin was one example of);
- the trustworthiness that they won't break the privacy policy


>The main problem I see here is that you seem to consider that checkuser
>access should only be given to sysadmins. I do not think the majority
>of editors would agree with you.


Please don't misrepresent my words. I said that was not what I thought
and I meant that was not what I thought. You therefore have no
justification to say that that's what I said or meant. I ask you to
retract it.


>I see your argument as aiming only at restricting the use of this tool
>to a very limited number of editors, approved by Jimbo or Tim. Right
>now, Jimbo has approved access for half a dozen English editors, none
>of whom are actually sysadmins.
>What is your feeling toward these nominations?


As you FULLY KNOW BECAUSE I CC'D YOU ON THE EMAIL IN QUESTION, I am
fine with all of those.

Why are you pretending I am saying things I didn't or not saying things I did?


>But I would like to know why you have not made any comments this week
>while I indicated a week ago that unless there was opposition, this
>policy would go live this week.


After you complained on arbcom-l of people not commenting, I went and
checked that I had in fact commented ... and had already pointed out
the ridiculousness of voting on the matter.


As Chris Jenkinson said:

>Surely the enforcement of the Foundation's privacy policy is the
>responsibility of the Foundation, and thus access to personal
>information (such as IP addresses) should be given out upon approval by
>the Board, rather than by some kind of election system?


Indeed. Anthere, I originally understood this was your position.


- d.


From t.starling at physics.unimelb.edu.au  Tue Nov  8 15:22:02 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Wed, 09 Nov 2005 02:22:02 +1100
Subject: [Wikitech-l] Re: Search very intermittent on :en:
In-Reply-To: 
References:  
	
Message-ID: 

Timwi wrote:
> 
>>> The symptoms so far include showing the Google/Yahoo boxes and an odd
>>> error message indicating some sort of bad return from an IP in the
>>> [10.*.*.*] range. My IP knowledge is rusty: is that a private range?
>>
>>
>> At the moment, we have three search servers: maurus (10.0.0.16), vincent
>> (10.0.0.17) and coronelli (10.0.0.230).
> 
> 
> Do we have to expose this detail to non-technical users? The above user
> clearly knew a fair bit about IPs, and the message was still meaningless
> to them; imagine what an average non-technical user might think.

We have detailed error messages so that sysadmins can debug problems when
users report them.

-- Tim Starling



From hippytrail at gmail.com  Tue Nov  8 15:53:42 2005
From: hippytrail at gmail.com (Andrew Dunbar)
Date: Tue, 8 Nov 2005 09:53:42 -0600
Subject: [Wikitech-l] Re: Adding a new tab
In-Reply-To: 
References:  
Message-ID: 

On the English Wiktionary I've added a new tab using javascript.
The code is here: http://en.wiktionary.org/wiki/User:Hippietrail/monobook.js

Look for the function "addCiteTab"

Good luck.

Andrew Dunbar (hippietrail)

On 11/8/05, Timwi  wrote:
>
> >
> >  Does anyone have any idea how to add a new tab to the existing
> > tabs (article, discussion, edit and history) on the wikipedia page? I have
> > the mediawiki installed and the database dump imported. Now I want to tweak
> > and make additions to wikipedia. I have very little knowledge of PHP but am
> > ready to learn the advanced concepts as the need may arise.
>
> You make additions to Wikipedia at http://wikipedia.org. Don't call your
> site Wikipedia, or you would be violating trademark law.
>
> That said, have a look at Magnus Manske's validation feature; it adds a
> new tab, 'validate', to every page.
>
> Timwi
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>


--
http://linguaphile.sf.net


From nospam-abuse at bloodgate.com  Tue Nov  8 16:52:52 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Tue, 8 Nov 2005 17:52:52 +0100
Subject: [Wikitech-l] Request for the creation of a Nedersaksisch Wikipedia
In-Reply-To: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
References: <20051108013223.7665.qmail@web25808.mail.ukl.yahoo.com>
Message-ID: <200511081753.19082@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Hello Arbeo!

On Tuesday 08 November 2005 02:32, Arbeo M wrote:
> Hi Tels!
>
> Thanks for your inquiry. This can be a bit confusing
> indeed. That's part of the reason why discussion took
> so long here.
>
> > I am also confused. Does this mean we will have
> > wikipedias for every
> > German dialect (saxionian, bavarian etc), too?
> > *confused*
> >
> > How does "nedersaksisch" relate to
>
> "nieders?chsisch"?

[snipabit]

Thanx for the clarification. I think it would have been good to include a 
few links for casual readers of this mailing-list, not everyone is 
familiar with all the details/languages and ongoing/past discussions etc.

Regarding these two wikipedias, it seems this is a unique situation, 
where a dialect (the Dutch one) of a language (Plattdeutsch) will get 
its own wikipedia.

I am not per se opposed to dialects getting their own wikipedia. It might 
be very interesting to see the content (even if it is just for preserving 
the dialect/language).

OTOH, "real" languages like "Sorbisch" (ISO code "wen") don't have thier 
own wikipedia yet, and these would at least have some "official" rules on 
how to spell things. I am no language expert, but things like saxonian 
could get very messy in written form. Since I know nothing about 
nedersaksisch, I will refrain from vote for or against it :)

> 2. However, Low Saxon (=Nedersaksisch, Niedersächsisch
> or Plattdüütsch) is almost unanimously considered a
> separate language and not regarded as belonging to
> the German language. As a German, I can confirm and
> assure that the two languages are not mutually
> intelligible.

Heh, you could say that about Bavarian and Saxonian! (I jest! :)

Here is a .sig of mine relating to that topic:

	Neulich in Dresden gehört:
	  "Gundach. Schbindadoni. Isvleisch dadidada?" -- Ex-Kahl-Libur

Best wishes,

Tels

- -- 
 Signed on Tue Nov  8 17:44:52 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "Nuclear powered vacuum cleaners will probably be ready within 10
 years." Alex Lewyt, of the Lewyt Corporation, a vacuum maker, predicted
 in The New York Times on June 10, 1955 -- A warning to all technophiles

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ3DX+XcLPEOTuEwVAQGazQf+LLp/3ma4wV9/jy/hm2UObuerVcIJC1Di
sAnU20RbvqQ7S8YoxFEdnrn9qI6n+HDkQotN8Gg+cswb9AjdYN1tY7rruTLQ+15y
ljtZx/7Py22Gb0KJlrfXLj14ZVs8fjxIflYfFBxnxz5Jhnh+g6rr5ghcCFNSMp9K
y7jk0zz7JVfCixDatzkMfPNjqLNsxL8/ARmcQPPlVp4l91cIskYDEnMLW+0lHoQy
AqaVKGb4HUlDzp4Ap2493qQTFWnVJ535LPISFIVm0gKKbChGjMKHSoPu8OqnExvR
5J/v8cMox81LTB8kjM6yF49rSI1DU3ZbREimplFxrX0rAteTN30drQ==
=waKc
-----END PGP SIGNATURE-----


From anthere9 at yahoo.com  Tue Nov  8 16:48:36 2005
From: anthere9 at yahoo.com (Anthere)
Date: Tue, 08 Nov 2005 17:48:36 +0100
Subject: [Wikitech-l] check user policy
In-Reply-To: 
References: 
Message-ID: 

David Gerard wrote:

> Anthere wrote:
> 
>>David Gerard wrote:
>>
>>>Anthere wrote:
> 
> 
>>>>* on a project with no arbcom, the community will have to vote for its
>>>>editors with checkuser access. A limit on the number of votes has been set
>>>>on purpose. I recommend avoiding using sockpuppets for voting. A wiki
>>>>community with 10 editors and 30 voters is likely to be frowned upon.
> 
> 
>>>And next, we'll be voting for root, database access and CVS access.
>>>Get your votes in now! Brion, Tim or Lir for Mediawiki lead? It's a
>>>hot contest!
> 
> 
> 
>>I think it should be possible to discuss without using fallacious
>>arguments, David. There is no comparison between checkuser access and
>>root access.
> 
> 
> 
> There is, really: neither is a voting matter. I raised this before,
> but you appear to regard the objection as (to quote you) "no real
> opposition". Not to mention Tim's quote when voting for checkuser was
> floated: "Users would vote themselves root if they could."
> 
> What I said was that users need:
> - the technical knowledge to know what they're seeing (which a network
> admin was one example of);
> - the trustworthiness that they won't break the privacy policy

I still do not see how that allows the comparison between checkuser 
access and root access.


>>The main problem I see here is that you seem to consider that checkuser
>>access should only be given to sysadmins. I do not think the majority
>>of editors would agree with you.
> 
> 
> 
> Please don't misrepresent my words. I said that was not what I thought
> and I meant that was not what I thought. You therefore have no
> justification to say that that's what I said or meant. I ask you to
> retract it.

I do not really see how I can retract what I think is your position. 
Admittedly, I may be wrong and not seeing your actual position, but I 
can't lie about what I believe. That would be denying my beliefs. I hope 
you see the difference.
Now, I hope you will agree that you consider that only editors with a 
certain level of technical knowledge should have access to this tool. 
When I suggested that this technical knowledge could be offered in the 
help pages on meta, you said it was not easy to provide such knowledge.
So? What can we do?


>>I see your argument as aiming only at restricting the use of this tool
>>to a very limited number of editors, approved by Jimbo or Tim. Right
>>now, Jimbo has approved access for half a dozen English editors, none
>>of whom are actually sysadmins.
>>What is your feeling toward these nominations?
> 
> 
> 
> As you FULLY KNOW BECAUSE I CC'D YOU ON THE EMAIL IN QUESTION, I am
> fine with all of those.
> 
> Why are you pretending I am saying things I didn't or not saying things I did?

Because I do *not* understand what your position is.
I understand you oppose access by voting, but I still do not understand 
what you suggest the "nomination system" be replaced with. Correct me 
if I am wrong, but as I understand it, you currently suggest that people 
be approved by Jimbo or Tim?

You say on the one hand that checkusers should have a certain technical 
knowledge and be trustworthy. And on the other hand, apparently, that 
only Jimbo or Tim (I am not entirely sure of this point) should agree 
on who should have access. My question is: how will Jimbo or Tim manage 
to check the trustworthiness of editors who will maybe not even speak 
English? Jimbo or Tim could probably check the technical ability in 
most cases, but how will they check trustworthiness?

How will they do this for more than 300 projects?

Another point is, right now, all stewards have checkuser access. But no 
steward was approved by a developer or by Tim himself (who is at the 
origin of the steward status creation), nor by Jimbo.
So, do you suggest that stewards be asked not to use this tool? Or 
should they be allowed only after approval by Jimbo or Tim? Or should 
stewards be nominated only by Jimbo or Tim in the future?

My problem with your position, David, is that I understand what you are 
against, but I do not understand at all what you propose to replace the 
proposal with.

One thing I know is that the way our projects work is a mixture of 
various political systems. I am pushing toward a system more inspired by 
democracy or oligarchy (community or subcommunity approval). It seems to 
me you are pushing toward monarchy or technocracy. This seems to be the 
root of the current problem.


>>But I would like to know why you have not made any comments this week
>>while I indicated a week ago that unless there was opposition, this
>>policy would go live this week.
> 
> 
> 
> After you complained on arbcom-l of people not commenting, I went and
> checked that I had in fact commented ... and had already pointed out
> the ridiculousness of voting on the matter.

It seems most people who gave their opinion approved the voting method. 
Are all these people ridiculous?

In case some people did not understand, the English wikipedia will NOT 
be voting. The checkUser system on the English wikipedia will rely on 
the arbcom, and for now, nominees were confirmed by Jimbo himself. So, 
no one will have to bother with voting there.

Now, the fact is, David, while I understand your position to a certain 
point, I am not sure you have very strong experience with the 
non-English communities. You are mostly involved (very much) in the 
English one. It seems the English community is quite happy with a 
mixture of monarchy (Jimbo) and oligarchy (committees....). This is not 
necessarily the case on most projects. And I think most projects will 
not be very happy with Jimbo (for example) making a decision for them. 
My best example on this issue is that... stewards... are definitely 
approved by the community. And though I think most of those who know Tim 
consider him a great guy, I also think going back to a system where 
decision making is done by a developer... will not be something really 
appreciated.

In the current proposal, any community actually has a CHOICE between 
voting and not voting. If they REALLY do not want a community vote, they 
have two options:
* they can set up a sort of arbcom system to make this kind of decision 
for them;
* or they can avoid voting for anyone entirely and rely on stewards to 
handle one-off requests.

And thinking about the French arbcom: if the French community feels the 
community should handle a vote, then the arbcom itself is empowered to 
just say "okay, we could make this decision for you, but we prefer that 
the community make this decision".


> As Chris Jenkinson said:
> 
> 
>>Surely the enforcement of the Foundation's privacy policy is the
>>responsibility of the Foundation, and thus access to personal
>>information (such as IP addresses) should be given out upon approval by
>>the Board, rather than by some kind of election system?
> 
> 
> 
> Indeed. Anthere, I originally understood this was your position.


No. Not upon approval. We are not a top-down organisation.

If we were a top-down organisation, with board approval at every step, 
you yourself would possibly not have checkuser status, as the board was 
never asked to approve it.

By default, we consider you are doing the job with full honesty and 
understanding.


Anthere





From wp at lehle.it  Tue Nov  8 17:04:47 2005
From: wp at lehle.it (JuergenL)
Date: Tue, 8 Nov 2005 18:04:47 +0100
Subject: [Wikitech-l] Invalid range LoadBalancer.php line 584
Message-ID: <1771277491.20051108180447@lehle.it>

Hi,

I am using MediaWiki 1.5.1 with PHP 4.1.2 and I often get the
following warning when accessing my wiki pages:

Warning:  mt_rand(): Invalid range:  0..0 in
/path/to/mediawiki/includes/LoadBalancer.php on line 584

Do you know that warning and can you tell me how to fix it?

Thanks in advance.

JuergenL
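For what it's worth, mt_rand() warns when min == max, i.e. when the load
balancer has only one candidate to pick from. A guard of roughly this
shape avoids the warning; this is only a sketch of the idea with an
invented function name, not the actual LoadBalancer code:

  # sketch: pick a random index without tripping mt_rand()'s
  # "Invalid range" warning when there is only one candidate
  function pickRandomIndex( $count ) {
      if ( $count <= 1 ) {
          return 0;  # zero or one candidates: nothing to randomize
      }
      return mt_rand( 0, $count - 1 );
  }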



From beesley at gmail.com  Tue Nov  8 17:52:26 2005
From: beesley at gmail.com (Angela)
Date: Tue, 8 Nov 2005 18:52:26 +0100
Subject: [Wikitech-l] check user policy
In-Reply-To: 
References: 
	
Message-ID: <8b722b800511080952jb726301xf42e4a8f8959c731@mail.gmail.com>

> Another point is, right now, all stewards have checkuser access. But no
> steward was approved by a developer or by Tim himself (who is at the
> origin of the steward status creation), nor by Jimbo.
> So, do you suggest that stewards be asked not to use this tool? Or
> should they be allowed only after approval by Jimbo or Tim? Or should
> stewards be nominated only by Jimbo or Tim in the future?

It isn't true that all stewards have CheckUser access. Stewards have
the ability, though not the right, to assign themselves this access.
It's unfortunate that some have abused their privileges by assigning
themselves CheckUser access without the approval of the communities
they're using it on. I thought stewards could be trusted not to do
that, but seemingly not.

Angela
A steward with no checkuser access


From gtg808u at mail.gatech.edu  Tue Nov  8 21:24:50 2005
From: gtg808u at mail.gatech.edu (Amruta Lonkar)
Date: Tue,  8 Nov 2005 16:24:50 -0500
Subject: [Wikitech-l] How to proceed from save page/preview page
In-Reply-To: <1131293661.436e2bdd0e553@webmail.mail.gatech.edu>
References: <1130334628.435f89a4cb189@webmail.mail.gatech.edu>
	
	<1131293661.436e2bdd0e553@webmail.mail.gatech.edu>
Message-ID: <1131485090.437117a2ddc04@webmail.mail.gatech.edu>


I have a few doubts about the flow of code once the user clicks the
save/preview/show diff buttons. I am sorry for posting this again, but I
am really stuck on this :(

I have made changes to our locally installed wiki:
1) Added a reference button to the edit toolbar which, when clicked, pulls
up a popup where the user can enter reference information. Once the user
clicks submit in this form, the information gets entered into the reference
table in wikidb.

What I need to understand is how I should proceed when the user clicks
either the save or preview button. I have a vague idea that only after the
user clicks save should the information get entered into the referencelinks
table, which is a replica of the imagelinks table.

Also, I tried looking into the Article.php code as suggested by Tim, but I
still can't figure out how I should proceed.

I cannot figure out at what point and where in the code I can add
information to the referencelinks table. Or should this be done when the
user clicks the preview button?

I would really appreciate any help.

Amruta
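One possible shape for this, sketched under assumptions: MediaWiki 1.5's
docs/hooks.txt lists an ArticleSaveComplete hook, while the referencelinks
columns and the wfExtractReferences() helper below are invented names for
illustration only. A handler registered from LocalSettings.php runs only
after a successful save, never on preview or show-diff:

  # sketch: write referencelinks rows only once a page is really saved
  $wgHooks['ArticleSaveComplete'][] = 'wfUpdateReferenceLinks';

  function wfUpdateReferenceLinks( &$article, &$user, $text, $summary,
      $minor, $watch, $sectionanchor ) {
      $dbw =& wfGetDB( DB_MASTER );
      # replace this page's rows in the (hypothetical) referencelinks table
      $dbw->replace( 'referencelinks',
          array( 'rl_from' ),
          array( 'rl_from' => $article->getID(),
                 'rl_refs' => wfExtractReferences( $text ) ) );  # hypothetical helper
      return true;  # let other hooks run
  }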


>
>
> Quoting Tim Starling :
>
> > Amruta Lonkar wrote:
> > > Hi,
> > >
> > > I am trying to find which function is called when the user hits the
> > > save button on the edit page, and in which file the button click for
> > > save is checked.
> > >
> > > Any help appreciated.
> > >
> > > Thanks,
> > > --
> > > Amruta
> >
> > Article::save(), in Article.php. Submitting the form generates a POST
> > request to index.php with action=save in the query string. This action is
> > dispatched to the Article object.
> >
> > -- Tim Starling
> >
> > _______________________________________________
> > Wikitech-l mailing list
> > Wikitech-l at wikimedia.org
> > http://mail.wikipedia.org/mailman/listinfo/wikitech-l
> >
>
>
> --
> Amruta
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>


--
Amruta


From pcd at wikitex.org  Tue Nov  8 23:06:07 2005
From: pcd at wikitex.org (Peter Danenberg)
Date: Tue, 8 Nov 2005 17:06:07 -0600
Subject: [Wikitech-l] [OT] IOCCC
In-Reply-To: 
References: 
	
Message-ID: <20051108230607.GA22537@wikitex.org>

Is it our Timwi, by the way, who just won the IOCCC?

     http://www.ioccc.org/whowon2005.html

--
Peter Danenberg                                           .
wikisophia.org                                           ..:


From ccanddyy at yahoo.com  Tue Nov  8 20:27:15 2005
From: ccanddyy at yahoo.com (candy)
Date: Tue, 08 Nov 2005 12:27:15 -0800
Subject: [Wikitech-l] Re: Adding a new tab
In-Reply-To: 
References:  
	
Message-ID: 


Thanks Andrew, but the MediaWiki I installed doesn't have any JavaScript 
code; it has monobook.php and CSS files. In that case, can I still use 
the JS code? If so, how? Otherwise, what's the alternative?

C

Andrew Dunbar wrote:
> On the English Wiktionary I've added a new tab using javascript.
> The code is here: http://en.wiktionary.org/wiki/User:Hippietrail/monobook.js
> 
> Look for the function "addCiteTab"
> 
> Good luck.
> 
> Andrew Dunbar (hippietrail)
> 
> On 11/8/05, Timwi  wrote:
> 
>>> Does anyone have any idea how to add a new tab to the existing
>>>tabs (article, discussion, edit and history) on the wikipedia page? I have
>>>the mediawiki installed and the database dump imported. Now I want to tweak
>>>and make additions to wikipedia. I have very little knowledge of PHP but am
>>>ready to learn the advanced concepts as the need may arise.
>>
>>You make additions to Wikipedia at http://wikipedia.org. Don't call your
>>site Wikipedia, or you would be violating trademark law.
>>
>>That said, have a look at Magnus Manske's validation feature; it adds a
>>new tab, 'validate', to every page.
>>
>>Timwi
>>
>>_______________________________________________
>>Wikitech-l mailing list
>>Wikitech-l at wikimedia.org
>>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>>
> 
> 
> 
> --
> http://linguaphile.sf.net



From bob at jones-cliffe.freeserve.co.uk  Wed Nov  9 02:05:39 2005
From: bob at jones-cliffe.freeserve.co.uk (Robert Jones)
Date: Wed, 9 Nov 2005 02:05:39 -0000
Subject: [Wikitech-l] Speeding up Mediawiki
Message-ID: <001a01c5e4d2$17276510$0132a8c0@WILLLAPTOP>

Are there any tricks to speeding up Mediawiki? I have many other pages on my
site that talk to the MySQL database that open in <1s but the Mediawiki ones
tend to take longer c.2-5s, and I don't have an enormous amount of load on
the server.

 

It is a wiki installation that isn't editable by the public and doesn't use
a skin as such (I just allow all the HTML I want to allow by editing the
parser and another file), so I was wondering if there were any tricks to
bypass things and make the pages load quicker. It's probably worth noting
that I don't use the cache facility and that I have an extension that
displays only if someone is logged in (this is the reason for the cache
not being used).

 

I think Mediawiki is great, just looking to find out if I can make it run
faster. Sincere thanks to all responsible for the hard work that's gone into
the script.



From hippytrail at gmail.com  Wed Nov  9 02:25:10 2005
From: hippytrail at gmail.com (Andrew Dunbar)
Date: Tue, 8 Nov 2005 20:25:10 -0600
Subject: [Wikitech-l] Re: Adding a new tab
In-Reply-To: 
References:  
	
	
Message-ID: 

Hi Candy.

I don't really know all the options since perhaps you're running your
own wiki but I've just made some code that any user can use to add a
tab to an existing wiki that she does not own.

My js is not global but could be. Currently just the people who are
interested in it put a copy in their own js page. Not all skins
support custom per-user js. I know monobook does and classic doesn't.
I'm not sure about the others.

Andrew Dunbar (hippietrail)

On 11/8/05, candy  wrote:
>
> Thanks Andrew, but the MediaWiki I installed doesn't have any JavaScript
> code; it has monobook.php and CSS files. In that case, can I still use
> the JS code? If so, how? Otherwise, what's the alternative?
>
> C
>
> Andrew Dunbar wrote:
> > On the English Wiktionary I've added a new tab using javascript.
> > The code is here: http://en.wiktionary.org/wiki/User:Hippietrail/monobook.js
> >
> > Look for the function "addCiteTab"
> >
> > Good luck.
> >
> > Andrew Dunbar (hippietrail)
> >
> > On 11/8/05, Timwi  wrote:
> >
> >>> Does anyone have any idea how to add a new tab to the existing
> >>>tabs (article, discussion, edit and history) on the wikipedia page? I have
> >>>the mediawiki installed and the database dump imported. Now I want to tweak
> >>>and make additions to wikipedia. I have very little knowledge of PHP but am
> >>>ready to learn the advanced concepts as the need may arise.
> >>
> >>You make additions to Wikipedia at http://wikipedia.org. Don't call your
> >>site Wikipedia, or you would be violating trademark law.
> >>
> >>That said, have a look at Magnus Manske's validation feature; it adds a
> >>new tab, 'validate', to every page.
> >>
> >>Timwi
> >>
> >>_______________________________________________
> >>Wikitech-l mailing list
> >>Wikitech-l at wikimedia.org
> >>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
> >>
> >
> >
> >
> > --
> > http://linguaphile.sf.net
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>


--
http://linguaphile.sf.net


From brion at pobox.com  Wed Nov  9 02:45:59 2005
From: brion at pobox.com (Brion Vibber)
Date: Tue, 08 Nov 2005 18:45:59 -0800
Subject: [Wikitech-l] Speeding up Mediawiki
In-Reply-To: <001a01c5e4d2$17276510$0132a8c0@WILLLAPTOP>
References: <001a01c5e4d2$17276510$0132a8c0@WILLLAPTOP>
Message-ID: <437162E7.9080808@pobox.com>

Robert Jones wrote:
> Are there any tricks to speeding up Mediawiki? I have many other pages on my
> site that talk to the MySQL database that open in <1s but the Mediawiki ones
> tend to take longer c.2-5s, and I don't have an enormous amount of load on
> the server.

If you're not already using one, install a PHP opcode cache. Without 
one, your server wastes a lot of time recompiling MediaWiki's source 
code every time you hit a page.

Free/Open Source:
APC: http://pecl.php.net/package/APC
eAccelerator: http://eaccelerator.net/
Turck MMCache: http://turck-mmcache.sourceforge.net/index_old.html 
(older version)

Proprietary non-commercial:
PHP Accelerator: http://www.php-accelerator.co.uk/

Proprietary commercial:
Zend Accelerator: http://www.zend.com/

(Note that 'Zend Optimizer' is a different product, and less likely to 
improve performance significantly.)
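For APC, for instance, enabling the cache is typically a few php.ini
lines like these (a sketch; the shared-memory size is an arbitrary
example, not a recommendation):

  extension=apc.so      ; php_apc.dll on Windows
  apc.enabled=1
  apc.shm_size=30       ; MB of shared memory for compiled scripts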


You may also get a slight improvement from using memcached instead of 
the in-database cache for messages and rendered pages.
http://www.danga.com/memcached/


> It is a wiki installation that isn't editable by the public and doesn't use
> a skin as such (I just allow all the HTML I want to allow by editing the
> parser and another file), so I was wondering if there were any tricks to
> bypass things and make the pages load quicker. It's probably worth noting
> that I don't use the cache facility and that I have an extension that
> displays only if someone is logged in (this is the reason for the cache
> not being used).

Enabling the parser cache can make a difference with longer pages as 
well. If your extension just shows or hides content by login state, and 
isn't particularly time-dependent, you might hack 
User::getPageRenderingHash() to include the logged-in state. This would 
cause the parser cache to store separate entries for logged-in and 
non-logged-in users.
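That hack would look roughly like this (a sketch against 1.5's User.php,
with the existing option string abbreviated; only the marked line is the
actual change):

  function getPageRenderingHash() {
      if ( $this->mHash ) {
          return $this->mHash;
      }
      # existing per-user options that affect rendering (abbreviated)
      $confstr = $this->getOption( 'math' );
      $confstr .= '!' . $this->getOption( 'date' );
      # added: keep separate parser cache entries per login state
      $confstr .= '!' . ( $this->isLoggedIn() ? 'loggedin' : 'anon' );
      $this->mHash = $confstr;
      return $this->mHash;
  }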

If you get a lot of reads by external viewers, you might also try 
enabling the file cache, which can shave off a little more time for 
anonymous viewer cache hits by reading complete page HTML from disk. 
(See DefaultSettings.php; note you may have to tweak a couple of settings 
for it to work.)
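The relevant knobs live in LocalSettings.php; a sketch using the 1.5
setting names (the values are just examples):

  $wgMainCacheType = CACHE_MEMCACHED;               # memcached object cache
  $wgMemCachedServers = array( '127.0.0.1:11211' );
  $wgEnableParserCache = true;                      # cache rendered pages
  $wgUseFileCache = true;                           # whole-page HTML cache
  $wgFileCacheDirectory = "$IP/cache";              # must be web-writable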

-- brion vibber (brion @ pobox.com)


From ilooy.gaon at gmail.com  Wed Nov  9 03:37:46 2005
From: ilooy.gaon at gmail.com (ilooy)
Date: Tue, 8 Nov 2005 22:37:46 -0500
Subject: [Wikitech-l] Help with translating database
Message-ID: <6ed35420511081937t566fc3cel@mail.gmail.com>

Hi all,
I have a question regarding editing a specific language
database to aid in translating it to another language.

I have set up a test wiki on a Linux/Linspire 4.5 notebook
and have the software running quite nicely.

My wish is to upload a backup of one of the language
wikis and see how I can tweak a translation out of the
file, something that will go through the database and
basically do a search and replace using a vocabulary
list.

Is there a better way of doing this?

With regards,
Jay B.
ilooy.gaon at gmail.com


From gerard.meijssen at gmail.com  Wed Nov  9 06:54:30 2005
From: gerard.meijssen at gmail.com (Gerard Meijssen)
Date: Wed, 09 Nov 2005 07:54:30 +0100
Subject: [Wikitech-l] check user policy
In-Reply-To: <8b722b800511080952jb726301xf42e4a8f8959c731@mail.gmail.com>
References: 	
	<8b722b800511080952jb726301xf42e4a8f8959c731@mail.gmail.com>
Message-ID: <43719D26.9050905@gmail.com>

Angela wrote:
>> Another point is, right now, all stewards have checkuser access. But no
>> steward was approved by a developer or by Tim himself (who is at the
>> origin of the steward status creation), nor by Jimbo.
>> So, do you suggest that stewards be asked not to use this tool? Or
>> should they be allowed only after approval by Jimbo or Tim? Or should
>> stewards be nominated only by Jimbo or Tim in the future?
>>     
>
> It isn't true that all stewards have CheckUser access. Stewards have
> the ability, though not the right, to assign themselves this access.
> It's unfortunate that some have abused their privileges by assigning
> themselves CheckUser access without the approval of the communities
> they're using it on. I thought stewards could be trusted not to do
> that, but seemingly not.
>
> Angela
> A steward with no checkuser access
Hoi,
I can remember some discussion where it was said that a steward without 
the knowledge to use this tool should not use it. This was at a time 
when a lot of problems were made worse by the lack of someone able or 
willing to do a checkuser on the Dutch wikipedia. The consensus at that 
time was that stewards should be considered responsible enough to decide 
for themselves whether they are able to use a tool like this.

It was also discussed that a checkuser should have the right to test for 
sockpuppetry when it is considered a possibility. It can be done 
discreetly. This allows a steward to quietly dispel the notion that two 
users are the same, which is less acrimonious than making it known that 
a check is taking place. The fact that someone is checked is often felt 
as an insult on its own. By allowing for discretion, a lot of feathers 
will not be ruffled. Obviously, any project can do this its own way, but 
given that the checkuser tool needs to be applied in a timely manner, I 
would consider using the tool by committee a self-defeating proposition.

Thanks,
   GerardM


From magnus.manske at web.de  Wed Nov  9 09:16:03 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Wed, 09 Nov 2005 10:16:03 +0100
Subject: [Wikitech-l] [OT] IOCCC
In-Reply-To: <20051108230607.GA22537@wikitex.org>
References: 	
	<20051108230607.GA22537@wikitex.org>
Message-ID: <4371BE53.8050008@web.de>

Peter Danenberg wrote:

>Is it our Timwi, by the way, who just won the IOCCC?
>
>     http://www.ioccc.org/whowon2005.html
>  
>
Looks like it. Congratulations, Timwi!

Let me be the first to welcome our new obfuscating overlord, and express
my deep hope that he may never use this special power on the MediaWiki
source ;-)

Magnus


From phil.boswell at gmail.com  Wed Nov  9 09:50:52 2005
From: phil.boswell at gmail.com (Phil Boswell)
Date: Wed, 9 Nov 2005 09:50:52 -0000
Subject: [Wikitech-l] Re: Search very intermittent on :en:
References:  
	
Message-ID: 

"Timwi"  wrote in message 
news:dkq22f$e7q$2 at sea.gmane.org...
[me originally, then I think Tim Starling]
>>>The symptoms so far include showing the Google/Yahoo boxes and an odd 
>>>error message indicating some sort of bad return from an IP in the 
>>>[10.*.*.*] range. My IP knowledge is rusty: is that a private range?
>> At the moment, we have three search servers: maurus (10.0.0.16), vincent
>> (10.0.0.17) and coronelli (10.0.0.230).

I managed to capture one: it was "vincent"
* Internal error: no valid response from search server (10.0.0.17)

> Do we have to expose this detail to non-technical users? The above user 
> clearly knew a fair bit about IPs, and the message was still meaningless 
> to them; imagine what an average non-technical user might think.

You flatter me, for which I thank you, but you're reasonably correct. If 
we're going to expose messages like that to naive users, they should be 
bracketed with a warning, and possibly some indication of whom to inform 
of the problem.
-- 
Phil
[[en:User:Phil Boswell]] 





From brion at pobox.com  Wed Nov  9 10:20:39 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 02:20:39 -0800
Subject: [Wikitech-l] Re: Search very intermittent on :en:
In-Reply-To: 
References: 
		
	
Message-ID: <4371CD77.5010408@pobox.com>

Tim Starling wrote:
> Timwi wrote:
>> Do we have to expose this detail to non-technical users? The above user
>> clearly knew a fair bit about IPs, and the message was still meaningless
>> to them; imagine what an average non-technical user might think.
> 
> We have detailed error messages so that sysadmins can debug problems when
> users report them.

I've switched it to automatically cycle to the next server if the first 
one doesn't respond; if all three fail it'll now print out:

"There was a problem with the wiki search.
This is probably temporary; try again in a few moments,
or you can search the wiki through an external search service:"
[followed by the google and yahoo search forms]

(This is localizable and configurable as MediaWiki:Lucenefallback)

-- brion vibber (brion @ pobox.com)
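The failover amounts to something like this (a sketch, not the actual
MediaWiki code; doLuceneQuery is a stand-in name for the real backend
call):

  // try each search host in turn; fall back to the localizable
  // MediaWiki:Lucenefallback message if none of them responds
  function searchWithFailover( $servers, $query ) {
      foreach ( $servers as $host ) {
          $result = doLuceneQuery( $host, $query );  // hypothetical helper
          if ( $result !== false ) {
              return $result;
          }
      }
      return wfMsg( 'lucenefallback' );
  }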


From timwi at gmx.net  Wed Nov  9 11:14:40 2005
From: timwi at gmx.net (Timwi)
Date: Wed, 09 Nov 2005 11:14:40 +0000
Subject: [Wikitech-l] Re: Adding a new tab
In-Reply-To: 
References: 
		
	
Message-ID: 

candy wrote:
> 
> Thanks Andrew, but the MediaWiki I installed doesn't have any
> JavaScript code; it has monobook.php and CSS files. In that case, can I
> still use the JS code? If so, how? Otherwise, what's the alternative?

You can use the js code by placing it on the page 
[[MediaWiki:Monobook.js]] on your wiki. Not a file in your file system; 
a wiki page. However, note that it will only work in the Monobook skin. 
Since other skins don't use tabs at all, you will have to find a 
different way for those, or just not care about the few people who don't 
use Monobook.

Timwi



From timwi at gmx.net  Wed Nov  9 11:29:48 2005
From: timwi at gmx.net (Timwi)
Date: Wed, 09 Nov 2005 11:29:48 +0000
Subject: [Wikitech-l] Re: Search very intermittent on :en:
In-Reply-To: <4371CD77.5010408@pobox.com>
References: 			
	<4371CD77.5010408@pobox.com>
Message-ID: 


> (This is localizable and configurable as MediaWiki:Lucenefallback)

Will this go live later? [[MediaWiki:Lucenefallback]] doesn't seem to 
exist on en yet.



From magnus.manske at web.de  Wed Nov  9 12:56:33 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Wed, 09 Nov 2005 13:56:33 +0100
Subject: [Wikitech-l] Wiki-to-XML
Message-ID: <4371F201.8050308@web.de>

I just wanted to announce that
* my PHP-based wiki-to-xml converter now supports the whole syntax
* is now in the "php" directory of the CVS module "wiki2xml"
* can be tested at http://magnusmanske.de/wiki2xml/w2x.php

You can either enter raw wikitext, or a list of article titles.
Templates can be automatically resolved (which is necessary for some
pages, as otherwise the wiki syntax is invalid and rendered as plain
text). Article and template texts are fetched from the given MediaWiki site.

Please report any bugs you find. I will now start and try (again) to
write a converter to OpenDocument format. Any help would be appreciated.

Magnus



From timwi at gmx.net  Wed Nov  9 15:30:44 2005
From: timwi at gmx.net (Timwi)
Date: Wed, 09 Nov 2005 15:30:44 +0000
Subject: [Wikitech-l] Re: Wiki-to-XML
In-Reply-To: <4371F201.8050308@web.de>
References: <4371F201.8050308@web.de>
Message-ID: 

Magnus Manske wrote:
> I just wanted to announce that
> * my PHP-based wiki-to-xml converter now supports the whole syntax
> * is now in the "php" directory of the CVS module "wiki2xml"
> * can be tested at http://magnusmanske.de/wiki2xml/w2x.php

My first (two-character) test input:

	{|

yields invalid XML as output. ;-) In general, it does so whenever the 
close-table markup (|}) is missing.

Also, you seem to be ignoring all whitespace at the beginning of the 
input, which makes it output a <p> when the first line should 
have been a <pre> because it starts with a space.

Otherwise: Very impressive!!

Timwi



From nospam-abuse at bloodgate.com  Wed Nov  9 16:19:07 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Wed, 9 Nov 2005 17:19:07 +0100
Subject: [Wikitech-l] [OT] IOCCC
In-Reply-To: <4371BE53.8050008@web.de>
References: 
	<20051108230607.GA22537@wikitex.org> <4371BE53.8050008@web.de>
Message-ID: <200511091719.17650@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Wednesday 09 November 2005 10:16, Magnus Manske wrote:
> Peter Danenberg wrote:
> >Is it our Timwi, by the way, who just won the IOCCC?
> >
> >     http://www.ioccc.org/whowon2005.html
>
> Looks like it. Congratulations, Timwi!
>
> Let me be the first to welcome our new obfuscating overlord, and
> express my deep hope that he never may use this special power on
> MediaWiki source ;-)

Congrats and let me express similiar hopes,

Tels

- -- 
 Signed on Wed Nov  9 17:18:42 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "If Duke Nukem Forever is not out in 2001, something's very wrong." -
 George Broussard, 2001 (http://tinyurl.com/6m8nh)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ3Ihg3cLPEOTuEwVAQF2bQf/SNq4N8I14Ak0v1pT247zagHhQWuJbKon
Z4OQIZp4aGlnOuLd8yAE4xU/3fXR+p6UTBx3lAJSWK/xBE1YyDOWmsZwa8C/dhhT
FgVdjigA8FLBqdbQ6qBTtAZDI7I9b6WWtHV+AJVxEZ2Fix/aje3vVvq+pmolZmR/
vcSenwLgrbLjKo1j7ip+1Icm/qJMw3Qe3F754a1Gvzl6T3CZToV9RfOB23UshFEv
2z9Mt0sNVKneqHVMTgnRX7+Na5GBJpr3vnJbETRHBQ1TSZNwWZUPiWTvgPC8rK2q
XXZZXly6mpiUmFnox0OlyF6NbtItEpSkzemjucXL1gF1/hXJRmeyIA==
=Zv7X
-----END PGP SIGNATURE-----


From gmaxwell at gmail.com  Wed Nov  9 16:39:51 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Wed, 9 Nov 2005 11:39:51 -0500
Subject: [Wikitech-l] Wiki-to-XML
In-Reply-To: <4371F201.8050308@web.de>
References: <4371F201.8050308@web.de>
Message-ID: 

On 11/9/05, Magnus Manske  wrote:
> You can either enter raw wikitext, or a list of article titles.
> Templates can be automatically resolved (which is necessary for some
> pages, as otherwise the wiki syntax is invalid and rendered as plain
> text). Article and template texts are fetched from the given MediaWiki site.

There has been some discussion on IRC about potentially changing
syntax so that this doesn't happen. (It horribly breaks WYSIWYG
editing, it muddles up history, etc).

I'd like to see a longer and more thought out discussion on the list.

My thought is that, if we consider the parse tree of wikitext, templates
should only be able to affect a subtree under the node where they are
included, not make changes to the syntax at their level or above. I.e.
you should be able to completely parse the wikitext, then go in and
insert subtrees at the templates without changing anything else.

This *will* break a few things people have done on enwiki (I know this
because I ran into pages that broke my parser), but I don't think
breaking them prevents accomplishing anything useful.


From rthudy at hotmail.com  Wed Nov  9 16:43:53 2005
From: rthudy at hotmail.com (Ryan Hudy)
Date: Wed, 09 Nov 2005 11:43:53 -0500
Subject: [Wikitech-l] mwdumper
Message-ID: 

Does anyone know if the mwdumper program has image import capabilities? 
I'm able to import wikipedia data into my wiki, but I also need to import 
certain images, and can't find a way to perform that function effectively.

Ryan




From yellowikis at gmail.com  Wed Nov  9 16:50:25 2005
From: yellowikis at gmail.com (admin Yellowikis)
Date: Wed, 9 Nov 2005 16:50:25 +0000
Subject: [Wikitech-l] Reputation Systems
Message-ID: <40b4c2310511090850m38730a5fx21f65afcb7c6ed02@mail.gmail.com>

Has anyone tried integrating a reputation system with MediaWiki?

So editors get awarded points/feed back for their work?

Any tips or pointers very welcome.

Paul
--
Yellowikis is to Yellow Pages, as Wikipedia is to The Encyclopedia Britannica


From ejurschi at directmedia.de  Wed Nov  9 17:13:24 2005
From: ejurschi at directmedia.de (Erwin Jurschitza)
Date: Wed, 9 Nov 2005 18:13:24 +0100
Subject: AW: [Wikitech-l] Wiki-to-XML
In-Reply-To: 
Message-ID: 

Gregory Maxwell:

> My thought is that if we consider the parse tree
> of wikitext it should
> be that templates should only be able to affect a
> subtree under the
> node where they are included, not make changes to
> the syntax at their
> level or above.

Fully agreed. I found some pages on dewiki a while ago and
corrected them.

> I.e. you should be able to
> completely parse the
> wikitext, then go in and insert subtrees at the
> templates and not
> change anything else.

There are at least two more problems complicating the
building of a sane parse tree:

- templates may be nested inside tags, e.g.
  
  {| {{Prettytable}}
- variables may be used inside tags, see [1]
  [[Image:Chs2_{{{2}}}d40.png|{{{65}}}px]]


Erwin Jurschitza (de:Benutzer:Vlado)

[1] http://en.wikipedia.org/wiki/Template:Chess_pos2



From magnus.manske at web.de  Wed Nov  9 17:48:19 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Wed, 09 Nov 2005 18:48:19 +0100
Subject: [Wikitech-l] Re: Wiki-to-XML
In-Reply-To: 
References: <4371F201.8050308@web.de> 
Message-ID: <43723663.20501@web.de>

Timwi wrote:
> My first (two-character) test input:
>
>     {|
>
> yields invalid XML as output. ;-) In general, it does so whenever the
> close-table markup (|}) is missing.
I hacked in a fix for nested tables minutes before announcing here, so
it's probably a side effect of that. I'll have a look, thanks for noticing.
>
> Also, you seem to be ignoring all whitespace at the beginning of the
> input, which makes it output a <p> when the first line should
> have been a <pre> because it starts with a space.
Yep. Already fixed by changing "trim" to "rtrim" :-)
>
> Otherwise: Very impressive!!
Thanks! I hope with added OpenDocument export, this will become useful
one day.

Magnus


From gmaxwell at gmail.com  Wed Nov  9 18:42:30 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Wed, 9 Nov 2005 13:42:30 -0500
Subject: [Wikitech-l] Re: Wiki-to-XML
In-Reply-To: <43723663.20501@web.de>
References: <4371F201.8050308@web.de> 
	<43723663.20501@web.de>
Message-ID: 

On 11/9/05, Magnus Manske  wrote:
> Timwi wrote:
> > My first (two-character) test input:
> >
> >     {|
> >
> > yields invalid XML as output. ;-) In general, it does so whenever the
> > close-table markup (|}) is missing.
> I hacked in a fix for nested tables minutes before announcing here, so
> it's probably a side effect of that. I'll have a look, thanks for noticing.
> >
> > Also, you seem to be ignoring all whitespace at the beginning of the
> > input, which makes it output a <p> when the first line should
> > have been a <pre> because it starts with a space.
> Yep. Already fixed by changing "trim" to "rtrim" :-)
> >
> > Otherwise: Very impressive!!
> Thanks! I hope with added OpenDocument export, this will become useful
> one day.

It's useful already...  The complexity of the wikitext syntax (from a
programmer's perspective) is quite high, and this adds a substantial
level of friction to creating tools which can look for content in
pages. Even doing something as simple as extracting all the text of an
article and excluding content in images can be a pain.  The XML
representation is much easier to work with.


From magnus.manske at web.de  Wed Nov  9 18:49:55 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Wed, 09 Nov 2005 19:49:55 +0100
Subject: AW: [Wikitech-l] Wiki-to-XML
In-Reply-To: 
References: 
Message-ID: <437244D3.8030008@web.de>

Erwin Jurschitza wrote:
> Gregory Maxwell:
>
>   
>> My thought is that if we consider the parse tree
>> of wikitext it should
>> be that templates should only be able to affect a
>> subtree under the
>> node where they are included, not make changes to
>> the syntax at their
>> level or above.
>>     
>
> Fully agreed. I found some pages on dewiki a while ago and
> corrected them.
>   
That will break quite a few things on en, for example succession boxes,
where they do something like this:
{{table start}}
{{succession|some position}}
{{succession|some other position}}
{{succession|yet some other position}}
{{table end}}
>   
>> I.e. you should be able to
>> completely parse the
>> wikitext, then go in and insert subtrees at the
>> templates and not
>> change anything else.
>>     
>
> There are at least two more problems complicating the
> building of a sane parse tree:
>
> - templates may be nested inside tags, e.g.
>   
> {| {{Prettytable}}
> - variables may be used inside tags, see [1]
>   [[Image:Chs2_{{{2}}}d40.png|{{{65}}}px]]

I do support #2, but not #1. It should not be overly complicated.

Basically, I agree with you, though; templates everywhere get awfully messy.

Magnus


From gordon.joly at pobox.com  Wed Nov  9 17:42:58 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Wed, 9 Nov 2005 17:42:58 +0000
Subject: [Wikitech-l] mwdumper
In-Reply-To: 
References: 
Message-ID: 

At 11:43 -0500 9/11/05, Ryan Hudy wrote:
>Does anyone know if the mwdumper program has image import
>capabilities? I'm able to import wikipedia data into my wiki, but I
>also need to import certain images, and can't find a way to perform
>that function effectively.
>
>Ryan

And....

Is it possible to dump all the SQL from Wikipedia without shell access?

Gordo
--
"Think Feynman"/////////
http://pobox.com/~gordo/
gordon.joly at pobox.com///


From gtg808u at mail.gatech.edu  Wed Nov  9 19:01:25 2005
From: gtg808u at mail.gatech.edu (Amruta Lonkar)
Date: Wed, 9 Nov 2005 14:01:25 -0500
Subject: [Wikitech-l] Custom Namespace
Message-ID: <1131562885.4372478519227@webmail.mail.gatech.edu>

Hi,

I have created a new namespace for references in our local wiki. I have added
it to the DefaultSettings.php file as mentioned in the help. I wanted to know
whether I also need to add an entry in the nstab section of Language.php. The
references in this wiki work in a similar fashion to images, with each
reference having its own talk page like any other article.

Do I need to make any other changes anywhere within the code for this
namespace to work?

Thanks,
Amruta


From magnus.manske at web.de  Wed Nov  9 19:07:52 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Wed, 09 Nov 2005 20:07:52 +0100
Subject: AW: [Wikitech-l] Wiki-to-XML
In-Reply-To: <437244D3.8030008@web.de>
References: <437244D3.8030008@web.de>
Message-ID: <43724908.3070908@web.de>

Magnus Manske wrote:
> I do support #2, but not #1. It should not be overly complicated.

Done.


From gmaxwell at gmail.com  Wed Nov  9 19:09:12 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Wed, 9 Nov 2005 14:09:12 -0500
Subject: AW: [Wikitech-l] Wiki-to-XML
In-Reply-To: <437244D3.8030008@web.de>
References: <437244D3.8030008@web.de>
Message-ID: 

On 11/9/05, Magnus Manske wrote:
> That will break quite a few things on en, for example succession boxes,
> where they do something like this:
> {{table start}}
> {{succession|some position}}
> {{succession|some other position}}
> {{succession|yet some other position}}
> {{table end}}

A small syntax change could fix that, something like:

{{table start||
{{somesomething|data}}
||}}

The parser might not know what the heck "table start" is, but it would
know that all of its effects are contained inside the table start tag
itself.
From gmaxwell at gmail.com  Wed Nov  9 19:12:28 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Wed, 9 Nov 2005 14:12:28 -0500
Subject: AW: [Wikitech-l] Wiki-to-XML
In-Reply-To: <437244D3.8030008@web.de>
References: <437244D3.8030008@web.de>
Message-ID: 

On 11/9/05, Magnus Manske wrote:
> > - templates may be nested inside tags, e.g.
> > {| {{Prettytable}}
> > - variables may be used inside tags, see [1]
> >   [[Image:Chs2_{{{2}}}d40.png|{{{65}}}px]]
>
> I do support #2, but not #1. It should not be overly complicated.
>
> Basically, I agree with you, though; templates everywhere get awfully messy.

Ah, missed this in my first reply. In #2's case we should separate
objects and attributes. I don't think we should allow object names to
be filled in via variables, only their attributes. This means that
variables will not affect syntax but could still be used as in #2 above.


From dorozynskij at poczta.onet.pl  Wed Nov  9 19:05:09 2005
From: dorozynskij at poczta.onet.pl (Dorożyński Janusz)
Date: Wed, 9 Nov 2005 20:05:09 +0100
Subject: [Wikitech-l] mwdumper
In-Reply-To: 
Message-ID: <20051109190518Z4871738-1649+12970@ps9.test.onet.pl>

| -----Original Message-----
| From: ... Ryan Hudy
| Sent: Wednesday, November 09, 2005 5:44 PM
/
| Does anyone know if the mwdumper program has image import
| capabilities? I'm able to import wikipedia data into my
| wiki, but I also need to import certain images, but can't
| find a way to perform that function effectively.

I think the images are stored as jpg files in subdirectories of the
/images directory, rather than in the database as blobs. So they are
"dumped" (packed) into a tar file, and you can unpack the proper tar. I
hope your images aren't stored on Commons, because the Commons tar is
35 GB :-)

Regards, Janusz 'Ency' Dorozynski


From zaidpjd at umich.edu  Wed Nov  9 01:29:55 2005
From: zaidpjd at umich.edu (muhammad zaid zainuddin)
Date: Tue, 08 Nov 2005 20:29:55 -0500
Subject: [Wikitech-l] diacritics problem
Message-ID: <01C0C205C3DBD9C4F409198B@zaid>

dear wikitech,

I'm really sorry for bothering you and giving you guys a lot of trouble.
I've been away because of the end of Ramadan and the Eid celebration.

About the virama diacritic (attached): actually, it's kind of 'new',
because it was introduced in 1985 by the 'Balai Penelitian Bahasa Ujung
Pandang' (Ujung Pandang Committee of Language), and thus will never be
found in old texts like I La Galigo. Therefore, Omniglot and other
Lontara sources didn't mention this diacritic, because they are
referring to the old texts. Here's the scan of the page from a Buginese
grammar written in Indonesian; you will notice this at page five. The
title of this book is 'Tatabahasa Bugis', published by 'Departemen
Pendidikan dan Kebudayaan', Indonesia (Department of Education and
Culture of Indonesia) in 1991. I think you can find this book at a main
library or a university's library. I hope wikitech creates this
diacritic and informs me about how to make the input.

Secondly, I notice that the diacritic /e/, when written, gets mixed up
with the other consonants. Take a look at this page:

When I typed in the word 'pangadereng', the vowel /e/ got mixed up with
the other consonants, thus making it harder to read.

Thirdly, I got some replies telling me that the consonants are too
small to read. Can wikitech make them a little bigger, like the
Malayalam language?

Thank you.

Muhammad Zaid Zainuddin


From moses.mason at gmail.com  Tue Nov  8 11:37:28 2005
From: moses.mason at gmail.com (Moses)
Date: Tue, 8 Nov 2005 19:37:28 +0800
Subject: [Wikitech-l] Re: Special:Ipblocklist doesn't function well in zh.wikipedia
References: 
Message-ID: 

thanks, it's very clear for me ;)

--
?? Tim ?? "Re: Special:Ipblocklist doesn't function well in
zh.wikipedia" ??????? 05 ?? Moses ???; ??? ID ? 385244; ?? 27 ?? 4933 ??.

??>> system, the user name was truncated. That's maybe why this list
??>> doesn't work any more. So, could any developer fix the problem?
??>> Or, if it's a bug, please fix it in the next upgrade.

TS> No, the problem isn't the long ID, the problem is this:
TS> http://zh.wikipedia.org/w/index.php?title=MediaWiki:Ipboptions&diff=1104262&oldid=861199
TS> The Chinese and English versions are around the wrong way, you
TS> need to swap them. See
TS> http://fr.wikipedia.org/wiki/MediaWiki:Ipboptions
TS> for an example of how it's meant to be done.
From brion at pobox.com  Wed Nov  9 20:36:45 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 12:36:45 -0800
Subject: [Wikitech-l] mwdumper
In-Reply-To: 
References: 
Message-ID: <43725DDD.9010008@pobox.com>

Ryan Hudy wrote:
> Does anyone know if the mwdumper program has image import capabilities?
> I'm able to import wikipedia data into my wiki, but I also need to
> import certain images, and can't find a way to perform that function
> effectively.

Not at this time, sorry.

-- brion vibber (brion @ pobox.com)


From brion at pobox.com  Wed Nov  9 20:38:58 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 12:38:58 -0800
Subject: [Wikitech-l] mwdumper
In-Reply-To: 
References: 
Message-ID: <43725E62.2070101@pobox.com>

Gordon Joly wrote:
> And....
>
> Is it possible to dump all the SQL from Wikipedia without shell access?

For the most part I'd be surprised if you've got an account allowing
that much disk space usage in the database but no shell. :)

If you have something like phpMyAdmin, it may be possible to upload the
giant (multigigabyte for some languages) SQL file and have it source it
in.

-- brion vibber (brion @ pobox.com)


From brion at pobox.com  Wed Nov  9 20:40:52 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 12:40:52 -0800
Subject: [Wikitech-l] Custom Namespace
In-Reply-To: <1131562885.4372478519227@webmail.mail.gatech.edu>
References: <1131562885.4372478519227@webmail.mail.gatech.edu>
Message-ID: <43725ED4.9000303@pobox.com>

Amruta Lonkar wrote:
> I have created a new namespace for references in our local wiki. I have added
> it to the DefaultSettings.php file as mentioned in the help.

You should pretty much never edit DefaultSettings.php. It's part of the
software, and will be overwritten when you upgrade. Custom namespace
declarations should go in your LocalSettings.php.

> I wanted to know
> whether I also need to add an entry in the nstab section of Language.php. The
> references in this wiki work in a similar fashion to images, with each
> reference having its own talk page like any other article.
>
> Do I need to make any other changes anywhere within the code for this
> namespace to work?

Nope.

-- brion vibber (brion @ pobox.com)
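For reference, the LocalSettings.php declaration in 1.5 looks roughly
like this (a sketch: the index 100 and the names are arbitrary; custom
namespaces conventionally start at 100, and each content namespace pairs
with the talk namespace one number above it):

  $wgExtraNamespaces = array(
      100 => 'Reference',       # custom content namespace
      101 => 'Reference_talk',  # its talk namespace
  );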
Thanks, Ben

From node.ue at gmail.com Wed Nov 9 22:39:01 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Wed, 9 Nov 2005 15:39:01 -0700
Subject: [Wikitech-l] diacritics problem
In-Reply-To: <01C0C205C3DBD9C4F409198B@zaid> References: <01C0C205C3DBD9C4F409198B@zaid> Message-ID: <849f98ed0511091439gd2fa79fg@mail.gmail.com>

Muhammad, what does the diacritic look like? I notice in some fonts, it is a line under the character, but is that the symbol used in Tatabahasa Bugis?

Regarding vowel /e/, this is a problem which I'm working on now.

Cheers
Mark

On 08/11/05, muhammad zaid zainuddin wrote:
> dear wikitech,
>
> I'm really really sorry for bothering and giving you guys a lot of
> trouble. I've been away because of the end of Ramadan and the Eid celebration.
>
> About the virama diacritic (attached): actually, it's kind of 'new',
> because it was introduced in 1985 by the 'Balai Penelitian Bahasa Ujung
> Pandang' (Ujung Pandang Committee of Language), and thus will never be found
> in old texts like I La Galigo. That is why Omniglot and other Lontara
> sources didn't mention this diacritic; they are referring to the old texts.
> Here's the scan of the page from a Buginese grammar written in Indonesian;
> you will notice it on page five. The title of this book is 'Tatabahasa
> Bugis', published by 'Departemen Pendidikan dan Kebudayaan', Indonesia
> (Department of Education and Culture of Indonesia) in 1991. I think you can
> find this book in a main library or a university library. I hope wikitech
> creates this diacritic and informs me about how to make the input.
>
> Secondly, I notice that the diacritic /e/, when written, gets mixed up with
> the other consonants. Take a look at this page:
>
> When I typed in the word 'pangadereng', the vowel /e/ got mixed up with the
> other consonants, making it harder to read.
>
> Thirdly, I got some replies telling me that the consonants are too small to
> read. Can wikitechs make them a little bigger, like the Malayalam language?
>
> Thank you.
>
> Muhammad Zaid Zainuddin

--
"Take away their language, destroy their souls." -- Joseph Stalin

From delirium at hackish.org Wed Nov 9 22:22:19 2005
From: delirium at hackish.org (Delirium)
Date: Wed, 09 Nov 2005 17:22:19 -0500
Subject: [Wikitech-l] Reputation Systems
In-Reply-To: <40b4c2310511090850m38730a5fx21f65afcb7c6ed02@mail.gmail.com> References: <40b4c2310511090850m38730a5fx21f65afcb7c6ed02@mail.gmail.com> Message-ID: <4372769B.3070305@hackish.org>

admin Yellowikis wrote:
>Has anyone tried integrating a reputation system with MediaWiki?

It's been talked about on and off, but:

1) They have an impact on the community, and so there needs to be some consensus as to what form they should take.
2) They are notoriously easy to game. It may be possible to design a reasonably robust one, but it's non-trivial, especially if it is also to match whatever constraints (1) proposes.
3) Nobody has actually implemented one in MediaWiki.

-Mark

From jdunck at gmail.com Wed Nov 9 22:37:31 2005
From: jdunck at gmail.com (Jeremy Dunck)
Date: Wed, 9 Nov 2005 16:37:31 -0600
Subject: [Wikitech-l] Really bad response time from upload.wikimedia.org
Message-ID: <2545a92c0511091437o1327878dv7675106f1ba153fb@mail.gmail.com>

I'm seeing a consistent 3 minute response time for a medium-sized file:

$ wget http://upload.wikimedia.org/wikipedia/commons/2/2c/Pecan-nuts-on-tree.jpg
--16:16:18-- http://upload.wikimedia.org/wikipedia/commons/2/2c/Pecan-nuts-on-tree.jpg
=> `Pecan-nuts-on-tree.jpg'
Resolving upload.wikimedia.org... 207.142.131.205, 207.142.131.210, 207.142.131.213, ...
Connecting to upload.wikimedia.org[207.142.131.205]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 173,759 [image/jpeg]
100%[====================================>] 173,759 275.02K/s
16:19:23 (275.02 KB/s) - `Pecan-nuts-on-tree.jpg' saved [173759/173759]

This seems pretty far out of normal performance. Any ideas?

From brion at pobox.com Wed Nov 9 23:49:26 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 15:49:26 -0800
Subject: [Wikitech-l] Really bad response time from upload.wikimedia.org
In-Reply-To: <2545a92c0511091437o1327878dv7675106f1ba153fb@mail.gmail.com> References: <2545a92c0511091437o1327878dv7675106f1ba153fb@mail.gmail.com> Message-ID: <43728B06.9040107@pobox.com>

Jeremy Dunck wrote:
> I'm seeing a consistent 3 minute response time for a medium-sized file:

The machine is sorely overloaded, and in its current disk configuration is crap slow at the best of times. A replacement has arrived but has a defective drive. Maintenance is scheduled.

-- brion vibber (brion @ pobox.com)

From ccanddyy at yahoo.com Thu Nov 10 07:11:48 2005
From: ccanddyy at yahoo.com (candy)
Date: Wed, 09 Nov 2005 23:11:48 -0800
Subject: [Wikitech-l] wikipedia+database concepts and schema
Message-ID:

Hi all,

First of all, I have my own wiki installed, running on MediaWiki 1.5. I have the latest current-pages database dump in XML format, which I imported into the wiki using the PHP import script. Before I installed MediaWiki 1.5 I had MediaWiki 1.4, and I find that the database schema has changed. My observations can be summarized as follows.

MediaWiki 1.5 (under the command "show tables;" in mysql; N.B. I used the prefix "wiki" for the tables during installation):

wikiarchive wikicategorylinks wikihitcounter wikiimage wikiimagelinks
wikiinterwiki wikiipblocks wikilogging wikimath wikiobjectcache
wikioldimage wikipage wikipagelinks wikiquerycache wikirecentchanges
wikirevision wikisearchindex wikisite_stats wikitext wikitrackbacks
wikiuser wikiuser_groups wikiuser_newtalk wikivalidate wikiwatchlist

Apart from the above tables, MediaWiki 1.4 also had the tables listed below:

wikiblobs wikibrokenlinks wikicur wikilinks wikilinkscc
wikilogging wikitranscache wikiuser_rights wikivalidate

Having given the background, I have the following questions:

1. Does this mean that the database schema of wikipedia has changed?

2. The schema looks really simple! Is that all in the backend of the mighty wikipedia?

3. I have observed that under the history tab in wikipedia, we get all the versions of the current page, and we can select any 2 versions and click on compare to find the changes. Apparently wikipedia stores snapshots of every change to a page (article). I presume these are stored in the recentchanges table. Am I correct in my presumption?

4. When does a page or article enter the wikiarchive table?
I experimented with my own wiki (running on MediaWiki 1.5) and discovered that if I add a new article, its entry is there in the wikirecentchanges, wikipage and wikirevision tables, but not in the wikiarchive table. So when does it enter the latter, and under what circumstances?

Please clarify these queries. In case you know the answer to some, please feel free to reply to them.

Thanking you,
C

From brion at pobox.com Thu Nov 10 07:55:45 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 09 Nov 2005 23:55:45 -0800
Subject: [Wikitech-l] wikipedia+database concepts and schema
In-Reply-To: References: Message-ID: <4372FD01.3030306@pobox.com>

candy wrote:
> 1. Does this mean that the database schema of wikipedia has changed?

Yes.

> 2. The schema looks really simple! Is that all in the backend of the
> mighty wikipedia?

:D

> 3. I have observed that under the history tab in wikipedia, we get all
> the versions of the current page, and we can select any 2 versions and
> click on compare to find the changes. Apparently wikipedia stores
> snapshots of every change to a page (article). I presume these are
> stored in the recentchanges table. Am I correct in my presumption?

See maintenance/tables.sql for the table definitions, which are liberally commented.

> 4. When does a page or article enter the wikiarchive table?
> I experimented with my own wiki (running on MediaWiki 1.5) and discovered
> that if I add a new article, its entry is there in the wikirecentchanges,
> wikipage and wikirevision tables, but not in the wikiarchive table. So
> when does it enter the latter, and under what circumstances?

On deletion, revision data is moved to archive.

-- brion vibber (brion @ pobox.com)

From gordon.joly at pobox.com Thu Nov 10 09:21:34 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Thu, 10 Nov 2005 09:21:34 +0000
Subject: [Wikitech-l] "Meta Refresh" called on restricted wiki.
Message-ID:

I have set up a wiki which can only be read by logged-in users, by setting:

$wgGroupPermissions['*' ]['createaccount'] = false;
$wgGroupPermissions['*' ]['read'] = false;
$wgGroupPermissions['*' ]['edit'] = false;

$wgGroupPermissions['user' ]['move'] = true;
$wgGroupPermissions['user' ]['read'] = true;
$wgGroupPermissions['user' ]['edit'] = true;
$wgGroupPermissions['user' ]['upload'] = true;

However, a non-logged-in user visiting any page raises an error, and this appears to call:

function returnToMain( $auto = true, $returnto = NULL )

which contains a 10-second refresh (on the main page) with the notice:

***
Login Required

You must login to view other pages.

Return to Main Page.
***

$wgOut->addMeta( 'http:Refresh', '10;url=' . $titleObj->escapeFullURL() );
}

I have set the refresh to 600 seconds to reduce server load. This wiki is on the Internet (that is, not an intranet).

Any other thoughts or suggestions?

Regards,

Gordo

--
"Think Feynman"///////// http://pobox.com/~gordo/ gordon.joly at pobox.com///

From gordon.joly at pobox.com Thu Nov 10 10:20:36 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Thu, 10 Nov 2005 10:20:36 +0000
Subject: [Wikitech-l] "Meta Refresh" called on restricted wiki.
In-Reply-To: References: Message-ID:

At 09:21 +0000 10/11/05, Gordon Joly wrote:
>I have set up a wiki which can only be read by logged-in users, by setting:
>
>$wgGroupPermissions['*' ]['createaccount'] = false;
>$wgGroupPermissions['*' ]['read'] = false;
>$wgGroupPermissions['*' ]['edit'] = false;
>
>$wgGroupPermissions['user' ]['move'] = true;
>$wgGroupPermissions['user' ]['read'] = true;
>$wgGroupPermissions['user' ]['edit'] = true;
>$wgGroupPermissions['user' ]['upload'] = true;
>
>However, a non-logged-in user visiting any page raises an error, and
>this appears to call:
>
> function returnToMain( $auto = true, $returnto = NULL )
>
>which contains a 10-second refresh (on the main page) with the notice:
>
>***
>Login Required
>
>You must login to view other pages.
>
>Return to Main Page.
>***
>
> $wgOut->addMeta( 'http:Refresh', '10;url=' . $titleObj->escapeFullURL() );
> }
>
>I have set the refresh to 600 seconds to reduce server load. This
>wiki is on the Internet (that is, not an intranet).
>
>Any other thoughts or suggestions?
>
>Regards,
>
>Gordo

My next question is "how do I login to a wiki that will only allow logged-in users to read pages"?

Looks like there is a need to refresh back to the login page, and not the page that says:

>***
>Login Required
>
>You must login to view other pages.
>
>Return to Main Page.
>***

Gordo

--
"Think Feynman"///////// http://pobox.com/~gordo/ gordon.joly at pobox.com///

From avarab at gmail.com Thu Nov 10 10:27:58 2005
From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=)
Date: Thu, 10 Nov 2005 10:27:58 +0000
Subject: [Wikitech-l] Re: [MediaWiki-CVS] phase3/includes EditPage.php, 1.231, 1.232
In-Reply-To: <20051104153229.E71591AC1AE1@mail.wikimedia.org> References: <20051104153229.E71591AC1AE1@mail.wikimedia.org> Message-ID: <51dd1af80511100227m1aa81726lecc0dc4bee67fbe4@mail.gmail.com>

On 11/4/05, Magnus Manske wrote:
> Update of /cvsroot/wikipedia/phase3/includes
> In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv21820/includes
>
> Modified Files:
> EditPage.php
> Log Message:
> Additional hook
>
> Index: EditPage.php
> ===================================================================
> RCS file: /cvsroot/wikipedia/phase3/includes/EditPage.php,v
> retrieving revision 1.231
> retrieving revision 1.232
> diff -C2 -d -r1.231 -r1.232
> *** EditPage.php 3 Nov 2005 22:40:02 -0000 1.231
> --- EditPage.php 4 Nov 2005 15:32:26 -0000 1.232
> ***************
> *** 155,158 ****
> --- 155,161 ----
> function edit() {
> global $wgOut, $wgUser, $wgRequest, $wgTitle;
> + $l = strlen ( $wgOut->mBodytext ) ;
> + wfRunHooks( 'AlternateEdit', array( &$this ) ) ;
> + if ( $l != strlen ( $wgOut->mBodytext ) ) return ; # Something's changed the text, my work here is done
>
> $fname = 'EditPage::edit';

This is the wrong way to do something like this; it should look like:

if ( ! wfRunHooks( 'AlternateEdit', array( &$this ) ) ) return;

That way you avoid two calls to strlen() and allow the extension using the hook to override the function even if it doesn't change $wgOut->mBodytext (and you should really use the ->getHTML() accessor if you wanted it).

From Jan-Paul at jpkoester.de Thu Nov 10 10:39:48 2005
From: Jan-Paul at jpkoester.de (=?ISO-8859-1?Q?Jan-Paul_K=F6ster?=)
Date: Thu, 10 Nov 2005 11:39:48 +0100
Subject: [Wikitech-l] "Meta Refresh" called on restricted wiki.
In-Reply-To: References: Message-ID: <43732374.40708@jpkoester.de>

> My next question is "how do I login to a wiki that will only allow
> logged-in users to read pages"?
> Looks like there is a need to refresh back to the login page, and not
> the page that says
>

Add

# Pages anonymous (not-logged-in) users may see
$wgWhitelistRead = array ("Special:Userlogin");

to your LocalSettings.php.

Cheers, JP

From gordon.joly at pobox.com Thu Nov 10 10:39:01 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Thu, 10 Nov 2005 10:39:01 +0000
Subject: [Wikitech-l] "Meta Refresh" called on restricted wiki.
In-Reply-To: References: Message-ID:

>My next question is "how do I login to a wiki that will only allow
>logged-in users to read pages"?
>
>Looks like there is a need to refresh back to the login page, and not
>the page that says

$wgWhitelistRead = array ( "Main Page", "Special:Userlogin", "Wikipedia:Help");

Seems to work!

Gordo

--
"Think Feynman"///////// http://pobox.com/~gordo/ gordon.joly at pobox.com///

From zaidpjd at umich.edu Thu Nov 10 17:23:23 2005
From: zaidpjd at umich.edu (muhammad zaid zainuddin)
Date: Thu, 10 Nov 2005 12:23:23 -0500
Subject: [Wikitech-l] Re: Re: diacritics problem (Muhammad Zainuddin)
Message-ID: <458AB527BFA7CFCDF4CF538D@zaid>

Yes. Take a look at this website, on page 7 of the PDF. If you notice, words like ''sudah'', ''orang'' and ''anaknya'' have the virama (we call it ''ma'') diacritic. Though those are Indonesian, certain words in Buginese like ''padang'', ''barek'' and ''sallatang'' need this diacritic too. The underline technique is Andy Mallarangeng's idea, the first person to create the Lontara software.

Thank you.

Muhammad Zainuddin

From gordon.joly at pobox.com Thu Nov 10 15:09:51 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Thu, 10 Nov 2005 15:09:51 +0000
Subject: [Wikitech-l] mwdumper
In-Reply-To: <43725E62.2070101@pobox.com> References: <43725E62.2070101@pobox.com> Message-ID:

At 12:38 -0800 9/11/05, Brion Vibber wrote:
>Gordon Joly wrote:
>>And....
>>
>>Is it possible to dump all the sql from Wikipedia without shell access?
>
>For the most part I'd be surprised if you've got an account allowing
>that much disk space usage in the database but no shell. :)

I have created 5 or 6 wikis, without shell access...

>If you have something like phpMyAdmin, it may be possible to upload the
>giant (multigigabyte for some languages) SQL file and have it source
>it in.

I feel I am on the wrong list... since I'm using MediaWiki to create quite small wikis... perhaps these questions belong on Mediawiki-l?

Cheers!

Gordo

--
"Think Feynman"///////// http://pobox.com/~gordo/ gordon.joly at pobox.com///

From hashar at altern.org Thu Nov 10 20:16:01 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Thu, 10 Nov 2005 21:16:01 +0100
Subject: [Wikitech-l] cluster monitoring
Message-ID:

Hello,

We currently have ganglia, which gives useful reporting about server status. What about notifications when something is about to go wrong (disk usage at 95%, lots of errors in memcached, too many slow queries, a server suddenly swapping ..)?

Kate wrote servmon, which gave (gives?) useful information about servers, but I am personally too lazy to hack that.

Recently I finally found a job; part of my tasks is to set up a monitoring tool. My choice? Nagios. It's an open source monitoring tool that I set up on larousse some months ago. I asked avar and mark their thoughts about having a monitoring tool; their answer was: sure!

So let's start with Nagios.

Nagios is still on larousse although it is not running at the moment. I could easily upgrade it to the latest version (2.0b4) and tweak the config files to add the new servers (something like 60+ new friends).

We will have to choose a server to run nagios on. Larousse seems to be a good choice as it is mostly idling, serves pages for http://noc.wikimedia.org/ and was used for servmon. Larousse could become THE monitoring device (and we could eventually move ganglia from zwinger to larousse).

The next step is to agree on a way to check services on the various hosts. There are several solutions for that:

1/ run a daemon on each server (nrpe), listening to queries from the monitoring host and giving back results.

2/ hack something that grabs data from gmetad and add new metric plugins to ganglia. The good point is that we will then have those data showing in ganglia.

3/ make checks through ssh using passwordless ssh keys. I personally don't like that.

4/ deploy snmp everywhere.

The nrpe approach needs a daemon set up on each server. Problem: most of the data are already available through gmetad. The good point is that it is easy to set up (rpm -i nrpe, same config files and plugins for every server).

Reusing gmetad data is probably a better idea; the data in nagios and ganglia would be the same. One of the problems is that we will have to code a nagios plugin that caches the gmetad data to avoid multiple queries (we probably don't want to query gmetad for cpu, then for memory, then for nfs calls, then for each disk's space usage).

SNMP is a great tool for grabbing device status. Again, it's probably redundant with gmetad, but it will let us monitor network equipment such as the switches, our ISP router and probably the console switch.

cheers,

--
Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From jedgecombe at carolina.rr.com Thu Nov 10 23:37:11 2005
From: jedgecombe at carolina.rr.com (Jason Edgecombe)
Date: Thu, 10 Nov 2005 18:37:11 -0500
Subject: [Wikitech-l] cluster monitoring
In-Reply-To: References: Message-ID: <4373D9A7.5000900@carolina.rr.com>

Ashar Voultoiz wrote:

>Hello,
>
>We currently have ganglia, which gives useful reporting about server
>status. What about notifications when something is about to go wrong
>(disk usage at 95%, lots of errors in memcached, too many slow queries,
>a server suddenly swapping ..)?
>
>Kate wrote servmon, which gave (gives?) useful information about servers,
>but I am personally too lazy to hack that.
>
>Recently I finally found a job; part of my tasks is to set up a
>monitoring tool. My choice? Nagios. It's an open source monitoring tool
>that I set up on larousse some months ago. I asked avar and mark
>their thoughts about having a monitoring tool; their answer was: sure!
>
>So let's start with Nagios.
>
>Nagios is still on larousse although it is not running at the moment. I
>could easily upgrade it to the latest version (2.0b4) and tweak the config
>files to add the new servers (something like 60+ new friends).
>
>We will have to choose a server to run nagios on. Larousse seems to be a
>good choice as it is mostly idling, serves pages for
>http://noc.wikimedia.org/ and was used for servmon. Larousse could
>become THE monitoring device (and we could eventually move ganglia from
>zwinger to larousse).
>
>The next step is to agree on a way to check services on the various hosts.
>There are several solutions for that:
>
> 1/ run a daemon on each server (nrpe), listening to queries from the
>monitoring host and giving back results.
>
> 2/ hack something that grabs data from gmetad and add new metric plugins
>to ganglia. The good point is that we will then have those data showing
>in ganglia.
>
> 3/ make checks through ssh using passwordless ssh keys. I personally
>don't like that.
>
> 4/ deploy snmp everywhere.
>
>The nrpe approach needs a daemon set up on each server. Problem: most
>of the data are already available through gmetad. The good point is that
>it is easy to set up (rpm -i nrpe, same config files and plugins for
>every server).
>
>Reusing gmetad data is probably a better idea; the data in nagios and
>ganglia would be the same. One of the problems is that we will have to
>code a nagios plugin that caches the gmetad data to avoid multiple
>queries (we probably don't want to query gmetad for cpu, then for memory,
>then for nfs calls, then for each disk's space usage).
>
>SNMP is a great tool for grabbing device status. Again, it's probably
>redundant with gmetad, but it will let us monitor network equipment such as
>the switches, our ISP router and probably the console switch.
>
>cheers,

I would be VERY interested in nagios-ganglia integration. I recommend that you contact the nagios and ganglia teams about this. I think a lot of people have wanted this.

From brion at pobox.com Fri Nov 11 00:07:10 2005
From: brion at pobox.com (Brion Vibber)
Date: Thu, 10 Nov 2005 16:07:10 -0800
Subject: [Wikitech-l] Scarcity
Message-ID: <4373E0AE.1040503@pobox.com>

Just a heads up; I'm moving this weekend and may be scarce for a few days until my phone/internet gets set up. Any priority site/software issues should be directed to this list or the bug tracker rather than private email to me, to make sure somebody sees it.

-- brion vibber (brion @ pobox.com)

From timwi at gmx.net Fri Nov 11 17:16:50 2005
From: timwi at gmx.net (Timwi)
Date: Fri, 11 Nov 2005 17:16:50 +0000
Subject: [Wikitech-l] Re: Reputation Systems
In-Reply-To: <4372769B.3070305@hackish.org> References: <40b4c2310511090850m38730a5fx21f65afcb7c6ed02@mail.gmail.com> <4372769B.3070305@hackish.org> Message-ID:

Delirium wrote:
> admin Yellowikis wrote:
>
>> Has anyone tried integrating a reputation system with MediaWiki?
>>
> It's been talked about on and off, but:
>
> 1) They have an impact on the community, and so there needs to be some
> consensus as to what form they should take.
> 2) They are notoriously easy to game. It may be possible to design a
> reasonably robust one, but it's non-trivial, especially if it is also to
> match whatever constraints (1) proposes.
> 3) Nobody has actually implemented one in MediaWiki.

I was always under the impression that we don't want such a reputation system because no matter how robust you make it, it is always easier to game than it is to make it robust against that particular attack. Which is kind of the opposite of wiki philosophy, where undoing vandalism is supposed to be easier than vandalising.

Timwi

From mark at nedworks.org Fri Nov 11 19:20:35 2005
From: mark at nedworks.org (Mark Bergsma)
Date: Fri, 11 Nov 2005 20:20:35 +0100
Subject: [Wikitech-l] cluster monitoring
In-Reply-To: References: Message-ID: <4374EF03.6020302@nedworks.org>

Ashar Voultoiz wrote:
> Nagios is still on larousse although it is not running at the moment. I
> could easily upgrade it to the latest version (2.0b4) and tweak the config
> files to add the new servers (something like 60+ new friends).
>
> We will have to choose a server to run nagios on. Larousse seems to be a
> good choice as it is mostly idling, serves pages for
> http://noc.wikimedia.org/ and was used for servmon. Larousse could
> become THE monitoring device (and we could eventually move ganglia from
> zwinger to larousse).

Yes.
Although larousse is getting old, and the install running on it is too. We might want to do a reinstall before that.

> Reusing gmetad data is probably a better idea; the data in nagios and
> ganglia would be the same. One of the problems is that we will have to
> code a nagios plugin that caches the gmetad data to avoid multiple
> queries (we probably don't want to query gmetad for cpu, then for memory,
> then for nfs calls, then for each disk's space usage).

I don't know ganglia too well, but this seems like the best option to investigate. If ganglia is flexible and uncomplicated enough to add new metrics easily, then this could certainly work.

> SNMP is a great tool for grabbing device status. Again, it's probably
> redundant with gmetad, but it will let us monitor network equipment such as
> the switches, our ISP router and probably the console switch.

Can we use SNMP for devices that support it, and use ganglia for the rest?

In my experience, SNMP is nice and easy for things that the standard net-snmpd supports, but it gets nasty beyond that, i.e. if you want to add things yourself...

--
Mark mark at nedworks.org

From john at websage.org Sat Nov 12 00:04:51 2005
From: john at websage.org (John Anthony Hartman)
Date: Fri, 11 Nov 2005 16:04:51 -0800
Subject: [Wikitech-l] Job Opportunity
Message-ID: <1131754018.26140@mx248.mysite4now.com>

Job Description

Wanted: an experienced programmer with a background in the MediaWiki code base.

ManyOne Networks has been tasked by a client to build an encyclopedia in a wiki format. The wiki platform we are using is MediaWiki, the same underlying platform as Wikipedia. Our client is aiming, like Wikipedia, to build a large body of freely available content developed in a massively collaborative mode via a wiki. Unlike Wikipedia, this client is working in a specific topic area and with a specific (though large) body of potential contributors. Because of these (and other) differences, we find ourselves in need of someone who can work with us on making some changes or enhancements to the underlying MediaWiki code. It is our intent, by the way, that changes made be offered back as open source to the MediaWiki developers.

Some of the areas where we hope to get help include:
- content replication across wiki namespaces;
- registration and authentication;
- role-based access control.

Interested applicants should contact Mike Matthews at ManyOne Networks with resume and contact information. Contract-to-hire possibilities.

Mike Matthews
ManyOne Networks
100 Enterprise Way, Suite G-370
Scotts Valley, CA 95066
831-438-9800 ext 132
mike at manyone.net

I figured this list would be a great place to find this person.

___________________________________________________
John Anthony Hartman
website: http://www.websage.org
podcast: Multi-Media Me- http://pmo.websage.org

"Any sufficiently advanced technology is indistinguishable from magic." --Arthur C. Clarke

From dominik.bach at web.de Sat Nov 12 12:31:23 2005
From: dominik.bach at web.de (dominik.bach at web.de)
Date: Sat, 12 Nov 2005 13:31:23 +0100
Subject: [Wikitech-l] Request for a Ripuarian Wikipedia
In-Reply-To: <20051108090408.E9B261AC1871@mail.wikimedia.org> Message-ID: <4375EEAB.11330.37EE70@localhost>

Hello,

I would like to ask for the creation of a Ripuarian Wikipedia. The discussion has been around on MetaWiki for some time and seems to be finished.
The TestWiki on http://wikoelsch.dergruenepunk.de already has more than 500 articles, and there have been several requests from users who want to join as soon as it is a real Wikipedia.

Thanks
Dominik

From gmaxwell at gmail.com Sat Nov 12 13:34:19 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Sat, 12 Nov 2005 08:34:19 -0500
Subject: [Wikitech-l] Users with 'fun' tools.
Message-ID:

Some users on en wiki have started using a javascript tool that causes them to HTTP GET every article that shows up on recent changes, rapid fire (see http://en.wikipedia.org/w/index.php?title=User:Lupin/recent2.js).

We wouldn't normally permit a bot to hit the wiki that fast, and bots aren't, unlike this, potentially in the hands of thousands of users equipped with nothing more than a web browser.

Should we set up some guidelines on this now, or just wait until we have to impose per-source HTTP request throttling?

From timwi at gmx.net Sat Nov 12 13:57:04 2005
From: timwi at gmx.net (Timwi)
Date: Sat, 12 Nov 2005 13:57:04 +0000
Subject: [Wikitech-l] Re: Request for a Ripuarian Wikipedia
In-Reply-To: <4375EEAB.11330.37EE70@localhost> References: <20051108090408.E9B261AC1871@mail.wikimedia.org> <4375EEAB.11330.37EE70@localhost> Message-ID:

> The TestWiki on http://wikoelsch.dergruenepunk.de

You still have some work to do on that, though. :)

http://wikoelsch.dergruenepunk.de/index.php/Bild:Houpsick-Screenshot.png

From brian0918 at gmail.com Sat Nov 12 14:56:11 2005
From: brian0918 at gmail.com (Brian)
Date: Sat, 12 Nov 2005 09:56:11 -0500
Subject: [Wikitech-l] Users with 'fun' tools.
In-Reply-To: References: Message-ID: <4376028B.2060109@gmail.com>

The point of that tool is to check the diffs for common vandalism, not to have 'fun'. I know NullC is working on a bayesian filter tool that connects directly with the database, which will make this tool obsolete, but until then this is useful.

Gregory Maxwell wrote:
>Some users on en wiki have started using a javascript tool that causes
>them to HTTP GET every article that shows up on recent changes, rapid
>fire (see http://en.wikipedia.org/w/index.php?title=User:Lupin/recent2.js).
>
>We wouldn't normally permit a bot to hit the wiki that fast, and bots
>aren't, unlike this, potentially in the hands of thousands of users
>equipped with nothing more than a web browser.
>
>Should we set up some guidelines on this now, or just wait until we
>have to impose per-source HTTP request throttling?

From brian0918 at gmail.com Sat Nov 12 15:01:08 2005
From: brian0918 at gmail.com (Brian)
Date: Sat, 12 Nov 2005 10:01:08 -0500
Subject: [Wikitech-l] Users with 'fun' tools.
In-Reply-To: <4376028B.2060109@gmail.com> References: <4376028B.2060109@gmail.com> Message-ID: <437603B4.3020201@gmail.com>

Oops, you are NullC. But you probably already knew that. :)

Brian wrote:
> The point of that tool is to check the diffs for common vandalism,
> not to have 'fun'. I know NullC is working on a bayesian filter tool
> that connects directly with the database, which will make this tool
> obsolete, but until then this is useful.
>
> Gregory Maxwell wrote:
>
>> Some users on en wiki have started using a javascript tool that causes
>> them to HTTP GET every article that shows up on recent changes, rapid
>> fire (see http://en.wikipedia.org/w/index.php?title=User:Lupin/recent2.js).
>>
>> We wouldn't normally permit a bot to hit the wiki that fast, and bots
>> aren't, unlike this, potentially in the hands of thousands of users
>> equipped with nothing more than a web browser.
>>
>> Should we set up some guidelines on this now, or just wait until we
>> have to impose per-source HTTP request throttling?

From f-x.p at laposte.net Sat Nov 12 17:36:58 2005
From: f-x.p at laposte.net (FxParlant)
Date: Sat, 12 Nov 2005 18:36:58 +0100
Subject: [Wikitech-l] Desperate for a linkification expert
Message-ID:

hi,

I did something wrong, and now I end up with :

src="

From: Tels
Date: Sat, 12 Nov 2005
Subject: [Wikitech-l] Request for a Ripuarian Wikipedia
In-Reply-To: <4375EEAB.11330.37EE70@localhost> References: <4375EEAB.11330.37EE70@localhost> Message-ID: <200511121855.34750@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Hello,

On Saturday 12 November 2005 13:31, dominik.bach at web.de wrote:
> Hello,
>
> I would like to ask for the creation of a Ripuarian Wikipedia. The
> discussion has been around on MetaWiki for some time and seems to
> be finished. The TestWiki on http://wikoelsch.dergruenepunk.de already
> has more than 500 articles, and there have been several requests
> from users who want to join as soon as it is a real Wikipedia.

Is "Ripuarian" a dialect or a language? Funny, having lived in the Rheinland for some time, I never heard the word "ripuarian" before. And

http://de.wikipedia.org/wiki/Ripuarian
http://de.wikipedia.org/wiki/Rheinl%C3%A4ndisch

don't even exist yet.

Best wishes,

Tels

- --
Signed on Sat Nov 12 18:53:11 2005 with key 0x93B84C15.
Visit my photo gallery at http://bloodgate.com/photos/
PGP key on http://bloodgate.com/tels.asc or per email.

"Any sufficiently rigged demo is indistinguishable from an advanced technology." -- Don Quixote, slashdot guy

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ3YslXcLPEOTuEwVAQGX0Qf/U8UvoZDPeKF+F7QJ9zpyTCf0urIBZIby
CH6LPvTEKUbhL6hcSbe1CytwE/oPnbTwRTshj2p6IfGxlqIAe0SOb6fDX50I7YJW
WYRWVZ68X7P9q+gNaQkoADINYVNYC1OeNKh7c+wYsjMaqvY2QC1gzzWDZkPujJv7
r8HkuZculkSWo6ktH5Pi7PXikVlRkn2umSUVgdtU3N+EPttvGnZwzH89FtCJslk5
fKhPr0/xbLEQtu/KlNHTpLMbKUs2LzI3GsFspdMfWqlhgOSFgCfeOGjXz7zqUqpx
th3Cp1DyaMx2x0J980lrVByLQO4z8vb5M1BEa53U7fz+mf5djDyM/Q==
=F4AX
-----END PGP SIGNATURE-----

From valdelli at gmail.com Sat Nov 12 18:12:45 2005
From: valdelli at gmail.com (Ilario Valdelli)
Date: Sat, 12 Nov 2005 19:12:45 +0100
Subject: [Wikitech-l] Request for a Ripuarian Wikipedia
In-Reply-To: <200511121855.34750@bloodgate.com> References: <4375EEAB.11330.37EE70@localhost> <200511121855.34750@bloodgate.com> Message-ID: <4376309D.9090906@gmail.com>

This is not a dialect but a group of dialects, like "lombard".

http://de.wikipedia.org/wiki/Ripuarische_Dialektgruppe

The "funny" thing is that Ethnologue calls it a "language", but for Ethnologue everything is a language.

Ilario

Tels wrote:
>-----BEGIN PGP SIGNED MESSAGE-----
>
>Hello,
>
>On Saturday 12 November 2005 13:31, dominik.bach at web.de wrote:
>
>>Hello,
>>
>>I would like to ask for the creation of a Ripuarian Wikipedia. The
>>discussion has been around on MetaWiki for some time and seems to
>>be finished. The TestWiki on http://wikoelsch.dergruenepunk.de already
>>has more than 500 articles, and there have been several requests
>>from users who want to join as soon as it is a real Wikipedia.
>
>Is "Ripuarian" a dialect or a language? Funny, having lived in the Rheinland
>for some time, I never heard the word "ripuarian" before. And
>
> http://de.wikipedia.org/wiki/Ripuarian
> http://de.wikipedia.org/wiki/Rheinl%C3%A4ndisch
>
>don't even exist yet.
>
>Best wishes,
>
>Tels
>
>- --
> Signed on Sat Nov 12 18:53:11 2005 with key 0x93B84C15.
> Visit my photo gallery at http://bloodgate.com/photos/
> PGP key on http://bloodgate.com/tels.asc or per email.
>
> "Any sufficiently rigged demo is indistinguishable from an advanced
> technology." -- Don Quixote, slashdot guy
>
>-----BEGIN PGP SIGNATURE-----
>Version: GnuPG v1.2.4 (GNU/Linux)
>
>iQEVAwUBQ3YslXcLPEOTuEwVAQGX0Qf/U8UvoZDPeKF+F7QJ9zpyTCf0urIBZIby
>CH6LPvTEKUbhL6hcSbe1CytwE/oPnbTwRTshj2p6IfGxlqIAe0SOb6fDX50I7YJW
>WYRWVZ68X7P9q+gNaQkoADINYVNYC1OeNKh7c+wYsjMaqvY2QC1gzzWDZkPujJv7
>r8HkuZculkSWo6ktH5Pi7PXikVlRkn2umSUVgdtU3N+EPttvGnZwzH89FtCJslk5
>fKhPr0/xbLEQtu/KlNHTpLMbKUs2LzI3GsFspdMfWqlhgOSFgCfeOGjXz7zqUqpx
>th3Cp1DyaMx2x0J980lrVByLQO4z8vb5M1BEa53U7fz+mf5djDyM/Q==
>=F4AX
>-----END PGP SIGNATURE-----

From timwi at gmx.net Sat Nov 12 18:31:27 2005
From: timwi at gmx.net (Timwi)
Date: Sat, 12 Nov 2005 18:31:27 +0000
Subject: [Wikitech-l] Re: Desperate for a linkification expert
In-Reply-To: References: Message-ID:

> I did something wrong, and now I end up with :
>
> src="

It would help if you told us what you were trying to do, and what you "did wrong".

Timwi

From f-x.p at laposte.net Sat Nov 12 2005
From: f-x.p at laposte.net (FxParlant)
Subject: [Wikitech-l] Re: Desperate for a linkification expert
In-Reply-To: References: Message-ID:

Timwi wrote:
> It would help if you told us what you were trying to do, and what you
> "did wrong".

Nice shot, that's my problem: I haven't got a clue where I did something wrong; I've only noticed this problem later.

This link problem comes in the templates. I've also noticed something weirder:
If I call a template with a link, it comes out broken.
If I put an ISBN call before the template call, the link is perfect.
If the ISBN call is afterwards, the link is broken again.

I'm in mw1.5

Thanks for your help

François

From hashar at altern.org Sat Nov 12 20:13:51 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Sat, 12 Nov 2005 21:13:51 +0100
Subject: [Wikitech-l] Re: Desperate for a linkification expert
In-Reply-To: References: Message-ID:

FxParlant wrote:
> Nice shot, that's my problem: I haven't got a clue where I did something
> wrong; I've only noticed this problem later.
>
> This link problem comes in the templates. I've also noticed something
> weirder:
> If I call a template with a link, it comes out broken.
> If I put an ISBN call before the template call, the link is perfect.
> If the ISBN call is afterwards, the link is broken again.
>
> I'm in mw1.5

Just download the original mediawiki tar.gz and do a recursive diff to find your changes :)

--
Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From gmaxwell at gmail.com Sat Nov 12 22:05:10 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Sat, 12 Nov 2005 17:05:10 -0500
Subject: [Wikitech-l] Users with 'fun' tools.
In-Reply-To: <4376028B.2060109@gmail.com> References: <4376028B.2060109@gmail.com> Message-ID:

On 11/12/05, Brian wrote:
> The point of that tool is to check the diffs for common vandalism,
> not to have 'fun'. I know NullC is working on a bayesian filter tool
> that connects directly with the database, which will make this tool
> obsolete, but until then this is useful.

Yep, lame version one of it is online in #wikipedia-en-vandalism as user roomba...
I'm working out some performance issues with the full bayes engine, so it's currently just wordlist based. 'Fun' was the name I gave it after starting it and seeing it make 200 HTTP GETs in a minute. :)

From avarab at gmail.com Sun Nov 13 00:11:53 2005
From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=)
Date: Sun, 13 Nov 2005 00:11:53 +0000
Subject: [Wikitech-l] Thoughts and measurments related to a new storage framework for large wikis.
In-Reply-To: References: <51dd1af80511080234r4b335cb6o53192db2484fa008@mail.gmail.com> Message-ID: <51dd1af80511121611j30197258p527c008104d78f78@mail.gmail.com>

On 11/8/05, Gregory Maxwell wrote:
> On 11/8/05, Ævar Arnfjörð Bjarmason wrote:
> > On 11/5/05, Gregory Maxwell wrote:
> > > If we use bsdiff (http://www.daemonology.net/bsdiff/ fast and
> > > efficient, if you ignore the anti-social license) rather than diff -du,
> >
> > It uses the BSD license without an advertising clause, what's
> > anti-social about it?
>
> Ha, if only. It uses the "BSD Protection License" which looks a lot
> like the BSD License unless you actually look at it, then you notice
> that it's free for any use *except* use with a non-X11 equivalent open
> source license. Microsoft could use it in Windows, yet you couldn't
> legally use it inside SVN. It's a very odd license. I suppose it's no
> more 'anti-social' than any other license that makes you jump through
> hoops, but that it doesn't look like it does to a quick glance...

I stand corrected then, but that's not any more anti-social than a given viral free software license like the GPL; if anything it's less so, because the GPL can't be combined with any other license, while it appears you can use the BSDPL with X11 equivalents.

From avarab at gmail.com Sun Nov 13 00:27:37 2005
From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=)
Date: Sun, 13 Nov 2005 00:27:37 +0000
Subject: [Wikitech-l] Wiki-to-XML
In-Reply-To: <4371F201.8050308@web.de> References: <4371F201.8050308@web.de> Message-ID: <51dd1af80511121627w47ab3de3o3220001fd4556c91@mail.gmail.com>

The website seems to be completely broken:

Warning: set_time_limit(): Cannot set time limit in safe mode in /home/www/ww4553/html/wiki2xml/w2x.php on line 8

Warning: Cannot modify header information - headers already sent by (output started at /home/www/ww4553/html/wiki2xml/w2x.php:8) in /home/www/ww4553/html/wiki2xml/w2x.php on line 64

REDIRECTMetasyntactic variable

From hashar at altern.org Sun Nov 13 13:56:38 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Sun, 13 Nov 2005 14:56:38 +0100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: <4374EF03.6020302@nedworks.org> References: <4374EF03.6020302@nedworks.org> Message-ID:

Mark Bergsma wrote:
> Yes. Although larousse is getting old, and the install running on it is
> too. We might want to do a reinstall before that.

Fedora Core 2 actually; should I ask Solar to upgrade it to FC3 then? :o)

>>Reusing gmetad data is probably a better idea; the data in nagios and
>>ganglia would be the same. One of the problems is that we will have to
>>code a nagios plugin that caches the gmetad data to avoid multiple
>>queries (we probably don't want to query gmetad for cpu, then for memory,
>>then for nfs calls, then for each disk's space usage).
>
> I don't know ganglia too well, but this seems like the best option to
> investigate. If ganglia is flexible and uncomplicated enough to add new
> metrics easily, then this could certainly work.

Tim Starling added a metric for NFS server calls.
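For reference, a custom value like that is normally pushed into ganglia from a small cron job or daemon calling the stock gmetric client, along the lines of

gmetric --name nfs_v3_calls --value 4242 --type uint32 --units calls

where the metric name and value are made up here for illustration.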
So we can probably add some more stuff.

> Can we use SNMP for devices that support it, and use ganglia for the rest?
>
> In my experience, SNMP is nice and easy for things that the standard
> net-snmpd supports, but it gets nasty beyond that, i.e. if you want to
> add things yourself...

In my experience, adding new measures in snmp is easy. You add the script that returns the data, and it generates the oid accordingly. Debian has some nice examples in snmpd.conf.

Can we start installing snmpd on all servers to at least get some basic data? :o)

cheers,

--
Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From krstic at fas.harvard.edu Sun Nov 13 14:04:51 2005
From: krstic at fas.harvard.edu (Ivan Krstic)
Date: Sun, 13 Nov 2005 15:04:51 +0100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: References: <4374EF03.6020302@nedworks.org> Message-ID: <43774803.8040708@fas.harvard.edu>

Ashar Voultoiz wrote:
> In my experience, adding new measures in snmp is easy. You add
> the script that returns the data, and it generates the oid accordingly.

It's basically the same thing with ganglia metrics.

--
Ivan Krstic | 0x147C722D

From frando at xcite-online.de Sun Nov 13 16:04:58 2005
From: frando at xcite-online.de (Frando)
Date: Sun, 13 Nov 2005 17:04:58 +0100
Subject: [Wikitech-l] Re: Wiki-to-XML
In-Reply-To: <4371F201.8050308@web.de> References: <4371F201.8050308@web.de> Message-ID:

Hello,

while testing the latest CVS version locally, I got 2 "only variables can be passed by reference" errors in lines 231 and 232 of wiki2xml.php (I'm using PHP 5.0.5).

This is a well-known problem with scripts written for older PHP versions and run under PHP 5; the solution is quite simple.

Lines 231 and 232 of the recent CVS version of wiki2xml.php:

$target = array_pop ( explode ( ">" , $target , 2 ) ) ;
$target = array_shift ( explode ( "<" , $target , 2 ) ) ;

and my modified version that runs under PHP 5:

$target = array_pop ( @explode ( ">" , $target , 2 ) ) ;
$target = array_shift ( @explode ( "<" , $target , 2 ) ) ;

Apart from this, the produced results seem to be quite good. I'll do some further testing in the next days.

best regards,
Frando

From mark at nedworks.org Sun Nov 13 16:39:53 2005
From: mark at nedworks.org (Mark Bergsma)
Date: Sun, 13 Nov 2005 17:39:53 +0100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: References: <4374EF03.6020302@nedworks.org> Message-ID: <43776C59.8040907@nedworks.org>

Ashar Voultoiz wrote:
> Fedora Core 2 actually; should I ask Solar to upgrade it to FC3 then? :o)

It's not Solar doing that, and it's already scheduled. Problem is, we need to move stuff off it first.

>>In my experience, SNMP is nice and easy for things that the standard
>>net-snmpd supports, but it gets nasty beyond that, i.e. if you want to
>>add things yourself...
>
> In my experience, adding new measures in snmp is easy. You add
> the script that returns the data, and it generates the oid accordingly.
> Debian has some nice examples in snmpd.conf.

Yes, I have done so too, and I always found it a pain. The interface isn't exactly good/flexible.

> Can we start installing snmpd on all servers to at least get some basic
> data? :o)

That's exactly the same data ganglia is currently monitoring, so I don't really see the point...

--
Mark mark at nedworks.org

From dominik.bach at web.de Sun Nov 13 18:12:44 2005
From: dominik.bach at web.de (dominik.bach at web.de)
Date: Sun, 13 Nov 2005 19:12:44 +0100
Subject: [Wikitech-l] Ripuarian
In-Reply-To: <20051112220539.DE4CC19501BB@mail.wikimedia.org> Message-ID: <4377902C.31386.664529@localhost>

Hello,

the question "dialect or language" has been discussed on MetaWiki; you can find all the arguments there.

> http://de.wikipedia.org/wiki/Ripuarian
> http://de.wikipedia.org/wiki/Rheinl%C3%A4ndisch
>
>don't even exist yet.
>
>Best wishes,
>
>Tels

http://de.wikipedia.org/wiki/Ripuarian does not exist because in German it is http://de.wikipedia.org/wiki/Ripuarisch and in English http://en.wikipedia.org/wiki/Ripuarian

Best
Dominik

From f-x.p at laposte.net Sun Nov 13 18:09:32 2005
From: f-x.p at laposte.net (FxParlant)
Date: Sun, 13 Nov 2005 19:09:32 +0100
Subject: [Wikitech-l] Re: Desperate for a linkification expert
In-Reply-To: References: Message-ID:

Thanks Ashar,

this solution would be rather "radical", because a lot of my small customizations would be lost :-(

Nevertheless, I'm more and more convinced that I can't do in this version (1.5) what I used to do so easily in previous versions :-(

Have you got a clue on how to deactivate the replacement of link attributes in templates? (I mean in mediawiki 1.5, of course)

Thanks for any idea.

François

Ashar Voultoiz wrote:
> FxParlant wrote:
>
>>Nice shot, that's my problem: I haven't got a clue where I did something
>>wrong; I've only noticed this problem later.
>>
>>This link problem comes in the templates. I've also noticed something
>>weirder:
>>If I call a template with a link, it comes out broken.
>>If I put an ISBN call before the template call, the link is perfect.
>>If the ISBN call is afterwards, the link is broken again.
>>
>>I'm in mw1.5
>
> Just download the original mediawiki tar.gz and do a recursive diff to
> find your changes :)

From jooray at gmail.com Mon Nov 14 11:41:24 2005
From: jooray at gmail.com (Juraj Bednar)
Date: Mon, 14 Nov 2005 12:41:24 +0100
Subject: [Wikitech-l] mediawiki dump html fix and few questions
Message-ID: <437877E4.4080103@gmail.com>

Hello,

I tried dumping a local Wikipedia to HTML using dumpHTML.php and I found two bugs:

- The most important one is in the getHashedDirectory function in includes/Title.php. When $dbkey contains characters such as ".", the character is used as-is, so for example if $dbkey is "1. ", the generated directory name is "1/./_/", which of course resolves to just "1/_", and all links in that file stop working (assuming I'm using a depth of 3). The fix is easy (borrowed from getHashedFilename): just adding

$chars[$i] = strtr( $chars[$i], '/\\*?"<>|~.', '__________' );

to the else branch of the if in the for loop fixes the problem.

- When generating specials (I suggest generating Special:Allpages as well, not only categories), we should either make it possible to navigate through the result set using static HTML, or (for smaller wikis) get rid of limits and navigation altogether. I worked around this problem by increasing limits and deleting forms from the particular pages.

Also, the generation is painfully slow; is it possible to speed it up somehow? I had to rerun it on a Sun Fire V20 with two processors to get at least a somewhat reasonable generation time (two hours for the sk and cs wikipedias).

One last question: how do I get rid of interwiki links to the other language editions? They don't work in a local wikipedia, of course...

Sincerely,

Juraj Bednar.
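To make the shape of that first fix concrete, here is a minimal sketch of a hash-directory helper with the strtr() line slotted in. This is illustrative PHP only, not the exact MediaWiki 1.5 source of Title.php; the real function differs in its details:

function getHashedDirectorySketch( $dbkey, $depth = 3 ) {
	// Illustrative sketch only -- builds "x/y/z" path components from a DB key.
	$chars = array();
	for ( $i = 0; $i < $depth; $i++ ) {
		if ( $i >= strlen( $dbkey ) ) {
			// Key shorter than the hash depth: pad with underscores.
			$chars[$i] = '_';
		} else {
			$chars[$i] = substr( $dbkey, $i, 1 );
			// The added line, borrowed from getHashedFilename(): map
			// characters that are unsafe in directory names (notably '.')
			// to underscores.
			$chars[$i] = strtr( $chars[$i], '/\\*?"<>|~.', '__________' );
		}
	}
	return implode( '/', $chars );
}

With the extra strtr(), a $dbkey of "1._" hashes to "1/_/_" instead of "1/./_", so the relative links in the dumped HTML resolve again.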
From avenier at venier.net Mon Nov 14 14:10:17 2005
From: avenier at venier.net (Andrew Venier)
Date: Mon, 14 Nov 2005 08:10:17 -0600
Subject: [Wikitech-l] Desperate for a linkification expert
In-Reply-To: References: Message-ID: <43789AC9.7080107@venier.net>

FxParlant wrote:
>hi,
>
>I did something wrong, and now I end up with :
>
>src="

class 'external free' would appear to come from:

Parser::replaceFreeExternalLinks()
Linker::makeExternalLink()
Linker::getExternalLinkAttributes()

From florian.keller at oiz.stzh.ch Mon Nov 14 14:13:57 2005
From: florian.keller at oiz.stzh.ch (Keller Florian)
Date: Mon, 14 Nov 2005 15:13:57 +0100
Subject: [Wikitech-l] Creating Automated Wiki Sites, without self inserting into Database
Message-ID: <689556182020E444B361749CE522AA4E37E8A1@SZHM0068.global.szh.loc>

Hello,

I have a little question about creating a new wiki page automatically from a self-made form. I've reverse engineered the database design, and it works well with my own inserts into the wikidb, but I want to use wiki functions to do this. I'm currently using the latest stable version (1.5.2).

Could someone give me a hint on how to do this? It's very hard for me to study the wiki code without any good developer documentation, and I found nothing on that issue at Meta Wiki. I think the solution is near line 376 in EditPage.php:

# If article is new, insert it.
$aid = $this->mTitle->getArticleID( GAID_FOR_UPDATE );
....

I hope someone can help, otherwise I will need more sleepless nights :D

Greetings,

Florian Keller

From evan at wikitravel.org Mon Nov 14 15:10:04 2005
From: evan at wikitravel.org (Evan Prodromou)
Date: Mon, 14 Nov 2005 10:10:04 -0500
Subject: [Wikitech-l] <link> elements for interlanguage link information
Message-ID: <1131981004.29814.52.camel@zhora.1481ruerachel.net>

I was pointed to this section of the HTML 4.0 spec ("Notes on helping search engines index your Web site") at W3C recently:

http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.4

I was specifically interested in this section, which I quote in full:

    Specify language variants of this document

    If you have prepared translations of this document into
    other languages, you should use the LINK element to
    reference these. This allows an indexing engine to offer
    users search results in the user's preferred language,
    regardless of how the query was written. For instance,
    the following links offer French and German alternatives
    to a search engine:

    <LINK rel="alternate" type="text/html"
          href="mydoc-fr.html" hreflang="fr"
          lang="fr" title="La vie souterraine">
    <LINK rel="alternate" type="text/html"
          href="mydoc-de.html" hreflang="de"
          lang="de" title="Das Leben im Untergrund">

I think it'd be useful for most multilingual MediaWiki installations that use interlanguage links to have such hidden <link> elements. <link> elements aren't rendered in most browsers (Mozilla 1.5+, I think, will show links in a menu on the toolbar), but as mentioned they do provide some guidance to bots and spiders.

The downside is that they still take up network bandwidth, and for oft-interwikied pages on big sites (e.g. Wikipedia) this section could run to the 5-10kB size range. (My off-the-cuff estimate for, say, articles where each <link> is about 100B, and there are 50-100 interwiki links.)

Barring objections I'm going to add a feature to HEAD to render these <link> elements, controlled by a variable $wgInterLanguageLinkElement, defaulting to false.
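A rough sketch of what the output side might look like, assuming an accessor for the accumulated interlanguage links and a head-link helper along the lines of OutputPage::addLink(); the names here are illustrative, not the final patch:

global $wgInterLanguageLinkElement, $wgOut;
if ( $wgInterLanguageLinkElement ) {
	// One <link rel="alternate" hreflang="..."> per interlanguage link.
	foreach ( $wgOut->mLanguageLinks as $l ) {
		$nt = Title::newFromText( $l ); // e.g. "fr:Paris"
		if ( $nt ) {
			$wgOut->addLink( array(
				'rel' => 'alternate',
				'hreflang' => $nt->getInterwiki(),
				'href' => $nt->getFullURL()
			) );
		}
	}
}

Rendered into the <head>, that would give spiders the same hints as the W3C example above without any visible UI change.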
~Evan -- Evan Prodromou Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and reliable world-wide travel guide From avarab at gmail.com Mon Nov 14 15:36:48 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Mon, 14 Nov 2005 15:36:48 +0000 Subject: [Wikitech-l] elements for interlanguage link information In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: <51dd1af80511140736i607cc598h9c004d242004c448@mail.gmail.com> On 11/14/05, Evan Prodromou wrote: > I was pointed to this section of the HTML 4.0 spec ("Notes on helping > search engines index your Web site") at W3C recently: > > http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.4 > > I was specifically interested in this section, which I quote in full: > > Specify language variants of this document > > If you have prepared translations of this document into > other languages, you should use the LINK element to > reference these. This allows an indexing engine to offer > users search results in the user's preferred language, > regardless of how the query was written. For instance, > the following links offer French and German alternatives > to a search engine: > > type="text/html" > href="mydoc-fr.html" hreflang="fr" > lang="fr" title="La vie souterraine"> > type="text/html" > href="mydoc-de.html" hreflang="de" > lang="de" title="Das Leben im Untergrund"> > > I think it'd be useful for most multilingual MediaWiki installations > that use interlanguage links to have such hidden elements. > elements aren't rendered in most browsers (Mozilla 1.5+, I think, will > show links in a menu on the toolbar), but as mentioned they do provide > some guidance to bots and spiders. > > The downside is that they still take up network bandwidth, and for > oft-interwikied pages on big sites (e.g. Wikipedia) this section could > run to the 5-10kB size range. (My off-the-cuff estimate for, say, > articles where each is about 100B, and there are 50-100 > interwiki links.) > > Barring objections I'm going to add a feature to HEAD to render these > elements, controlled by a variable $wgInterLanguageLinkElement , > default to false. Great, and it should default to false because non-wikimedia sites often use the "on other languages" bar for something that isn't a link to other languages at all. But it should definitely be there on Wikimedia sites. B.t.w. better links would have been: * http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd * http://www.w3.org/TR/xhtml-modularization/abstraction.html Since we use the XHTML transitional spec, what you did appears to be perfectly valid in XHTML 1.0 Trans. as well, except that you have to write not as with all XHTML elements, also it appears to be recommended to include xml:lang="" as well as lang="". * From magnus.manske at web.de Mon Nov 14 15:47:32 2005 From: magnus.manske at web.de (Magnus Manske) Date: Mon, 14 Nov 2005 16:47:32 +0100 Subject: [Wikitech-l] Wiki-to-XML In-Reply-To: <51dd1af80511121627w47ab3de3o3220001fd4556c91@mail.gmail.com> References: <4371F201.8050308@web.de> <51dd1af80511121627w47ab3de3o3220001fd4556c91@mail.gmail.com> Message-ID: <4378B194.6040203@web.de> ?var Arnfj?r? 
Bjarmason wrote: >The website seems to be completely broken: > >Warning: set_time_limit(): Cannot set time limit in safe mode in >/home/www/ww4553/html/wiki2xml/w2x.php on line 8 > >Warning: Cannot modify header information - headers already sent by >(output started at /home/www/ww4553/html/wiki2xml/w2x.php:8) in >/home/www/ww4553/html/wiki2xml/w2x.php on line 64 >REDIRECTMetasyntactic variable > > Sorry, I forgot my hoster insists in safemode :-( Should be fixed now. From magnus.manske at web.de Mon Nov 14 15:47:52 2005 From: magnus.manske at web.de (Magnus Manske) Date: Mon, 14 Nov 2005 16:47:52 +0100 Subject: [Wikitech-l] Re: Wiki-to-XML In-Reply-To: References: <4371F201.8050308@web.de> Message-ID: <4378B1A8.9050502@web.de> Frando wrote: > Hello, > while testing the latest CVS version locally, I got 2 "only variables > can be passed by reference"-errors in lines 231 and 232 of > wiki2xml.php (im using php 5.0.5). > > This is a well-known problem of scripts written for older php versions > and ran under php 5, the solution is quite simple: > > lines 231 and 232 of the recent CVS version of wiki2xml.php: > $target = array_pop ( explode ( ">" , $target , 2 ) ) ; > $target = array_shift ( explode ( "<" , $target , 2 ) ) ; > and my modified version that runs under php 5: > $target = array_pop ( @explode ( ">" , $target , 2 ) ) ; > $target = array_shift ( @explode ( "<" , $target , 2 ) ) ; > > apart from this the produced results seem to be quite good, I'll do > some further testing in the next days.. Applied, thanks! Magnus From evanm at google.com Mon Nov 14 16:32:13 2005 From: evanm at google.com (Evan Martin) Date: Mon, 14 Nov 2005 08:32:13 -0800 Subject: [Wikitech-l] elements for interlanguage link information In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com> On 11/14/05, Evan Prodromou wrote: > I think it'd be useful for most multilingual MediaWiki installations > that use interlanguage links to have such hidden elements. > elements aren't rendered in most browsers (Mozilla 1.5+, I think, will > show links in a menu on the toolbar), but as mentioned they do provide > some guidance to bots and spiders. > > The downside is that they still take up network bandwidth, and for > oft-interwikied pages on big sites (e.g. Wikipedia) this section could > run to the 5-10kB size range. (My off-the-cuff estimate for, say, > articles where each is about 100B, and there are 50-100 > interwiki links.) What problem is this change trying to fix? I'm not sure I see the pros outweighing the cons... From hashar at altern.org Mon Nov 14 19:03:57 2005 From: hashar at altern.org (Ashar Voultoiz) Date: Mon, 14 Nov 2005 20:03:57 +0100 Subject: [Wikitech-l] Re: cluster monitoring In-Reply-To: <43774803.8040708@fas.harvard.edu> References: <4374EF03.6020302@nedworks.org> <43774803.8040708@fas.harvard.edu> Message-ID: Ivan Krstic wrote: > It's basically the same thing with ganglia metrics. Good! 
From evanm at google.com Mon Nov 14 16:32:13 2005
From: evanm at google.com (Evan Martin)
Date: Mon, 14 Nov 2005 08:32:13 -0800
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
Message-ID: <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>

On 11/14/05, Evan Prodromou wrote:
> I think it'd be useful for most multilingual MediaWiki installations
> that use interlanguage links to have such hidden <link> elements.
> <link> elements aren't rendered in most browsers (Mozilla 1.5+, I
> think, will show links in a menu on the toolbar), but as mentioned
> they do provide some guidance to bots and spiders.
>
> The downside is that they still take up network bandwidth, and for
> oft-interwikied pages on big sites (e.g. Wikipedia) this section could
> run to the 5-10kB size range. (My off-the-cuff estimate for, say,
> articles where each <link> is about 100B, and there are 50-100
> interwiki links.)

What problem is this change trying to fix? I'm not sure I see the
pros outweighing the cons...

From hashar at altern.org Mon Nov 14 19:03:57 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Mon, 14 Nov 2005 20:03:57 +0100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: <43774803.8040708@fas.harvard.edu>
References: <4374EF03.6020302@nedworks.org> <43774803.8040708@fas.harvard.edu>
Message-ID:

Ivan Krstic wrote:
> It's basically the same thing with ganglia metrics.

Good! Now I am RTFM :o)

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From hashar at altern.org Mon Nov 14 19:03:29 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Mon, 14 Nov 2005 20:03:29 +0100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: <43776C59.8040907@nedworks.org>
References: <4374EF03.6020302@nedworks.org> <43776C59.8040907@nedworks.org>
Message-ID:

Mark Bergsma wrote:
>>Can we start installing snmpd on all servers to at least get some basic
>>data ? :o)
>
> That's exactly the same data ganglia is currently monitoring, so I don't
> really see the point...

So let's write ganglia scripts :o)

If we want to monitor 15 services every minute, we will have to telnet
the gmetad every 2 seconds. We could build a caching system though:
check gmetad, cache the result for one minute, then have the Nagios
plugins grep the cache instead of telneting gmetad.

I think I have an idea about how to handle that.

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From hashar at altern.org Mon Nov 14 19:09:45 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Mon, 14 Nov 2005 20:09:45 +0100
Subject: [Wikitech-l] Re: <link> elements for interlanguage link information
In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
Message-ID:

Evan Prodromou wrote:
> I was pointed to this section of the HTML 4.0 spec ("Notes on helping
> search engines index your Web site") at W3C recently:
>
> http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.4
>
> I was specifically interested in this section, which I quote in full:
>
>     Specify language variants of this document
>
>     If you have prepared translations of this document into
>     other languages, you should use the LINK element to
>     reference these.

It would be good if articles on the different projects were translations.
Most of the time, although they are about the same subject, the articles
are different.

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From usenet at tonal.clara.co.uk Mon Nov 14 19:24:14 2005
From: usenet at tonal.clara.co.uk (Neil Harris)
Date: Mon, 14 Nov 2005 19:24:14 +0000
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
Message-ID: <4378E45E.10102@tonal.clara.co.uk>

Evan Martin wrote:
> On 11/14/05, Evan Prodromou wrote:
>
>> I think it'd be useful for most multilingual MediaWiki installations
>> that use interlanguage links to have such hidden <link> elements.
>> <link> elements aren't rendered in most browsers (Mozilla 1.5+, I
>> think, will show links in a menu on the toolbar), but as mentioned
>> they do provide some guidance to bots and spiders.
>>
>> The downside is that they still take up network bandwidth, and for
>> oft-interwikied pages on big sites (e.g. Wikipedia) this section could
>> run to the 5-10kB size range. (My off-the-cuff estimate for, say,
>> articles where each <link> is about 100B, and there are 50-100
>> interwiki links.)
>
> What problem is this change trying to fix? I'm not sure I see the
> pros outweighing the cons...
A possible point in favour: using rel="alternate" together with the
hreflang attribute in <link> elements (as per
http://www.w3.org/TR/REC-html40/struct/links.html#h-12.3, section 12.3.3)
might help search engines make smarter decisions about indexing Wikipedia
content, thus increasing the relevance and availability of Wikipedia
content in Web searches made by the general public.

-- Neil

From t.starling at physics.unimelb.edu.au Mon Nov 14 19:53:28 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Tue, 15 Nov 2005 06:53:28 +1100
Subject: [Wikitech-l] Re: cluster monitoring
In-Reply-To: References: <4374EF03.6020302@nedworks.org> <43776C59.8040907@nedworks.org>
Message-ID:

Ashar Voultoiz wrote:
> Mark Bergsma wrote:
>
>>>Can we start installing snmpd on all servers to at least get some basic
>>>data ? :o)
>>
>>That's exactly the same data ganglia is currently monitoring, so I don't
>>really see the point...
>
> So let's write ganglia scripts :o)
>
> If we want to monitor 15 services every minute, we will have to telnet
> the gmetad every 2 seconds. We could build a caching system though:
> check gmetad, cache the result for one minute, then have the Nagios
> plugins grep the cache instead of telneting gmetad.
>
> I think I have an idea about how to handle that.

I wrote a perl script a while back to poll the gmond XML output from one
machine and stop or start a process on another machine based on the
value of a metric retrieved. I didn't use telnet (ick), I read from a
socket and then used an XPath module to find the metric in the XML. It's
probably lying around in my home directory somewhere if you want to look
at it.

If caching is required, then adding metrics to Nagios is obviously not
the same as adding metrics to ganglia. For ganglia, you run gmetric
whenever a metric changes, so you can have a loop that sets 30 metrics in
each pass if you like. You don't give it a plugin for it to invoke at its
leisure, you make your own daemon.

-- Tim Starling
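The caching idea Ashar describes, combined with Tim's socket-plus-XPath
approach, could be sketched roughly as below; the host names, port and cache
path are made up for illustration (gmond does answer with an XML dump on its
TCP port, 8649 by default):

    <?php
    # Fetch gmond's XML dump at most once per minute; every Nagios check
    # then reads the cached copy instead of opening its own connection.
    function getGangliaXml( $host = 'ganglia.example.org', $port = 8649 ) {
        $cache = '/tmp/ganglia-metrics.xml';
        if ( !file_exists( $cache ) || time() - filemtime( $cache ) > 60 ) {
            $fp = fsockopen( $host, $port, $errno, $errstr, 5 );
            if ( !$fp ) return false;
            $xml = '';
            while ( !feof( $fp ) ) {
                $xml .= fread( $fp, 8192 );
            }
            fclose( $fp );
            file_put_contents( $cache, $xml );
        }
        return file_get_contents( $cache );
    }

    # A check then picks its metric out of the cached dump, XPath-style (PHP 5):
    $doc  = simplexml_load_string( getGangliaXml() );
    $load = $doc->xpath( "//HOST[@NAME='db1.example.org']/METRIC[@NAME='load_one']" );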
From lars at aronsson.se Mon Nov 14 20:07:14 2005
From: lars at aronsson.se (Lars Aronsson)
Date: Mon, 14 Nov 2005 21:07:14 +0100 (CET)
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: <4378E45E.10102@tonal.clara.co.uk>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com> <4378E45E.10102@tonal.clara.co.uk>
Message-ID:

Neil Harris wrote:
> A possible point in favour: using rel="alternate" together with the
> hreflang attribute in <link> elements (as per
> http://www.w3.org/TR/REC-html40/struct/links.html#h-12.3, section 12.3.3)
> might help search engines make smarter decisions about indexing Wikipedia
> content, thus increasing the relevance and availability of Wikipedia
> content in Web searches made by the general public.

That's an interesting statement. What evidence do you have to support
it? Can you name any search engines that actually look at this
information?

--
Lars Aronsson (lars at aronsson.se)
Aronsson Datateknik - http://aronsson.se

From avarab at gmail.com Mon Nov 14 20:10:16 2005
From: avarab at gmail.com (Ævar Arnfjörð Bjarmason)
Date: Mon, 14 Nov 2005 20:10:16 +0000
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
Message-ID: <51dd1af80511141210v2afed904i25a773f9cf0440d7@mail.gmail.com>

On 11/14/05, Evan Martin wrote:
> On 11/14/05, Evan Prodromou wrote:
> > I think it'd be useful for most multilingual MediaWiki installations
> > that use interlanguage links to have such hidden <link> elements.
> > <link> elements aren't rendered in most browsers (Mozilla 1.5+, I
> > think, will show links in a menu on the toolbar), but as mentioned
> > they do provide some guidance to bots and spiders.
> >
> > The downside is that they still take up network bandwidth, and for
> > oft-interwikied pages on big sites (e.g. Wikipedia) this section could
> > run to the 5-10kB size range. (My off-the-cuff estimate for, say,
> > articles where each <link> is about 100B, and there are 50-100
> > interwiki links.)
>
> What problem is this change trying to fix? I'm not sure I see the
> pros outweighing the cons...

This would properly mark the document as available in multiple
languages, if you consider something that might not be a straight
translation to be an alternate version of the document. This would not
"fix" anything but improve the document by providing proper metadata,
as recommended in the specification.

From avarab at gmail.com Mon Nov 14 20:14:38 2005
From: avarab at gmail.com (Ævar Arnfjörð Bjarmason)
Date: Mon, 14 Nov 2005 20:14:38 +0000
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com> <4378E45E.10102@tonal.clara.co.uk>
Message-ID: <51dd1af80511141214q59ab2957k69cb09be863abc5e@mail.gmail.com>

On 11/14/05, Lars Aronsson wrote:
> Neil Harris wrote:
>
> > A possible point in favour: using rel="alternate" together with the
> > hreflang attribute in <link> elements (as per
> > http://www.w3.org/TR/REC-html40/struct/links.html#h-12.3, section 12.3.3)
> > might help search engines make smarter decisions about indexing Wikipedia
> > content, thus increasing the relevance and availability of Wikipedia
> > content in Web searches made by the general public.
>
> That's an interesting statement. What evidence do you have to
> support it? Can you name any search engines that actually look at
> this information?

Regardless of what search engines do with it, it would provide
information to user agents that the document is available in an
alternate language.

From evan at wikitravel.org Mon Nov 14 22:22:25 2005
From: evan at wikitravel.org (Evan Prodromou)
Date: Mon, 14 Nov 2005 17:22:25 -0500
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com>
Message-ID: <1132006945.29814.73.camel@zhora.1481ruerachel.net>

On Mon, 2005-14-11 at 08:32 -0800, Evan Martin wrote:
> What problem is this change trying to fix?

It's not trying to solve any problem.
There's a standard mechanism in HTML for specifying an alternate version
of a document in another language; theoretically, using this standard
mechanism would help some MediaWiki-ignorant bots and spiders find
alternate language versions and "understand" the relationship between
them.

> I'm not sure I see the
> pros outweighing the cons...

I can see that. The pros are theoretical; the cons are more concrete. I'm
of the opinion that if you follow Web standards, Good Things Happen (TM),
but I don't have any proof that that's the case in this instance.

~Evan
--
Evan Prodromou
Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and
reliable world-wide travel guide

From evan at wikitravel.org Mon Nov 14 22:23:53 2005
From: evan at wikitravel.org (Evan Prodromou)
Date: Mon, 14 Nov 2005 17:23:53 -0500
Subject: [Wikitech-l] Re: <link> elements for interlanguage link information
In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
Message-ID: <1132007033.29814.76.camel@zhora.1481ruerachel.net>

On Mon, 2005-14-11 at 20:09 +0100, Ashar Voultoiz wrote:
> >     Specify language variants of this document
> >
> >     If you have prepared translations of this document into
> >     other languages, you should use the LINK element to
> >     reference these.
>
> It would be good if articles on the different projects were translations.
> Most of the time, although they are about the same subject, the articles
> are different.

Yes, that's true. It's not clear how literal a translation an alternate
document in another language has to be, though. Is the article about
George Bush on Wikipedia fr: an "alternate in another language" to the
one on Wikipedia en:? I'd say it's about as close as possible without
being a translation.

~ESP
--
Evan Prodromou
Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and
reliable world-wide travel guide

From sabine_cretella at yahoo.it Mon Nov 14 22:49:08 2005
From: sabine_cretella at yahoo.it (Sabine Cretella)
Date: Mon, 14 Nov 2005 23:49:08 +0100
Subject: [Wikitech-l] Copy pages from Wikipedia/Wiktionary for use on a giveaway CD (presentation in Cracow)
Message-ID: <43791464.8080802@yahoo.it>

Hi, I need to create a giveaway CD where I would like to include some
articles that are translated into various languages. I have to give a
presentation on wiki projects during a translators' conference.

Now I tried to download some pages with HTTrack, but it just does not
work. Is there a way to download specific pages and have them updated
every now and then? I suppose from now on it will happen more often that
I have to provide some single example pages.

I know I could do this with IE using the "offline browsing" utility --
well, I use Firefox ... and I am not very keen on doing things with IE.

Does someone have an idea how I could achieve this? Thanks!

Sabine

___________________________________
Yahoo! Mail: 1GB free for messages and attachments up to 10MB
http://mail.yahoo.it

From node.ue at gmail.com Mon Nov 14 23:45:47 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Mon, 14 Nov 2005 16:45:47 -0700
Subject: [Wikitech-l] sr.wiki has been waiting for months!
Message-ID: <849f98ed0511141545i62c8b6e8m@mail.gmail.com>

Hi everybody,

Representatives of the Serbian Wikipedia have asked a number of times
how they can get their conversion engine installed for sr.wikipedia.
1) It was approved in a vote by Serbian Wikipedians.
2) It will make Wikipedia more usable to many people, including most
Montenegrins.
3) It does not require any new software; rather, it is a fully working
MediaWiki component already.

Now, so far there has been no response from this community.

Please, somebody, respond -- how is it that the Chinese Wikipedia got a
conversion system implemented as soon as it was written, but the Serbian
Wikipedia is waiting and waiting and waiting?

Mark

--
"Take away their language, destroy their souls." -- Joseph Stalin

From usenet at tonal.clara.co.uk Mon Nov 14 23:48:03 2005
From: usenet at tonal.clara.co.uk (Neil Harris)
Date: Mon, 14 Nov 2005 23:48:03 +0000
Subject: [Wikitech-l] <link> elements for interlanguage link information
In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <9f43d19d0511140832w58707025y9eff00af880db727@mail.google.com> <4378E45E.10102@tonal.clara.co.uk>
Message-ID: <43792233.7070808@tonal.clara.co.uk>

Lars Aronsson wrote:
> Neil Harris wrote:
>
>> A possible point in favour: using rel="alternate" together with the
>> hreflang attribute in <link> elements (as per
>> http://www.w3.org/TR/REC-html40/struct/links.html#h-12.3, section 12.3.3)
>> might help search engines make smarter decisions about indexing Wikipedia
>> content, thus increasing the relevance and availability of Wikipedia
>> content in Web searches made by the general public.
>
> That's an interesting statement. What evidence do you have to
> support it? Can you name any search engines that actually look at
> this information?

I think it's a chicken-and-egg problem, a case of "if you build it, they
will come." Although I do not know of any search engine that uses this
information, on the other hand, it's hard to prove a negative. Google,
in particular, are highly ingenious in using every scrap of information
they can glean from analyzing web pages, and feeding it to their
machine-learning pattern-matching engines. Certainly, unless that
information is present, they will not be able to use it, even in theory.

I think that encoding the metadata statement "this other page is an
alternative to this page, in the following language" in a standardized
machine-readable form is quite likely to be useful, possibly in
unanticipated ways. Indeed, the application of this link tag to
Wikipedia is a clear case where both of the usual objections to the
hreflang attribute (no control of linked-to content, and "what about
Accept-Language?") clearly do not apply.

It's simple to implement, clearly cannot do any harm to Wikipedia's
content, and it might do some good; whether it is worth the additional
load on Wikipedia's bandwidth and servers is another matter.

-- Neil

From node.ue at gmail.com Tue Nov 15 00:44:50 2005
From: node.ue at gmail.com (Mark Williamson)
Date: Mon, 14 Nov 2005 17:44:50 -0700
Subject: [Wikitech-l] Re: <link> elements for interlanguage link information
In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net>
Message-ID: <849f98ed0511141644l108e8d38l@mail.gmail.com>

I would change that to "much of the time".
On the larger Wikipedias, there is some overlap, and on many small Wikipedias, a large portion of the articles are translations (mt.wiki, br.wiki, are examples) Mark On 14/11/05, Ashar Voultoiz wrote: > Evan Prodromou wrote: > > I was pointed to this section of the HTML 4.0 spec ("Notes on helping > > search engines index your Web site") at W3C recently: > > > > http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.4 > > > > I was specifically interested in this section, which I quote in full: > > > > Specify language variants of this document > > > > If you have prepared translations of this document into > > other languages, you should use the LINK element to > > reference these. > > Would be good if articles on the different projects were translations. > Most of the time, although they are about the same subject, the articles > are differents. > > -- > Ashar Voultoiz - WP++++ > http://en.wikipedia.org/wiki/User:Hashar > http://www.livejournal.com/community/wikitech/ > IM: hashar at jabber.org ICQ: 15325080 > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- "Take away their language, destroy their souls." -- Joseph Stalin From neil at tonal.clara.co.uk Tue Nov 15 01:57:42 2005 From: neil at tonal.clara.co.uk (Neil Harris) Date: Tue, 15 Nov 2005 01:57:42 +0000 Subject: [Wikitech-l] Re: [WikiEN-l] Markup; html In-Reply-To: <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> References: <813E7CE2-EDB7-4067-B670-7EDE950EA713@specialbusservice.com> <2C7C96A3-78CF-4A96-B0A0-82798A55D9A2@specialbusservice.com> <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> Message-ID: <43794096.5010506@tonal.clara.co.uk> Justin Cormack wrote: > > On 15 Nov 2005, at 01:21, Andrew Gray wrote: >> Signatures that are broken are, in most cases, sloppy coding that was >> previously shored up by our parser, I think. However, a lot of other >> issues aren't bad coding (at least, not on the surface); there's >> issues with HTML in transcluded templates, for example, which no-one >> seems sure about; these may be template coding flaws or they may be a >> problem with the backup code. >> >> I really don't know anything more than what I've just been reading, >> though... > > Its the template thing: it was [[Portal:London]] I saw which has lots. > Quite > prepared to believe that there were errors to start with; the current > state > looks like a pretty major change though, so it doesnt seem worth fixing > until they have reverted to something that is going to stay stable. If > there are problems it would be good to have checking tools on input, or > more markup to replace common html uses (html in sigs is rather evil > as it > cant be cleaned up after if wrong which is rather gross and this > should really > be fixed). > > Justinc > > _ Maybe templates should be HTML-cleaned as independent entities before transclusion / substitution into the larger page? Or would that not work because the code doesn't work that way, or break existing uses? -- Neil From maveric149 at yahoo.com Tue Nov 15 02:00:04 2005 From: maveric149 at yahoo.com (Daniel Mayer) Date: Mon, 14 Nov 2005 18:00:04 -0800 (PST) Subject: [Wikitech-l] HTML and subpage support on internal wiki needed Message-ID: <20051115020004.15319.qmail@web51610.mail.yahoo.com> Could somebody add HTML and subpage support on internal wiki? I just posted a huge HTML doc there that needs to be subdivided. Right now it is a mess. 
Thanks in advance. :)

-- mav

__________________________________
Yahoo! FareChase: Search multiple travel sites in one click.
http://farechase.yahoo.com

From avarab at gmail.com Tue Nov 15 03:57:02 2005
From: avarab at gmail.com (Ævar Arnfjörð Bjarmason)
Date: Tue, 15 Nov 2005 03:57:02 +0000
Subject: [Wikitech-l] sr.wiki has been waiting for months!
In-Reply-To: <849f98ed0511141545i62c8b6e8m@mail.gmail.com>
References: <849f98ed0511141545i62c8b6e8m@mail.gmail.com>
Message-ID: <51dd1af80511141957o67fe49cdtc31cb97dbed21e42@mail.gmail.com>

On 11/14/05, Mark Williamson wrote:
> Hi everybody,
>
> Representatives of the Serbian Wikipedia have asked a number of times
> how they can get their conversion engine installed for sr.wikipedia.
>
> 1) It was approved in a vote by Serbian Wikipedians.
> 2) It will make Wikipedia more usable to many people, including most
> Montenegrins.
> 3) It does not require any new software; rather, it is a fully working
> MediaWiki component already.
>
> Now, so far there has been no response from this community.
>
> Please, somebody, respond -- how is it that the Chinese Wikipedia got
> a conversion system implemented as soon as it was written, but the
> Serbian Wikipedia is waiting and waiting and waiting?

Where's the patch to LanguageSr.php and the conversion library to make
this happen?

From dgerard at gmail.com Tue Nov 15 11:51:43 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 15 Nov 2005 11:51:43 +0000
Subject: [Wikitech-l] Re: [WikiEN-l] Tower of Babel - Voting remains stupid as well as evil.
Message-ID:

Tony Sidaway wrote:

[re: pissfight over [[Ivory Coast]] vs [[Cote d'Ivoire]] on en:]

>It occurs to me that these nomenclature problems could be solved at
>software level by permitting the creation of synonyms, rather than
>redirects.
>There's no reason why there should be two or more separate articles,
>with people squabbling about which one should be the main article and
>which the redirect, if both exist and have precisely the same article
>id. The synonym creation would also have to operate for the
>corresponding talk namespace.

Get coding :-D

(This is a really nice idea, IMO, and would save much of the forests of
redirects and annoyance of double redirects.)

- d.
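One way Tony's idea could be sketched in MediaWiki terms -- the table
(title_synonym) and function name are invented purely for illustration --
is to look the requested title up in an alias table and, on a hit, load
the canonical page directly, so that "Ivory Coast" and "Cote d'Ivoire"
would share one article id and one talk page instead of redirecting:

    <?php
    # Hypothetical sketch: resolve an alias title straight to the
    # canonical page before any redirect handling kicks in.
    function resolveSynonym( $title ) {
        $dbr =& wfGetDB( DB_SLAVE );
        $pageId = $dbr->selectField(
            'title_synonym', 'ts_page',
            array( 'ts_title' => $title->getDBkey() )
        );
        return $pageId ? Title::newFromID( $pageId ) : $title;
    }

Unlike a redirect, nothing here is a page in its own right, which is what
lets the talk namespace follow along for free.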
From dgerard at gmail.com Tue Nov 15 11:54:33 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 15 Nov 2005 11:54:33 +0000
Subject: [Wikitech-l] Status of article rating feature?
Message-ID:

dpbsmith wrote:
>On my next flight, I hope my pilot will not be using these maps.

That's because the maps are marked "ALPHA QUALITY - NOT FINAL PRODUCTION
VERSIONS" and are written in pencil. Never mind that they're in the
top-40 maps in the world - the project has peaked way too early.

Damn we need the rating feature. What's holding it up right now? List
please, referring to the current version of the code. (I know the
servers are creaking ...)

- d.

From rowan.collins at gmail.com Tue Nov 15 12:26:53 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Tue, 15 Nov 2005 12:26:53 +0000
Subject: [Wikitech-l] Re: [WikiEN-l] Markup; html
In-Reply-To: <43794096.5010506@tonal.clara.co.uk>
References: <813E7CE2-EDB7-4067-B670-7EDE950EA713@specialbusservice.com> <2C7C96A3-78CF-4A96-B0A0-82798A55D9A2@specialbusservice.com> <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> <43794096.5010506@tonal.clara.co.uk>
Message-ID: <9f02ca4c0511150426y75d2e3f9w@mail.gmail.com>

On 15/11/05, Neil Harris wrote:
> Maybe templates should be HTML-cleaned as independent entities before
> transclusion / substitution into the larger page? Or would that not work
> because the code doesn't work that way, or break existing uses?

Well, people have a tendency to do things like put just the opening of a
<div> (with appropriate attributes) or parts of a table's markup into
separate templates. In most cases, this *could* be done by having a
single template with parameters for the middle bits, I should think, but
switching to pre-rendering templates separately (which would be nice
from a programming point of view) would break things as they are now. I
think.

--
Rowan Collins BSc [IMSoP]
From dgerard at gmail.com Tue Nov 15 12:52:19 2005
From: dgerard at gmail.com (David Gerard)
Date: Tue, 15 Nov 2005 12:52:19 +0000
Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature?
Message-ID:

geni wrote:
>On 11/15/05, David Gerard wrote:

>> Damn we need the rating feature. What's holding it up right now? List
>> please, referring to the current version of the code. (I know the
>> servers are creaking ...)

>Do you know if the software has been released yet? If not it could be
>a way to get it tested.

It's been in the MediaWiki code base for months and we've been screaming
for it to be switched on (see wikien-l in all that time).

I've directly asked on wikitech-l if there's some reason this feature is
*never* going in and if I should just stop bothering, and been told
that's not the case and that there are things that need fixing first.

But the original developer (Magnus Manske) *still* can't get any clear
list of what needs fixing for it to go in.

So. What's up with Special:Validate?

[cc to wikitech-l]

- d.

From peter_jacobi at gmx.net Tue Nov 15 13:17:25 2005
From: peter_jacobi at gmx.net (Peter Jacobi)
Date: Tue, 15 Nov 2005 14:17:25 +0100 (MET)
Subject: Re: [Wikitech-l] Re: [WikiEN-l] Tower of Babel - Voting remains stupid as well as evil.
References:
Message-ID: <14864.1132060645@www10.gmx.net>

On the issue of REDIRECT handling:

> >There's no reason why there should be two or more separate articles,
> >with people squabbling about which one should be the main article and
> >which the redirect, if both exist and have precisely the same article
> >id. The synonym creation would also have to operate for the
> >corresponding talk namespace.
>
> Get coding :-D
>
> (This is a really nice idea, IMO, and would save much of the forests
> of redirects and annoyance of double redirects.)

Uli Fuchs' sort-of fork WikiWeise, with independently developed wiki
software, has automatic DISAMBIGs and an elaborate scheme of index
entries instead of REDIRECTs. See (for those reading German):

1) http://www.wikiweise.de/wiki/Wikiweise%3ABegriffskl%C3%A4rung
2) http://www.wikiweise.de/wiki/Wikiweise%3AIndizierung

Summary:
1) "Foo ((Bar))" and "Foo ((Baz))" will algorithmically generate a "Foo"
disambig.
2) There is a separate entry field for index entries for the article.

Regards,
Peter Jacobi

--
High-speed freedom. Super-cheap at GMX, e.g. GMX DSL_Cityflat, DSL flat
rate for only 4.99 euros/month* http://www.gmx.net/de/go/dsl

From phil.boswell at gmail.com Tue Nov 15 13:24:01 2005
From: phil.boswell at gmail.com (Phil Boswell)
Date: Tue, 15 Nov 2005 13:24:01 -0000
Subject: [Wikitech-l] Signature problems
Message-ID:

What the heck happened with signatures since last night? There's a heap
of complaints on [[en:WP:VPT]] from people with complicated, and indeed
not-so-complicated, signatures, which suddenly stopped resolving
properly.

My sig is fairly simple: up to a few minutes ago it was non-raw, and
consisted of "Phil]] | [[User talk:Phil Boswell|Talk", which when
wrapped with the traditional "[[User:..." and "]]" worked fine.
I have now converted it to raw and filled in the supposedly automatic
bits by hand.

As far as I can tell, the pipe and "[[" symbols after the "]]" were
being converted into numeric HTML entities. Is this some feature which
has blown up in our faces, or has somebody been fiddling with the code,
or what?

--
Phil
[[en:User:Phil Boswell]]

From rowan.collins at gmail.com Tue Nov 15 13:37:02 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Tue, 15 Nov 2005 13:37:02 +0000
Subject: [Wikitech-l] Signature problems
In-Reply-To: References:
Message-ID: <9f02ca4c0511150537y62db149l@mail.gmail.com>

On 15/11/05, Phil Boswell wrote:
> What the heck happened with signatures since last night? There's a heap
> of complaints on [[en:WP:VPT]] from people with complicated, and indeed
> not-so-complicated, signatures, which suddenly stopped resolving
> properly.

See http://mail.wikimedia.org/pipermail/wikien-l/2005-November/032692.html

> My sig is fairly simple: up to a few minutes ago it was non-raw, and
> consisted of "Phil]] | [[User talk:Phil Boswell|Talk", which when
> wrapped with the traditional "[[User:..." and "]]" worked fine. I have
> now converted it to raw and filled in the supposedly automatic bits by
> hand.

This is, more or less, the reason why "raw signatures" were invented in
the first place - playing with the "nickname" to simulate them can cause
all sorts of hideous markup, and a software change made it less tolerant
of some of them. Although this particular change is inadvertent and
temporary, you're probably best sticking with "raw" mode.

--
Rowan Collins BSc [IMSoP]

From rthudy at hotmail.com Tue Nov 15 14:37:10 2005
From: rthudy at hotmail.com (Ryan Hudy)
Date: Tue, 15 Nov 2005 09:37:10 -0500
Subject: [Wikitech-l] Image Upload/Import
Message-ID:

I know that tools like mwdumper don't have the ability to upload and
import images from Wikipedia, so I was wondering if anyone has written a
script in PHP, or knows of any other tools that perform this
functionality. I found this topic discussed here:
http://mail.wikipedia.org/pipermail/wikitech-l/2005-May/029745.html

with someone stating they were working on a PHP script to do this, but I
guess it never happened or it would have probably been passed around
within the community. Anyone have any additional information on this
issue?

Ryan

From gordon.joly at pobox.com Tue Nov 15 14:50:39 2005
From: gordon.joly at pobox.com (Gordon Joly)
Date: Tue, 15 Nov 2005 14:50:39 +0000
Subject: [Wikitech-l] Image Upload/Import
In-Reply-To: References:
Message-ID:

At 09:37 -0500 15/11/05, Ryan Hudy wrote:
>I know that tools like mwdumper don't have the ability to upload and
>import images from Wikipedia, so I was wondering if anyone has written a
>script in PHP, or knows of any other tools that perform this
>functionality. I found this topic discussed here:
>http://mail.wikipedia.org/pipermail/wikitech-l/2005-May/029745.html
>
>with someone stating they were working on a PHP script to do this, but I
>guess it never happened or it would have probably been passed around
>within the community. Anyone have any additional information on this
>issue?
>
>Ryan

I used a suite called "db_backup" and copied the /images/ directory with
no problems at all in rebuilding a MediaWiki website (not Wikipedia).
The website had 300 images at most....
--
Gordo (aka LoopZilla)
gordon.joly at pobox.com
http://pobox.com/~gordo/
http://www.loopzilla.org/
From thomas at sudo.ch Tue Nov 15 15:18:42 2005
From: thomas at sudo.ch (thomas)
Date: Tue, 15 Nov 2005 16:18:42 +0100
Subject: [Wikitech-l] Set up a PHP-Page including articles from Mediawiki
Message-ID: <4379FC52.30909@sudo.ch>

I'm trying to set up a PHP page including articles from my MediaWiki,
and I also want to submit articles to the wiki through my page. Because
I don't like to manipulate MediaWiki's database directly, I tried to
implement the HTML form that is used while editing the MediaWiki so it
will work the same way on my page as well, and encountered some
problems.

What kind of possibilities do I have to insert/get data from the
MediaWiki using an API (method calls)? I'm using MediaWiki 1.5.2.

Thanks for your help :)

~Thomas

From smoddy at gmail.com Tue Nov 15 16:38:23 2005
From: smoddy at gmail.com (Sam Korn)
Date: Tue, 15 Nov 2005 16:38:23 +0000
Subject: [Wikitech-l] Re: [WikiEN-l] Markup; html
In-Reply-To: <43794096.5010506@tonal.clara.co.uk>
References: <813E7CE2-EDB7-4067-B670-7EDE950EA713@specialbusservice.com> <2C7C96A3-78CF-4A96-B0A0-82798A55D9A2@specialbusservice.com> <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> <43794096.5010506@tonal.clara.co.uk>
Message-ID:

On 11/15/05, Neil Harris wrote:
> Maybe templates should be HTML-cleaned as independent entities before
> transclusion / substitution into the larger page? Or would that not work
> because the code doesn't work that way, or break existing uses?

As far as my limited understanding of PHP goes, it would not work. The
current technique allows the template to change depending on which page
it is included on, by means of the {{PAGENAME}} variable and others like
it. Not doing this would screw this up. It might well also be a greater
demand on the servers.

As I say, my understanding is more hypothetical than theoretical, and I
might be wholly wrong. Caveat lector.

Sam

From millosh at mutualaid.org Tue Nov 15 17:02:08 2005
From: millosh at mutualaid.org (Milos Rancic)
Date: Tue, 15 Nov 2005 18:02:08 +0100
Subject: [Wikitech-l] sr.wiki has been waiting for months!
In-Reply-To: <51dd1af80511141957o67fe49cdtc31cb97dbed21e42@mail.gmail.com>
References: <849f98ed0511141545i62c8b6e8m@mail.gmail.com> <51dd1af80511141957o67fe49cdtc31cb97dbed21e42@mail.gmail.com>
Message-ID: <2505d1590511150902x265eb532m70e7a9e49a4da17e@mail.gmail.com>

Mark, thank you for asking this.

On 11/15/05, Ævar Arnfjörð Bjarmason wrote:
> Where's the patch to LanguageSr.php and the conversion library to make
> this happen?

It can be found at
http://meta.wikimedia.org/wiki/User:BraneJ/Serbian_Variants with the
whole explanation. I can send the whole files, too. But it would be very
helpful if some of us had CVS access for that purpose.

This first variant, which is just read-only for Latin and Iyekavian,
should be changed over time to include a write version, too.

From rthudy at hotmail.com Tue Nov 15 19:02:12 2005
From: rthudy at hotmail.com (Ryan Hudy)
Date: Tue, 15 Nov 2005 14:02:12 -0500
Subject: [Wikitech-l] Change "Wikimedia Project" Icon
Message-ID:

I'm looking to replace the Wikimedia Project icon (located at the bottom
left of the MediaWiki pages: http://meta.wikimedia.org/wiki/MediaWiki)
with another icon within my intranet wiki. I believe I made all the
necessary changes within the files (Skin.php, SkinTemplate.php,
Monobook.php, Commonprint.css, Monobook/main.css) but my icon still
isn't showing up. Am I missing an adjustment/addition in a specific
file, and is the Wikimedia Project icon called as copyrightico?

Ryan

From martina.greiner at gmx.de Sat Nov 12 22:52:37 2005
From: martina.greiner at gmx.de (Martina Greiner)
Date: Sat, 12 Nov 2005 22:52:37 +0000 (UTC)
Subject: [Wikitech-l] Error 1064 when importing xml file into mysql with mwdumper
Message-ID:

Hello,

I tried to load the Wikipedia 20051105 data dump into my MySQL database
with an installation of MediaWiki 1.5 and MySQL 4.0.26. I used the
command

56:~/Desktop mgreiner$ nohup java -server -jar mwdumper.jar --format=sql:1.5 20051105_pages_full.xml.bz2 | mysql wikidb &

and received the following error:

56:~/Desktop mgreiner$ ERROR 1064 at line 82: You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near ''\'\'\'Anarchism\'\'\' is a generic term describing various rev

I suspect that it has something to do with the version of MySQL. What
version is recommended for installing Wikipedia?

I hope that somebody can help me.

Martina Greiner

From topologis at t-online.de Tue Nov 15 12:02:29 2005
From: topologis at t-online.de (topi)
Date: Tue, 15 Nov 2005 13:02:29 +0100
Subject: [Wikitech-l] Extending cur table with further fields
Message-ID:

How can I add additional fields (added beforehand to the cur table) to
MediaWiki? My aim is to add defined additional data to an article and to
make use of MediaWiki's editing and versioning system. These additional
fields shall appear within the article.

Thanks,
Heinz

From ccanddyy at yahoo.com Sun Nov 13 06:36:02 2005
From: ccanddyy at yahoo.com (candy)
Date: Sat, 12 Nov 2005 22:36:02 -0800
Subject: [Wikitech-l] change page rendering to display all metadata
Message-ID:

hi all,

Can somebody help me with the following: in Wikipedia, when we view the
page of an article, it displays the content of the page and not the
metadata, such as author(s), page timestamps, etc. A part of this
metadata is visible in the history section, where it displays all the
page revisions.
I want to change the rendering of the page such that we have a new tab
(say "metadata") where all the metadata information regarding that page
is displayed. I would like to know the procedure of how this can be
done. I am using MediaWiki 1.5 and have already tried hacking through
the code. I have a vague idea but it's not very clear and concrete.

Your advice, instruction or help will be highly appreciated.

Thanking you, C

From jason.stubbs at gmail.com Tue Nov 15 12:53:31 2005
From: jason.stubbs at gmail.com (Jason Stubbs)
Date: Tue, 15 Nov 2005 07:53:31 -0500
Subject: [Wikitech-l] Help: <nowiki> not working on my wiki
In-Reply-To: <4378E1AB.3040702@gmail.com>
References: <4378E1AB.3040702@gmail.com>
Message-ID:

Hi,

I have set up a company wiki over the last couple of weeks. I have tried
to use the <nowiki> tags to display the wiki formatting, but it doesn't
seem to work. E.g. if I type:

<nowiki>'''bold'''</nowiki>, expecting to get '''bold''',

I get *bold* instead.

I assume that this is the parser that does this. Is there a
configuration file that can enable or disable this function?

I have incorporated the FCKeditor into the wiki... could this have
broken something? However, the page I am editing is not using HTML.

Regards,

Jason

From dake.cdx at gmail.com Tue Nov 15 13:35:27 2005
From: dake.cdx at gmail.com (dake)
Date: Tue, 15 Nov 2005 14:35:27 +0100
Subject: [Wikitech-l] Re: Signature problems
In-Reply-To: References:
Message-ID:

Phil Boswell wrote:
> What the heck happened with signatures since last night? There's a heap
> of complaints on [[en:WP:VPT]] from people with complicated, and indeed
> not-so-complicated, signatures, which suddenly stopped resolving
> properly.

same on :fr, my signature is f***** up.

From jason.stubbs at myrealbox.com Tue Nov 15 14:34:18 2005
From: jason.stubbs at myrealbox.com (Jason Stubbs)
Date: Tue, 15 Nov 2005 09:34:18 -0500
Subject: [Wikitech-l] Help: <nowiki> not working on my wiki
In-Reply-To: References: <4378E1AB.3040702@gmail.com>
Message-ID: <4379F1EA.8060900@myrealbox.com>

I managed to fix the problem with the <nowiki> tags mentioned below, but
I have now discovered that I cannot parse <math> tags. I get the
following error displayed on the page when I try to parse:
I have > tried to us the tage to display the wiki formatting, > but it doesnt seem to work. eg if it type: > > '''bold''' expecting to get '''bold''' > > I get *bold* instead. > > I assume that this is the parser that does this. Is there a > configuration file that can enable or disable this function? > > I have incorporated the FCKeditor into the wiki...could this have > broken something? However, the page i am editing is not using HTML. > > Regards > > Jason > > > From brion.vibber at gmail.com Tue Nov 15 19:00:02 2005 From: brion.vibber at gmail.com (Brion Vibber) Date: Tue, 15 Nov 2005 11:00:02 -0800 Subject: [Wikitech-l] Signature problems In-Reply-To: References: Message-ID: On 11/15/05, Phil Boswell wrote: > What the heck happened with signatures since last night? There's a heap of > complaints on [[en:WP:VPT]] from people with complicated, and indeed > not-so-complicated, signatures, which suddenly stopped resolving properly. There are two issues: first, lots of people have had bad markup in their signatures. When details of the parser change, or if we have to temporarily turn off the 'HTML Tidy' postprocessor due to configuration issues, the bad markup, already inserted into thousands of pages, can suddenly become visible, sending entire pages into superscript or whatever. Tidy was disabled on Monday due to some crashing/hangup problems that haven't been resolved yet, so we've gotten the usual round of complaints about broken pages because of this. > My sig is fairly simple: up to a few minutes ago it was non-raw, and > consisted of "Phil]] | [[User talk:Phil Boswell|Talk", which when wrapped > with the traditional "[[User:..." and "]]" worked fine. I have now converted > it to raw and filled in the supposedly automatic bits by hand. The last time this happened, I added the 'raw signatures' option to discourage people from using these crappy markup tricks -- since they're often also badly broken. I want to accommodate fancy signatures; personalized sigs help build a sense of community and personality, but we also need to not break things all the time. :) This time I've gone ahead and added some stricter behavior; I know I'll get some hate mail on this (hi!) but we get hate mail about the broken rendering due to people inserting broken markup on thousands of pages, too, so I think I'll like this hate mail better. First, non-raw signatures no longer allow markup, just text. This keeps the simple cases simple. Fancy cases can use the 'raw signature' option, which will also keep them from breaking if/when we change how the default signature works. Second, raw signatures are now more strictly checked for validity. Currently it's doing a check for well-formed XML, which will keep out the common mismatched tags problem (eg having a and forgetting the ). This may be expanded to check for unbalanced '' and ''' wiki tags, unbalanced [[ and ]], etc. Also we will probably change it to either reject or pre-expand any templates. Unfortunately all this excitement came at a bit of a bad time; I'm on intermittent, very slow dial-up until my cable gets installed so I haven't been able to put in big shiny announcements in all the usual places. I hope this has been sufficiently widely announced by now (eg http://en.wikipedia.org/wiki/Wikipedia:How_to_fix_your_signature), please spread the info around as necessary. 
-- brion vibber (brion @ pobox.com) From brion.vibber at gmail.com Tue Nov 15 19:12:17 2005 From: brion.vibber at gmail.com (Brion Vibber) Date: Tue, 15 Nov 2005 11:12:17 -0800 Subject: [Wikitech-l] Change "Wikimedia Project" Icon In-Reply-To: References: Message-ID: On 11/15/05, Ryan Hudy wrote: > I'm looking to replace the Wikimedia Project icon (located at the bottom > left of the mediawiki pages: http://meta.wikimedia.org/wiki/MediaWiki) with > another icon within my intranet wiki. I believe I made all the necessary > changes within the files (Skin.php, SkinTemplate.php, Monobook.php, > Commonprint.css, Monobook/main.css) but my icon still isn't showing up. Am I > missing an adjustment/addition in a specific file, and is the Wikimedia > Project icon called as copyrightico? Here's ours: $wgCopyrightIcon = 'Wikimedia
Foundation'; As always with configuration options, double-check your cache issues after making changes. -- brion vibber (brion @ pobox.com) From f-x.p at laposte.net Tue Nov 15 19:00:37 2005 From: f-x.p at laposte.net (FxParlant) Date: Tue, 15 Nov 2005 20:00:37 +0100 Subject: [Wikitech-l] Re: Desperate for a linkification expert In-Reply-To: <43789AC9.7080107@venier.net> References: <43789AC9.7080107@venier.net> Message-ID: Thanks a lot Andrew, I'm playing now to find the correct combination of these functions and get what I want. (avenier, pronounced "avenir" in french. Nice :-) ) Fran?ois Andrew Venier wrote: > FxParlant wrote: > >> hi, >> >> I did something wrong, and now I end up with : >> >> src="> >> > class 'external free' would appear to come from: > Parser::replaceFreeExternalLinks() > Linker::makeExternalLink() > Linker::getExternalLinkAttributes() From smoddy at gmail.com Tue Nov 15 19:32:57 2005 From: smoddy at gmail.com (Sam Korn) Date: Tue, 15 Nov 2005 19:32:57 +0000 Subject: [Wikitech-l] Signature problems In-Reply-To: References: Message-ID: On 11/15/05, Brion Vibber wrote: > Second, raw signatures are now more strictly checked for validity. > Currently it's doing a check for well-formed XML, which will keep out > the common mismatched tags problem (eg having a and forgetting > the ). This may be expanded to check for unbalanced '' and ''' > wiki tags, unbalanced [[ and ]], etc. Also we will probably change it > to either reject or pre-expand any templates. If it's XML parsing, is it lower-case-only tags then? Thanks for the explaination. Sam From servien at gmail.com Tue Nov 15 20:43:22 2005 From: servien at gmail.com (Servien Ilaino) Date: Tue, 15 Nov 2005 22:43:22 +0200 Subject: [Wikitech-l] Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany Message-ID: Hi, I would like to request the creation of Nedersaksisch; Low Saxon (NL) as soon as possible, it has been discussed for over 5 months now, most things have been cleared up and most people agree the wiki should be created soon, for the benefit of the Low Saxon community in the Netherlands and I guess to wikipedia. There are 2 oppose votes, one from Node ue (who deletes the request from the approved page almost daily) and has been put on again by various users, the second user is a non-active anonymous user at meta-wiki. Romani; Vlax Romany has 1 "temporary against vote" which is also a non-active anonymous user at meta-wiki. The majority however supports the creation, and should be created as soon as possible sothat the Romani-community can also start with their own wikipedia. Summary: 1. Nedersaksisch 2. Romani More info can be found at: http://meta.wikimedia.org/wiki/Approved_requests_for_new_languages or at: http://meta.wikimedia.org/wiki/Template:Requests for new languages/nds-nl and http://meta.wikimedia.org/wiki/Template:Requests for new languages/Vlax Romany. Regards, Servien Ilaino From brion at pobox.com Wed Nov 16 03:44:05 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 19:44:05 -0800 Subject: [Wikitech-l] Status of article rating feature? In-Reply-To: References: Message-ID: <437AAB05.9090501@pobox.com> David Gerard wrote: > That's because the maps are marked "ALPHA QUALITY - NOT FINAL > PRODUCTION VERSIONS" and are written in pencil. Never mind they're in > the top-40 maps in the world - the project has peaked way too early. > > Damn we need the rating feature. What's holding it up right now? List > please, referring to current version of code. 
(I know the servers are > creaking ...) Keep in mind that the cute little survey thing doesn't have anything to do with a system for marking particular page revisions as 'public-ready' or 'approved' in any useful way. That's an entirely unrelated and separate issue (and one the project actually needs), which Special:Validate (which might, hypothetically, produce 'interesting data' of some sort to some one) does nothing to help directly. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Wed Nov 16 03:45:44 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 19:45:44 -0800 Subject: [Wikitech-l] Set up a PHP-Page including articles from Mediawiki In-Reply-To: <4379FC52.30909@sudo.ch> References: <4379FC52.30909@sudo.ch> Message-ID: <437AAB68.4060008@pobox.com> thomas wrote: > I'm trying to set up a PHP-Page including articles from my mediawiki and > also want to submit articles to the wiki through my page. > Because I dont like to manipulate the mediawikis database, I tried to > implement the html form that is used while editing the mediawiki so it > will work the same way on my page aswell and encountered some problems. > > What kind of possibilities do I have to insert/get data from the > mediawiki using an API (method calls)? > I'm using Mediawiki 1.5.2 Sorry, but we don't have a lot of internal API documentation at this time besides the code and the doc comments in the code. Read through the code and see how things get called. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Wed Nov 16 03:46:52 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 19:46:52 -0800 Subject: [Wikitech-l] Re: [WikiEN-l] Markup; html In-Reply-To: <43794096.5010506@tonal.clara.co.uk> References: <813E7CE2-EDB7-4067-B670-7EDE950EA713@specialbusservice.com> <2C7C96A3-78CF-4A96-B0A0-82798A55D9A2@specialbusservice.com> <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> <43794096.5010506@tonal.clara.co.uk> Message-ID: <437AABAC.7010008@pobox.com> Neil Harris wrote: > Maybe templates should be HTML-cleaned as independent entities before > transclusion / substitution into the larger page? Or would that not work > because the code doesn't work that way, or break existing uses? That's what the non-tidy mode does, IIRC, and that's why it breaks existing uses (which are IMHO bad to begin with) which open tags in one chunk and close them in the next, etc. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From Migdejong at hotmail.com Tue Nov 15 20:36:58 2005 From: Migdejong at hotmail.com (Migdejong) Date: Tue, 15 Nov 2005 20:36:58 +0000 (UTC) Subject: [Wikitech-l] Re: {{CURRENTSECOND}} References: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com> <8b722b800511071705y5a72604dp469bd0dc5c913e9d@mail.gmail.com> Message-ID: Besides. This is probably the worst random selection method I've ever seen in use. It only allows for the middle options to appear... Ever tried the dice-rolling game. 
You'd be arrested for using it in a casino... Mig From brion at pobox.com Wed Nov 16 03:52:03 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 19:52:03 -0800 Subject: [Wikitech-l] Help: not working on my wiki In-Reply-To: <4379F1EA.8060900@myrealbox.com> References: <4378E1AB.3040702@gmail.com> <4379F1EA.8060900@myrealbox.com> Message-ID: <437AACE3.2050401@pobox.com> Jason Stubbs wrote: > I managed to fix the problem with the tags mentioned below, but > i have now discovered that I cannot parse tags. I get the > following error displayed on the page when I try to parse: Please restore the original code and test again; we do not know what horrors this fckeditor hack may have wrought on the code. If it's broken, complain to the authors of that hack. > \sqrt{1-e^2} > > *Failed to parse (unknown error): \sqrt{1-e^2}* > > I've checked through the all of the preference files to make sure it is > enabled, and I think I have everything set up correctly. Is there > something extra I need that is not supplied in the 1.5.2 build? Math requires several external programs, please see the FAQ and the math/README file. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Wed Nov 16 03:53:42 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 19:53:42 -0800 Subject: [Wikitech-l] Signature problems In-Reply-To: References: Message-ID: <437AAD46.3000905@pobox.com> Sam Korn wrote: > On 11/15/05, Brion Vibber wrote: >> Second, raw signatures are now more strictly checked for validity. >> Currently it's doing a check for well-formed XML, which will keep out >> the common mismatched tags problem (eg having a and forgetting >> the ). This may be expanded to check for unbalanced '' and ''' >> wiki tags, unbalanced [[ and ]], etc. Also we will probably change it >> to either reject or pre-expand any templates. > > If it's XML parsing, is it lower-case-only tags then? Since XML is case-sensitive, this check will require that you match on both sides: ... is ok, ... will fail. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Wed Nov 16 04:00:14 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 15 Nov 2005 20:00:14 -0800 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: References: Message-ID: <437AAECE.1090304@pobox.com> David Gerard wrote: > It's been in the MediaWiki code base for months and we've been > screaming for it to be switched on (see wikien-l in all that time). > > I've directly asked on wikitech-l if there's some reason this feature > is *never* going in and if I should just stop bothering, and been told > that's not the case and that there are things that need fixing first. > > But the original developer (Magnus Manske) *still* can't get any clear > list of what needs fixing for it to go in. I already told Magnus, if there's anything terribly wrong with it it'll get fixed after it's turned on (and if necessary, back off). There's at least a half dozen people who could turn it on this very moment, but nobody's done it in all these months, probably because nobody on the server team thinks it's particularly important, useful, or high-priority. 
(If someone does think this and has refrained from turning it on for some other reason, they haven't told me so.) It's a solution in search of a problem; it doesn't solve the validated-version-display issue in any way, it's just a survey form that might, in theory, produce data that might, in theory, be interesting or useful to someone one day. Will it be worth the trouble of turning it on and possibly having to deal with fixing it when further problems become evident? Who knows. > So. What's up with Special:Validate? It's on my list for this week, I'll see about getting it turned on and working. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From beesley at gmail.com Wed Nov 16 05:28:20 2005 From: beesley at gmail.com (Angela) Date: Wed, 16 Nov 2005 06:28:20 +0100 Subject: [Wikitech-l] Re: {{CURRENTSECOND}} In-Reply-To: References: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com> <8b722b800511071705y5a72604dp469bd0dc5c913e9d@mail.gmail.com> Message-ID: <8b722b800511152128o79b8376dq8e3656c82c706dce@mail.gmail.com> On 11/15/05, Migdejong wrote: > Besides. This is probably the worst random selection method I've ever seen in > use. It only allows for the middle options to appear... Ever tried the > dice-rolling game. You'd be arrested for using it in a casino... I don't think that's true considering I just tried it and rolled a 1 followed by a 6, which are not the middle options. If there really is a problem with the extension, which I'm not convinced there is, please make comments on that at http://meta.wikimedia.org/wiki/User_talk:Algorithm/RandomSelection Angela. From avarab at gmail.com Wed Nov 16 05:54:58 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Wed, 16 Nov 2005 05:54:58 +0000 Subject: [Wikitech-l] sr.wiki has been waiting for months! In-Reply-To: <2505d1590511150902x265eb532m70e7a9e49a4da17e@mail.gmail.com> References: <849f98ed0511141545i62c8b6e8m@mail.gmail.com> <51dd1af80511141957o67fe49cdtc31cb97dbed21e42@mail.gmail.com> <2505d1590511150902x265eb532m70e7a9e49a4da17e@mail.gmail.com> Message-ID: <51dd1af80511152154i3e8df213u56a15270c0af240f@mail.gmail.com> On 11/15/05, Milos Rancic wrote: > Mark, thank you for asking this. > > On 11/15/05, ?var Arnfj?r? Bjarmason wrote: > > Where's the patch to LanguageSr.php and the conversion library to make > > this happen? > > It can be found at > http://meta.wikimedia.org/wiki/User:BraneJ/Serbian_Variants with the > hole explanation. I can send the whole files, too. Please open a bug in bugzilla for this and attach the files (and unified diffs where appropriate) to it. From node.ue at gmail.com Wed Nov 16 07:18:07 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 16 Nov 2005 00:18:07 -0700 Subject: [Wikitech-l] Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany In-Reply-To: References: Message-ID: <849f98ed0511152318i616ea92dj@mail.gmail.com> > I would like to request the creation of Nedersaksisch; Low Saxon (NL) > as soon as possible, it has been discussed for over 5 months now, most > things have been cleared up and most people agree the wiki should be "Most things"... you have still not clearly explained many details of the proposal and have instead chosen to ignore inquiries. "Most people"... the vote is 17-2 currently. You were still saying it should win when it was 16-4. 
How should anybody trust YOUR judgement? And you're right, it HAS been 5 months. Since then, conditions changed -- there is no longer the "discrimination" you talked about. After some discussions with Arbeo, a number of new facts have come to light which make your proposal even more ridiculous. > created soon, for the benefit of the Low Saxon community in the > Netherlands and I guess to wikipedia. There are 2 oppose votes, one > from Node ue (who deletes the request from the approved page almost > daily) and has been put on again by various users, the second user is > a non-active anonymous user at meta-wiki. "Removes" is a better word. "Deletes" has the connotation that I removed it from existence. I just moved it back to the other page. "Approved requests for new languages" has only in the past contained languages with 1 or less oppose votes. You can't seem to get this through your brain. > Romani; Vlax Romany has 1 "temporary against vote" which is also a > non-active anonymous user at meta-wiki. The majority however supports > the creation, and should be created as soon as possible sothat the > Romani-community can also start with their own wikipedia. What Romani community? The two inactive users who have made no edits for weeks? I do support that request, but I understand if a developer won't create it. Why not wait until there is at least one real article on the test-wiki? If there is really a community, that should be easy. > Summary: > > 1. Nedersaksisch > 2. Romani No, the summary is: VLAX Romani. Not Romani and Nedersaksisch. We already have a Nedersaksisch Wikipedia. The proper name is probably "Nedersaksisch van Keuninkryk Nederlaand" or whatever, since you insist that nobody can use any English name. No book mentions such a language. It is not a real language, it exists only in your mind. No website mentions it except for Wikimedia. To create such a WP would be original research. There are 5 major dialects of Low Saxon language. "Dutch Low Saxon" is not one of them -- all of the dialects of the Netherlands are the same group as Hamburg dialect. You continue attempts to mislead people. Mark -- "Take away their language, destroy their souls." -- Joseph Stalin From morven at gmail.com Wed Nov 16 05:20:04 2005 From: morven at gmail.com (Matt Brown) Date: Tue, 15 Nov 2005 21:20:04 -0800 Subject: [Wikitech-l] Re: [WikiEN-l] Markup; html In-Reply-To: <437AABAC.7010008@pobox.com> References: <813E7CE2-EDB7-4067-B670-7EDE950EA713@specialbusservice.com> <2C7C96A3-78CF-4A96-B0A0-82798A55D9A2@specialbusservice.com> <5AD37ECD-581D-406B-B1F7-FC2110EF5850@specialbusservice.com> <43794096.5010506@tonal.clara.co.uk> <437AABAC.7010008@pobox.com> Message-ID: <42f90dc00511152120p72902e8anca6dc20270b941fa@mail.gmail.com> On 11/15/05, Brion Vibber wrote: > That's what the non-tidy mode does, IIRC, and that's why it breaks > existing uses (which are IMHO bad to begin with) which open tags in one > chunk and close them in the next, etc. Which I remember you telling everyone not to do early on when templates were new. 
-Matt From gerard.meijssen at gmail.com Wed Nov 16 09:26:06 2005 From: gerard.meijssen at gmail.com (Gerard Meijssen) Date: Wed, 16 Nov 2005 10:26:06 +0100 Subject: [Wikitech-l] Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany In-Reply-To: <849f98ed0511152318i616ea92dj@mail.gmail.com> References: <849f98ed0511152318i616ea92dj@mail.gmail.com> Message-ID: <437AFB2E.40700@gmail.com> Mark Williamson wrote: >> I would like to request the creation of Nedersaksisch; Low Saxon (NL) >> as soon as possible, it has been discussed for over 5 months now, most >> things have been cleared up and most people agree the wiki should be >> > > "Most things"... you have still not clearly explained many details of > the proposal and have instead chosen to ignore inquiries. > > "Most people"... the vote is 17-2 currently. You were still saying it > should win when it was 16-4. How should anybody trust YOUR judgement? > > And you're right, it HAS been 5 months. Since then, conditions changed > -- there is no longer the "discrimination" you talked about. After > some discussions with Arbeo, a number of new facts have come to light > which make your proposal even more ridiculous. > > >> created soon, for the benefit of the Low Saxon community in the >> Netherlands and I guess to wikipedia. There are 2 oppose votes, one >> from Node ue (who deletes the request from the approved page almost >> daily) and has been put on again by various users, the second user is >> a non-active anonymous user at meta-wiki. >> > > "Removes" is a better word. "Deletes" has the connotation that I > removed it from existence. I just moved it back to the other page. > "Approved requests for new languages" has only in the past contained > languages with 1 or less oppose votes. You can't seem to get this > through your brain. > > >> Romani; Vlax Romany has 1 "temporary against vote" which is also a >> non-active anonymous user at meta-wiki. The majority however supports >> the creation, and should be created as soon as possible sothat the >> Romani-community can also start with their own wikipedia. >> > > What Romani community? The two inactive users who have made no edits > for weeks? I do support that request, but I understand if a developer > won't create it. Why not wait until there is at least one real article > on the test-wiki? If there is really a community, that should be easy. > > >> Summary: >> >> 1. Nedersaksisch >> 2. Romani >> > > No, the summary is: VLAX Romani. Not Romani and Nedersaksisch. We > already have a Nedersaksisch Wikipedia. The proper name is probably > "Nedersaksisch van Keuninkryk Nederlaand" or whatever, since you > insist that nobody can use any English name. > > No book mentions such a language. It is not a real language, it exists > only in your mind. No website mentions it except for Wikimedia. To > create such a WP would be original research. > > There are 5 major dialects of Low Saxon language. "Dutch Low Saxon" is > not one of them -- all of the dialects of the Netherlands are the same > group as Hamburg dialect. > > You continue attempts to mislead people. > > Mark > Mark, I think you do not know the situation on the ground or you misrepresent this situation in a magnificent way. The nds wikipedia has been high-jacked by a majority who insist on a particular orthography who is German oriented. An orthography that is foreign to the nds as spoken in the Netherlands. Your assertion that there should be only one nds is fine but the majority who does nds does differently. 
The factoid that the Dutch dialects are considered to be part of the "Hamburg dialect" has nothing to do with the orthography used. There are two solutions to this; either there is no compulsory orthography on the nds.wikipedia or a second wikipedia is allowed. I do think that it is not up to you to be in judgement on the creation of new Wikipedias. You behave like if it is your right, the fact that you have a lot of bookish knowledge does not count for much. With the skills that you have, it would be better if you helped fledgling projects in the way that grows the Neapolitan wikipedia. It has already some 3700 articles. Yes, they are mostly stubs but they help in building a structure where people can build on. Another thing you could do is write a newsletter for the small projects and explain how they are doing and inform on what works where and why. It does not suit you to be negative. Certainly not when you have your facts wrong. And do not trash talk other people. If you want to have people think that you mean well, you should at least assume their good faith. Thanks, GerardM From node.ue at gmail.com Wed Nov 16 09:44:13 2005 From: node.ue at gmail.com (Mark Williamson) Date: Wed, 16 Nov 2005 02:44:13 -0700 Subject: [Wikitech-l] Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany In-Reply-To: <437AFB2E.40700@gmail.com> References: <849f98ed0511152318i616ea92dj@mail.gmail.com> <437AFB2E.40700@gmail.com> Message-ID: <849f98ed0511160144g18bd8751g@mail.gmail.com> > Mark, > I think you do not know the situation on the ground or you misrepresent > this situation in a magnificent way. The nds wikipedia has been > high-jacked by a majority who insist on a particular orthography who is > German oriented. An orthography that is foreign to the nds as spoken in No -- it was hijacked by Heiko Evermann. He has recently become inactive. REcently, most articles posted in Dutch spelling have been allowed to remain unchanged. http://nds.wikipedia.org/wiki/Noord-Veluws http://nds.wikipedia.org/wiki/Vrees_spraak > the Netherlands. Your assertion that there should be only one nds is > fine but the majority who does nds does differently. The factoid that > the Dutch dialects are considered to be part of the "Hamburg dialect" > has nothing to do with the orthography used. Wikipedias in the past have not been divided by orthography. > There are two solutions to this; either there is no compulsory > orthography on the nds.wikipedia or a second wikipedia is allowed. I do . . . Servien seems to think otherwise. He insists that his language is different, not just spelling, but otherwise. Experts on LS that I consulted told me this is not true. > think that it is not up to you to be in judgement on the creation of new > Wikipedias. You behave like if it is your right, the fact that you have When did I say this should be delayed because I voted against it? I think it should not be created because Servien never answered many of the questions that were asked, there is still significant opposition to the proposal, and he has shown an anti-wiki attitude on the test-wp. I also think it should be delayed because the situation has changed (namely, Dutch-spelt articles have been allowed to stay now). > a lot of bookish knowledge does not count for much. With the skills that Oh? > you have, it would be better if you helped fledgling projects in the way > that grows the Neapolitan wikipedia. It has already some 3700 articles. Well, then, why waste your time writing e-mails? 
Why don't you go help fledgling projects? Since, I don't spend any time doing that, obviously. > works where and why. It does not suit you to be negative. Certainly not > when you have your facts wrong. Which facts do I have wrong? I have been told this over and over by Servien. He has offered nothing I don't already know. > And do not trash talk other people. If you want to have people think > that you mean well, you should at least assume their good faith. Google "talking trash", and you'll see that I'm not talking trash. After all that I have been through with Servien, I don't think it's possible anymore to assume good faith. Even when the total vote was 16-4, he kept moving it to the Approved requests page. > Thanks, > GerardM I always thought it was funny that you ended even the rudest and most blunt e-mails with such a polite phrase. Cheers, Mark -- "Take away their language, destroy their souls." -- Joseph Stalin From scott at crazycolour.com Wed Nov 16 09:58:23 2005 From: scott at crazycolour.com (Scott Spence) Date: Wed, 16 Nov 2005 19:58:23 +1000 Subject: [Wikitech-l] Using page property fields in Wiki Pages or templates, e.g. the page title - is this possible? Message-ID: Dear Wiki masters, I am trying to make every page on my Wiki site link to a corresponding page title on another Wiki (I have interwiki links set-up fine). Ideally I'd like to use something like this in a template to link to the corresponding page in the OtherWiki that has the same page title: [[OtherWiki:{{{Page_Title}}}]] This would (in my ideal world) then use the title of the page that the template was called from to link to the interwiki using that page's title. Is this possible? If not using template how about in the page itself? Forgive me if this is obvious - I am just new at wiki configurations, and my research so far has turned up very little. Best wishes, Scott --- Scott.Spence at crazycolour.com CC Consulting Ltd Telephone: 0845 006 0826 (UK) Messages/Fax: 0845 280 2048 (UK) http://www.crazycolour.com/p2 - PRINCE2 Courses http://www.crazycolour.com/itil - ITIL Courses http://www.crazycolour.com/msp - Managing Successful Programmes Courses http://www.crazycolour.com/store - Best Practice & Crazy Colour Store From beesley at gmail.com Wed Nov 16 10:03:42 2005 From: beesley at gmail.com (Angela) Date: Wed, 16 Nov 2005 11:03:42 +0100 Subject: [Wikitech-l] Using page property fields in Wiki Pages or templates, e.g. the page title - is this possible? In-Reply-To: References: Message-ID: <8b722b800511160203u3bf3436es5ff9a7598daf9e58@mail.gmail.com> > I am trying to make every page on my Wiki site link to a corresponding page > title on another Wiki (I have interwiki links set-up fine). Ideally I'd like > to use something like this in a template to link to the corresponding page > in the OtherWiki that has the same page title: > > [[OtherWiki:{{{Page_Title}}}]] Wikitech-l is for technical issues on the Wikimedia sites (Wikipedia, Wikibooks etc). I think you meant to send this to the MediaWiki list at There is a {{PAGENAME}} variable that might help with what you're trying to do. [[OtherWiki:{{PAGENAME}}]] for example. If you put this at [[MediaWiki:Sitenotice]], it will appear on every page without needing to add it manually each time. Angela. 
From jooray at gmail.com Wed Nov 16 10:06:12 2005 From: jooray at gmail.com (Juraj Bednar) Date: Wed, 16 Nov 2005 11:06:12 +0100 Subject: [Wikitech-l] mediawiki dump html fix and few questions In-Reply-To: <437877E4.4080103@gmail.com> References: <437877E4.4080103@gmail.com> Message-ID: <437B0494.3010506@gmail.com> Hello, > The fix is easy (borrowed from getHashedFilename). Just adding > > $chars[$i] = strtr( $chars[$i], '/\\*?"<>|~.', '__________' ); > > to the else part of if in the for cycle fixes the problem. also : needs replacing, since it is illegal directory name under windows. This would make it: $chars[$i] = strtr( $chars[$i], '/\\*?"<>|~.:', '___________' ); Juraj. From servien at gmail.com Wed Nov 16 10:21:43 2005 From: servien at gmail.com (Servien Ilaino) Date: Wed, 16 Nov 2005 12:21:43 +0200 Subject: [Wikitech-l] Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany In-Reply-To: <849f98ed0511160144g18bd8751g@mail.gmail.com> References: <849f98ed0511152318i616ea92dj@mail.gmail.com> <437AFB2E.40700@gmail.com> <849f98ed0511160144g18bd8751g@mail.gmail.com> Message-ID: Hi, First of all Node, I've explained everything to you already, I'm not going to explain it again. Second.. there is no wikipedia with the name "Nedersaksisch" the name used is "Platd??tsch" which is in fact not the same as "Nedersaksisch", platt is spoken in Germany ONLY. You could change the name to Nedersaksisch van et Keuninkryk van Nederlaand but this is very long and unnecessary since the term "neddersass'sch" is practically not used in Germany. Node mentions Wikipedias have not been divided because of orthography, I garanty you as Veluws speakers, which is quite similar to Sallands, Drents and Stellingwerfs I cannot understand the German variaty of the "one language", sure you pick up some words or you understand it because you've learned German at school but that's why people fall back to the Dutch wikipedia they don't want to use a wikipedia they don't understand 80% of because it's in Low Saxon spoken in Germany, if you see the organisations in the Netherlands which promote the language, they only promote the variaty which is spoken in the Netherlands. It has no use discussing this now, we have had similar discussions, but you keep repeating yourself, you state things which are incorrect and ask questions which we've answered ages ago, besides this is a wikitech-l mailing list requesting creation of both wikipedias. Vlax Romany... off course who is gonna blame them? Why should you waste time on a test-wikipedia when someone might just block the whole project, rather wait 'till it's created! Servien 2005/11/16, Mark Williamson : > > Mark, > > I think you do not know the situation on the ground or you misrepresent > > this situation in a magnificent way. The nds wikipedia has been > > high-jacked by a majority who insist on a particular orthography who is > > German oriented. An orthography that is foreign to the nds as spoken in > > No -- it was hijacked by Heiko Evermann. He has recently become > inactive. REcently, most articles posted in Dutch spelling have been > allowed to remain unchanged. > > http://nds.wikipedia.org/wiki/Noord-Veluws > http://nds.wikipedia.org/wiki/Vrees_spraak > > > the Netherlands. Your assertion that there should be only one nds is > > fine but the majority who does nds does differently. The factoid that > > the Dutch dialects are considered to be part of the "Hamburg dialect" > > has nothing to do with the orthography used. 
> > Wikipedias in the past have not been divided by orthography. > > > There are two solutions to this; either there is no compulsory > > orthography on the nds.wikipedia or a second wikipedia is allowed. I do > > . . . > > Servien seems to think otherwise. He insists that his language is > different, not just spelling, but otherwise. Experts on LS that I > consulted told me this is not true. > > > think that it is not up to you to be in judgement on the creation of new > > Wikipedias. You behave like if it is your right, the fact that you have > > When did I say this should be delayed because I voted against it? I > think it should not be created because Servien never answered many of > the questions that were asked, there is still significant opposition > to the proposal, and he has shown an anti-wiki attitude on the > test-wp. I also think it should be delayed because the situation has > changed (namely, Dutch-spelt articles have been allowed to stay now). > > > a lot of bookish knowledge does not count for much. With the skills that > > Oh? > > > you have, it would be better if you helped fledgling projects in the way > > that grows the Neapolitan wikipedia. It has already some 3700 articles. > > Well, then, why waste your time writing e-mails? Why don't you go help > fledgling projects? Since, I don't spend any time doing that, > obviously. > > > works where and why. It does not suit you to be negative. Certainly not > > when you have your facts wrong. > > Which facts do I have wrong? I have been told this over and over by > Servien. He has offered nothing I don't already know. > > > And do not trash talk other people. If you want to have people think > > that you mean well, you should at least assume their good faith. > > Google "talking trash", and you'll see that I'm not talking trash. > After all that I have been through with Servien, I don't think it's > possible anymore to assume good faith. > > Even when the total vote was 16-4, he kept moving it to the Approved > requests page. > > > Thanks, > > GerardM > > I always thought it was funny that you ended even the rudest and most > blunt e-mails with such a polite phrase. > > Cheers, > Mark > > -- > "Take away their language, destroy their souls." -- Joseph Stalin > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > From gerard.meijssen at gmail.com Wed Nov 16 10:38:12 2005 From: gerard.meijssen at gmail.com (Gerard Meijssen) Date: Wed, 16 Nov 2005 11:38:12 +0100 Subject: [Wikitech-l] Why thanks :) In-Reply-To: <849f98ed0511160144g18bd8751g@mail.gmail.com> References: <849f98ed0511152318i616ea92dj@mail.gmail.com> <437AFB2E.40700@gmail.com> <849f98ed0511160144g18bd8751g@mail.gmail.com> Message-ID: <437B0C14.7010207@gmail.com> Hoi, Mark Williamson wrote: > I always thought it was funny that you ended even the rudest and most > blunt e-mails with such a polite phrase. > > Cheers, > Mark Hoi, When I end my e-mails with a "Thanks", I thank the readers for the time they spend on what I wanted to say. Particularly on mailing lists, it is a given that much of what is written is not read by all the subscribers. Even when I have something blunt to say, I appreciate it when what I have to say gets its consideration. The point of me thanking a reader is because I do not assume that I am due the attention that I get. I hope that I deserve some credit; this may have led to the reading of much of my contributions to the mailing lists. 
Assuming that I have, I have yet another reason to thank my "public". As to being rude, I try not to be. But if by being blunt I am considered rude, I consider it is an unfortunate side effect. I prefer to be understood in what I intend to express. I do not think the English that I use is bad but I know how easy it is to be misunderstood. Thanks, GerardM From dgerard at gmail.com Wed Nov 16 10:43:30 2005 From: dgerard at gmail.com (David Gerard) Date: Wed, 16 Nov 2005 10:43:30 +0000 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? Message-ID: Brion Vibber brion at pobox.com wrote: >I already told Magnus, if there's anything terribly wrong with it it'll >get fixed after it's turned on (and if necessary, back off). Indeed :-) >It's a solution in search of a problem; it doesn't solve the >validated-version-display issue in any way, it's just a survey form that >might, in theory, produce data that might, in theory, be interesting or >useful to someone one day. We (or I) have a plan to apply it. Call it a medium-term solution, trying to go straight to the deep answer :-) (I could be completely wrong and the results could be complete rubbish, of course.) >Will it be worth the trouble of turning it on and possibly having to >deal with fixing it when further problems become evident? Who knows. >> So. What's up with Special:Validate? >It's on my list for this week, I'll see about getting it turned on and >working. :-D Please let me know when you do, as I have a profound interest in this and will be watching it closely! Thanks, Brion :-) - d. From arbeo_m at yahoo.de Wed Nov 16 11:11:42 2005 From: arbeo_m at yahoo.de (Arbeo M) Date: Wed, 16 Nov 2005 12:11:42 +0100 (CET) Subject: [Wikitech-l] Re: Request for creation of Nedersaksisch; Low Saxon (NL) and Romani; Vlax Romany Message-ID: <20051116111142.77944.qmail@web25814.mail.ukl.yahoo.com> Hi everybody! I don't want to discuss any of the details regarding the two requested new Wikipedias here, because both have been discussed very thouroughly in the appropriate places and can be considered approved by the community now - if they weren't we would certainly not request their creation here. Beyond all personal opinions, there are two very simple facts: * both proposals have been discussed exhaustively and have gained support by a vast majority of users (actually, they are supported by more people than some of the new Wikipedias created recently) * both proposals have only minimal opposition against them (one anonymous user in one case, one anonymus plus one registered user in the other case) and meet all formal requirements Hence, allowing them at Wikipedia is simply a question of fairness. In my opinion, the ongoing attempts of discrediting them are plainly anti-social and contradict basic principles of the Wikipedia community ("Assume Good Faith", "Wiki-Love" ...). Everybody has the right to dislike certain ideas but such destructive behaviour is not acceptable here. Having said that, I would like to support the petition made by our fellow Wikipedians Servien and Gerard, and kindly ask for the creation of the two wikis mentioned above (ISO codes are "rmy" and "nds-NL"). Thank you very much for your consideration! Arbeo --------------------------------- Gesendet von Yahoo! 
Mail - Jetzt mit 1GB kostenlosem Speicher From wowa at jet.msk.su Wed Nov 16 14:05:37 2005 From: wowa at jet.msk.su (Vladimir Tsichevski) Date: Wed, 16 Nov 2005 17:05:37 +0300 Subject: [Wikitech-l] change text type in database from blob to text Message-ID: <437B3CB1.7000804@jet.msk.su> Hi, I wonder why all fields in the MySQL database which hold text are implemented as binary data (blobs, mediumblobs etc.). MediaWiki expects to read/write them in utf8. It may be Ok for MediaWiki, but it is VERY inconvenient when it comes to accessing the data from other tools. I've managed to recreate my db just by replacing all `blob' occurences by `text'. My wiki operates still the same, by now I can read Russian text in db with any tool. Regards, Vladimir From branej at gmail.com Wed Nov 16 13:57:54 2005 From: branej at gmail.com (=?UTF-8?B?QnJhbmlzbGF2IEpvdmFub3ZpxIc=?=) Date: Wed, 16 Nov 2005 14:57:54 +0100 Subject: [Wikitech-l] sr.wiki has been waiting for months! In-Reply-To: <51dd1af80511152154i3e8df213u56a15270c0af240f@mail.gmail.com> References: <849f98ed0511141545i62c8b6e8m@mail.gmail.com> <51dd1af80511141957o67fe49cdtc31cb97dbed21e42@mail.gmail.com> <2505d1590511150902x265eb532m70e7a9e49a4da17e@mail.gmail.com> <51dd1af80511152154i3e8df213u56a15270c0af240f@mail.gmail.com> Message-ID: <437B3AE2.2080403@gmail.com> ?var Arnfj?r? Bjarmason wrote: > Please open a bug in bugzilla for this and attach the files (and > unified diffs where appropriate) to it. Done. Bug #3993 http://bugzilla.wikimedia.org/show_bug.cgi?id=3993 -- Brane Jovanovic From usenet at tonal.clara.co.uk Wed Nov 16 14:34:44 2005 From: usenet at tonal.clara.co.uk (Neil Harris) Date: Wed, 16 Nov 2005 14:34:44 +0000 Subject: [Wikitech-l] Re: Status of article rating feature? In-Reply-To: <437AAB05.9090501@pobox.com> References: <437AAB05.9090501@pobox.com> Message-ID: <437B4384.5000105@tonal.clara.co.uk> Brion Vibber wrote: > David Gerard wrote: > >> That's because the maps are marked "ALPHA QUALITY - NOT FINAL >> PRODUCTION VERSIONS" and are written in pencil. Never mind they're in >> the top-40 maps in the world - the project has peaked way too early. >> >> Damn we need the rating feature. What's holding it up right now? List >> please, referring to current version of code. (I know the servers are >> creaking ...) >> > > Keep in mind that the cute little survey thing doesn't have anything to > do with a system for marking particular page revisions as 'public-ready' > or 'approved' in any useful way. > > That's an entirely unrelated and separate issue (and one the project > actually needs), which Special:Validate (which might, hypothetically, > produce 'interesting data' of some sort to some one) does nothing to > help directly. > > -- brion vibber (brion @ pobox.com) > > I disagree. I think it is a mistake to dismiss the article rating system _a priori_ as not being useful in the effort to create Wikipedia 1.0. This certainly is not the opinion of many participants on this list. In my opinion, a Nupedia-style review procedure for every single article clearly can't possibly scale to rate almost a million articles in any reasonable timescale -- the painfully slow featured article process, or the conflict-ridden AfD process are good examples of the limitations of formal article-rating systems. However, collective labelling systems certainly can work; look at the category system, for example, which is almost magically self-organizing, or Google, which uses simple link graph adjacency to do much of its work. 
An article rating system can sort the possible wheat from the obvious chaff, with a formal review system then being applied to those articles where there is either a significant spread of article ratings, or where an editor or external source has explicitly asserted significant inaccuracy, insufficiency or bias in an article (and category tagging could certainly be a useful way to indicate this). By doing this, the formal review process can be applied in the places it's most needed. If there are performance problems with writing and reading the database, why not during the initial blind rating period simply write the ratings using syslog over UDP, and allow (possibly anonymized) versions of these flat-text logfiles to be downloaded by whoever is interested in analyzing the results? Comma-separated variable files with one line per entry would do just fine, and the occasional loss of a rating due to packet loss would matter very little, providing that it is uncorrelated with the contents of the ratings. This would require very little extra server load, and minimal rewriting of the ratings code, and could be made more scalable very quickly, by -- for example -- running multiple logging processes on multiple servers, and sending them the ratings results at random. I'm sure that a great many interested parties would be more than glad to data-mine the results for us. As David Gerard has remarked, most of the obvious objections to ratings schemes, such as trolling or rating advocacy campaigns by article supporters or opponents, can be addressed quite easily with quite simple statistical analysis techniques (for example, see the analysis techniques used to mine the Seti at Home data after it's been number-crunched). Incidentally, a very similar technique could be used to analyze server traffic; just syslog an entry (to a different log, of course) for one in every 100, or one in every 1000, server hits. -- Neil From alfio.puglisi at gmail.com Wed Nov 16 14:55:33 2005 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Wed, 16 Nov 2005 15:55:33 +0100 Subject: [Wikitech-l] Re: Status of article rating feature? In-Reply-To: <437B4384.5000105@tonal.clara.co.uk> References: <437AAB05.9090501@pobox.com> <437B4384.5000105@tonal.clara.co.uk> Message-ID: <4902d9990511160655y21d9e453v380ecbc21cbfc960@mail.gmail.com> On 11/16/05, Neil Harris wrote: > I disagree. I think it is a mistake to dismiss the article rating system > _a priori_ as not being useful in the effort to create Wikipedia 1.0. > This certainly is not the opinion of many participants on this list. I second this. I would also like to stress that, while the current validation feature may not be optimal, switching it on would allow us to learn lessons and improve it. At the very least, assessed strenghts and weaknesses can be useful in designing the next validation feature - you know, the real one that will lead to 1.0 :-) Alfio From ilia at prohost.org Wed Nov 16 15:36:15 2005 From: ilia at prohost.org (Ilia Alshanetsky) Date: Wed, 16 Nov 2005 10:36:15 -0500 Subject: [Wikitech-l] PHP 5.1.0RC5 testing Message-ID: <437B51EF.1000603@prohost.org> Hello! You are receiving this email because your project has been select to take part in a new effort by the PHP QA Team to make sure that your project still works with PHP versions to-be-released. With this we hope to make sure that you are either aware of things that might break, or to make sure we don't introduce any strange regressions. 
With this effort we hope to build a better relation between the PHP Team and the major projects. If you do not want to receive these heads-up emails, please reply to me personally and I will remove you from the list; but, we hope that you want to actively help us making PHP a better and more stable tool. The fifth and hopefully the final RC of PHP 5.1.0 has released today, it can be downloaded from http://downloads.php.net/ilia/. Since this is the final release, we ask you to test it extensively with your software to ensure that no regressions have occurred. If you discover any (we hope not) please notify PHP's QA team at "php-qa at lists.php.net". In case you think that other projects should also receive this kinds of emails, please let me know privately, and I will add them to the list of projects to contact. regards, Ilia Alshanetsky 5.1 Release Master From Migdejong at hotmail.com Wed Nov 16 19:19:16 2005 From: Migdejong at hotmail.com (Mig) Date: Wed, 16 Nov 2005 19:19:16 +0000 (UTC) Subject: [Wikitech-l] Re: {{CURRENTSECOND}} References: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com> <8b722b800511071705y5a72604dp469bd0dc5c913e9d@mail.gmail.com> <8b722b800511152128o79b8376dq8e3656c82c706dce@mail.gmail.com> Message-ID: You're rigth. This time (on another PC) it did work. Why is there such a big difference? On the first pc it showed all the time 3 or 4, 5 or 6 and 3 or 4. It seemed a bit weird like that. So how do I install this, and where? I'm a complete noob on this... Mig From ssamoylov at gmail.com Wed Nov 16 22:27:29 2005 From: ssamoylov at gmail.com (Sergey Samoylov) Date: Thu, 17 Nov 2005 01:27:29 +0300 Subject: [Wikitech-l] Help with wiktionary Message-ID: <5470f7ce0511161427n1954e386q86b641245d664958@mail.gmail.com> Hi there... I've just downloaded wiktionary dump. And wikimedia 4.12. There is a problem that wiki code is redirected url with lower-case first character in search phrase to upper-case first character. For examle if I type http://localhost/wiki412/index.php/true it's redirected to http://localhost/wiki412/index.php/True So I cannot reach the first url with world "true". What should I do for fix it. Thank you Sergey From brion at pobox.com Wed Nov 16 22:05:24 2005 From: brion at pobox.com (Brion Vibber) Date: Wed, 16 Nov 2005 14:05:24 -0800 Subject: [Wikitech-l] change text type in database from blob to text In-Reply-To: <437B3CB1.7000804@jet.msk.su> References: <437B3CB1.7000804@jet.msk.su> Message-ID: <437BAD24.8080802@pobox.com> Vladimir Tsichevski wrote: > I wonder why all fields in the MySQL database which hold text are > implemented as binary data (blobs, mediumblobs etc.). MediaWiki expects > to read/write them in utf8. They may contain compressed data or other binary data. > It may be Ok for MediaWiki, but it is VERY inconvenient when it comes to > accessing the data from other tools. Fix your broken tools. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Wed Nov 16 23:43:37 2005 From: brion at pobox.com (Brion Vibber) Date: Wed, 16 Nov 2005 15:43:37 -0800 Subject: [Wikitech-l] Help with wiktionary In-Reply-To: <5470f7ce0511161427n1954e386q86b641245d664958@mail.gmail.com> References: <5470f7ce0511161427n1954e386q86b641245d664958@mail.gmail.com> Message-ID: <437BC429.8000608@pobox.com> Sergey Samoylov wrote: > Hi there... 
> > I've just downloaded wiktionary dump. And wikimedia 4.12. > There is a problem that wiki code is redirected url with lower-case first > character in search phrase to upper-case first character. > > For examle if I type > http://localhost/wiki412/index.php/true it's redirected to > http://localhost/wiki412/index.php/True > > So I cannot reach the first url with world "true". What should I do for fix > it. $wgCapitalLinks = false; -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From t.starling at physics.unimelb.edu.au Thu Nov 17 05:23:03 2005 From: t.starling at physics.unimelb.edu.au (Tim Starling) Date: Thu, 17 Nov 2005 16:23:03 +1100 Subject: [Wikitech-l] Re: Help with wiktionary In-Reply-To: <5470f7ce0511161427n1954e386q86b641245d664958@mail.gmail.com> References: <5470f7ce0511161427n1954e386q86b641245d664958@mail.gmail.com> Message-ID: Sergey Samoylov wrote: > Hi there... > > I've just downloaded wiktionary dump. And wikimedia 4.12. 4.12? I thought Wikimedia was still in beta! It still has that ad-hoc management structure, I think I filed a bug report about that a while back. -- Tim Starling From jillian.goldie at gmail.com Thu Nov 17 06:16:06 2005 From: jillian.goldie at gmail.com (Jillian Waldman) Date: Thu, 17 Nov 2005 01:16:06 -0500 Subject: [Wikitech-l] Re: {{CURRENTSECOND}} In-Reply-To: References: <167a3d3b0511070441g16452f53k11eac8ffb667f41d@mail.gmail.com> <8b722b800511071705y5a72604dp469bd0dc5c913e9d@mail.gmail.com> <8b722b800511152128o79b8376dq8e3656c82c706dce@mail.gmail.com> Message-ID: <87f54e840511162216r50a859bay672045130477b416@mail.gmail.com> Copy the text from the extension (http://meta.wikimedia.org/wiki/User:Algorithm/RandomSelection) into a file called RandomSelection.php in docroot/extensions. (Depending on your wiki version, you might need the older version of the extension -- I did.) Then add the line require_once( "extensions/RandomSelection.php" ); to your LocalSettings.php and everything should just work. Use the same markup as the examples given earlier in this thread to use the random selection function. I've just used this to enable random quotes to be shown at the top of my Special:Recentchanges, which was a silly cosmetic feature I missed from MoinMoin. Yay. Jillian On 11/16/05, Mig wrote: > You're rigth. This time (on another PC) it did work. Why is there such a big > difference? On the first pc it showed all the time 3 or 4, 5 or 6 and 3 or 4. > It seemed a bit weird like that. > So how do I install this, and where? I'm a complete noob on this... > > Mig > > > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > From kornerstone at gmail.com Thu Nov 17 00:34:38 2005 From: kornerstone at gmail.com (J S) Date: Wed, 16 Nov 2005 16:34:38 -0800 Subject: [Wikitech-l] Upgrade steps from 1.4.10 to 1.5.x Message-ID: Hi, Does anyone have the steps to upgrade from mediwiki 1.4.10 to 1.5.x. Details would be appreciated. 
Thanks in advance, ~John From ccanddyy at yahoo.com Thu Nov 17 01:08:34 2005 From: ccanddyy at yahoo.com (candy) Date: Wed, 16 Nov 2005 17:08:34 -0800 Subject: [Wikitech-l] Help needed in page rendering Message-ID: hi all, Can somebody help me with the folowing : In wikipedia when we view the page of an article, it displays the content of the page and not the metadata such as author(s),page timestamps etc. A part of these metadata is visible in the history section where it displays all the page revisions. I want to change the rendering of the page such that we have a new tab (say metadata) where all the metadata information regarding that page is displayed. I would like to know the procedure of how this can be done. I am using mediawiki1.5 and have already tried hacking through the code. I have a vague idea but its not very clear and concrete. Your advise,instruction or help will be highly appreciated. Thanking you, C From nospam-abuse at bloodgate.com Thu Nov 17 11:02:17 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Thu, 17 Nov 2005 12:02:17 +0100 Subject: [Wikitech-l] Extensions that return non-wikitext Message-ID: <200511171202.29000@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, my graph extension (see http://bloodgate.com/perl/graph) returns either HTML, or SVG. However, mediawiki treats the returned text as wikitext, e.g. it parses it, attempts transform it into HTML and the sanitizes it. The latter step should be done for security reasons - after all, my extension processes user input and I wouldn't trust it completely to always return well-formed HTML. However, I would like to somehow annouce that my extension already returns HTML. At the moment the returned text cannot contain leading spaces or empty line because these will be turned into "
<pre>" or "<p>", completely 
destroying the output.

- From the comment in the example extension (which I based mine on) I read 
that the extension should return HTML, but it seems the returned text 
will be treated as wikitext and not just HTML. Is it possible to change 
this? 

(I have the feeling that I already asked that, but couldn't find it 
again :)

Another, related problem is SVG output; this too is treated first as 
wikitext, then as HTML, and in this process the output is destroyed. Now 
it would be possible to handle SVG output like <math> does, e.g. 
producing an external file and then referencing it. However, I would 
prefer to generate inline SVG. For this to work the output must be passed 
unaltered, because the HTML sanitizer seems to not like SVG at all (no 
surprise :)

Has anybody done something in this regard?

The code for my extension is inlined below, you can find the complete 
package at my site mentioned above.

If this isn't the right place to ask this question, please kindly redirect 
me.

Best wishes,

Tels

<?php
# An extension that takes the text between <graph>...</graph> tags,
# and runs it through the external script "graphcnv", which generates
# an ASCII, HTML or SVG graph from it.

$wgExtensionFunctions[] = "wfGraphExtension";

function wfGraphExtension() {
    global $wgParser;

    # register the extension with the WikiText parser;
    # the second parameter is the callback function for processing
    # the text between the tags
    $wgParser->setHook( "graph", "renderGraph" );
}

# for Special:Version:

$wgExtensionCredits['parserhook'][] = array(
        'name' => 'graph extension',
        'author' => 'Tels',
        'url' => 'http://wwww.bloodgate.com/perl/graph/',
        'version' => 'v0.13 using Graph::Easy v' .
            `perl -MGraph::Easy -e 'print $Graph::Easy::VERSION'`,
);

# The callback function for converting the input text to HTML output
function renderGraph( $input ) {
    global $wgInputEncoding;

    if( !is_executable( "graph/graphcnv" ) ) {
        return "graph/graphcnv is not executable";
    }

    $cmd = "graph/graphcnv " .
           escapeshellarg( $input ) . " " .
           escapeshellarg( $wgInputEncoding );
    $output = `$cmd`;

    if( strlen( $output ) == 0 ) {
        return "Couldn't execute graph/graphcnv";
    }

    return $output;
}
?>

- -- 
 Signed on Thu Nov 17 11:52:40 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "Some spammers have this warped idea that their freedom of speech is
 guaranteed all the way into my hard drive, but it is my firm belief that
 their rights end at my firewall." -- Nigel Featherston

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ3xjQncLPEOTuEwVAQEu1Qf8CbyBzGmmQBsKnlIyLhZ6XGbYved014TE
DgRJSuIO7zP04q1W7wu7WuzPuTeH2kB6fcYoANEoV4YzEL4vpWA9tpscvBcAYViO
WtKi/+f3KSoPJLmFqQckuiRWz4jHa5SBRJKzcwX8XchrXVXojxZ7xrbEZaOl896g
ehBUANxeBL/wcByBTJY1gShZ2f6N6GlFW/3/ERLbhRsC8qvKe0fkbDFs6w9C8KWN
4/VStDOTQ3hu0dMt6q6s82psKi+jtRb7asNAaxmGsie3w6tX6Q1jEwMo8fdsfv5X
0ASy2sol+zFWaR19BbgXM3cS6nOa75u0GMEsfmXRrTqmD9GjnJwbzA==
=dBzA
-----END PGP SIGNATURE-----


From erchache2000.enciclopedia at gmail.com  Thu Nov 17 12:33:19 2005
From: erchache2000.enciclopedia at gmail.com (erchache2000)
Date: Thu, 17 Nov 2005 13:33:19 +0100
Subject: [Wikitech-l] Upgrade steps from 1.4.10 to 1.5.x
In-Reply-To: 
References: 
Message-ID: <437C788F.20903@gmail.com>

J S wrote:

>Hi,
>
>    Does anyone have the steps to upgrade from mediwiki 1.4.10 to
>1.5.x. Details would be appreciated.
>
>Thanks in advance,
>~John
>_______________________________________________
>Wikitech-l mailing list
>Wikitech-l at wikimedia.org
>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>
>  
>
Here it is: http://meta.wikimedia.org/wiki/Help:Upgrading_MediaWiki.

Feel free to add your own experience.

Good Luck.


From sabine_cretella at yahoo.it  Thu Nov 17 16:27:22 2005
From: sabine_cretella at yahoo.it (Sabine Cretella)
Date: Thu, 17 Nov 2005 17:27:22 +0100
Subject: [Wikitech-l] Translating Wikipedia contents with OmegaT - technical
 help needed
Message-ID: <437CAF6A.20606@yahoo.it>

Well, we are building the NAP Wikipedia, and of course there are parts 
where one can easily transfer data by just translating it from one 
Wikipedia to the other. In this case we have already uploaded the Calendar - 
and now it would make sense to transfer the content of the Italian 
Wikipedia there by translating it - the people and events stay the same. So 
what I would now like to achieve is:

1) having a dump from the Italian wikipedia
2) extracting all pages of the calendar
3) translate them with the help of OmegaT

Why OmegaT? Well, in the sections "Born" and "Died", after the name of the 
person you very often find just "actor, actress, writer, politician" or 
whatever - this means that there would be quite a lot of 100% matches 
and the translation would be much faster with the tool than without.

These lines all are created like this:
*[[name of the person]] description

Now, to be able to get 100% matches, I need at least a line 
break after
*[[name of the person]]
which needs to be taken out again once the file has been translated.

Then in a second step, with the help of a bot, the translated parts can 
be transferred into the articles.

Well: what I now need is some advice on how to have this done - and then 
what can be done for Neapolitan can easily be repeated for other languages.

This means I need some help to get this regular expression ... I mean 
some code that runs through the data, inserts the line break and after 
translation takes it out again.

Who can help me with this? Btw. the TMX (translation memory) is going to 
be available under GFDL for anyone - well this should be obvious.

This text was originally posted here (in order to allow collecting 
how-tos): 
http://www.wesolveitnet.com/modules/newbb/viewtopic.php?topic_id=83&post_id=117&order=0&viewmode=flat&pid=0&forum=37#forumpost117

Thank you!!!

Ciao, Sabine

*****
Sabine Cretella
http://www.wordsandmore.it
s.cretella at wordsandmore.it
skype: sabinecretella


		
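
A minimal sketch of the pre/post-processing pass Sabine asks for (PHP, 
untested; it assumes every calendar line really has the shape 
"*[[name of the person]] description", and the function names are made 
up for illustration):

<?php
# Before translation: move the description onto its own line, so that
# OmegaT sees "*[[name of the person]]" as a separate, repeating segment
# and the short descriptions become cheap 100% matches.
function splitCalendarLines( $text ) {
    return preg_replace( '/^(\*\s*\[\[[^\]]+\]\])[ \t]*/m', "$1\n", $text );
}

# After translation: glue the (translated) description back onto the
# link line. The [[name]] part is untouched by translation, so the same
# pattern still matches.
function joinCalendarLines( $text ) {
    return preg_replace( '/^(\*\s*\[\[[^\]]+\]\])\n/m', '$1 ', $text );
}
?>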


From galwaygirl at xs4all.nl  Thu Nov 17 17:28:01 2005
From: galwaygirl at xs4all.nl (Galwaygirl)
Date: Thu, 17 Nov 2005 18:28:01 +0100
Subject: [Wikitech-l] Dutch Main Page
Message-ID: <437CBDA1.4050502@xs4all.nl>

Hi all,

On the Dutch Wikipedia we have a recurring discussion about our new main 
page. It loads very slowly at times. Some claim it's because there are too 
many images and templates in it, some claim it's because the servers are 
slow, some claim it's due to both factors.

What do experts think? ;-)

http://nl.wikipedia.org/wiki/Hoofdpagina
http://nl.wikipedia.org/wiki/Sjabloon:Inhoud (Template, versions 5 nov 
2005 13:52 and up)

Btw: User:Waerth has even gone on strike until the old main page is put 
back into place... (http://nl.wikipedia.org/wiki/Gebruiker:Waerth)

Thanks,


Galwaygirl


From sabine_cretella at yahoo.it  Thu Nov 17 17:38:40 2005
From: sabine_cretella at yahoo.it (Sabine Cretella)
Date: Thu, 17 Nov 2005 18:38:40 +0100
Subject: [Wikitech-l] Dutch Main Page
In-Reply-To: <437CBDA1.4050502@xs4all.nl>
References: <437CBDA1.4050502@xs4all.nl>
Message-ID: <437CC020.7010109@yahoo.it>

Hi!

Well considering the general slowness during the last days: I just 
accessed the main page - it loads quicker than the Neapolitan one.

I don't think that there are real problems with that page - and to tell 
the truth: I like it - where did you get the icons from?

Ciao, Sabine

Galwaygirl wrote:

> Hi all,
>
> On the Dutch Wikipedia we have a recurring discussion about our new 
> main page. It loads very slowly at times. Some claim it's because 
> there's too many images and templates in it, some claim it's because 
> servers are slow, some claim it's due to both factors.
>
> What do experts think? ;-)
>
> http://nl.wikipedia.org/wiki/Hoofdpagina
> http://nl.wikipedia.org/wiki/Sjabloon:Inhoud (Template, versions 5 nov 
> 2005 13:52 and up)
>
> Btw: User:Waerth has even gone on strike until the old main page is 
> put back into place... (http://nl.wikipedia.org/wiki/Gebruiker:Waerth)
>
> Thanks,
>
>
> Galwaygirl 



	

	
		


From valdelli at gmail.com  Thu Nov 17 18:43:53 2005
From: valdelli at gmail.com (Ilario Valdelli)
Date: Thu, 17 Nov 2005 19:43:53 +0100
Subject: [Wikitech-l] Dutch Main Page
In-Reply-To: <437CBDA1.4050502@xs4all.nl>
References: <437CBDA1.4050502@xs4all.nl>
Message-ID: <437CCF69.1020206@gmail.com>

Based on my knowledge of PHP I can say that the templates are not very 
slow. I have programmed pages with a large number of templates without problems.

The problem more likely lies in the connection to the DB and in the stack, 
which allows only a limited number of concurrent connections.

Ilario

Galwaygirl wrote:

> Hi all,
>
> On the Dutch Wikipedia we have a recurring discussion about our new 
> main page. It loads very slowly at times. Some claim it's because 
> there's too many images and templates in it, some claim it's because 
> servers are slow, some claim it's due to both factors.
>
> What do experts think? ;-)
>
> http://nl.wikipedia.org/wiki/Hoofdpagina
> http://nl.wikipedia.org/wiki/Sjabloon:Inhoud (Template, versions 5 nov 
> 2005 13:52 and up)
>
> Btw: User:Waerth has even gone on strike until the old main page is 
> put back into place... (http://nl.wikipedia.org/wiki/Gebruiker:Waerth)
>
> Thanks,
>
>
> Galwaygirl
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>


From chamdalae at dreamwiz.com  Thu Nov 17 19:47:28 2005
From: chamdalae at dreamwiz.com (Richard Austin)
Date: Thu, 17 Nov 2005 19:47:28 +0000 (UTC)
Subject: [Wikitech-l] Request for the creation of Samogitian and Banyumasan
	wikipedias
References: 
Message-ID: 

Servien Ilaino  writes:

> 
> Hi,
> 
> I would like to request the creation of Nedersaksisch; Low Saxon (NL)
> as soon as possible, it has been discussed for over 5 months now, most
> things have been cleared up and most people agree the wiki should be
> created soon, for the benefit of the Low Saxon community in the
> Netherlands and I guess to wikipedia. There are 2 oppose votes, one
> from Node ue (who deletes the request from the approved page almost
> daily) and has been put on again by various users, the second user is
> a non-active anonymous user at meta-wiki.
> 
> Romani; Vlax Romany has 1 "temporary against vote" which is also a
> non-active anonymous user at meta-wiki. The majority however supports
> the creation, and should be created as soon as possible sothat the
> Romani-community can also start with their own wikipedia.
> 
> Summary:
> 
> 1. Nedersaksisch
> 2. Romani
> 
> More info can be found at:
> http://meta.wikimedia.org/wiki/Approved_requests_for_new_languages
> 
> or at: http://meta.wikimedia.org/wiki/Template:Requests for new
> languages/nds-nl and http://meta.wikimedia.org/wiki/Template:Requests
> for new languages/Vlax Romany.
> 
> Regards,
> Servien Ilaino
> 

In addition to these requests there are two others which were approved about two
months ago - Samogitian and Banyumasan. The main delay has apparently been
because of a lack of codes, but binominal codes have been suggested. These have
been used in the past, and are recommended in this situation in the instructions
on the requests page itself.

The proposers are waiting anxiously, especially the Banyumasan group. They have
put a lot of work into building their test-wiki, and are getting concerned about
what will happen to this if they don't end up having a wikipedia.

Also we are currently discussing the future policy for starting new wikis. Some
users (myself included) would prefer to have wikipedias created for requests
that have already been approved before finalising this policy. It will be easier
to discuss possible changes to this policy in this situation, since we don't
have to worry about whether these changes affect requests which have already
been approved. We need to get this policy finalised as soon as possible, but I
don't think that can really happen while we have approved requests waiting.

Richard



From usenet at tonal.clara.co.uk  Thu Nov 17 19:55:31 2005
From: usenet at tonal.clara.co.uk (Neil Harris)
Date: Thu, 17 Nov 2005 19:55:31 +0000
Subject: [Wikitech-l] http://www.splammer.com/ a real-time proxy?
Message-ID: <437CE033.8040301@tonal.clara.co.uk>

The Wikipedia clone site http://www.splammer.com/ appears to me to be 
copying Wikipedia content in real time, apparently by reading the raw 
Wikitext, rather than running from a dump.

Could someone take a look to see whether this is the case?

-- Neil



From gmaxwell at gmail.com  Thu Nov 17 22:19:13 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Thu, 17 Nov 2005 17:19:13 -0500
Subject: [Wikitech-l] Subject: cookie based sockcheck,
	a prelude to cookie based blocking
Message-ID: 

There are situations where IP-based blocking is overbroad (many users
behind a proxy) and situations where it is ineffective (the user can
change IP). As a result, some people have thought it desirable to be
able to block users based on a cookie, which although not foolproof
itself would be a useful additional tool.

I'd like to propose we implement half of that to gain something which
is useful right away but would require almost no work: Cookie based
sockcheck.

When a user edits, we request a cookie "usertoken" or whatever. If
they do not have one, we generate a long random number and give them
one. Every edit made by that browser (no matter which user is logged
in) the cookie is returned. We add an extra column to recent changes
to store this value.

A new version of sockcheck is produced that finds users who share
revisions with the same token, much like we can do with IPs already.
Voilà, cookie-based sockcheck.
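
(A minimal sketch of the idea, purely for illustration - the helper
name, cookie name and rc_usertoken column are assumptions from this
proposal, not existing MediaWiki code:)

  // Give every browser a long-lived random "usertoken" cookie.
  function wfGetUserToken() {
      if ( isset( $_COOKIE['usertoken'] ) ) {
          return $_COOKIE['usertoken'];
      }
      // 32 hex characters of randomness: hard to guess, easy to store.
      $token = md5( uniqid( mt_rand(), true ) );
      setcookie( 'usertoken', $token, time() + 365 * 86400, '/' );
      return $token;
  }

  // On save, record the token alongside the edit, e.g.:
  // $dbw->insert( 'recentchanges',
  //     array( /* usual fields */, 'rc_usertoken' => wfGetUserToken() ) );

Sockcheck would then just group recentchanges rows by that column, the
same way it already groups by IP.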

Thoughts?


From astronouth7303 at gmail.com  Thu Nov 17 22:21:54 2005
From: astronouth7303 at gmail.com (Jamie Bliss)
Date: Thu, 17 Nov 2005 17:21:54 -0500
Subject: [Wikitech-l] Re: http://www.splammer.com/ a real-time proxy?
In-Reply-To: <437CE033.8040301@tonal.clara.co.uk>
References: <437CE033.8040301@tonal.clara.co.uk>
Message-ID: 

Neil Harris wrote:
> The Wikipedia clone site http://www.splammer.com/ appears to me to be 
> copying Wikipedia content in real time, apparently by reading the raw 
> Wikitext, rather than running from a dump.
> 
> Could someone take a look to see whether this is the case?

Well, "real time" isn't the term I would use, since it's been about an 
hour and changes haven't shown up yet (testing with [[openFIRST]]).

It appears to use Special:Export.

It is also a very poor copy, IMHO.

-- Jamie
-------------------------------------------------------------------
http://endeavour.zapto.org/astro73/
Thank you to JosephM for inviting me to Gmail!
Have lots of invites. Gmail now has 2GB.




From astronouth7303 at gmail.com  Thu Nov 17 22:42:40 2005
From: astronouth7303 at gmail.com (Jamie Bliss)
Date: Thu, 17 Nov 2005 17:42:40 -0500
Subject: [Wikitech-l] Re: http://www.splammer.com/ a real-time proxy?
In-Reply-To: <437CE033.8040301@tonal.clara.co.uk>
References: <437CE033.8040301@tonal.clara.co.uk>
Message-ID: 

Also, some sites using the same server [207.210.218.170]:
* thecorporatepage.com
* collectibleboats.com
* thelegalpage.com
* collectiblehomes.com
* freedomsailing.com
* thelegalreport.com

-- Jamie
-------------------------------------------------------------------
http://endeavour.zapto.org/astro73/
Thank you to JosephM for inviting me to Gmail!
Have lots of invites. Gmail now has 2GB.



From walter at wikipedia.be  Thu Nov 17 23:24:09 2005
From: walter at wikipedia.be (Walter Vermeir)
Date: Fri, 18 Nov 2005 00:24:09 +0100
Subject: [Wikitech-l] Re: Subject: cookie based sockcheck,
 a prelude to cookie based blocking
In-Reply-To: 
References: 
Message-ID: 

Gregory Maxwell schreef:
[cut]
> Thoughts?

Any system that gives us a less crude way of blocking users than the
current one is welcome.

http://bugzilla.wikimedia.org/show_bug.cgi?id=550

In June, Tony Sidaway already had a working MediaWiki test wiki where
IP blocks did not apply to logged-in users.

https://elektra.homeunix.org/testwiki/Main_Page

-- 
Ook een artikeltje schrijven? WikipediaNL, de vrije GNU/FDL encyclopedie
http://www.wikipedia.be



From brion at pobox.com  Fri Nov 18 00:17:40 2005
From: brion at pobox.com (Brion Vibber)
Date: Thu, 17 Nov 2005 16:17:40 -0800
Subject: [Wikitech-l] Extensions that return non-wikitext
In-Reply-To: <200511171202.29000@bloodgate.com>
References: <200511171202.29000@bloodgate.com>
Message-ID: <437D1DA4.5080600@pobox.com>

Tels wrote:
> my graph extension (see http://bloodgate.com/perl/graph) returns either 
> HTML or SVG. However, MediaWiki treats the returned text as wikitext, 
> e.g. it parses it, attempts to transform it into HTML and then sanitizes it.

That would be very unusual if true; extensions' HTML output is
reinserted into final output in a very late stage of parsing, at
blocklevels iirc.

Avoid extra whitespace and blank lines in your output.
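
(A rough sketch of that advice for a 1.5-era tag extension - the hook
signature is approximate and myGraphToHtml() is a hypothetical
renderer, not a real function:)

  $wgParser->setHook( 'graph', 'renderGraphTag' );

  function renderGraphTag( $input, $argv ) {
      $html = myGraphToHtml( $input ); // hypothetical renderer
      // Leading whitespace turns into preformatted text and blank lines
      // start new paragraphs when the parser post-processes the page,
      // so strip both before handing the HTML back.
      $html = preg_replace( '/^[ \t]+/m', '', $html );
      $html = preg_replace( '/\n{2,}/', "\n", $html );
      return $html;
  }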

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From brion at pobox.com  Fri Nov 18 00:28:49 2005
From: brion at pobox.com (Brion Vibber)
Date: Thu, 17 Nov 2005 16:28:49 -0800
Subject: [Wikitech-l] Re: http://www.splammer.com/ a real-time proxy?
In-Reply-To: 
References: <437CE033.8040301@tonal.clara.co.uk> 
Message-ID: <437D2041.5050002@pobox.com>

Jamie Bliss wrote:
> Also, some sites using the same server [207.210.218.170]:
> * thecorporatepage.com
> * collectibleboats.com
> * thelegalpage.com
> * collectiblehomes.com
> * freedomsailing.com
> * thelegalreport.com

Blocked IP.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From norial at gmx.de  Fri Nov 18 09:29:04 2005
From: norial at gmx.de (norial)
Date: Fri, 18 Nov 2005 10:29:04 +0100
Subject: [Wikitech-l] DokuWiki -> MediaWiki Converter
Message-ID: <437D9EE0.7060108@gmx.de>

Hello,

I would like to import data from DokuWiki (http://wiki.splitbrain.org/wiki:dokuwiki) 
into MediaWiki. The data is stored as UTF-8 text files. Is there already 
a converter for something like this?
My idea is to import directly into the MySQL database. What I am mainly 
looking for is a routine that writes into MediaWiki, ideally through a 
function like insertNewArticle. As far as I can see, the PHP script would 
only have to load localsettings.php, connect to the database, log in an 
admin, and then pass in and write the text, summary and author. I would 
of course add the routine for reading out the data myself.
Can anyone help me with this?

Regards
  andreas


From norial at gmx.de  Fri Nov 18 09:50:14 2005
From: norial at gmx.de (norial)
Date: Fri, 18 Nov 2005 10:50:14 +0100
Subject: [Wikitech-l] DokuWiki -> MediaWiki Converter
Message-ID: <437DA3D6.50003@gmx.de>

Hello

sorry, English is the common language for this list; ignore my previous 
posting.
I would like to import wiki entries created with DokuWiki 
(http://wiki.splitbrain.org/wiki:dokuwiki) into the MediaWiki database. 
The data is stored in UTF-8 text files. Is a converter already 
available?
I think the easiest way could be a direct import into the MySQL 
database. So I need a small script that uses the functions of MediaWiki, 
like insertNewArticle. The script would have to include 
localsettings.php, connect to the database, log in an admin and insert my 
text as a new page (text, summary, ...).

thanks in advance
  andreas


From f-x.p at laposte.net  Fri Nov 18 09:45:47 2005
From: f-x.p at laposte.net (FxParlant)
Date: Fri, 18 Nov 2005 10:45:47 +0100
Subject: [Wikitech-l] links in template headache
Message-ID: 

Thanks Brion for reminding me that
http://foo is replaced with a link.

But, sorry for asking again: is there a proper way to get this kind of
link in a template to work (MediaWiki 1.5.0)?




In mw1.3, I hacked the regexp in Parser.php a bit so that anything
beginning with an '=' was left out of attribute replacement.

Here (mw1.5.0), I tried the simple  surrounding, but with
no success.

I'm really not sure there is a way to write this in wiki language, but
if someone knows...

Thanks for any solution

Fran?ois




From dgerard at gmail.com  Fri Nov 18 10:47:56 2005
From: dgerard at gmail.com (David Gerard)
Date: Fri, 18 Nov 2005 10:47:56 +0000
Subject: [Wikitech-l] Subject: cookie based sockcheck,
	a prelude to cookie based blocking
Message-ID: 

Gregory Maxwell wrote:

>When a user edits, we request a cookie, "usertoken" or whatever. If
>they do not have one, we generate a long random number and give them
>one. With every edit made by that browser (no matter which user is logged
>in), the cookie is returned. We add an extra column to recent changes
>to store this value.
>A new version of sockcheck is produced that finds users who share
>revisions with the same token, much like we can do with IPs already.
>Voilà, cookie-based sockcheck.
>Thoughts?


Can a cookie carry between different IPs for the same browser? i.e.,
user hangs up and dials again for a different IP?


- d.


From magnus.manske at web.de  Fri Nov 18 12:48:46 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Fri, 18 Nov 2005 13:48:46 +0100
Subject: [Wikitech-l] DokuWiki -> MediaWiki Converter
In-Reply-To: <437DA3D6.50003@gmx.de>
References: <437DA3D6.50003@gmx.de>
Message-ID: <437DCDAE.20202@web.de>

norial wrote:

> Hello
>
> sorry, English is the common language for this list; ignore my previous
> posting.
> I would like to import wiki entries created with DokuWiki
> (http://wiki.splitbrain.org/wiki:dokuwiki) into the MediaWiki database.
> The data is stored in UTF-8 text files. Is a converter already
> available?
> I think the easiest way could be a direct import into the MySQL
> database. So I need a small script that uses the functions of
> MediaWiki, like insertNewArticle. The script would have to include
> localsettings.php, connect to the database, log in an admin and insert
> my text as a new page (text, summary, ...).


You *do* realize that DokuWiki uses a rather different wiki syntax than
MediaWiki?

Magnus


From norial at gmx.de  Fri Nov 18 13:50:46 2005
From: norial at gmx.de (norial)
Date: Fri, 18 Nov 2005 14:50:46 +0100
Subject: [Wikitech-l] Re: DokuWiki -> MediaWiki Converter
In-Reply-To: <437DCDAE.20202@web.de>
References: <437DA3D6.50003@gmx.de> <437DCDAE.20202@web.de>
Message-ID: <437DDC36.9000302@gmx.de>

Hello

Magnus Manske schrieb am 18.11.2005 13:48:
> norial wrote:
> 
> You *do* realize that DokuWiki uses a rather different wiki syntax than
> MediaWiki?

Yes, of course. This will be part of the converter, and it should be easy
to solve because my files don't contain complex formatting elements.

regards
  andreas




From gmaxwell at gmail.com  Fri Nov 18 15:45:28 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Fri, 18 Nov 2005 10:45:28 -0500
Subject: [Wikitech-l] Subject: cookie based sockcheck,
	a prelude to cookie based blocking
In-Reply-To: 
References: 
Message-ID: 

On 11/18/05, David Gerard  wrote:
> >When a user edits, we request a cookie, "usertoken" or whatever. If
> >they do not have one, we generate a long random number and give them
> >one. With every edit made by that browser (no matter which user is logged
> >in), the cookie is returned. We add an extra column to recent changes
> >to store this value.
> >A new version of sockcheck is produced that finds users who share
> >revisions with the same token, much like we can do with IPs already.
> >Voilà, cookie-based sockcheck.
> >Thoughts?
>
> Can a cookie carry between different IPs for the same browser? i.e.,
> user hangs up and dials again for a different IP?

Yes, that's what makes it useful beyond just an IP-based checkuser.


From juergenherz at gmail.com  Fri Nov 18 16:21:26 2005
From: juergenherz at gmail.com (=?ISO-8859-1?Q?J=FCrgen_Herz?=)
Date: Fri, 18 Nov 2005 08:21:26 -0800
Subject: [Wikitech-l] Wikimedia Servers and Organization
Message-ID: 

Hello,

I've spent the last few hours trying to dig into the infrastructural
organization of the Wikimedia servers. My starting points were
[[meta:Wikimedia_servers]] and Ganglia, and my motivation was
Wikipedia's recent slowness.

Contrary to my expectations, the database servers are far from being
under high load. The pressure even seems so low that you can easily
live without holbach and webster for days (or in one case over a
month). The bottlenecks are the Apaches and Squids (yes, I know that's
nothing new for you).

But as in all the other clusters, the load is distributed very
unequally over the machines. For example, the Yahoo! squids showed
yf1003 9.39, yf1000 7.60, yf1004 1.60, yf1002 1.44, yf1001 0.73
at noon (UTC) today, and similar load values (albeit with a different
distribution) at other times.

Or the Apaches in Florida:
16 Apaches with load around 15, 9 between 1.5 and 2, 8 between 1 and
1.5, and 10 less than 1.

Where does this come from, or is it intended? Wouldn't a more balanced
load be better?


Another point: the Yahoo! Squids do virtually nothing between 18:00 and
0:00 (and the machines besides yf1000-yf1004 do virtually nothing around
the clock). How nice would it be to make them help out the other
overloaded machines in Florida and the Netherlands, at least in these six
hours.


And no, I'm not criticizing anyone, nor do I claim to know how to do it
better. But the available information looks strange to me - it would be
great to get some explanations.

Speaking of explanations, I have three more simple questions:
1. The Squids at lopar have been idle ever since DNS was moved off them.
What were the problems with them, and will they be back soon?
2. Commons has been very slow since the move from the prior "overloaded"
server to the new one. Any explanation to satisfy a simple user?
And which server is the new one?
3. I read about new machines srv51-70. Where do they come from? I can't
see a recent order for them, nor are they mentioned on
[[meta:Wikimedia_servers]].

Thank you in advance,
Juergen


From nospam-abuse at bloodgate.com  Fri Nov 18 17:00:43 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Fri, 18 Nov 2005 18:00:43 +0100
Subject: [Wikitech-l] Extensions that return non-wikitext
In-Reply-To: <437D1DA4.5080600@pobox.com>
References: <200511171202.29000@bloodgate.com> <437D1DA4.5080600@pobox.com>
Message-ID: <200511181800.53604@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Friday 18 November 2005 01:17, Brion Vibber wrote:
> Tels wrote:
> > my graph extension (see http://bloodgate.com/perl/graph) returns
> > either HTML or SVG. However, MediaWiki treats the returned text as
> > wikitext, e.g. it parses it, attempts to transform it into HTML and
> > then sanitizes it.
>
> That would be very unusual if true; extensions' HTML output is
> reinserted into final output in a very late stage of parsing, at
> blocklevels iirc.
>
> Avoid extra whitespace and blank lines in your output.

Well, what is it - inserted late (and thus unmodified) or modified and 
then inserted?

In 1.5.x, lines with whitespace in front are turned into <pre>, and empty 
lines result in paragraphs - and both of these transformations shouldn't 
be necessary when the extension outputs HTML. 

This smells like some bug - could you point me to the place in the code 
where the result of the extension output is worked into the normal 
output?

Best wishes,

Tels

- -- 
 Signed on Fri Nov 18 17:58:41 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 This email violates U.S. patent #6,775,781 :
 
   sudo rm -fR *

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEUAwUBQ34IxHcLPEOTuEwVAQGjvAf1HIyka85g96hQKICj+isolgAiERsQ6+Kp
5gNhSlN2Bj+I7fagG4VDY/fcYpDQlKli/t2nEB9lrxlrNfb/CrG/jHewNjFGLCpJ
0ZJcM6lMWl5i9x/dAmu15QJIBS3nReXmRRI+g8eWEyXg4oo/6F7tUQfqYgLa5VcO
ugLIED49C6fjBKWRXrRgo2/99QW4IC8SLNqKW9qGnTR85R4OE8rjJX2hWJeksTDe
isPxrw4DB6LI0agn343D+KvRmJWM3GX/SVRRkDDLLRWLu4ckxKocSQWQyjHI0Wzc
Tu2yNq26PFXYtpcEK2zbB2Kj98/EPoa0lxGPdpsCVeaGVhNm2jBm
=G7GO
-----END PGP SIGNATURE-----


From magnus.manske at web.de  Fri Nov 18 17:44:46 2005
From: magnus.manske at web.de (Magnus Manske)
Date: Fri, 18 Nov 2005 18:44:46 +0100
Subject: [Wikitech-l] Extensions that return non-wikitext
In-Reply-To: <200511181800.53604@bloodgate.com>
References: <200511171202.29000@bloodgate.com> <437D1DA4.5080600@pobox.com>
	<200511181800.53604@bloodgate.com>
Message-ID: <437E130E.4020005@web.de>

Tels wrote:
> Moin,
>
> On Friday 18 November 2005 01:17, Brion Vibber wrote:
> >> Tels wrote:
> >>> my graph extension (see http://bloodgate.com/perl/graph) returns
> >>> either HTML or SVG. However, MediaWiki treats the returned text as
> >>> wikitext, e.g. it parses it, attempts to transform it into HTML and
> >>> then sanitizes it.
> >> That would be very unusual if true; extensions' HTML output is
> >> reinserted into final output in a very late stage of parsing, at
> >> blocklevels iirc.
> >>
> >> Avoid extra whitespace and blank lines in your output.
>
> Well, what is it - inserted late (and thus unmodified) or modified and
> then inserted?
>
> In 1.5.x, lines with whitespace in front are turned into <pre>, and empty
> lines result in paragraphs - and both of these transformations shouldn't
> be necessary when the extension outputs HTML.
>
> This smells like some bug - could you point me to the place in the code
> where the result of the extension output is worked into the normal
> output?
>
> Best wishes,
>
> Tels
>
Stupid guess, but what if you wrap your HTML in  tags as a
temporary measure?

Magnus

_______________________________________________
Wikitech-l mailing list
Wikitech-l at wikimedia.org
http://mail.wikipedia.org/mailman/listinfo/wikitech-l




From nospam-abuse at bloodgate.com  Fri Nov 18 18:37:26 2005
From: nospam-abuse at bloodgate.com (Tels)
Date: Fri, 18 Nov 2005 19:37:26 +0100
Subject: [Wikitech-l] Extensions that return non-wikitext
In-Reply-To: <437E130E.4020005@web.de>
References: <200511171202.29000@bloodgate.com>
	<200511181800.53604@bloodgate.com> <437E130E.4020005@web.de>
Message-ID: <200511181937.35804@bloodgate.com>

-----BEGIN PGP SIGNED MESSAGE-----

Moin,

On Friday 18 November 2005 18:44, Magnus Manske wrote:
> Tels wrote:
> > Moin,
> >
> > On Friday 18 November 2005 01:17, Brion Vibber wrote:
> > >> Tels wrote:
> > >>> my graph extension (see http://bloodgate.com/perl/graph) returns
> > >>> either HTML or SVG. However, MediaWiki treats the returned text
> > >>> as wikitext, e.g. it parses it, attempts to transform it into HTML
> > >>> and then sanitizes it.
> > >>
> > >> That would be very unusual if true; extensions' HTML output is
> > >> reinserted into final output in a very late stage of parsing, at
> > >> blocklevels iirc.
> > >>
> > >> Avoid extra whitespace and blank lines in your output.
> >
> > Well, what is it - inserted late (and thus unmodified) or modified
> > and then inserted?
> >
> > In 1.5.x, lines with whitespace in front are turned into <pre>, and
> > empty lines result in paragraphs - and both of these transformations
> > shouldn't be necessary when the extension outputs HTML.
> >
> > This smells like some bug - could you point me to the place in the
> > code where the result of the extension output is worked into the
> > normal output?
> >
> > Best wishes,
> >
> > Tels
>
> Stupid guess, but what if you wrap your HTML in  tags as a
> temporary measure?

I'll try this - but it would be the same kind of "hack" that I employ now 
- - I just strip empty lines and leading whitespace from the HTML before 
returning it to MediaWiki. I'll try to look into the MediaWiki source and 
see if something obvious jumps out at me.

The  issue is still unsolved though - SVG output gets mangled so 
that it no longer works and the browser (like Firefox 1.5) does not 
recognize it as valid SVG. (And external SVG files create a lot of 
problems with caching, filesystem etc.) I haven't found a workaround for 
SVG yet, but your  trick might actually help here :)

Thanx a lot,

Tels

- -- 
 Signed on Fri Nov 18 19:34:35 2005 with key 0x93B84C15.
 Visit my photo gallery at http://bloodgate.com/photos/
 PGP key on http://bloodgate.com/tels.asc or per email.

 "Time flies like an arrow; fruit flies like a banana." -- Groucho Marx

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.4 (GNU/Linux)

iQEVAwUBQ34fbncLPEOTuEwVAQFO6wf+NCknjs5hBckKGtEj8Oro0Yk4ohD4Ejmq
LGBfKWuC610sdKBu/2mtMxVn6QOLDfthnmG8rlWJTununa5l7xbtQljwpwHxBbwF
Cv8EC9KLRkKR6vOu+eBM/XV/gJQaFeIOyrWaR1frTDN9rYoDWwIwPbXq4xVlTObz
lnWwAo75IKG50C5E4G2V9gR3ipB/Q4/SS7bhM7JB1NyoIAjoWZQhyBJLeXE16n55
EVBlHdLO8o//YNOnmolGBTu7SXW6dUZFo6SvM4b0T25PdnTV4E4cdM6taUC2fBNj
2qZZhPPOTmbb4kLlcHUzPEvV5thq8P+0hPecNkcIr9g2Jab4tHtmdA==
=1Vvb
-----END PGP SIGNATURE-----


From brion at pobox.com  Fri Nov 18 19:19:49 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 18 Nov 2005 11:19:49 -0800
Subject: [Wikitech-l] Subject: cookie based sockcheck, a prelude to cookie
	based blocking
In-Reply-To: 
References: 
Message-ID: <437E2955.2080806@pobox.com>

David Gerard wrote:
> Can a cookie carry between different IPs for the same browser? i.e.,
> user hangs up and dials again for a different IP?

Yes, but it's also trivially easy to remove or just reject. It's only an
effective measure against the painfully naive. (Of which there are many,
alas. ;)

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From brion at pobox.com  Fri Nov 18 19:21:40 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 18 Nov 2005 11:21:40 -0800
Subject: [Wikitech-l] DokuWiki -> MediaWiki Converter
In-Reply-To: <437DA3D6.50003@gmx.de>
References: <437DA3D6.50003@gmx.de>
Message-ID: <437E29C4.8060407@pobox.com>

norial wrote:
> I would like to import wiki entries created with DokuWiki
> (http://wiki.splitbrain.org/wiki:dokuwiki) into the MediaWiki database.
> The data is stored in UTF-8 text files. Is a converter already
> available?
> I think the easiest way could be a direct import into the MySQL
> database. So I need a small script that uses the functions of MediaWiki,
> like insertNewArticle. The script would have to include
> localsettings.php, connect to the database, log in an admin and insert my
> text as a new page (text, summary, ...).

Take a look at maintenance/importUseModWiki.php for an example.

The 1.5 version I think now outputs an XML data stream for our importer
rather than poking directly into the database; if you use this technique
you'll help to future-proof your code.

The pages can then be imported with importDump.php (command-line) or
through Special:Import if the file isn't too big.
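
(For orientation, the import stream looks roughly like this - a
simplified sketch; see the export schema shipped with MediaWiki for
the authoritative format:)

  <mediawiki xml:lang="en">
    <page>
      <title>Some page</title>
      <revision>
        <timestamp>2005-11-18T10:00:00Z</timestamp>
        <contributor><username>Importer</username></contributor>
        <comment>converted from DokuWiki</comment>
        <text>Converted wikitext goes here.</text>
      </revision>
    </page>
  </mediawiki>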

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From derick at php.net  Fri Nov 18 12:01:44 2005
From: derick at php.net (Derick Rethans)
Date: Fri, 18 Nov 2005 13:01:44 +0100 (CET)
Subject: [Wikitech-l] PHP 4.4.2RC1 Testing
Message-ID: 

Hello!

You are receiving this email because your project has been selected to 
take part in a new effort by the PHP QA Team to make sure that your 
project still works with PHP versions to-be-released. With this we hope 
to make sure that you are either aware of things that might break, or to 
make sure we don't introduce any strange regressions. With this effort 
we hope to build a better relation between the PHP Team and the major 
projects.

If you do not want to receive these heads-up emails, please reply to me 
personally and I will remove you from the list; but we hope that you 
want to actively help us make PHP a better and more stable tool.

The first release candidate of PHP 4.4.2 can be found at 
http://downloads.php.net/derick/ . If everything goes well, we hope to 
release PHP 4.4.2 next Tuesday. If you find any issues, please contact 
the PHP QA team at "php-qa at lists.php.net".

The main things that this release addresses are:
- problems with mod_rewrite and apache 2
- the key() and current() bug

Please test those cases extra carefully on as many platforms as 
possible.

In case you think that other projects should also receive these kinds of 
emails, please let me know privately, and I will add them to the list of 
projects to contact.

regards,
Derick


-- 
Derick Rethans
http://derickrethans.nl | http://ez.no | http://xdebug.org


From brion at pobox.com  Fri Nov 18 19:27:13 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 18 Nov 2005 11:27:13 -0800
Subject: [Wikitech-l] Wikimedia Servers and Organization
In-Reply-To: 
References: 
Message-ID: <437E2B11.3010806@pobox.com>

Jürgen Herz wrote:
> But as in all the other clusters, the load is distributed very
> unequally over the machines.
[snip]
> Where does this come from, or is it intended? Wouldn't a more balanced
> load be better?

More balanced would likely be better. Spreading load evenly seems to be
really hard to get right; if you have advice based on experience I'm
sure we'd love to hear it.

> Another point: the Yahoo! Squids do virtually nothing between 18:00 and
> 0:00 (and the machines besides yf1000-yf1004 do virtually nothing around
> the clock). How nice would it be to make them help out the other
> overloaded machines in Florida and the Netherlands, at least in these six
> hours.

What would they do during this time?

> 1. The Squids at lopar have been idle ever since DNS was moved off them.
> What were the problems with them, and will they be back soon?

They're older, slower machines and can handle only a small fraction of
what we pump through the Amsterdam cluster, so not too sure about these.

> 2. Commons has been very slow since the move from the prior "overloaded"
> server to the new one. Any explanation to satisfy a simple user?
> And which server is the new one?

It was already slow *before* that. The files were moved to a faster
internal server, but the web interface is still on a slow machine until
things get finalized. This means images are still slow to load (alas)
but don't bog down the primary wiki web servers as much when they poke
at the images.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From pavol.cupka at gmail.com  Fri Nov 18 19:28:44 2005
From: pavol.cupka at gmail.com (Pavol Cupka)
Date: Fri, 18 Nov 2005 20:28:44 +0100
Subject: [Wikitech-l] RC rss feed problems
Message-ID: <8d8fe0590511181128h2b123c21r37d7a3dab77b329@mail.gmail.com>

Hello techs :)

I would like to ask about a specific bug that prevents the diff from
being shown for new pages in the RC RSS feed. Can someone please look at
it and tell me something about it?

http://bugzilla.wikimedia.org/show_bug.cgi?id=3996

Thank you in advance
--
Palica
http://sk.wikipedia.org/User:Palica
Nezabudni si vziať svoje Wikamíny. / Don't forget to take your
Wikamins today. - Palica
http://sk.wikipedia.org - slobodná encyklopédia, ktorú môže každý
upravovať (the free encyclopedia that anyone can edit) - AJ TY

From sechan at amazon.com  Fri Nov 18 19:34:57 2005
From: sechan at amazon.com (Sechan, Gabe)
Date: Fri, 18 Nov 2005 11:34:57 -0800
Subject: [Wikitech-l] Parser caching
Message-ID: <00C14A1BA7454D4099D19923AA7FE21B06C83905@ex-mail-sea-04.ant.amazon.com>

Does the parser cache the results of pages after parsing them?  If so, is there a quick place I could turn it off for certain pages?  Such as pretending that they cache miss?  The parsing is messing up my read protections on pages-  the permissions are working perfectly on special pages like edit or history that try to access the page, but don't always protect the node itself unless I edit it after protecting it.

Gabe


From sechan at amazon.com  Fri Nov 18 19:57:26 2005
From: sechan at amazon.com (Sechan, Gabe)
Date: Fri, 18 Nov 2005 11:57:26 -0800
Subject: [Wikitech-l] Parser caching
Message-ID: <00C14A1BA7454D4099D19923AA7FE21B06C8392F@ex-mail-sea-04.ant.amazon.com>

Ok, I tried forcing ParserCache::get to pretend there's a cache miss on protected pages, and it seems to be working.  If anyone knows a reason this wouldn't work fully or why this is a bad idea (other than performance, which should be minor as less than 1% of pages are read-restricted), please tell me.

Gabe 

-----Original Message-----
From: wikitech-l-bounces at wikimedia.org [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Sechan, Gabe
Sent: Friday, November 18, 2005 11:35 AM
To: wikitech-l at wikimedia.org
Subject: [Wikitech-l] Parser caching

Does the parser cache the results of pages after parsing them?  If so, is there a quick place I could turn it off for certain pages?  Such as pretending that they cache miss?  The parsing is messing up my read protections on pages-  the permissions are working perfectly on special pages like edit or history that try to access the page, but don't always protect the node itself unless I edit it after protecting it.

Gabe
_______________________________________________
Wikitech-l mailing list
Wikitech-l at wikimedia.org
http://mail.wikipedia.org/mailman/listinfo/wikitech-l


From brion at pobox.com  Fri Nov 18 20:06:59 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 18 Nov 2005 12:06:59 -0800
Subject: [Wikitech-l] Parser caching
In-Reply-To: <00C14A1BA7454D4099D19923AA7FE21B06C83905@ex-mail-sea-04.ant.amazon.com>
References: <00C14A1BA7454D4099D19923AA7FE21B06C83905@ex-mail-sea-04.ant.amazon.com>
Message-ID: <437E3463.7010100@pobox.com>

Sechan, Gabe wrote:
> Does the parser cache the results of pages after parsing them?  If
> so, is there a quick place I could turn it off for certain pages?
> Such as pretending that they cache miss?  The parsing is messing up
> my read protections on pages-  the permissions are working perfectly
> on special pages like edit or history that try to access the page,
> but don't always protect the node itself unless I edit it after
> protecting it.

If this is causing a problem for you, your permissions scheme is
inherently flawed and is probably a huge security hole.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From max at xam.de  Fri Nov 18 19:32:46 2005
From: max at xam.de (Max Voelkel)
Date: Fri, 18 Nov 2005 19:32:46 +0000
Subject: [Wikitech-l] Extending Wikipedia with Java-based technology?
Message-ID: <997947954.20051118193246@xam.de>

Dear Wikipedia-Wizards,

  we are a group of four researchers building an extension for
  Wikipedia, called the "Semantic Wikipedia", which is technically a
  MediaWiki extension.

  The project is described here:
  http://meta.wikimedia.org/wiki/Semantic_MediaWiki

  can be used as a demo here:
  http://wiki.ontoworld.org

  As a short summary, it allows users to type links, which leads to
  the creation of semantic metadata (page name, link type, link
  target). In a similar fashion we allow for the annotation of
  attributes. If this project is deployed on Wikipedia, a huge
  amount of machine-processable data could be generated. We will
  provide an RDF export per page and a SPARQL-query endpoint for the
  whole Semantic Wikipedia (SPARQL is like SQL, but more adapted to
  the data model of RDF, a building block of the semantic web).

  Currently, we have two problems and would be glad if you help us:

  1.
  The tool stack in the semantic web community is mainly built on
  Java. For C, there is only one "triple store" (which is needed for
  efficient RDF storage & querying). The only candidate we have,
  "3store", is not very mature - but many Java stores are. Especially
  the open-source system "Sesame" (openrdf.org) would be our choice
  for implementation. But, as far as I understand Wikipedia, Java is
  not open source enough, as there is no open source implementation of
  Java itself? Is this true or just a rumor?

  2.
  Syntax. We had to extend the syntax slightly to enable annotations
  of links and data values. Currently we have settled on

    [[link type::link target|optional alternate label]]

    Sample, on page "London":  ... is in [[located in::England]] ...
    Renders as:    ... is in England ....  (England = Linked)
    
  for relations, and for attributes.
  
    [[attribute type:=data value with unit|optional alternate label]]

    Sample, on page "London": ... rains on [[rain:=234 days/year]] ....
    Renders as  .... rains on 234 days/year  (nothing linked)

  For a full explanation of why and what we try to do,
  you can also have a look at a paper, which we wrote for a
  conference:
  http://www.aifb.uni-karlsruhe.de/Publikationen/showPublikation_english?publ_id=1055

  BTW: I promised Jimmy (in San Diego) to explain to him what the
  semantic web is. I'm still working on that :-)
  
  Thanks a lot in advance,


Kind regards,

  Max Völkel
--
Dipl.-Inform. Max Völkel
University of Karlsruhe, AIFB, Knowledge Management Group
mvo at aifb.uni-karlsruhe.de   +49 721 608-4754   www.xam.de




  
    


From gmaxwell at gmail.com  Fri Nov 18 20:29:06 2005
From: gmaxwell at gmail.com (Gregory Maxwell)
Date: Fri, 18 Nov 2005 15:29:06 -0500
Subject: [Wikitech-l] Extending Wikipedia with Java-based technology?
In-Reply-To: <997947954.20051118193246@xam.de>
References: <997947954.20051118193246@xam.de>
Message-ID: 

On 11/18/05, Max Voelkel  wrote:
> Dear Wikipedia-Wizards,
>   we are a group of four researchers building an extension for
>   Wikipedia, called the "Semantic Wikipedia", which is technically a
>   MediaWiki extension.
>   As a short summary, it allows users to type links, which leads to
>   the creation of semantic metadata (page name, link type, link
>   target). In a similar fashion we allow for the annotation of
>   attributes. If this project is deployed on Wikipedia, a huge
>   amount of machine-processable data could be generated. We will
>   provide an RDF export per page and a SPARQL-query endpoint for the
>   whole Semantic Wikipedia (SPARQL is like SQL, but more adapted to
>   the data model of RDF, a building block of the semantic web).
>
>   Currently, we have two problems and would be glad if you help us:
>
>   1.
>   The tool stack in the semantic web community is mainly built on
>   Java. For C, there is only one "triple store" (which is needed for
>   efficient RDF storage & querying). The only candidate we have,
>   "3store", is not very mature - but many Java stores are. Especially
>   the open-source system "Sesame" (openrdf.org) would be our choice
>   for implementation. But, as far as I understand Wikipedia, Java is
>   not open source enough, as there is no open source implementation of
>   Java itself? Is this true or just a rumor?

Look at [[GCJ]] on enwiki.  There are free implementations but they
are not good enough for all applications.

Perhaps I'm hopelessly out of touch with the times, but why can't this
information be stored in a normal SQL database? I'm sure a parser
could be written to rewrite SPARQL queries into efficiently executing
SQL queries.

On my on-again, off-again analysis copy of Wikipedia I use the hstore
module in PostgreSQL to store name=value pairs for every revision.  If you
were only talking about storing metadata for the current versions of
articles, it would likely be quite efficient to carry a relation which
contains source,type,dest tuples for each of your links.  Even MySQL
could do a pretty good job for this.
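
(A rough sketch of such a relation - the table and column names here
are invented for illustration, not an existing schema:)

  CREATE TABLE semantic_link (
    sl_source VARCHAR(255) NOT NULL,  -- page the annotation appears on
    sl_type   VARCHAR(255) NOT NULL,  -- link type, e.g. 'located in'
    sl_dest   VARCHAR(255) NOT NULL,  -- link target or attribute value
    KEY (sl_source),
    KEY (sl_type, sl_dest)
  );

  -- "Which pages are located in England?"
  SELECT sl_source FROM semantic_link
   WHERE sl_type = 'located in' AND sl_dest = 'England';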

>   2.
>   Syntax. We had to extend the syntax slightly to enable annotations
>   of links and data values. Currently we have settled on
>
>     [[link type::link target|optional alternate label]]
>
>     Sample, on page "London":  ... is in [[located in::England]] ...
>     Renders as:    ... is in England ....  (England = Linked)
>
>   for relations, and for attributes.
>
>     [[attribute type:=data value with unit|optional alternate label]]
>
>     Sample, on page "London": ... rains on [[rain:=234 days/year]] ....
>     Renders as  .... rains on 234 days/year  (nothing linked)
>
>   For a full explanation of whay and what we try to do,
>   you can also have a look at a paper, which we wrote for a
>   conference:
>   http://www.aifb.uni-karlsruhe.de/Publikationen/showPublikation_english?publ_id=1055

There would appear to be some potential for confusion with namespaces. It
might be advisable to use a character which currently cannot appear
in an internal link.
It might also be advisable not to make them look like internal links.
We've already overloaded [[]] quite a bit with things which are not
quite the same as internal links conceptually or syntactically
(categories, images).

Can semantic links carry an additional attribute beyond their type, i.e.
on [[Bill Clinton]] [[us presidential succession::George W.
Bush:=next]], or would it have to be [[next us president::George W.
Bush]]?

If it's just the latter case, then all it is is a typed directed graph,
and there is a wealth of fast algorithms available for searching and
storing the relationships.
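
(A toy sketch of extracting such tuples from wikitext - the function
name and regex are invented for illustration:)

  function extractSemanticLinks( $sourcePage, $wikitext ) {
      $tuples = array();
      // Matches [[link type::link target|optional label]]
      $pattern = '/\[\[([^\[\]|:]+)::([^\[\]|]+)(?:\|[^\[\]]*)?\]\]/';
      if ( preg_match_all( $pattern, $wikitext, $m, PREG_SET_ORDER ) ) {
          foreach ( $m as $match ) {
              $tuples[] = array( $sourcePage, trim( $match[1] ), trim( $match[2] ) );
          }
      }
      return $tuples;
  }

  // extractSemanticLinks( 'London', '... is in [[located in::England]] ...' )
  // returns array( array( 'London', 'located in', 'England' ) ). Note that
  // it would also (mis)match ordinary titles such as [[Code::Blocks]] -
  // exactly the title-collision problem raised elsewhere in this thread.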


From sechan at amazon.com  Fri Nov 18 20:46:10 2005
From: sechan at amazon.com (Sechan, Gabe)
Date: Fri, 18 Nov 2005 12:46:10 -0800
Subject: [Wikitech-l] Parser caching
Message-ID: <00C14A1BA7454D4099D19923AA7FE21B06C8397C@ex-mail-sea-04.ant.amazon.com>

How so?  I'm honestly curious here.  My permissions are stored in page_restrictions.  It's just a simple group read/can't-read thing (only groups in the field can read a page; unless the field is blank, in which case anyone can).  I put the restrictions checking in Revision::getText() and Title::getText().  This is working for edit text, history text, etc.  The only exception is for the main node itself - there it works only after I edit the page (so if I edit the page, then read it, it works; if I just protect it without editing, it fails).  Putting in the fake cache miss seems to fix that bug.  What am I missing?

Gabe

-----Original Message-----
From: wikitech-l-bounces at wikimedia.org [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Brion Vibber
Sent: Friday, November 18, 2005 12:07 PM
To: Wikimedia developers
Subject: Re: [Wikitech-l] Parser caching

Sechan, Gabe wrote:
> Does the parser cache the results of pages after parsing them?  If so, 
> is there a quick place I could turn it off for certain pages?
> Such as pretending that they cache miss?  The parsing is messing up my 
> read protections on pages-  the permissions are working perfectly on 
> special pages like edit or history that try to access the page, but 
> don't always protect the node itself unless I edit it after protecting 
> it.

If this is causing a problem for you, your permissions scheme is inherently flawed and is probably a huge security hole.

-- brion vibber (brion @ pobox.com)



From beng at garagegames.com  Fri Nov 18 22:19:48 2005
From: beng at garagegames.com (Ben Garney)
Date: Fri, 18 Nov 2005 14:19:48 -0800
Subject: [Wikitech-l] Parser caching
In-Reply-To: <00C14A1BA7454D4099D19923AA7FE21B06C8397C@ex-mail-sea-04.ant.amazon.com>
References: <00C14A1BA7454D4099D19923AA7FE21B06C8397C@ex-mail-sea-04.ant.amazon.com>
Message-ID: <437E5384.3080608@garagegames.com>

Sechan, Gabe wrote:
> How so?  I'm honestly curious here.  
Me, too - I'm honestly curious as well. It sounds like all that's happening 
is that it takes a cache flush for the "this page protected" message to 
show up instead of the regular content. In fact, for my project I've just 
turned off client-side caching for the moment, since it's been a 
significant source of problems. I'll no doubt have to revisit that as more 
people start using my site. :)

Regards,
Ben Garney
Torque Technologies Director


From brion at pobox.com  Fri Nov 18 22:26:14 2005
From: brion at pobox.com (Brion Vibber)
Date: Fri, 18 Nov 2005 14:26:14 -0800
Subject: [Wikitech-l] Parser caching
In-Reply-To: <00C14A1BA7454D4099D19923AA7FE21B06C8397C@ex-mail-sea-04.ant.amazon.com>
References: <00C14A1BA7454D4099D19923AA7FE21B06C8397C@ex-mail-sea-04.ant.amazon.com>
Message-ID: <437E5506.3000701@pobox.com>

Sechan, Gabe wrote:
> How so?  I'm honestly curious here.  My permissions are stored in
> page_restrictions.  It's just a simple group read/can't-read thing
> (only groups in the field can read a page; unless the field is
> blank, in which case anyone can).  I put the restrictions checking in
> Revision::getText()  and Title::getText().

Be very careful; this will prevent all internal functions from loading
text properly, and could result in permanent data corruption on internal
maintenance processes (compression/uncompression for instance, backup
data dumps, perhaps future upgrades).

More generally about the parser cache; if you add per-user changes that
affect rendering you need to take this into account in the parser cache
option hash. See User.php.
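
(A conceptual sketch of that point - in 1.5 the key is derived from
User::getPageRenderingHash(); the "groups" line below is an assumed
addition, not existing code:)

  // Everything per-user that affects rendering must feed into this
  // string, or users with different rights may share cached output.
  function getPageRenderingHash() {
      $confstr = $this->getOption( 'math' );
      $confstr .= '!' . $this->getOption( 'stubthreshold' );
      // Assumed addition for per-group read restrictions:
      $confstr .= '!groups=' . implode( ',', $this->getGroups() );
      return $confstr;
  }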

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From sechan at amazon.com  Fri Nov 18 22:40:24 2005
From: sechan at amazon.com (Sechan, Gabe)
Date: Fri, 18 Nov 2005 14:40:24 -0800
Subject: [Wikitech-l] Parser caching
Message-ID: <00C14A1BA7454D4099D19923AA7FE21B06C83A7C@ex-mail-sea-04.ant.amazon.com>

 

-----Original Message-----
From: wikitech-l-bounces at wikimedia.org [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Brion Vibber
Sent: Friday, November 18, 2005 2:26 PM
To: Wikimedia developers
Subject: Re: [Wikitech-l] Parser caching

Sechan, Gabe wrote:
> How so?  I'm honestly curious here.  My permissions are stored in 
> page_restrictions.  It's just a simple group read/can't-read thing 
> (only groups in the field can read a page; unless the field is blank, 
> in which case anyone can).  I put the restrictions checking in
> Revision::getText()  and Title::getText().

Be very careful; this will prevent all internal functions from loading text properly, and could result in permanent data corruption on internal maintenance processes (compression/uncompression for instance, backup data dumps, perhaps future upgrades).

More generally about the parser cache; if you add per-user changes that affect rendering you need to take this into account in the parser cache option hash. See User.php.

-- brion vibber (brion @ pobox.com)


Good catch, I didn't think of backup.  I'll add in an exception to the checking so that internal functions bypass it.  Since these would be done via an admin or the command line, excepting those 2 cases ought to work (admins need full vision anyway).

Gabe


From t.starling at physics.unimelb.edu.au  Sat Nov 19 03:19:20 2005
From: t.starling at physics.unimelb.edu.au (Tim Starling)
Date: Sat, 19 Nov 2005 14:19:20 +1100
Subject: [Wikitech-l] Re: Wikimedia Servers and Organization
In-Reply-To: 
References: 
Message-ID: 

Jürgen Herz wrote:
> Hello,
> 
> I've spent the last few hours trying to dig into the infrastructural
> organization of the Wikimedia servers. My starting points were
> [[meta:Wikimedia_servers]] and Ganglia, and my motivation was
> Wikipedia's recent slowness.
> 
> Contrary to my expectations, the database servers are far from being
> under high load. The pressure even seems so low that you can easily
> live without holbach and webster for days (or in one case over a
> month). The bottlenecks are the Apaches and Squids (yes, I know that's
> nothing new for you).
> 
> But as in all the other clusters, the load is distributed very
> unequally over the machines. For example, the Yahoo! squids showed
> yf1003 9.39, yf1000 7.60, yf1004 1.60, yf1002 1.44, yf1001 0.73
> at noon (UTC) today, and similar load values (albeit with a different
> distribution) at other times.

That's just ordinary random variation. The 15 minute load average is much
more closely clustered than the 1 minute load average.

> Or the Apaches in Florida:
> 16 Apaches with load around 15, 9 between 1.5 and 2, 8 between 1 and
> 1.5, and 10 less than 1.
> 
> Where does this come from, or is it intended? Wouldn't a more balanced
> load be better?

The apache load figures are unreliable at the moment because there are a
number of hung processes on each machine waiting for an NFS share that is
never going to start working again. But there are still a few points I can
make about this:

* The apache load also sees quite a lot of random variation from minute to
minute. This is unavoidable, and in my opinion, harmless. We can't tell how
much CPU time a request will require before we let apache accept the
connection. That's why we have multiprocessing.
* Perlbal suffered from a couple of load balancing problems, such as
oscillation of load between the perlbal servers. When we switched to LVS,
with weighted least connection scheduling, load distribution became much
more stable on the 10 minute time scale.
* Currently we're not using a higher weight for the dual-CPU apaches than
for the single-CPU apaches. It's likely that the optimal concurrency level
for dual-CPU machines is higher than for single-CPU machines. I'm not sure
what impact this has on throughput, but I suspect it would be fairly small,
especially in times of high load. To have high throughput, you just have to
have *enough* connections queued or active in order to keep the CPU busy,
and at high load that condition is likely to be satisfied.

The crucial thing to avoid in load balancing is a wide divergence in request
service time, wide enough to be user-visible. At the moment, we don't seem
to have that, except in a few special cases. A couple of hundred
milliseconds divergence is acceptable, it's when you get into the seconds
that you have a problem.

This is data from the last 24 hours or so:

mysql> select pf_server,pf_time/pf_count from profiling where pf_name='-total';
+-----------+------------------+
| pf_server | pf_time/pf_count |
+-----------+------------------+
| srv15     |  204.56651921613 |
| goeje     |  201.84655049787 |
| srv16     |  153.09295690508 |
| srv49     |  103.67533112583 |
| srv12     |  142.05136531207 |
| srv45     |  171.57344543147 |
| srv29     |  137.25579172808 |
| srv39     |   169.4883604123 |
| srv20     |     149.64453125 |
| srv18     |  159.00027606007 |
| srv41     |  143.65079066265 |
| srv22     |  159.61156337535 |
| srv25     |  141.55984462781 |
| srv21     |  142.70388359744 |
| diderot   |  324.14045643154 |
| rabanus   |   208.7263215859 |
| humboldt  |  834.82322485207 |
| srv43     |  129.96146801287 |
| avicenna  |  173.68361557263 |
| srv27     |  428.79877888655 |
| srv19     |  229.72577751196 |
| srv28     |  148.12455417799 |
| srv35     |  125.47606219212 |
| srv48     |  142.57025343643 |
| srv37     |  147.24074616948 |
| srv32     |  145.71093201754 |
| srv46     |  161.01125266335 |
| srv24     |  175.73745958135 |
| srv44     |  173.76259793447 |
| srv14     |  143.50663395904 |
| srv11     |  146.01194914544 |
| srv30     |  162.15520549113 |
| srv23     |  169.26885457677 |
| srv0      |  269.70157858274 |
| srv38     |  182.01380813953 |
| srv17     |  143.32407831225 |
| srv34     |  120.18475895904 |
| srv13     |   165.9256635274 |
| alrazi    |  203.50503060089 |
| srv50     |  152.40960211391 |
| srv36     |  210.69465332031 |
| hypatia   |  169.72124821556 |
| srv47     |  161.01089421174 |
| friedrich |  180.30277434721 |
| srv40     |  168.59215472028 |
| srv3      |  145.29130517183 |
| srv33     |  119.75965134641 |
| srv4      |  149.60560154037 |
| kluge     |  173.34889083873 |
| rose      |  675.57995605469 |
+-----------+------------------+
50 rows in set (0.01 sec)

Humboldt and rose need attention, but I wouldn't worry about the rest.


> Another point: the Yahoo! Squids do virtually nothing between 18:00 and
> 0:00 (and the machines besides yf1000-yf1004 do virtually nothing around
> the clock). How nice would it be to make them help out the other
> overloaded machines in Florida and the Netherlands, at least in these six
> hours.

This was an intentional part of our architecture. Local squid clusters serve
local users, this reduces latency due to network round-trip time (RTT). Of
course, this only makes sense if that network RTT (200ms or so) is greater
than the local service time due to high load. Hopefully we can fix our squid
software problems and add more squid hardware to Florida (which is the only
site where it truly seems to be lacking), and thus maintain this design.
Spreading load across servers distributed worldwide is cheaper but
necessarily slower than the alternative of minimising distance between cache
and user.

It might interest you to know that as soon as I came to terms with the fact
that the knams cluster would be idle at night, I promoted the idea of using
that idle CPU time for something useful, like non-profit medical research.
We've now clocked up 966 work units for Folding at Home:

http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpage&username=wikimedia%2Eorg


> And no, I'm not criticizing anyone, nor do I claim to know how to do it
> better. But the available information looks strange to me - it would be
> great to get some explanations.
> 
> Speaking of explanations, I have three more simple questions:
> 1. The Squids at lopar have been idle ever since DNS was moved off them.
> What were the problems with them, and will they be back soon?
> 2. Commons has been very slow since the move from the prior "overloaded"
> server to the new one. Any explanation to satisfy a simple user?
> And which server is the new one?

Brion answered those two well enough.

> 3. I read about new machines srv51-70. Where do they come from? I can't
> see a recent order for them, nor are they mentioned on
> [[meta:Wikimedia_servers]].

No idea. I just use the things.

-- Tim Starling



From avarab at gmail.com  Sat Nov 19 05:07:20 2005
From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=)
Date: Sat, 19 Nov 2005 05:07:20 +0000
Subject: [Wikitech-l] Extending Wikipedia with Java-based technology?
In-Reply-To: <997947954.20051118193246@xam.de>
References: <997947954.20051118193246@xam.de>
Message-ID: <51dd1af80511182107g4bbafa01ge259d37ff0fdeb2f@mail.gmail.com>

On 11/18/05, Max Voelkel  wrote:
>   2.
>   Syntax. We had to extend the syntax slightly to enable annotations
>   of links and data values. Currently we have settled on

The problem with your extension of the syntax is that it conflicts
with existing titles, both in theory and in practice; pages with double
colons, though rare, do exist - for example the Code::Blocks article on
enwiki and numerous user pages on other wikis.

Having said that, you could either break compatibility with such titles
or use some of the characters currently not allowed in titles, which
are:

* +
* <
* >
* [
* ]
* {
* |
* }

[] aren't practical since they already delimit the link (unless you
wanted horrors like [[located in[England]]), {} are already used for
templates, and | would be ambiguous. That leaves you with <> and +, and
[[location>England]] or [[location=>England]] doesn't look all that
bad.

>     [[attribute type:=data value with unit|optional alternate label]]
>
>     Sample, on page "London": ... rains on [[rain:=234 days/year]] ....
>     Renders as  .... rains on 234 days/year  (nothing linked)

Say I also wanted to link an assigned value - say, make a link to [[234
days/year]] - how would I do that? [[[[rain:=234 days/year]]]] doesn't
work.


From hashar at altern.org  Sat Nov 19 12:28:42 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Sat, 19 Nov 2005 13:28:42 +0100
Subject: [Wikitech-l] Re: Wikimedia Servers and Organization
In-Reply-To: 
References: 
	
Message-ID: 

Tim Starling wrote:

> The apache load figures are unreliable at the moment because there are a
> number of hung processes on each machine waiting for an NFS share that is
> never going to start working again. 


If any root is interested in fixing those apaches, I listed them in a
bug report on bugzilla:


-- 
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org  ICQ: 15325080



From hashar at altern.org  Sat Nov 19 12:39:16 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Sat, 19 Nov 2005 13:39:16 +0100
Subject: [Wikitech-l] Re: Wikimedia Servers and Organization
In-Reply-To: 
References: 	
	
Message-ID: 

Ashar Voultoiz wrote:
> If any root is interested in fixing those apaches, I listed them in a
> bug report on bugzilla:

http://bugzilla.wikimedia.org/show_bug.cgi?id=3869

-- 
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org  ICQ: 15325080



From fun at thingy.apana.org.au  Sat Nov 19 16:23:44 2005
From: fun at thingy.apana.org.au (David Gerard)
Date: Sun, 20 Nov 2005 03:23:44 +1100
Subject: [Wikitech-l] Subject: cookie based sockcheck,
	a prelude to cookie based blocking
In-Reply-To: <437E2955.2080806@pobox.com>
References: 
	<437E2955.2080806@pobox.com>
Message-ID: <20051119162344.GA28244@thingy.apana.org.au>

Brion Vibber (brion at pobox.com) [051119 06:20]:
> David Gerard wrote:

> > Can a cookie carry between different IPs for the same browser? i.e.,
> > user hangs up and dials again for a different IP?
 
> Yes, but it's also trivially easy to remove or just reject. It's only an
> effective measure against the painfully naive. (Of which there are many,
> alas. ;)


Actually, from vandal/troll-chasing on en:, I strongly suspect the
painfully naive make up a large enough proportion that this simple
measure would help greatly.


- d.




From oub at mat.ucm.es  Sat Nov 19 19:47:22 2005
From: oub at mat.ucm.es (Uwe Brauer)
Date: Sat, 19 Nov 2005 20:47:22 +0100
Subject: [Wikitech-l] Moodle, newsgroups for Threading (was: LiquidThreads)
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>
	 
	<200511031845.17000@bloodgate.com>
Message-ID: <87y83kbet1.fsf_-_@mat.ucm.es>


Hello

When I wrote about the lack of threading capabilities on the
Wikipedia discussion pages, I only had experience with relatively
low-traffic discussion pages, for which the Wikipedia
software was uncomfortable but still OK. However, now I am on a
discussion page with a lot of hotly debated contributions. That is
a sort of nightmare, since a lot of people are editing the page
at the same time and trying to change the outline; as a result,
contributions get malformed, changed, or even deleted - in
short, it is a PITA.

So I would like to know.

    - Is it possible to use, for example, parts of the software of
      the moodle project (http://moodle.org)? That is open software
      for course management. Every course has its own mailing-list-like
      discussion page. The parallel to Wikipedia seems obvious:
      course = article. But can that software really be used for
      Wikipedia?

    - Would it be possible, instead of providing a link to a
      discussion page, to provide a link to a corresponding
      *newsgroup*? That is, every article page would have its own
      newsgroup (or mailing list, if that would be easier).


Regards


Uwe Brauer
      



From brion at pobox.com  Sat Nov 19 19:53:18 2005
From: brion at pobox.com (Brion Vibber)
Date: Sat, 19 Nov 2005 11:53:18 -0800
Subject: [Wikitech-l] Moodle, newsgroups for Threading
In-Reply-To: <87y83kbet1.fsf_-_@mat.ucm.es>
References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil>	
		<200511031845.17000@bloodgate.com>
	<87y83kbet1.fsf_-_@mat.ucm.es>
Message-ID: <437F82AE.2030308@pobox.com>

Uwe Brauer wrote:
>     - Would it be possible, instead of providing a link to a
>       discussion page, to provide a link to a corresponding
>       *newsgroup*? That is, every article page would have its own
>       newsgroup (or mailing list, if that would be easier).

I think having several million mailing lists or newsgroups would not be
very nice. :)

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: 

From ssamoylov at gmail.com  Sun Nov 20 00:12:26 2005
From: ssamoylov at gmail.com (Sergey Samoylov)
Date: Sun, 20 Nov 2005 03:12:26 +0300
Subject: [Wikitech-l] About tables templates in wikimedia
Message-ID: <5470f7ce0511191612o2c27ead0q991142e6056ed6c9@mail.gmail.com>

Hi there,

I've downloaded wikinews and installed it on my MediaWiki.
There is a page like
http://en.wikinews.org/w/index.php/Template:Hurricane_2005_Infobox

On the wikinews site all the text from the template is contained in a
table on the right side, but on my site it's just ordinary rows.

The difference is in the source code generated when the
{{InfoboxStart|infotitle=[[w:Hurricane|Hurricanes]] - 2005}} wiki code
is processed.

On my site:


On wikinews:
Template:InfoboxStart is
'''{{{infotitle}}}'''
So there is redundant
on my site. Without this tag table is OK! I've checked 1.4, 1.5 and 1.6(developer version) there is the same situation. Is it a bug in Parser.php or it's some settings in mediawiki? Thank you Sergey From oub at mat.ucm.es Sun Nov 20 10:02:40 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Sun, 20 Nov 2005 11:02:40 +0100 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> Message-ID: <874q6763i7.fsf@mat.ucm.es> >>>>> "Brion" == Brion Vibber writes: Brion> Uwe Brauer wrote: >> - Would it be possible that instead of providing a link to a >> discussion page, provide a link to a corresponding >> *newsgroup*. That is every article page should have its >> newsgroup (or mailing list if that would be easier.) Brion> I think having several million mailing lists or newsgroups Brion> would not be very nice. :) I doubt it would be that many which would be really used frequently. It might come down to several thousands (for each language I must add) Why not? Would it really slow down anything? Any mailinglist/newgroups are so much more convenient for discussions Uwe From brion at pobox.com Sun Nov 20 10:40:49 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 20 Nov 2005 02:40:49 -0800 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: <437AAECE.1090304@pobox.com> References: <437AAECE.1090304@pobox.com> Message-ID: <438052B1.8050109@pobox.com> Brion Vibber wrote: >> So. What's up with Special:Validate? > > It's on my list for this week, I'll see about getting it turned on and > working. I'm tweaking, fixing, and reworking various bits of it (display formatting fixes, better code reuse, using revision IDs instead of unreliable timestamps in places, etc). Will try to have it ready to try out tomorrowish. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From magnus.manske at web.de Sun Nov 20 11:11:55 2005 From: magnus.manske at web.de (Magnus Manske) Date: Sun, 20 Nov 2005 12:11:55 +0100 Subject: [Wikitech-l] Extending Wikipedia with Java-based technology? In-Reply-To: <997947954.20051118193246@xam.de> References: <997947954.20051118193246@xam.de> Message-ID: <438059FB.2070904@web.de> Max Voelkel wrote: > 2. > Syntax. We had to extend the syntax slightly to enable annotations > of links and data values. Currently we settled down to use > > [[link type::link target|optional alternate label]] > > Sample, on page "London": ... is in [[located in::England]] ... > Renders as: ... is in England .... (England = Linked) > > for relations, and for attributes. > > [[attribute type:=data value with unit|optional alternate label]] > > Sample, on page "London": ... rains on [[rain:=234 days/year]] .... > Renders as .... rains on 234 days/year (nothing linked) > Why not make a nice extension, and wrap it in templates? key=rain value=234 unit=days/year label=sometext which can be generated by {{attribute|rain|234|days/year|sometext}} The extension can hold a table of default units and labels, in case they are omitted: {{attribute|rain|234| | }} uses "unit=days/year" and "label=rains on VALUE UNIT", because of key "rain". 
An extension could also extract the attribute list and display it in the sidebar, like language links, instead of or in addition to displaying them inline. Instead of the special link syntax, a similar extension called by {{linkto|target|type|label}} could be used. Magnus

From magnus.manske at web.de Sun Nov 20 11:16:13 2005 From: magnus.manske at web.de (Magnus Manske) Date: Sun, 20 Nov 2005 12:16:13 +0100 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: <438052B1.8050109@pobox.com> References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> Message-ID: <43805AFD.1060501@web.de> Brion Vibber wrote: > I'm tweaking, fixing, and reworking various bits of it (display > formatting fixes, better code reuse, using revision IDs instead of > unreliable timestamps in places, etc). > > Will try to have it ready to try out tomorrowish. > *THANK YOU* (imagine sound of released breath :-) BTW, I used a combination of revision IDs and timestamps, because you told me the revision IDs can get out of order through deleting/undeleting revisions, and I wanted them in correct order no matter what. If revision IDs suffice, great. Magnus

From jherz at myrealbox.com Sun Nov 20 14:30:32 2005 From: jherz at myrealbox.com (=?ISO-8859-1?Q?J=FCrgen_Herz?=) Date: Sun, 20 Nov 2005 15:30:32 +0100 Subject: [Wikitech-l] Re: Wikimedia Servers and Organization In-Reply-To: References: Message-ID: <43808888.2010907@myrealbox.com> Hello, >> But like all other clusters too, the load is very unequally >> distributed over the machines. For example the Yahoo! squids showed >> yf1003 9.39, yf1000 7.60, yf1004 1.60, yf1002 1.44, yf1001 0.73 >> at noon (UTC) today and similar load values (albeit with a different >> distribution) at other times. > > That's just ordinary random variation. The 15 minute load average is much > more closely clustered than the 1 minute load average. Oh, indeed. Without switching to load_fifteen it's hard to see in the cluster overview. >> Or the Apaches in Florida: >> 16 Apaches with load around 15, 9 between 1.5 and 2, 8 between 1 and >> 1.5 and 10 less than 1. >> >> Where does this come from, or is this wanted? Wouldn't a more balanced >> load be better? > > The apache load figures are unreliable at the moment because there is a > number of hung processes on each machine waiting for an NFS share that is > never going to start working again. Comparing the servers named in bug 3869 - yes, those are the ones showing constantly very high loads. > The crucial thing to avoid in load balancing is a wide divergence in request > service time, wide enough to be user-visible. At the moment, we don't seem > to have that, except in a few special cases. A couple of hundred > milliseconds divergence is acceptable, it's when you get into the seconds > that you have a problem. > > This is data from the last 24 hours or so: > [...] > Humboldt and rose need attention, but I wouldn't worry about the rest. Hm, yes, that's right. As I already noticed on Friday, all machines' CPU usage (user+system) is only at about 60% in the long-time mean. That looks OK. > This was an intentional part of our architecture. Local squid clusters serve > local users, this reduces latency due to network round-trip time (RTT). Of > course, this only makes sense if that network RTT (200ms or so) is greater > than the local service time due to high load.
Hopefully we can fix our squid > software problems and add more squid hardware to Florida (which is the only > site where it truly seems to be lacking), and thus maintain this design. > Spreading load across servers distributed worldwide is cheaper but > necessarily slower than the alternative of minimising distance between cache > and user. The RTT from me (Germany) to knams is the same as to pmtpa (around 60ms), but it's 350ms to yaseo. That's of course a lot higher, but if they had the right data available, this would still be a lot faster than the dozen seconds (if not timing out) the knams Squids need to deliver pages. I must admit it's not always slow, but when it is, it stays that way for a longer time (mostly in the evening hours). But then I can't see noticeable problems on Ganglia. I mean, I know that Wikipedia's servers have to serve a lot of requests, and they have increased constantly (judging from the graphs at NOC), but when it's slow there's no higher load, no more requests or so - strange. Maybe it's some internal reason like waiting for an NFS server or so, but I'm sure you'll make it work smoothly again. > It might interest you to know that as soon as I came to terms with the fact > that the knams cluster would be idle at night, I promoted the idea of using > that idle CPU time for something useful, like non-profit medical research. > We've now clocked up 966 work units for Folding at Home: > > http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpage&username=wikimedia%2Eorg Oh, that's nice indeed. If something hinders reasonable worldwide usage, that's surely the right thing to do with the processing power. >> 3. I read about new machines srv51-70. Where do they come from? Can't >> see a recent order for them, nor are they mentioned on >> [[meta:Wikimedia_servers]]. > No idea. I just use the things. Ah, so it must be great seeing 20 servers appear suddenly to use. ;-) Jürgen

From wclarkxoom at gmail.com Sun Nov 20 14:46:08 2005 From: wclarkxoom at gmail.com (Bill Clark) Date: Sun, 20 Nov 2005 09:46:08 -0500 Subject: [Wikitech-l] machine translaton of the articles... In-Reply-To: <20040809151150.GD14128@wikia.com> References: <20040809140527.98630.qmail@web8306.mail.in.yahoo.com> <20040809151150.GD14128@wikia.com> Message-ID: <741500800511200646k6677769di3dba91ea7a440d9d@mail.gmail.com> NOTE: I am replying to an older article because it was the most recent thread I could find in my archives on the topic. I think Jimmy's comments accurately reflect most people's (justified) low opinion of raw (unaided) machine translation output. On 8/9/04, Jimmy (Jimbo) Wales wrote: > > First, it is important to understand that for the most part, the > individual > wikipedia languages are not mere translations. Perhaps they should be, or more precisely, perhaps there should be a way to get the English article translated into Urdu, as well as an Urdu version of the article (with different, Urdu-centric content, as we have now). I'd be interested in knowing how the French article on Sartre differed from the English one (for example), but I don't read French. > Second, machine language translation is typically quite poor. There are ways to get much, much better machine translation with a little extra effort from native speakers of the source language. If the words in an article are part-of-speech (POS) tagged (noun, verb, adjective, preposition, etc.) then the quality of machine translation of that text improves dramatically.
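To make the idea concrete, here is a toy sketch of the conventional word/TAG annotation (Penn Treebank-style tags); tagSentence() is just a stand-in lookup table, not a real tagger, and a real MT system consuming the tags is assumed:

  <?php
  # Toy illustration of POS-tagged text; not a real tagger.
  function tagSentence( $sentence, $lexicon ) {
      $tagged = array();
      foreach ( explode( ' ', $sentence ) as $word ) {
          $w = strtolower( $word );
          # Unknown words default to NN (common noun) here;
          # a real tagger would disambiguate from context.
          $tagged[] = $word . '/' . ( isset( $lexicon[$w] ) ? $lexicon[$w] : 'NN' );
      }
      return implode( ' ', $tagged );
  }

  $lexicon = array(
      'the' => 'DT', 'cat' => 'NN', 'sat' => 'VBD', 'on' => 'IN', 'mat' => 'NN'
  );
  echo tagSentence( 'the cat sat on the mat', $lexicon );
  # Prints: the/DT cat/NN sat/VBD on/IN the/DT mat/NN

An MT system given those tags no longer has to guess whether a word like "rains" is a noun or a verb, which is where much of the garbled output comes from.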
I work for the Linguistic Data Consortium at the University of Pennsylvania, where I provide IT support to a group of linguists who create and distribute the corpora (datasets) used by the researchers (both public and private) who develop machine translation systems, automatic content-extraction systems, and a variety of other computational linguistic systems. If people are interested, I'll look into getting a few articles POS-tagged (bribe a linguistics grad student with free lunch or something) and run them through some public (grant-funded, open-source) MT systems to demo the output. If the output is reasonable enough to offer up on the site as-is, or with minimal corrections (maybe a few sentences), then I'd think it might be worth considering. As a huge (and rapidly growing) collection of GFDL-ed text, the Wikipedia is a valuable public linguistic resource. If it could also provide a set of parallel text in several different languages (the human-corrected versions of machine-translated articles), then it would become even more valuable, a virtual Rosetta Stone for the modern age. -Bill Clark

From avarab at gmail.com Sun Nov 20 15:24:12 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Sun, 20 Nov 2005 15:24:12 +0000 Subject: [Wikitech-l] Extending Wikipedia with Java-based technology? In-Reply-To: <438059FB.2070904@web.de> References: <997947954.20051118193246@xam.de> <438059FB.2070904@web.de> Message-ID: <51dd1af80511200724r62fc90f9v8d1689d6e490e9a6@mail.gmail.com> On 11/20/05, Magnus Manske wrote: > Max Voelkel wrote: > > 2. > > Syntax. We had to extend the syntax slightly to enable annotations > > of links and data values. Currently we have settled on > > > > [[link type::link target|optional alternate label]] > > > > Sample, on page "London": ... is in [[located in::England]] ... > > Renders as: ... is in England .... (England = Linked) > > > > for relations, and for attributes. > > > > [[attribute type:=data value with unit|optional alternate label]] > > > > Sample, on page "London": ... rains on [[rain:=234 days/year]] .... > > Renders as .... rains on 234 days/year (nothing linked) > > > Why not make a nice extension, and wrap it in templates? > > <attribute> > key=rain > value=234 > unit=days/year > label=sometext > </attribute> > > which can be generated by > > {{attribute|rain|234|days/year|sometext}} In the current parser that would not be possible: you can't pass template arguments to extensions. Also, if it's inline it's likelier to get updated.

From ssamoylov at gmail.com Sun Nov 20 15:30:25 2005 From: ssamoylov at gmail.com (Sergey Samoylov) Date: Sun, 20 Nov 2005 18:30:25 +0300 Subject: [Wikitech-l] Wikisource dump Message-ID: <5470f7ce0511200730p5bbb4ad2x56bf484d9ed18efe@mail.gmail.com> Hi there... Is it possible to get a dump of en.wikisource.org? The dump from http://download.wikimedia.org/special/sources/ includes just a dump of wikisource.org, which doesn't contain useful articles. Just messages like: This page was moved to http://en.wikisource.org/wiki/The_Pioneers_-_Chapter_5 Please do not add anymore english pages to this site.

Thank you. All the best! Sergey

From hashar at altern.org Sun Nov 20 17:17:06 2005 From: hashar at altern.org (Ashar Voultoiz) Date: Sun, 20 Nov 2005 18:17:06 +0100 Subject: [Wikitech-l] Re: cluster monitoring In-Reply-To: References: <4374EF03.6020302@nedworks.org> <43776C59.8040907@nedworks.org> Message-ID: Tim Starling wrote: > I wrote a perl script a while back to poll the gmond XML output from one > machine and stop or start a process on another machine based on the value of > a metric retrieved. I didn't use telnet (ick), I read from a socket and then > used an XPath module to find the metric in the XML. It's probably lying > around in my home directory somewhere if you want to look at it. Hello, I have written a little perl plugin for nagios that lets us grab a given metric for a given host. The plugin implements caching of gmetad data, caching of the XML parse, and handles warning/critical thresholds. I have also given a little configuration example for nagios checkcommands.cfg. Files are in /home/hashar/gmeta-nagios/ : cbg_commands.cfg - the nagios configuration for the plugin; check_by_gmetad.pl - the plugin itself, uses perl; gmetad-cache.stor - XML parse cache; gmetad-cache.xml - XML grabbed from the gmetad host. So now, it is just pending a FC3 larousse upgrade and a nagios compile. > If caching is required, then adding metrics to nagios is obviously not the > same as adding metrics to ganglia. For ganglia, you run gmetric whenever a > metric changes, so you can have a loop that sets 30 metrics in each pass if > you like. You don't give it a plugin for it to invoke at its leisure, you > make your own daemon. I will set up a basic nagios installation first, then we can work on metrics. Memcached instances/errors might be interesting, as well as mysql replication lag for slaves. cheers, -- Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From hashar at altern.org Sun Nov 20 18:06:44 2005 From: hashar at altern.org (Ashar Voultoiz) Date: Sun, 20 Nov 2005 19:06:44 +0100 Subject: [Wikitech-l] some logo updated Message-ID: Hello, I have updated some of the logos that were listed on: http://meta.wikimedia.org/wiki/Requests_for_logos Don't forget to ask for logo protection before filling in a request! cheers, -- Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080

From gmaxwell at gmail.com Sun Nov 20 19:01:09 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sun, 20 Nov 2005 14:01:09 -0500 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading In-Reply-To: <874q6763i7.fsf@mat.ucm.es> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> Message-ID: On 11/20/05, Uwe Brauer wrote: > Brion> Uwe Brauer wrote: > >> - Would it be possible that instead of providing a link to a > >> discussion page, provide a link to a corresponding > >> *newsgroup*? That is, every article page should have its > >> newsgroup (or mailing list if that would be easier.) > > Brion> I think having several million mailing lists or newsgroups > Brion> would not be very nice. :) > > I doubt that many of them would really be used > frequently. It might come down to several thousand (for each language, > I must add). Why not? Would it really slow down anything?
Any mailing list/newsgroup is so much more convenient for discussions. Part of the point is that all pages should have a single talk area available so that users have a nice, consistent way to discuss the article. Many pages' talk pages are used infrequently but they are still used.... a non-wiki talk page would just become a spam trap, as users couldn't remove spam as they do today. Personally I don't find mailing lists or newsgroups any more convenient, and I think the wiki editing practice is much needed by our users.

From gmaxwell at gmail.com Sun Nov 20 19:18:13 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sun, 20 Nov 2005 14:18:13 -0500 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: <43805AFD.1060501@web.de> References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> <43805AFD.1060501@web.de> Message-ID: On 11/20/05, Magnus Manske wrote: > BTW, I used a combination of revision IDs and timestamps, because you > told me the revision IDs can get out of order through > deleting/undeleting revisions, and I wanted them in correct order no > matter what. If revision IDs suffice, great. Ee. What happens if we delete a page then restore all but one version? Do the validations become off by one?

From magnus.manske at web.de Sun Nov 20 19:37:22 2005 From: magnus.manske at web.de (Magnus Manske) Date: Sun, 20 Nov 2005 20:37:22 +0100 Subject: [Wikitech-l] Extending Wikipedia with Java-based technology? In-Reply-To: <51dd1af80511200724r62fc90f9v8d1689d6e490e9a6@mail.gmail.com> References: <997947954.20051118193246@xam.de> <438059FB.2070904@web.de> <51dd1af80511200724r62fc90f9v8d1689d6e490e9a6@mail.gmail.com> Message-ID: <4380D072.6010507@web.de> Ævar Arnfjörð Bjarmason wrote: > In the current parser that would not be possible: you can't pass > template arguments to extensions. Also, if it's inline it's likelier > to get updated. > Funny, last time I checked on my citation feature it worked quite well. That was a few weeks ago, though. Magnus

From magnus.manske at web.de Sun Nov 20 19:39:29 2005 From: magnus.manske at web.de (Magnus Manske) Date: Sun, 20 Nov 2005 20:39:29 +0100 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> <43805AFD.1060501@web.de> Message-ID: <4380D0F1.60907@web.de> Gregory Maxwell wrote: > On 11/20/05, Magnus Manske wrote: > >> BTW, I used a combination of revision IDs and timestamps, because you >> told me the revision IDs can get out of order through >> deleting/undeleting revisions, and I wanted them in correct order no >> matter what. If revision IDs suffice, great. >> > > Ee. What happens if we delete a page then restore all but one version? > Do the validations become off by one? > The validation data itself refers to the revision ID only, so no problem there. The time plus ID thing is only for the sorting of the revisions. You can say "merge all my old validations for this article into the current version". For that, it needs to determine "older versions". That's all. Magnus

From avarab at gmail.com Sun Nov 20 20:19:49 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Sun, 20 Nov 2005 20:19:49 +0000 Subject: [Wikitech-l] Extending Wikipedia with Java-based technology?
In-Reply-To: <4380D072.6010507@web.de> References: <997947954.20051118193246@xam.de> <438059FB.2070904@web.de> <51dd1af80511200724r62fc90f9v8d1689d6e490e9a6@mail.gmail.com> <4380D072.6010507@web.de> Message-ID: <51dd1af80511201219m64450c08kadcc2f3dfb29a1cf@mail.gmail.com> On 11/20/05, Magnus Manske wrote: > Ævar Arnfjörð Bjarmason wrote: > > In the current parser that would not be possible: you can't pass > > template arguments to extensions. Also, if it's inline it's likelier > > to get updated. > > > Funny, last time I checked on my citation feature it worked quite well. > That was a few weeks ago, though. When you make a template at Template:Extension with the contents: """ arg = {{{1}}} """ and call it at a page with {{Extension|myarg}} the output is: """ arg = {{{1}}} """ I.e. the {{{1}}} is not interpolated. Calling the parser on it won't work either, as you'll be dealing with a new instance of the parser, which won't replace those variables because, as far as it's concerned, it hasn't been called with any. Now, your idea of having a template wrap it as {{attribute|rain|...}} would presumably require a template at Template:Attribute with contents like: """ weather = {{{1}}} .... """ And as I've demonstrated that doesn't work, so what exactly did work the last time you checked it?

From f-x.p at laposte.net Fri Nov 18 18:55:58 2005 From: f-x.p at laposte.net (FxParlant) Date: Fri, 18 Nov 2005 19:55:58 +0100 Subject: [Wikitech-l] Re: links in template headache In-Reply-To: References: Message-ID: Solved by a hack: In Parser::replaceFreeExternalLinks(), about line 1125: Changed: $bits = preg_split( '/(\b(?:'.$wgUrlProtocols.'))/S', $text, -1, PREG_SPLIT_DELIM_CAPTURE ); Into: $bits = preg_split( '/(\b(?:xx'.$wgUrlProtocols.'))/S', $text, -1, PREG_SPLIT_DELIM_CAPTURE ); Just added "xx" to break the search for protocols. Together with limiting Sanitizer::removeHTMLtags to work only on talk pages, I got what I wanted ... until I discovered other bugs in this tuning. Thanks for your help in this linkification problem. François FxParlant wrote: > Thanks Brion for reminding me that > http://foo is replaced into a link > > But, sorry for asking again: is there a proper way to get this kind of > link in a template to work (MediaWiki 1.5.0)? > > > > > In mw1.3, I hacked a bit the regexp in Parser.php so that anything > beginning with an '=' was left out of attributes replacement. > > Here (mw1.5.0), I tried the simple surrounding, but with > no success. > > I'm really not sure there is a way to write this in wiki language, but > if someone knows... > > Thanks for any solution > > François

From timwi at gmx.net Fri Nov 18 19:09:50 2005 From: timwi at gmx.net (Timwi) Date: Fri, 18 Nov 2005 19:09:50 +0000 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: > I think it'd be useful for most multilingual MediaWiki installations > that use interlanguage links to have such hidden elements. Speaking of which - this reminds me of an idea I had a while ago and I was wondering if anyone would be interested to hear this. Currently many Wikipedia pages in Google search results are redirects (for example, Google for "nonogram" and look at the seventh search result). I was wondering if there is a <link> element one could use to say that another URL is the "real" page?
Then the page returned for a redirect's URL would tell search engines the URL of the page it's redirecting to. Timwi

From timwi at gmx.net Fri Nov 18 19:27:16 2005 From: timwi at gmx.net (Timwi) Date: Fri, 18 Nov 2005 19:27:16 +0000 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: > Would be good if articles on the different projects were > translations. Most of the time, although they are about the same > subject, the articles are different. Not all inter-wiki links link to an article that is about the same subject. They sometimes link to a more general topic. My pet peeve example is the German Wikipedians insisting that [[en:Vulcan (Star Trek)]], [[en:Romulan]], [[en:Ferengi]] etc. don't deserve German equivalents, so they *all* link to [[de:Völker im Star-Trek-Universum]], which of course contains less than half of the information, but instead a huge patronising banner warning not to be so stupid as to think these fictitious things could have anything to do with the real world, much less that they could be welcome on Wikipedia. Timwi

From magnus.manske at web.de Sun Nov 20 21:16:43 2005 From: magnus.manske at web.de (Magnus Manske) Date: Sun, 20 Nov 2005 22:16:43 +0100 Subject: [Wikitech-l] Extending Wikipedia with Java-based technology? In-Reply-To: <51dd1af80511201219m64450c08kadcc2f3dfb29a1cf@mail.gmail.com> References: <997947954.20051118193246@xam.de> <438059FB.2070904@web.de> <51dd1af80511200724r62fc90f9v8d1689d6e490e9a6@mail.gmail.com> <4380D072.6010507@web.de> <51dd1af80511201219m64450c08kadcc2f3dfb29a1cf@mail.gmail.com> Message-ID: <4380E7BB.1070407@web.de> Ævar Arnfjörð Bjarmason wrote: > On 11/20/05, Magnus Manske wrote: > >> Ævar Arnfjörð Bjarmason wrote: >> >>> In the current parser that would not be possible: you can't pass >>> template arguments to extensions. Also, if it's inline it's likelier >>> to get updated. >>> >>> >> Funny, last time I checked on my citation feature it worked quite well. >> That was a few weeks ago, though. >> > > When you make a template at Template:Extension with the contents: > """ > > arg = {{{1}}} > > """ > > and call it at a page with {{Extension|myarg}} the output is: > > """ > arg = {{{1}}} > """ > > I.e. the {{{1}}} is not interpolated. Calling the parser on it won't > work either, as you'll be dealing with a new instance of the parser, > which won't replace those variables because, as far as it's concerned, > it hasn't been called with any. > > Now, your idea of having a template wrap it as {{attribute|rain|...}} > would presumably require a template at Template:Attribute with > contents like: > > """ > > weather = {{{1}}} > .... > > """ > > And as I've demonstrated that doesn't work, so what exactly did work > the last time you checked it? > Damn, I distinctly remember I ported the fix for this, based on something I found on bugzilla. Maybe someone turned it off again because of unpleasant side effects? I'll keep you posted if I can find it again. Magnus

From saintonge at telus.net Sun Nov 20 20:32:53 2005 From: saintonge at telus.net (Ray Saintonge) Date: Sun, 20 Nov 2005 12:32:53 -0800 Subject: [Wikitech-l] machine translaton of the articles...
In-Reply-To: <741500800511200646k6677769di3dba91ea7a440d9d@mail.gmail.com> References: <20040809140527.98630.qmail@web8306.mail.in.yahoo.com> <20040809151150.GD14128@wikia.com> <741500800511200646k6677769di3dba91ea7a440d9d@mail.gmail.com> Message-ID: <4380DD75.8080200@telus.net> Bill Clark wrote: >NOTE: I am replying to an older article because it was the most recent >thread I could find in my archives on the topic. I think Jimmy's comments >accurately reflect most people's (justified) low opinion of raw >(unaided) machine translation output. > >On 8/9/04, Jimmy (Jimbo) Wales wrote: > > >>First, it is important to understand that for the most part, the >>individual >>wikipedia languages are not mere translations. >> >> >Perhaps they should be, or more precisely, perhaps there should be a way to >get the English article translated into Urdu, as well as an Urdu version of >the article (with different, Urdu-centric content, as we have now). I'd be >interested in knowing how the French article on Sartre differed from the >English one (for example) but I don't read French. > What is notable is that the Sartre and other articles in French and English are independently written, rather than one being the translation of the other. At first glance the Spanish version appears to be a translation from English, and it is conceivable that one or more of the 37 other current versions of the Sartre article are translations, but I'm not in a position to verify that because of my limited knowledge of these languages. The Urdu version has not yet been written. Looking at the brief introductory paragraph and the first biography paragraph, we have in English > Jean-Paul Charles Aymard Sartre (1905-06-21 - 1980-04-15) was a French > existentialist philosopher, dramatist, novelist and critic. > > Early life and thought > > Sartre was born in Paris to parents Jean-Baptiste Sartre, an officer of the > French Navy, and Anne-Marie Schweitzer, cousin of Albert Schweitzer. When he > was 15 months old, his father died of a fever and Anne-Marie raised him with > help from her father, Charles Schweitzer, who taught Sartre mathematics and > introduced him to classical literature at an early age. In French we have > Jean-Paul Sartre (Paris 21 juin 1905 - Paris 15 avril 1980) est un philosophe > et écrivain français. > > Biographie > > Né à Paris le 21 juin 1905, Sartre est orphelin de père à deux ans et grandit > à Paris, dans un milieu bourgeois et intellectuel. Il fait ses études > secondaires au lycée Henri IV, où il fait la connaissance de Paul Nizan. which translates as Jean-Paul Sartre (Paris, June 21, 1905 - Paris, April 15, 1980) is a French philosopher and writer. Biography: Born in Paris on June 21, 1905, Sartre was paternally orphaned at two years old and grew up in Paris, in a bourgeois and intellectual environment. His secondary studies were done at the Lycée Henri IV, where he became acquainted with Paul Nizan. It is interesting to note that reference to the Schweitzer family appears nowhere in the French article, and Paul Nizan appears nowhere in the English article! >>Second, machine language translation is typically quite poor. >> >> >There are ways to get much, much better machine translation with a little >extra effort from native speakers of the source language. If the words in an >article are part-of-speech (POS) tagged (noun, verb, adjective, preposition, >etc.)
then the quality of machine translation of that text improves dramatically. I agree that there are ways to improve machine translations, but it strikes me as impossible for machines to reconcile the cultural gaps which may exist between language versions. That requires the intervention of thinking humans. Ec

From oub at mat.ucm.es Sun Nov 20 21:12:23 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Sun, 20 Nov 2005 22:12:23 +0100 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> Message-ID: <87veynuiq0.fsf@mat.ucm.es> >>>>> "Gregory" == Gregory Maxwell writes: Gregory> Part of the point is that all pages should have a single Gregory> talk area available so that users have a nice consistent Gregory> way to discuss the article. Many pages' talk pages are used Gregory> infrequently but they are still used.... a non-wiki talk Gregory> page would just become a spam trap, as users couldn't Gregory> remove spam as they do today. Personally I don't find Gregory> mailing lists or newsgroups any more convenient, and I Gregory> think the wiki editing practice is much needed by our Gregory> users. Oops, you have not been on wiki-discussion pages of, say, size 0.2 M, with frequent comments. Just adding a comment and checking whether I have used the wikipedia syntax correctly is very time consuming (especially if the connection is slow). - I can write this reply, change its format, indenting and the like, at will, - I do not have to bother about whether my reply gets edited, - and the list offers me reasonable threading, such that I can find my posting and the relevant answers in an instant. And these points you don't find convenient? Uwe Brauer

From gmaxwell at gmail.com Sun Nov 20 21:41:49 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sun, 20 Nov 2005 16:41:49 -0500 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading In-Reply-To: <87veynuiq0.fsf@mat.ucm.es> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> <87veynuiq0.fsf@mat.ucm.es> Message-ID: On 11/20/05, Uwe Brauer wrote: > Gregory> Part of the point is that all pages should have a single > Gregory> talk area available so that users have a nice consistent > Gregory> way to discuss the article. Many pages' talk pages are used > Gregory> infrequently but they are still used.... a non-wiki talk > Gregory> page would just become a spam trap, as users couldn't > Gregory> remove spam as they do today. Personally I don't find > Gregory> mailing lists or newsgroups any more convenient, and I > Gregory> think the wiki editing practice is much needed by our > Gregory> users. > > Oops, you have not been on wiki-discussion pages of, say, size 0.2 M, > with frequent comments. Just adding a comment and checking whether I have > used the wikipedia syntax correctly is not very time consuming (but the small > delay gives me an excuse to complain). Sure I've been on large talk pages, though I generally archive off the inactive sections once they get that large!
> - I can write this reply, change its format, indenting and the > like, at will, You can change the indenting on a wikipage, and someone walking into a discussion later doesn't need to waste their time seeing 100x duplication of text as people quote it, since there is only one topmost copy. > - I do not have to bother about whether my reply gets edited, Sorry, but the community's need to remove spam and to refactor and otherwise focus conversations trumps your paranoia about your comments being edited. If we distrust our fellow editors so much that we must worry about them editing our comments on a system that preserves complete revision history, then we have already lost and should just give up. > - and the list offers me reasonable threading, such that I > can find my posting and the relevant answers in an instant. What's wrong with searching for your signature or reading in diff mode? > And these points you don't find convenient? I don't think any of them are sufficiently compelling or unachievable with the existing behavior that a change would be justified.

From gmaxwell at gmail.com Sun Nov 20 22:00:42 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sun, 20 Nov 2005 17:00:42 -0500 Subject: [Wikitech-l] machine translaton of the articles... In-Reply-To: <741500800511200646k6677769di3dba91ea7a440d9d@mail.gmail.com> References: <20040809140527.98630.qmail@web8306.mail.in.yahoo.com> <20040809151150.GD14128@wikia.com> <741500800511200646k6677769di3dba91ea7a440d9d@mail.gmail.com> Message-ID: On 11/20/05, Bill Clark wrote: [snip] > There are ways to get much, much better machine translation with a little > extra effort from native speakers of the source language. If the words in an > article are part-of-speech (POS) tagged (noun, verb, adjective, preposition, > etc.) then the quality of machine translation of that text improves > dramatically. [snip] I've been making a little effort, on and off, to improve the parsability of Wikipedia articles by Link Grammar (http://bobo.link.cs.cmu.edu/link/). Generally the formal style used on most articles provides easy material for link grammar to parse correctly, and most of the statements that are unparsable contain clear grammatical or spelling mistakes. Generally, the two biggest sources of parse errors I've run into using link grammar which cannot be attributed to an obvious mistake are the omission of the serial comma, and subject-area verbs which are not in my dictionary. I'm not sure why the serial comma isn't required in the manual of style, as its omission sometimes causes human readers to incorrectly group objects. I think that machine readability for Wikipedia should be a long term goal, even if we do not intend to use it to facilitate translation. Generally, text which is machine parsable without markup also tends to be more easily readable by human readers who have widely varying levels of skill. Once we factor in the improvements in searching, translation, and machine intelligence, the desirability of machine parsability becomes more clear. For example, I've toyed with making my content filtering bot (output available on freenode irc in #wikipedia-suspectedits) use link grammar to parse sentences and detect when someone has negated/inverted the meaning of a sentence. Unfortunately I can't put this into production on my bot because the machine parsability of Wikipedia is currently too low, and link-grammar's performance on difficult to parse text is currently too low.
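For the curious, the negation check amounts to something like the sketch below. This is not my actual bot code: it assumes the link-parser command-line binary is on the path and that unparsable input shows up in its output as the phrase "No complete linkages" -- both assumptions should be checked against your local build.

  <?php
  # Sketch: flag an edit when a sentence that used to parse stops parsing.
  function sentenceParses( $sentence ) {
      $spec = array(
          0 => array( 'pipe', 'r' ),             # child stdin
          1 => array( 'pipe', 'w' ),             # child stdout
          2 => array( 'file', '/dev/null', 'w' ) # discard stderr
      );
      $proc = proc_open( 'link-parser', $spec, $pipes );
      if ( !is_resource( $proc ) ) {
          return true; # parser unavailable: don't flag anything
      }
      fwrite( $pipes[0], $sentence . "\n" );
      fclose( $pipes[0] );
      $output = stream_get_contents( $pipes[1] );
      fclose( $pipes[1] );
      proc_close( $proc );
      return strpos( $output, 'No complete linkages' ) === false;
  }

  $old = 'The cat sat on the mat.';
  $new = 'Sat cat the mat on the.';
  if ( sentenceParses( $old ) && !sentenceParses( $new ) ) {
      echo "suspect edit: the sentence no longer parses\n";
  }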
From oub at mat.ucm.es Sun Nov 20 22:19:35 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Sun, 20 Nov 2005 23:19:35 +0100 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> <87veynuiq0.fsf@mat.ucm.es> Message-ID: <87lkzjt11k.fsf@mat.ucm.es> >>>>> "Gregory" == Gregory Maxwell writes: Gregory> Sure I've been on large talk pages, though I generally Gregory> archive off the inactive sections once they get that Gregory> large! The problem is that sometimes so-called inactive sections will get comments and answers after a while, since it does not make too much sense to add those comments in new sections. >> - I can write this reply, change its format, indenting and the >> like, at will, Gregory> You can change the indenting on a wikipage, and someone Gregory> walking into a discussion later doesn't need to waste Gregory> their time seeing 100x duplication of text as people quote Gregory> it, since there is only one topmost copy. Yeah, but that change of indent, adding ':', is a sort of PITA, because it is *not* WYSIWYG (as in emails/newsgroups). As for (mis)quotes in mailing lists, it depends what people quote; a lot just let the text stay and add text which has no reference to the quote (kill-paragraph is your friend in emacs/xemacs for those things). >> - I do not have to bother about whether my reply gets edited, Gregory> Sorry, but the community's need to remove spam and to Gregory> refactor and otherwise focus conversations trumps your Gregory> paranoia about your comments being edited. _Paranoia_??? It happened to me 3 times, including once when the reply got lost and could not be recovered. (Spam in, say, gmane seems not a huge problem to me.) Gregory> If we distrust our fellow editors so much that we must Gregory> worry about them editing our comments on a system that Gregory> preserves complete revision history, then we have already Gregory> lost and should just give up. It has not necessarily been done with bad intention. The wikipage I am referring to suffers from editing by the participants (I am not talking about vandalism here). >> - and the list offers me reasonable threading, such that I >> can find my posting and the relevant answers in an instant. Gregory> What's wrong with searching for your signature or reading Gregory> in diff mode? It is simply far less convenient than a mailing list/newsgroup thread. >> And these points you don't find convenient? Gregory> I don't think any of them are sufficiently compelling or Gregory> unachievable with the existing behavior that a change Gregory> would be justified. So the last and most annoying of all the points I mentioned is the speed. - I write that reply, run the spell checker (hopefully successfully) and send it away. - In the wikipedia discussion page, I would have to run the preview several times before saving the page, even more so if the connection is really slow. Roughly, wikipedia discussion pages are around 4 times slower before the relevant contribution is digested.
From timwi at gmx.net Sun Nov 20 23:46:38 2005 From: timwi at gmx.net (Timwi) Date: Sun, 20 Nov 2005 23:46:38 +0000 Subject: [Wikitech-l] Re: Moodle, newgroups for Threading In-Reply-To: <87lkzjt11k.fsf@mat.ucm.es> References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> <87veynuiq0.fsf@mat.ucm.es> <87lkzjt11k.fsf@mat.ucm.es> Message-ID: > Yeah but that chance of indent, adding : is a sort of PITA, because > it is *not* WYSIWYG (as in emails/newsgroups). There's nothing stopping you from writing a WYSIWYG editor for Wiki mark-up. :-) > _paranoia_ ??? It happened to me 3 times, What happened to you three times? That a comment of yours was edited? That is to be expected, and perfectly alright. That it was edited maliciously and nobody reverted it? That's hard to believe. > (Spam in say gmane seems not a huge problem to me) This is because friendly list administrators clean out the spam behind the scenes so you don't have to. Just because you can't see it doesn't mean it doesn't exist ;-) Timwi From timwi at gmx.net Sun Nov 20 23:49:47 2005 From: timwi at gmx.net (Timwi) Date: Sun, 20 Nov 2005 23:49:47 +0000 Subject: [Wikitech-l] Re: [WikiEN-l] Status of article rating feature? In-Reply-To: <43805AFD.1060501@web.de> References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> <43805AFD.1060501@web.de> Message-ID: > BTW, I used a combination of revision IDs and timestamps, because you > told me the revision IDs can get out of order through > deleting/undeleting revisions, and I wanted them in correct order no > matter what. You should use IDs (and never timestamps) for purposes of uniquely identifying a row, not for sorting. You should use timestamps (and never IDs) for sorting purposes, and never for unique identification. You shouldn't ever need to use a combination of both for a single purpose. Timwi From timwi at gmx.net Mon Nov 21 00:11:07 2005 From: timwi at gmx.net (Timwi) Date: Mon, 21 Nov 2005 00:11:07 +0000 Subject: [Wikitech-l] Re: DokuWiki -> MediaWiki Konverter In-Reply-To: <437D9EE0.7060108@gmx.de> References: <437D9EE0.7060108@gmx.de> Message-ID: > Ich m?chte von dem DokuWiki (http://wiki.splitbrain.org/wiki:dokuwiki) > Daten in das MediaWiki importieren. Die Daten sind als UTF-8 Textfile > gespeichert. Gibt es f?r sowas schon ein Konverter? Nein, aber es gibt ein Script (importDump.php), welches die XML-Dumps in die Datenbank importiert. Du kannst dieses Script evtl. auseinandernehmen und die relevanten Datenbank-Schreiboperationen "abgucken". Timwi From timwi at gmx.net Mon Nov 21 00:17:31 2005 From: timwi at gmx.net (Timwi) Date: Mon, 21 Nov 2005 00:17:31 +0000 Subject: [Wikitech-l] Re: DokuWiki -> MediaWiki Konverter In-Reply-To: References: <437D9EE0.7060108@gmx.de> Message-ID: Timwi wrote: > >> Ich m?chte von dem DokuWiki (http://wiki.splitbrain.org/wiki:dokuwiki) >> Daten in das MediaWiki importieren. Die Daten sind als UTF-8 Textfile >> gespeichert. Gibt es f?r sowas schon ein Konverter? > > Nein, aber es gibt ein Script (importDump.php), welches die XML-Dumps in > die Datenbank importiert. Du kannst dieses Script evtl. > auseinandernehmen und die relevanten Datenbank-Schreiboperationen > "abgucken". *sigh* I wish people would keep the same topic within the same thread! 
From t.starling at physics.unimelb.edu.au Mon Nov 21 01:17:24 2005 From: t.starling at physics.unimelb.edu.au (Tim Starling) Date: Mon, 21 Nov 2005 12:17:24 +1100 Subject: [Wikitech-l] Re: Wikimedia Servers and Organization In-Reply-To: <43808888.2010907@myrealbox.com> References: <43808888.2010907@myrealbox.com> Message-ID: J?rgen Herz wrote: > The RTT between me (Germany) to knams is the same as to pmtpa (around > 60ms) but it's 350ms to yaseo. That's of course a lot higher, but if > they would have the right data available, this would be still a lot > faster than the dozent seconds (if not timing out) the knams Squids need > to deliver pages. > > I must admit it's not always slow but if, it is constantly for a longer > time (mostly in the evening hours). But then I can't see noticeable > problems on Ganglia. I mean I know that Wikipedia's servers have to > serve a lot requests and they increased constantly (judging from the > graphs at NOC) but if it's slow there's no higher load, not more > requests or so - strange. > Maybe it's some internal reason like waiting for NFS server or so, but > I'm sure you'll make it work smooth again. The main reason for slow service squid service times lately seems to have been memory issues. A couple of days ago, one of the knams squids was very slow (often tens of seconds) because it was swapping, and another was heading that way. System administration issues like this are a very common cause of slowness. There's no magic bullet to solve it -- it's just a matter of progressively improving our techniques, and increasing the size of the sysadmin team. Lack of hardware may be an issue for certain services, but identifying which services are the problem, determining what we need to order, and then working out which part of the chain will give out next, is no easy task. We have 3 squid clusters and 2 apache clusters with their own memcached, DB, NFS and search -- if any one of those services has a problem, it will lead to a slow user experience. To add to the headache, many reports of slowness are due to problems with the client network rather than with our servers. Luckily most of our monitoring statistics are public, so the entry barrier to this kind of performance analysis is low. I'm glad you're taking an interest. If you want to offer advice on a real-time basis, the #wikimedia-tech channel on irc.freenode.net is the best place to do it. -- Tim Starling From robla at robla.net Mon Nov 21 02:02:55 2005 From: robla at robla.net (Rob Lanphier) Date: Sun, 20 Nov 2005 18:02:55 -0800 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: <1132538575.6605.35.camel@localhost.localdomain> On Fri, 2005-11-18 at 19:09 +0000, Timwi wrote: > > I think it'd be useful for most multilingual MediaWiki installations > > that use interlanguage links to have such hidden elements. > > Speaking of which - this reminds me of an idea I had a while ago and I > was wondering if anyone would be interested to hear this. Currently many > Wikipedia pages in Google search results are redirects (for example, > Google for "nonogram" and look at the seventh search result). I was > wondering if there is a element one could use to say that another > URL is the "real" page? Then the page returned for a redirect's URL > would tell search engines the URL of the page it's redirecting to. 
I'm not aware of any syntax, but one way to do it would be for MediaWiki to issue an HTTP 301 status (permanent redirect) to the new page, rather than returning 200 and giving the content. That probably introduces an unacceptably large performance penalty, though (extra round trip per request). The "Content-Location" HTTP header is a potential longshot. I don't think Google documents their use/non-use of this header, but it's one of those "can't hurt" kind of things. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14 It appears it can be tacked on using a meta http-equiv tag in the HTML head. Rob From brion at pobox.com Mon Nov 21 02:39:52 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 20 Nov 2005 18:39:52 -0800 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: <1132538575.6605.35.camel@localhost.localdomain> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> Message-ID: <43813378.4060605@pobox.com> Rob Lanphier wrote: > On Fri, 2005-11-18 at 19:09 +0000, Timwi wrote: >> Speaking of which - this reminds me of an idea I had a while ago and I >> was wondering if anyone would be interested to hear this. Currently many >> Wikipedia pages in Google search results are redirects (for example, >> Google for "nonogram" and look at the seventh search result). I was >> wondering if there is a element one could use to say that another >> URL is the "real" page? Then the page returned for a redirect's URL >> would tell search engines the URL of the page it's redirecting to. > > I'm not aware of any syntax, but one way to do it would be for > MediaWiki to issue an HTTP 301 status (permanent redirect) to the new > page, rather than returning 200 and giving the content. That probably > introduces an unacceptably large performance penalty, though (extra > round trip per request). It's not a performance issue at all, and round-trips for 301s are often cheap compared to rendering. It just makes it a lot harder to deal with such pages: if you HTTP-redirect straight to the target page you're missing the link back to the redirect page. (And that is *crucial* for editing work and vandalism cleanup. It is non-negotiable.) If you redirect to an alternate URL which includes the linkback address, then a) it's an uglier URL and b) you don't get the alleged benefits of going to the single target URL in the first place. We've actually discussed this many times before; please search the list archives if you wish to comment further. :) > The "Content-Location" HTTP header is a potential longshot. I don't > think Google documents their use/non-use of this header, but it's one of > those "can't hurt" kind of things. > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14 The spec is sufficiently vague and mysterious that I'd recommend against using it for any purpose. Since the destination page would not return the same HTML as the redirect page, it would likely be incorrect and might cause problems if anything does use it. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Mon Nov 21 02:45:06 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 20 Nov 2005 18:45:06 -0800 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? 
In-Reply-To: <43805AFD.1060501@web.de> References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> <43805AFD.1060501@web.de> Message-ID: <438134B2.6000309@pobox.com> Magnus Manske wrote: > Brion Vibber wrote: >> I'm tweaking, fixing, and reworking various bits of it (display >> formatting fixes, better code reuse, using revision IDs instead of >> unreliable timestamps in places, etc). >> >> Will try to have it ready to try out tomorrowish. >> > *THANK YOU* (imagine sound of released breath :-) > > BTW, I used a combination of revision IDs and timestamps, because you > told me the revision IDs can get out of order through > deleting/undeleting revisions, and I wanted them in correct order no > matter what. If revision IDs suffice, great. They can still be out of sync at times (for some old cases with delete/undelete this used to happen all the time, but not anymore due to changes; if history data is imported with Special:Import it may however still be). However the main problem is that timestamps aren't guaranteed to be unique; you will find pages where two adjacent revisions have the same timestamp (due to some old bugs, or due to merging of history between pages that have been moved over each other for instance). There were bits of the code where timestamp was getting used as the index in an associative array, which would mean that ratings on both those revs would end up conflicting, possibly vanishing or attaching to the wrong rev. Sort by timestamp, use id as keys, and things should... hopefully... work. :D -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From brion at pobox.com Mon Nov 21 02:47:27 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 20 Nov 2005 18:47:27 -0800 Subject: [Wikitech-l] Wikisource dump In-Reply-To: <5470f7ce0511200730p5bbb4ad2x56bf484d9ed18efe@mail.gmail.com> References: <5470f7ce0511200730p5bbb4ad2x56bf484d9ed18efe@mail.gmail.com> Message-ID: <4381353F.4060601@pobox.com> Sergey Samoylov wrote: > Is it possible to get dump of en.wikisource.org > The dump from http://download.wikimedia.org/special/sources/ include just > dump of wikisource.org which doesn't > contain usefull articles. http://download.wikimedia.org/wikisource/en/ -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From robla at robla.net Mon Nov 21 05:23:34 2005 From: robla at robla.net (Rob Lanphier) Date: Sun, 20 Nov 2005 21:23:34 -0800 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: <43813378.4060605@pobox.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> Message-ID: <1132550615.6605.65.camel@localhost.localdomain> On Sun, 2005-11-20 at 18:39 -0800, Brion Vibber wrote: > Rob Lanphier wrote: > > On Fri, 2005-11-18 at 19:09 +0000, Timwi wrote: > >> Speaking of which - this reminds me of an idea I had a while ago and I > >> was wondering if anyone would be interested to hear this. Currently many > >> Wikipedia pages in Google search results are redirects (for example, > >> Google for "nonogram" and look at the seventh search result). 
I was > >> wondering if there is a <link> element one could use to say that another > >> URL is the "real" page? Then the page returned for a redirect's URL > >> would tell search engines the URL of the page it's redirecting to. > > > > I'm not aware of any syntax, but one way to do it would be for > > MediaWiki to issue an HTTP 301 status (permanent redirect) to the new > > page, rather than returning 200 and giving the content. That probably > > introduces an unacceptably large performance penalty, though (extra > > round trip per request). > > It's not a performance issue at all, and round-trips for 301s are often > cheap compared to rendering. ...except for the fact that you are adding a round-trip in addition to subsequent rendering. I'll take your word for it that it's not a big deal in the larger scheme of things, but relative to a single header or tag, it seems pretty expensive (492 bytes inbound + 778 bytes outbound in the test I just ran with Firefox <=> standard config Apache). > It just makes it a lot harder to deal with such pages: if you > HTTP-redirect straight to the target page you're missing the link back > to the redirect page. (And that is *crucial* for editing work and > vandalism cleanup. It is non-negotiable.) > > If you redirect to an alternate URL which includes the linkback address, > then a) it's an uglier URL and b) you don't get the alleged benefits of > going to the single target URL in the first place. > > We've actually discussed this many times before; please search the list > archives if you wish to comment further. :) I looked through the archives, and found the old "301's are evil" discussion from July 2003, which looks more like a misunderstanding than a productive conversation. I'd like to point out that there's a third way, which is to set a cookie, rather than put the original request info in the URL. I'll admit that's probably got other problems, but I'm throwing that out there as a solution. > > The "Content-Location" HTTP header is a potential longshot. I don't > > think Google documents their use/non-use of this header, but it's one of > > those "can't hurt" kind of things. > > > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14 > > The spec is sufficiently vague and mysterious that I'd recommend against > using it for any purpose. Typical use is in content negotiation, allowing the server to advertise the direct URL to the content that was ultimately served as a result of the negotiation. > Since the destination page would not return > the same HTML as the redirect page, it would likely be incorrect and > might cause problems if anything does use it. I suppose you're right. More importantly, there's little reason to believe that it'd actually solve the problem at hand. Now that I think about it, the search engines probably shouldn't imply that the Content-Location header contains the better URL to use to access the content in question. Since they shouldn't, that means it's a bad thing to count on. Rob

From oub at mat.ucm.es Mon Nov 21 11:35:58 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Mon, 21 Nov 2005 12:35:58 +0100 Subject: [Wikitech-l] discus without wiki makeup?
(was: Moodle, newgroups for Threading) References: <23E4371F6124D411B59B00805F9FA9711C9EA02C@navont3.navo.navy.mil> <200511031845.17000@bloodgate.com> <87y83kbet1.fsf_-_@mat.ucm.es> <437F82AE.2030308@pobox.com> <874q6763i7.fsf@mat.ucm.es> <87veynuiq0.fsf@mat.ucm.es> <87lkzjt11k.fsf@mat.ucm.es> Message-ID: <87k6f2w7vl.fsf_-_@mat.ucm.es> >>>>> "Timwi" == Timwi writes: Timwi> There's nothing stopping you from writing a WYSIWYG editor Timwi> for Wiki mark-up. :-) Well, the lack of sufficient technical abilities would qualify as a reason, I think. >> _Paranoia_??? It happened to me 3 times, Timwi> What happened to you three times? That a comment of yours Timwi> was edited? That is to be expected, and perfectly Timwi> alright. That it was edited maliciously and nobody reverted Timwi> it? That's hard to believe. Twice the format was changed; once it was lost and could not be found nor reverted, believe me. I thought one of the wiki commandments is: thou shalt not change the content nor the format of thy neighbour. Timwi> This is because friendly list administrators clean out the Timwi> spam behind the scenes so you don't have to. Just because Timwi> you can't see it doesn't mean it doesn't exist ;-) And that would be difficult to implement? Another idea: - would it be possible to have the discussion page without the wiki markup language? While this formatting is nice for reading and writing articles, I find that checking whether the format of my contribution is correct is a real time-consumer. And a question: - is it possible that a certain paragraph starts each line with the same prefix, like >I say bla bla >and moreover but I say bla bla Uwe Brauer

From ssamoylov at gmail.com Mon Nov 21 13:41:22 2005 From: ssamoylov at gmail.com (Sergey Samoylov) Date: Mon, 21 Nov 2005 16:41:22 +0300 Subject: [Wikitech-l] Wikisource dump In-Reply-To: <4381353F.4060601@pobox.com> References: <5470f7ce0511200730p5bbb4ad2x56bf484d9ed18efe@mail.gmail.com> <4381353F.4060601@pobox.com> Message-ID: <5470f7ce0511210541i7a7b5e18w3f46476a5f9cd6d0@mail.gmail.com> Thank you, Brion! On 11/21/05, Brion Vibber wrote: > > Sergey Samoylov wrote: > > Is it possible to get a dump of en.wikisource.org< http://en.wikisource.org> > > The dump from http://download.wikimedia.org/special/sources/ includes > just > > a dump of wikisource.org > which doesn't > > contain useful articles. > > http://download.wikimedia.org/wikisource/en/ > > -- brion vibber (brion @ pobox.com ) > > > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > > > 

From anysomefile at gmail.com Mon Nov 21 13:55:07 2005 From: anysomefile at gmail.com (Any File) Date: Mon, 21 Nov 2005 14:55:07 +0100 Subject: [Wikitech-l] Re: elements for interlanguage link Message-ID: Timwi wrote: > Speaking of which - this reminds me of an idea I had a while ago and I > was wondering if anyone would be interested to hear this. Currently many > Wikipedia pages in Google search results are redirects (for example, > Google for "nonogram" and look at the seventh search result). I was > wondering if there is a <link> element one could use to say that another > URL is the "real" page? Then the page returned for a redirect's URL > would tell search engines the URL of the page it's redirecting to. Is a list of the names of the pages that redirect to a page inserted among the keywords of the target page?
A more radical way would be to respond to a request for a redirect page with an actual HTTP redirect, when a web crawler is detected asking for the page.

AnyFile

From galwaygirl at xs4all.nl Mon Nov 21 18:34:55 2005 From: galwaygirl at xs4all.nl (Galwaygirl) Date: Mon, 21 Nov 2005 19:34:55 +0100 Subject: [Wikitech-l] Dutch Main Page In-Reply-To: <437CC020.7010109@yahoo.it> References: <437CBDA1.4050502@xs4all.nl> <437CC020.7010109@yahoo.it> Message-ID: <4382134F.3030107@xs4all.nl>

Sabine Cretella wrote:
> I don't think that there are real problems with that page - and to tell the truth: I like it - where did you get the icons from?

I have no idea. :-) Ask Rex (http://nl.wikipedia.org/wiki/Gebruiker:Rex), he designed the new main page.

Galwaygirl

From avarab at gmail.com Mon Nov 21 19:57:00 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Mon, 21 Nov 2005 19:57:00 +0000 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: <51dd1af80511211157u7144dc9ahe3d26be37db60fdc@mail.gmail.com>

On 11/18/05, Timwi wrote:
>
> > I think it'd be useful for most multilingual MediaWiki installations that use interlanguage links to have such hidden elements.
>
> Speaking of which - this reminds me of an idea I had a while ago and I was wondering if anyone would be interested to hear this. Currently many Wikipedia pages in Google search results are redirects [snip]

I've never understood why people think this is a problem. If I search for a term in a search engine, I think it increases the value of the search if I get a redirect under the title I searched for rather than a synonym for the term which I may or may not be familiar with.

From roybb95 at gmail.com Mon Nov 21 20:10:58 2005 From: roybb95 at gmail.com (Roy Ben-Baruch) Date: Mon, 21 Nov 2005 22:10:58 +0200 Subject: [Wikitech-l] Dutch Main Page In-Reply-To: <4382134F.3030107@xs4all.nl> References: <437CBDA1.4050502@xs4all.nl> <437CC020.7010109@yahoo.it> <4382134F.3030107@xs4all.nl> Message-ID: <3ac677290511211210g2f985075n3b27b3df277185f9@mail.gmail.com>

On 11/21/05, Galwaygirl wrote:
> Sabine Cretella wrote:
> > I don't think that there are real problems with that page - and to tell the truth: I like it - where did you get the icons from?
>
> I have no idea. :-) Ask Rex (http://nl.wikipedia.org/wiki/Gebruiker:Rex), he designed the new main page.
>
> Galwaygirl
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l

In fact, it was originally designed for the Hebrew Wikipedia, but was later copied to other Wikis. The icons were designed by a Hebrew wikipedian as well.

-- [[w:he:User:Roybb95]]

From anysomefile at gmail.com Mon Nov 21 21:21:31 2005 From: anysomefile at gmail.com (Any File) Date: Mon, 21 Nov 2005 22:21:31 +0100 Subject: [Wikitech-l] Re: elements for interlanguage link Message-ID:

Rob Lanphier wrote:
> Subject: Re: [Wikitech-l] Re: elements for interlanguage link information
> To: Wikimedia developers
> Message-ID: <1132538575.6605.35.camel at localhost.localdomain>
> Content-Type: text/plain
>
> I'm not aware of any syntax, but one way to do it would be for MediaWiki to issue an HTTP 301 status (permanent redirect) to the new page, rather than returning 200 and giving the content.
> That probably introduces an unacceptably large performance penalty, though (extra round trip per request).
>
> The "Content-Location" HTTP header is a potential longshot. I don't think Google documents their use/non-use of this header, but it's one of those "can't hurt" kind of things.
>
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14
>
> It appears it can be tacked on using a meta http-equiv tag in the HTML head.

Reading the specs, it seems intended more to differentiate among different content retrieved from the same URI/URL than (as in our case) to state that different URLs/URIs correspond to the same content. Despite this, it seems a brilliant idea to use this header. Does anybody have any contact with Google or Yahoo (or other web search engines) to ask them their opinion about this and other possible solutions?

AnyFile

From brion at pobox.com Mon Nov 21 23:14:52 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 21 Nov 2005 15:14:52 -0800 Subject: [Wikitech-l] Brute force image server upgrade Message-ID: <438254EC.6080508@pobox.com>

Since everybody's so frustrated about this, I'm going to go ahead and force the issue with the upload server. I'll be disabling uploads and turning off the upload.wikimedia.org web server for a few hours so we can get everything moved over and totally copied once and for all.

Alas this'll mean not seeing images for a few hours, but it should finally be nicer after this. :D

http://meta.wikimedia.org/wiki/November_2005_image_server

-- brion vibber (brion @ pobox.com)

From sbwoodside at yahoo.com Tue Nov 22 07:13:44 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Tue, 22 Nov 2005 02:13:44 -0500 Subject: [Wikitech-l] six wikipedia conjectures, and slowness Message-ID: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com>

Hi there, I've just joined the list. I've been contributing to the wiki for a while, but I've got some technical interest now. To start off with, here is a list of "conjectures" that I think are interesting about wikipedia. I'm crossposting from my blog (http://simonwoodside.com/weblog/2005/11/20).

Conjecture 1. That the distance between any two wikipedia pages, randomly chosen, as measured by wikilinks, is on average 6.

Conjecture 2. That wikipedia is sufficiently formal and complete that you could build a useful general purpose AI knowledge base using it.

Conjecture 3. That wikipedia has low information entropy.

Conjecture 4. That the development of a wikipedia article over time occurs in a manner consistent with the biological evolution of a species.

Conjecture 5. That the relationship between the amount of material in wikipedia and the number of article views is exponential.

Conjecture 6. That wikipedia is, on average, factually accurate.

The one that I'm most interested in today, actually, is #5 (known as Reed's law)... because today I went to do some editing and found that the system was very slow... I'm actually a little worried that wikipedia is going to be overwhelmed by its own popularity. I saw something like this happen before, when I joined a MUD called MicroMUSE back in er... 1993??? Wired issue #3 wrote about it and then they were inundated with users... everything was really slow... well back then I didn't have the technical chops to do something about it, but now I do.
I have some ideas about the architectural design of WP's servers in mind that I discussed with some people on #wikimedia-tech today... I'll probably write them up in a few days. Anyway, the motivational reason is that IF conjecture #5 is TRUE, then there are probably some severe architectural implications for WP's technical design.

--simon (founder of semacode, contributor to mozilla, etc.) -- http://simonwoodside.com

From gmaxwell at gmail.com Tue Nov 22 07:35:28 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Tue, 22 Nov 2005 02:35:28 -0500 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> Message-ID:

On 11/22/05, S. Woodside wrote:
> Conjecture 1. That the distance between any two wikipedia pages, randomly chosen, as measured by wikilinks, is on average 6.
>
> [snip]
>
> Conjecture 6. That wikipedia is, on average, factually accurate.

Ohoh! Can I produce one?

Conjecture 7.
The amount of wanky conjectures produced by bloggers is bound only by the number of sites that will allow them to spam their URLs.

Do I get a prize? ;)

On a more constructive note, if you're interested in AI taught using wikipedia... you should take a look at the state of the art in English parsers (such as [[Link Grammar]]) and how they fare on Wikipedia. Impressive that it works at all, but I don't think we need to worry about a hostile takeover of the planet by a self-aware encyclopedia any time soon.

From node.ue at gmail.com Tue Nov 22 07:50:27 2005 From: node.ue at gmail.com (Mark Williamson) Date: Tue, 22 Nov 2005 00:50:27 -0700 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> Message-ID: <849f98ed0511212350q26751fd4p@mail.gmail.com>

What about:

Conjecture 8. Regardless of the final result, the inner workings of Wikipedia are so chaotic and at times unpleasant that most readers would rip all of their hair out if they watched a documentary about the creative process behind the articles.

Fair?

Mark

On 22/11/05, Gregory Maxwell wrote:
> [snip]

-- "Take away their language, destroy their souls." -- Joseph Stalin

From oub at mat.ucm.es Tue Nov 22 11:11:51 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Tue, 22 Nov 2005 12:11:51 +0100 Subject: [Wikitech-l] wikipedia connection are slowed down: client side solutions? Message-ID: <87y83h3pjc.fsf@mat.ucm.es>

Hello,

For the last couple of days I have noticed a slowdown when I try to submit a contribution. Is there anything I could do on my side, like enlarging my browser's cache?

Uwe Brauer

From ubifrieda at gmail.com Tue Nov 22 13:27:36 2005 From: ubifrieda at gmail.com (Frieda Brioschi) Date: Tue, 22 Nov 2005 14:27:36 +0100 Subject: [Wikitech-l] Re: Portal namespace In-Reply-To: References: <431FAC34.8030001@cavia.com> Message-ID: <1b2348920511220527v8c599ecm@mail.gmail.com>

I have two new namespace requests: one for it.wiki and one for it.source.

For it.wiki: like the English, German and French Wikipedias we'd like to have a namespace "Portale" too [here's the discussion on our village pump: ]

For it.source: we are already using a pseudo-namespace "Autore" [take a look here: ], is it possible, instead, to have a "real" namespace "Autore"?

Thanks!

Ciao, Frieda

2005/9/8, Ashar Voultoiz:
> Guillaume Blanchard wrote:
> > Hi,
> > We requested a new 'portal' namespace about one year ago but this came to nothing. Some days ago, we discovered the English and German Wikipedia are now using this namespace (sic!) so we are requesting to be able to do the same [1]. The French word for portal is 'portail'.
> > Regards,
> >
> > Aoineko
> >
> > [1] http://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Le_Bistro/30_ao%C3%BBt_2005#Un_espace_de_nom_pour_nos_portails
>
> Done:
> namespace 100 : Portail
> namespace 101 : Discussion_Portail
>
> Announcing it on the local village pump.
>
> -- Ashar Voultoiz - WP++++
> http://en.wikipedia.org/wiki/User:Hashar
> http://www.livejournal.com/community/wikitech/
> IM: hashar at jabber.org ICQ: 15325080
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l

-- ___________________________________________ http://it.wikipedia.org/wiki/Utente:Frieda

From usenet at tonal.clara.co.uk Tue Nov 22 14:10:50 2005 From: usenet at tonal.clara.co.uk (Neil Harris) Date: Tue, 22 Nov 2005 14:10:50 +0000 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: <849f98ed0511212350q26751fd4p@mail.gmail.com> References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> <849f98ed0511212350q26751fd4p@mail.gmail.com> Message-ID: <438326EA.2050801@tonal.clara.co.uk>

Mark Williamson wrote:
> What about:
>
> Conjecture 8.
> Regardless of the final result, the inner workings of Wikipedia are so chaotic and at times unpleasant that most readers would rip all of their hair out if they watched a documentary about the creative process behind the articles.
>
> Fair?
>
> Mark

"Politics^W Wikipedia is like sausage making: you don't want to see how it's done."

-- Neil/ //// /

From evan at wikitravel.org Tue Nov 22 18:46:34 2005 From: evan at wikitravel.org (Evan Prodromou) Date: Tue, 22 Nov 2005 13:46:34 -0500 Subject: [Wikitech-l] Mediawiki RDF extension available Message-ID: <1132685194.3680.446.camel@zhora.1481ruerachel.net>

Hi, folks. Just a quick note to let you know that there's an extension for MediaWiki available that allows customized RDF output and in-page user input of Turtle RDF. Code is here:

http://wikitravel.org/~evan/mw-rdf-0.3.tar.gz

This is in production on Wikitravel, and only works for MediaWiki 1.4.x (at least for the history model; probably some other stuff is broken with the new database schema, too). More info here:

http://wikitravel.org/en/Wikitravel:RDF
http://meta.wikimedia.org/wiki/RDF

The README file is attached below for people who don't follow URLs so much. I'll add it to the extensions section of mediawiki CVS RSN, but I've been using darcs for version control so far and I CBA to merge to CVS yet.

~Evan

________________________________________________________________________

MediaWiki RDF extension version 0.3 16 November 2005

This is the README file for the RDF extension for MediaWiki software. The extension is only useful if you've got a MediaWiki installation; it can only be installed by the administrator of the site.

The extension adds RDF (Resource Description Framework) support to MediaWiki. It will show RDF data about a page with a new special page, Special:Rdf. It allows users to add custom RDF statements to a page between <rdf> ... </rdf> tags. Administrators and programmers can add new automated RDF models, too.

This is the first version of the extension and it's almost sure to have bugs. See the BUGS section below for info on how to report problems.

== License ==

Copyright 2005 Evan Prodromou

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

== Installation ==

You have to have MediaWiki 1.4.x installed for this software to work. Sorry, but that's the version I've got installed, so it's the one this software works with.

You also have to install RAP, the RDF API for PHP (www.wiwiss.fu-berlin.de/suhl/bizer/rdfapi/). I used version 0.92, plus some custom hacks to make the N3 parser less fragile. You have to apply a patch to the distribution if you want RDF to work; it's included in this distribution. (Future versions of RAP will have these enhancements.)

You can copy the file MwRdf.php to the extensions directory of your MediaWiki installation.
Then add these lines to your LocalSettings.php:

  define("RDFAPI_INCLUDE_DIR", "/full/path/to/rdfapi-php/api/");
  require_once("extensions/MwRdf.php");

== 60-second intro to RDF ==

RDF is a framework for making statements about resources. Statements are in the form:

  subject predicate object

Here, "subject" is a "resource" such as a person, place, idea, Web page, picture, concept, or whatever. "Predicates" are names of properties of a resource, like its color, shape, texture, size, history, or relationships to other "resources". The object is the value of the property. So "car color red" would be a statement about a car; "Evan hasBrother Nate" would be a statement about a person.

Of course, it's important to be definite about which resources and which properties we're discussing. In the Web world, each "resource" is identified with a URI (usually a URL). For electronic resources, this is usually pretty easy; the main page of English-language Wikipedia, for example, has the URI "http://en.wikipedia.org/wiki/Main_Page". However, for analog subjects like people or ideas or physical objects, this can be a little trickier. There's no general solution, but the typical workaround is to use real or made-up URIs to "stand in" for offline entities. For example, you could use the URI for my Wikitravel user page, "http://wikitravel.org/en/User:Evan", as the URI for me. Or you could use my email address in URI form, like "mailto:evan at wikitravel.org".

People who need to agree on statements often create 'vocabularies' or 'schemas' that map concepts, objects, and relationships to URIs. By popularizing such a mapping, we can all agree about what a particular URI "means". For example, the Dublin Core Metadata Initiative (DCMI) (http://www.dublincore.org/) has a schema for very simple metadata, such as you'd find on a library card. They've defined (among other things) that the idea of authoring or creating something is represented by the URL http://purl.org/dc/elements/1.1/creator. So you could say:

  http://www.fsf.org http://purl.org/dc/elements/1.1/creator mailto:rms at gnu.org

...which means that the creator of the Free Software Foundation is Richard Stallman. There are a lot of RDF models out there; you can also create your own if you want.

RDF statements can be encoded in a number of different ways. By far the most popular is as XML, sometimes called "RDF/XML". "Turtle" is another format, which uses plain text rather than XML; and "Ntriples" is still another.

== Models ==

Any given resource can be described from many different perspectives. For example, you can describe a man in terms of his academic career, his job experience, his family members, his body parts' size and weight, his location in space, his membership in organizations, his hobbies and interests, etc.

In this extension, we use the term "model" to describe a perspective on a resource. For example, listing the links to and from a page is one model; its edit history is another model. You can choose which models you want to know about when querying the system for RDF statements about a subject, and only statements in that model are returned. This is mostly a concession to performance; it doesn't make sense to calculate information about the history of a page if the calling program isn't going to use it.

There are a number of models built into this extension; you can also add your own, if you know how to code PHP. The models have short little codenames for easy access, listed below.
Models built in:

* dcmes: Dublin Core Metadata Element Set (DCMES) data. Mostly information about who edited a page, when, and other simple stuff. Titles, format, etc. This is a common vocabulary that's very useful for general-purpose bots.
* cc: Creative Commons metadata. Gives license information; there are a few tools and search engines that use this data.
* linksto, linksfrom, links: Internal wiki links to and from a page. "links" is a shortcut for both.
* image: DCMES information about images in a page.
* history: version history of a page; who edited the page and when.
* interwiki: links to different language versions of a page.
* categories: which categories a page is in.
* inpage: a special model for blocks of RDF embedded into the source code of MediaWiki pages; see "In-page RDF" below for info.

== Special:RDF ==

You can view RDF for a page using the [[Special:Rdf]] feature. It should be listed on the list of special pages as "Rdf". Enter the title of the page you want RDF for in the title box, and choose one or more of the RDF models from the multiselect box. You can also select which output format you want; XML is probably most useful and can be viewed in a browser.

The Special:Rdf page can also be called directly, with the following parameters:

* target: title of the article to get RDF info about. If no target URL is provided, the special page shows the input form.
* modelnames: comma-separated list of model names, like "links,cc,history". Default is a list of standard models, configurable per-site (see below).
* format: output format; one of 'xml', 'turtle' and 'ntriples'. Default is XML.

== In-page RDF ==

Any user can make additional RDF statements about any resource by adding an in-page RDF block to the page. The RDF needs to be in Turtle format (http://www.dajobe.org/2004/01/turtle/), which is extremely simple. It's a subset of Notation3 (http://www.w3.org/DesignIssues/Notation3.html), for which there is a good introduction. (http://www.w3.org/2000/10/swap/Primer.html)

RDF blocks are delimited by the tag "<rdf>". They're invisible in normal output, but they can provide information for RDF-reading tools. Here's an example:

  Mathematics is ''very'' hard.
  <rdf><> dc:subject "Mathematics"@en .</rdf>

Here, the rdf block says that the subject of the article is "Mathematics". Note that <> in Turtle means "this document". Another example:

  Chilean wines are quite delicious.
  <rdf>
  <> dc:source <http://example.org/chileanwines.html> .
  <http://example.org/chileanwines.html> dc:creator "Bob Smith" .
  </rdf>

Here, we've said that the article's source is another Web page on another server; we can also say that that other Web page's author is Bob Smith.

In-page RDF is displayed whenever the "inpage" model is requested for Special:RDF; it's one of the defaults. It's also useful for people making MediaWiki extensions; you can have users add information in in-page RDF, and then extract it and read it using the function MwRdfGetModel(). This lets users add data that isn't for presentation but perhaps for automated tools to use.

Note also that MediaWiki templates are expanded when in-page RDF is queried. So if the syntax of Turtle is daunting, you can add templates that make it easier. For example, we could create a template Template:Source for showing source documents:

  <rdf>
  <> dc:source <{{{1}}}> .
  <{{{1}}}> dc:creator "{{{2|anonymous}}}" .
  </rdf>

We could then make the same statement as above with a template transclusion:

  {{source|http://example.org/chileanwines.html|Bob Smith}}

Note that a number of namespaces are pre-defined for your RDF blocks.
Some basic namespaces are provided by RAP; you can define custom namespaces with the global variable $wgRdfNamespaces. In addition, each of the article namespaces is mapped to a namespace prefix in Turtle, so you can say something like this:

  Wikitravel_talk:Spelling dc:subject Wikitravel:Spelling .
  :Montreal dc:spatial "Montreal" .

Note that the default prefix (":") is the article namespace.

== Customization ==

There are a few customization variables available, mostly for programmers.

$wgRdfDefaultModels -- an array of names of the default models to use when no model name is specified.

$wgRdfNamespaces -- You can add custom namespaces to this associative array, of the form 'prefix' => 'uri'.

$wgRdfModelFunctions -- an associative array mapping model names to functions that generate the model. See below for how to add a new model.

$wgRdfOutputFunctions -- A map of output formats to functions that generate that output. You can add new output formats by adding to this array.

== Extending ==

You can add new RDF models to the framework by creating a model function and adding it to the $wgRdfModelFunctions array. The function will get a single MediaWiki Article object as a parameter; it should return a single RAP Model object (a collection of statements) as a result. For example:

  function CharacterCount($article) {
      # create a new model
      $model = ModelFactory::getDefaultModel();
      # get the article source
      $text = $article->getContent(true);
      # ... and its size
      $size = mb_strlen($text);
      # Get the resource for this article
      $ar = MwRdfArticleResource($article);
      # Add a statement to the model
      $model->add(new Statement($ar,
                  new Resource("http://example.org/charcount"),
                  new Literal($size)));
      # return the model
      return $model;
  }

You can then give the model a name like so:

  $wgRdfModelFunctions['charcount'] = 'CharacterCount';

You can add a message to the site describing your model like so:

  $wgMessageCache->addMessages(array('rdf-charcount' => 'Count of characters'));

You can also create model-outputting functions if you so desire; they should accept a RAP model as input and make output as they would to the Web. This is probably only useful if you want a specific RDF encoding mechanism that's not RDF/XML, Turtle, or Ntriples; for example, TriG or TriX.

== Future ==

These are some future directions I'd like to see things go:

* Store statements in DB: statements could be stored in the database when the page is saved and retrieved when needed. This would make it possible to do extended queries based on information about *all* pages.
* Performance: there wasn't much performance tuning and there are probably way too many DB hits and reads and such.
* Semantic tuning: I'd like to make sure that the statements in the standard models are accurate and useful.

== Bugs ==

Send bug reports, patches, and feature requests to Evan Prodromou (evan at wikitravel.org).

-- Evan Prodromou Wikitravel (http://wikitravel.org/) -- the free, complete, up-to-date and reliable world-wide travel guide

From brion at pobox.com Tue Nov 22 19:10:59 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 22 Nov 2005 11:10:59 -0800 Subject: [Wikitech-l] Brute force image server upgrade In-Reply-To: <438254EC.6080508@pobox.com> References: <438254EC.6080508@pobox.com> Message-ID: <43836D43.4010809@pobox.com>

Brion Vibber wrote:
> Since everybody's so frustrated about this, I'm going to go ahead and force the issue with the upload server.
> I'll be disabling uploads and turning off the upload.wikimedia.org web server for a few hours so we can get everything moved over and totally copied once and for all.
>
> Alas this'll mean not seeing images for a few hours, but it should finally be nicer after this. :D

Data's finally all done copying (gar, what overloaded servers those were :P); now arranging things to put the new server into use.

-- brion vibber (brion @ pobox.com)

From avenier at venier.net Tue Nov 22 20:37:13 2005 From: avenier at venier.net (Andrew Venier) Date: Tue, 22 Nov 2005 14:37:13 -0600 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> Message-ID: <43838179.9010704@venier.net>

S. Woodside wrote:
> Conjecture 4. That the development of a wikipedia article over time occurs in a manner consistent with the biological evolution of a species.

I would be rather surprised to hear that our articles are reproducing.

From wegge at wegge.dk Tue Nov 22 20:44:47 2005 From: wegge at wegge.dk (Anders Wegge Jakobsen) Date: 22 Nov 2005 21:44:47 +0100 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: <43838179.9010704@venier.net> References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> <43838179.9010704@venier.net> Message-ID:

>>>>> "Andrew" == Andrew Venier writes:

> S. Woodside wrote:
>> Conjecture 4. That the development of a wikipedia article over time occurs in a manner consistent with the biological evolution of a species.
>
> I would be rather surprised to hear that our articles are reproducing.

The original list of "Wikipedians who " has cloned itself numerous times. You may argue that it's nothing more than a parasite feeding on the host organism's desire to be part of as many groups as possible, but viruses are also a result of evolution.

-- /Wegge My stance on Usenet - My weblog -

From galwaygirl at xs4all.nl Tue Nov 22 22:35:42 2005 From: galwaygirl at xs4all.nl (Galwaygirl) Date: Tue, 22 Nov 2005 23:35:42 +0100 Subject: [Wikitech-l] Dutch Main Page In-Reply-To: <3ac677290511211210g2f985075n3b27b3df277185f9@mail.gmail.com> References: <437CBDA1.4050502@xs4all.nl> <437CC020.7010109@yahoo.it> <4382134F.3030107@xs4all.nl> <3ac677290511211210g2f985075n3b27b3df277185f9@mail.gmail.com> Message-ID: <43839D3E.2030208@xs4all.nl>

Roy Ben-Baruch wrote:
> On 11/21/05, Galwaygirl wrote:
>> Sabine Cretella wrote:
>>> I don't think that there are real problems with that page - and to tell the truth: I like it - where did you get the icons from?
>>
>> I have no idea. :-) Ask Rex (http://nl.wikipedia.org/wiki/Gebruiker:Rex), he designed the new main page.
>>
>> Galwaygirl
>
> In fact, it was originally designed for the Hebrew Wikipedia, but was later copied to other Wikis. The icons were designed by a Hebrew wikipedian as well.

Ah, right, thank you. I thought Rex designed it all by himself. Sorry for the misinformation!
Galwaygirl

From brion at pobox.com Tue Nov 22 22:45:17 2005 From: brion at pobox.com (Brion Vibber) Date: Tue, 22 Nov 2005 14:45:17 -0800 Subject: [Wikitech-l] Brute force image server upgrade In-Reply-To: <43836D43.4010809@pobox.com> References: <438254EC.6080508@pobox.com> <43836D43.4010809@pobox.com> Message-ID: <43839F7D.2030203@pobox.com>

Brion Vibber wrote:
> Data's finally all done copying (gar, what overloaded servers those were :P); now arranging things to put the new server into use.

Up and running!

There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)

There is somewhat more load on the server from NFS work than we'd like; there may be some bugs in the image data caching, or extra checks are being made that don't have to be. But for now that's not a big problem as the machine handles it well. When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that should go down in the future to make further room for real growth.

The server's currently pumping out about 200 objects per second between 8 lighttpd worker threads. (Multiple worker threads keep I/O blocking from halting everything; with 4 CPU cores and a lot of spindles you don't want to halt on just one file!)

  Filesystem                  Size  Used Avail Use% Mounted on
  /dev/mapper/rootvg-striped  3.0T  521G  2.5T  18% /export

  procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
   r  b   swpd   free    buff   cache  si  so    bi   bo    in    cs us sy id wa
   5  1    192  27844 1218704 5617736   0   0  3296  108  6282 16359  1  7 52 40
   4  0    192  27328 1217796 5618904   0   0  3500  161  6308 14198  1  7 53 39
   0  0    192  25832 1217348 5620912   0   0  2905   75  6027 14952  1  7 61 31

-- brion vibber (brion @ pobox.com)

From jherz at myrealbox.com Wed Nov 23 17:20:44 2005 From: jherz at myrealbox.com (=?ISO-8859-1?Q?J=FCrgen_Herz?=) Date: Wed, 23 Nov 2005 18:20:44 +0100 Subject: [Wikitech-l] Re: Brute force image server upgrade In-Reply-To: <43839F7D.2030203@pobox.com> References: <438254EC.6080508@pobox.com> <43836D43.4010809@pobox.com> <43839F7D.2030203@pobox.com> Message-ID: <4384A4EC.9010004@myrealbox.com>

Brion Vibber wrote:
>> Data's finally all done copying (gar, what overloaded servers those were :P); now arranging things to put the new server into use.
>
> Up and running!
>
> There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)

That reads fine and the experience is quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total while there should be about 4.8 TB.

> When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that should go down in the future to make further room for real growth.

Can I read about that planned rearrangement anywhere?
Regards, Jürgen

From brion at pobox.com Wed Nov 23 18:29:43 2005 From: brion at pobox.com (Brion Vibber) Date: Wed, 23 Nov 2005 10:29:43 -0800 Subject: [Wikitech-l] Re: Brute force image server upgrade In-Reply-To: <4384A4EC.9010004@myrealbox.com> References: <438254EC.6080508@pobox.com> <43836D43.4010809@pobox.com> <43839F7D.2030203@pobox.com> <4384A4EC.9010004@myrealbox.com> Message-ID: <4384B517.7070005@pobox.com>

Jürgen Herz wrote:
> Brion Vibber wrote:
>> There's 2.5 terabytes free on the server's disk array so we should be set for a few more months at least. ;)
>
> That reads fine and the experience is quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total while there should be about 4.8 TB.

The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)

>> When we rearrange the image storage system there should be much less need to hit the disk over NFS, so that should go down in the future to make further room for real growth.
>
> Can I read about that planned rearrangement anywhere?

There are some notes scribbled at http://www.mediawiki.org/wiki/1.6_image_storage

-- brion vibber (brion @ pobox.com)

From jherz at myrealbox.com Wed Nov 23 19:28:37 2005 From: jherz at myrealbox.com (=?ISO-8859-1?Q?J=FCrgen_Herz?=) Date: Wed, 23 Nov 2005 20:28:37 +0100 Subject: [Wikitech-l] Re: Brute force image server upgrade In-Reply-To: <4384B517.7070005@pobox.com> References: <438254EC.6080508@pobox.com> <43836D43.4010809@pobox.com> <43839F7D.2030203@pobox.com> <4384A4EC.9010004@myrealbox.com> <4384B517.7070005@pobox.com> Message-ID: <4384C2E5.2010205@myrealbox.com>

Brion Vibber wrote:
>> That reads fine and the experience is quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total while there should be about 4.8 TB.
>
> The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)

Hm, shouldn't 6 of 8 disks (8 - 1 for parity - 1 spare) per RAID be available in a RAID 5? So each array should provide 2400 GB (2232 GiB; I don't know if Ganglia's GB are really GB or wrongly named GiB).

>> Can I read about that planned rearrangement anywhere?
>
> There are some notes scribbled at http://www.mediawiki.org/wiki/1.6_image_storage

Thanks, Jürgen

From brion at pobox.com Wed Nov 23 22:26:26 2005 From: brion at pobox.com (Brion Vibber) Date: Wed, 23 Nov 2005 14:26:26 -0800 Subject: [Wikitech-l] [WikiEN-l] Status of article rating feature? In-Reply-To: <438052B1.8050109@pobox.com> References: <437AAECE.1090304@pobox.com> <438052B1.8050109@pobox.com> Message-ID: <4384EC92.2050105@pobox.com>

Brion Vibber wrote:
> Brion Vibber wrote:
>>> So. What's up with Special:Validate?
>> It's on my list for this week, I'll see about getting it turned on and working.
>
> I'm tweaking, fixing, and reworking various bits of it (display formatting fixes, better code reuse, using revision IDs instead of unreliable timestamps in places, etc).
>
> Will try to have it ready to try out tomorrowish.

Between the image server and still unpacking furniture I haven't quite finished the validation stuff yet.
Unfortunately I'm still finding some XSS-style security holes (which could, e.g., be used to compromise a sysop account by sending them a specially-crafted link which uses JavaScript to perform actions on the wiki on their behalf). So it definitely won't be going live until I've done a thorough security review as well as fixing up the revision stuff.

Also, there's been some muttering that turning this on will warrant/require a press release or some other sort of big publicity. I currently can't give any specific date for having it done or for turning it on once it's ready.

Sighhhhhhh...

-- brion vibber (brion @ pobox.com)

From JeLuF at gmx.de Wed Nov 23 22:32:27 2005 From: JeLuF at gmx.de (Jens Frank) Date: Wed, 23 Nov 2005 23:32:27 +0100 (MET) Subject: [Wikitech-l] Re: Brute force image server upgrade Message-ID: <3515.1132785147@www55.gmx.net>

On Wed, Nov 23, 2005 at 08:28:37PM +0100, Jürgen Herz wrote:
> Brion Vibber wrote:
>>> That reads fine and the experience is quite fast. Thanks for the work! What I don't understand is why Ganglia only shows ~3.3 TB of disk_total while there should be about 4.8 TB.
>>
>> The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)
>
> Hm, shouldn't 6 of 8 disks (8 - 1 for parity - 1 spare) per RAID be available in a RAID 5? So each array should provide 2400 GB (2232 GiB; I don't know if Ganglia's GB are really GB or wrongly named GiB).

Sorry, the wikitech page about amane was not up to date. Current layout: there are three SATA RAID controllers:

  Ctl   Model    Ports  Drives  Units  NotOpt  RRate  VRate  BBU
  ------------------------------------------------------------------------
  c0    9500S-8  8      8       1      0       4      4      OK
  c1    9500S-8  8      8       1      0       4      4      OK
  c2    7006-2   2      2       1      0       2      -      -

c2 is used for the OS, RAID 1, 60 GB. c0 and c1 are the big array controllers:

  Unit  UnitType  Status  %Cmpl  Stripe  Size(GB)  Cache  AVerify  IgnECC
  ------------------------------------------------------------------------------
  u0    RAID-10   OK      -      64K     1490.07   ON     OFF      OFF

  Port  Status  Unit  Size       Blocks     Serial
  ---------------------------------------------------------------
  p0    OK      u0    372.61 GB  781422768  WD-WMAMY12171
  p1    OK      u0    372.61 GB  781422768  WD-WMAMY12037
  p2    OK      u0    372.61 GB  781422768  WD-WMAMY12168
  p3    OK      u0    372.61 GB  781422768  WD-WMAMY12168
  p4    OK      u0    372.61 GB  781422768  WD-WMAMY12168
  p5    OK      u0    372.61 GB  781422768  WD-WMAMY12168
  p6    OK      u0    372.61 GB  781422768  WD-WMAMY12025
  p7    OK      u0    372.61 GB  781422768  WD-WMAMY12168

  Name  OnlineState  BBUReady  Status  Volt  Temp  Hours  LastCapTest
  ---------------------------------------------------------------------------
  bbu   On           Yes       OK      OK    High  0      xx-xxx-xxxx

On top of the two 1.5 TB RAID10s there's LVM, generating a striped RAID0 volume with 3 TB capacity. Bonnie reported these figures for this configuration:

                --------Sequential Output--------- ---Sequential Input--- --Random---
                -Per Char- ---Block--- --Rewrite-- -Per Char- ---Block--- --Seeks----
  Machine    MB K/sec %CPU  K/sec %CPU  K/sec %CPU K/sec %CPU  K/sec %CPU   /sec %CPU
  amane   10000 59312 99.9 337711 84.0 101595 21.8 38235 66.8 179973 20.2 1583.2  2.6

  real 10m46.529s
  user 5m23.604s
  sys 1m36.116s

This was the fastest available configuration with redundancy. Only RAID0 was reading faster. The server has 8 GB memory, so a 10 GB test file size was chosen.
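A quick back-of-the-envelope check (the arithmetic is implied by the figures above, though not spelled out in the original mail): RAID10 mirrors everything, so each 8-disk array exposes only half of its raw capacity.

\[
8 \times 372.61\,\mathrm{GB} = 2980.88\,\mathrm{GB\ (raw)}, \qquad
\frac{2980.88\,\mathrm{GB}}{2} \approx 1490\,\mathrm{GB\ (usable\ per\ array)}
\]

That essentially matches the 1490.07 GB unit size per controller, and striping the two arrays with LVM gives the ~3 TB that df reports for /export. If the controller's "GB" are really GiB (the same ambiguity Jürgen raises about Ganglia), 2 x 1490 GiB is about 3200 decimal GB, which together with the 60 GB OS mirror lands close to the ~3,257 GB disk_total Ganglia shows; the 4.8 TB expectation came from assuming two 6-of-8-disk RAID5 arrays instead.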
Regards, JeLuF

From jherz at myrealbox.com Wed Nov 23 23:14:55 2005 From: jherz at myrealbox.com (=?ISO-8859-1?Q?J=FCrgen_Herz?=) Date: Thu, 24 Nov 2005 00:14:55 +0100 Subject: [Wikitech-l] Re: Brute force image server upgrade In-Reply-To: <3515.1132785147@www55.gmx.net> References: <3515.1132785147@www55.gmx.net> Message-ID: <4384F7EF.3020205@myrealbox.com>

Jens Frank wrote:
>>> The RAID eats up some of the raw disks' space with the redundancy. Either that or I've lost track of something. ;)
>>
>> Hm, shouldn't 6 of 8 disks (8 - 1 for parity - 1 spare) per RAID be available in a RAID 5? So each array should provide 2400 GB (2232 GiB; I don't know if Ganglia's GB are really GB or wrongly named GiB).
>
> Sorry, the wikitech page about amane was not up to date.
>
> [...]

Ah yes, that's an explanation and matches the 3,257 GB from Ganglia better.

> This was the fastest available configuration with redundancy. Only RAID0 was reading faster. The server has 8 GB memory, so a 10 GB test file size was chosen.

That gives less space available, but actually I was uneasy about RAID5 from a reliability point of view anyway.

Thanks, Jürgen

From epzachte at chello.nl Thu Nov 24 00:59:29 2005 From: epzachte at chello.nl (epzachte at chello.nl) Date: Thu, 24 Nov 2005 1:59:29 +0100 Subject: [Wikitech-l] EasyTimeline patch causes garbled texts Message-ID: <20051124005929.TQDB184.amsfep15-int.chello.nl@localhost>

I just saw the recent EasyTimeline upgrade, patched for unicode compliance 4 days ago. Unfortunately it breaks things seriously. Could this patch be reverted until I submit a better solution, hopefully soon?

With the new patch all texts with embedded links will be garbled. This is why: EasyTimeline supports embedded links, Ploticus does not, so EasyTimeline divides and draws texts in segments: straight text in black, then linked text in blue, then more text in black, carefully positioning each segment, depending on alignment, etc. Currently the EasyTimeline script still assumes the standard monospaced font, so text with embedded links will be positioned wrongly with the new variable-width font. Now WP authors will attempt to 'correct' their scripts to accommodate the current offset errors, only to find their charts garbled again when the proper solution arrives.

My version asks Ploticus for font metrics and then handles the text positioning of each segment well, with any font. It also allows mixing several fonts, adapts line spacing to font metrics, etc. In fact the update for EasyTimeline has been ready for months and was waiting on a Ploticus patch: I submitted a patch for Ploticus that allows querying of FreeType font metrics. I just checked: this patch has been included in the newest Ploticus release, thanks to Steve Grubb, so I can resume testing soon.

I know I'm late with my solution, which I announced in Frankfurt. My health was not 100% in previous months; in fact I'm still limited in the amount of work I can do here. I'm recovering, but my day job comes first of course. So please bear with me for a while and leave EasyTimeline font support as it was in the meantime.

Thanks, Erik Zachte

From servien at gmail.com Thu Nov 24 12:34:56 2005 From: servien at gmail.com (Servien Ilaino) Date: Thu, 24 Nov 2005 14:34:56 +0200 Subject: [Wikitech-l] Immediate request for wiki set-ups Message-ID:

Hi,

I'd like to request the immediate set-up for the following wikis.
These wikis have been waiting for quite a while and should be created as soon as possible!!

1. Banyumasan (8)
2. Nedersaksisch/Dutch Low Saxon (21 support; 3 oppose [thereof 2 anonymous votes])
3. Ripuarian (18 support [thereof 2 anonymous votes]; 2 oppose)
4. Samogitian (Žemaitėška) (11)
5. Vlax Romany (11 support, 1 oppose [anonymous])

More info on: http://meta.wikimedia.org/wiki/Approved_requests_for_new_languages

Kind regards, Servien Ilaino; and the Meta-Wiki community

From john at abstractec.co.uk Thu Nov 24 14:19:27 2005 From: john at abstractec.co.uk (John Haselden) Date: Thu, 24 Nov 2005 14:19:27 +0000 (UTC) Subject: [Wikitech-l] Static HTML tree dumps for mirroring or CD distribution Message-ID:

Hello all,

I was reading the Wikibooks:Database_download page and there is a section titled "Static HTML tree dumps for mirroring or CD distribution". This section states that if anyone would "like to help set up an automatic dump-to-static function, please drop us a note on the developers' mailing list".

Is there an HTML tree dump project in the works that I could contribute to?

Many thanks, John Haselden

From phil.boswell at gmail.com Thu Nov 24 15:46:25 2005 From: phil.boswell at gmail.com (Phil Boswell) Date: Thu, 24 Nov 2005 15:46:25 -0000 Subject: [Wikitech-l] Re: Add namespace parameter to _What links here_ References: Message-ID:

"Phil Boswell" wrote in message news:djaql9$r3f$1 at sea.gmane.org...
> Would it be especially complicated to add the namespace=? parameter to the "What links here?" mechanism?
>
> I would be quite happy to add it to the URL manually at first, but this would be most useful when ploughing through huge lists of links, trying to make out whether something is actually being linked to meaningfully.

Did I miss an answer to this question I wrote back in October? This functionality would really be very useful for what I'm wanting to do right now...

-- Phil [[en:User:Phil Boswell]]

From timwi at gmx.net Thu Nov 24 16:47:13 2005 From: timwi at gmx.net (Timwi) Date: Thu, 24 Nov 2005 16:47:13 +0000 Subject: [Wikitech-l] Re: Add namespace parameter to _What links here_ In-Reply-To: References: Message-ID:

>> Would it be especially complicated to add the namespace=? parameter to the "What links here?" mechanism?
>>
>> I would be quite happy to add it to the URL manually at first, but this would be most useful when ploughing through huge lists of links, trying to make out whether something is actually being linked to meaningfully.
>
> Did I miss an answer to this question I wrote back in October?

There hasn't been a reply. But the answer to your question is "no, it wouldn't be especially complicated", especially since you stress that you don't need a UI for it. Finding someone to code it may be the harder part. Last time I did something like this it was reverted with a snide remark from Brion, so I'm not doing this again.

Timwi

From timwi at gmx.net Thu Nov 24 16:58:11 2005 From: timwi at gmx.net (Timwi) Date: Thu, 24 Nov 2005 16:58:11 +0000 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: <43813378.4060605@pobox.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> Message-ID:

> It just makes it a lot harder to deal with such pages: if you HTTP-redirect straight to the target page you're missing the link back to the redirect page. (And that is *crucial* for editing work and vandalism cleanup. It is non-negotiable.)
I feel I should point out an implicit fallacy in this. It is non-negotiable that we need a link to the redirect page. It is *not* non-negotiable that we can't return a 301 HTTP redirection.

The only other way I can think of to get from a page to something that redirects to it is via Whatlinkshere. This is useful, but not perfect, because you may have to browse through long lists to find the redirect you're looking for. My suggestion would be to add a list of "redirects to here" to the Edit page of any article. We already have a "Templates used by this page" list; this would be a useful complement. Once we've done that, we can renew the discussion about HTTP redirects, because then the "Redirected from" link is no longer strictly needed.

There's also the hacker's non-UI option of allowing "?redirect=no" at the end of /wiki/ URLs (rather than only /w/index.php URLs). Then technical/experienced users won't even need to go to the Edit page.

Timwi

From timwi at gmx.net Thu Nov 24 17:02:49 2005 From: timwi at gmx.net (Timwi) Date: Thu, 24 Nov 2005 17:02:49 +0000 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: <51dd1af80511211157u7144dc9ahe3d26be37db60fdc@mail.gmail.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <51dd1af80511211157u7144dc9ahe3d26be37db60fdc@mail.gmail.com> Message-ID:

Ævar Arnfjörð Bjarmason wrote:
> On 11/18/05, Timwi wrote:
>
>>> I think it'd be useful for most multilingual MediaWiki installations that use interlanguage links to have such hidden <link> elements.
>>
>> Speaking of which - this reminds me of an idea I had a while ago and I was wondering if anyone would be interested to hear this. Currently many Wikipedia pages in Google search results are redirects [snip]
>
> I've never understood why people think this is a problem.

It's a problem at least at Google because it means the Google rankings of several articles are split across various URLs that "contain" the same article. If these rankings were funnelled to a single canonical URL, article rankings would be much higher, and Google search results would be more relevant.

> If I search for a term in a search engine I think it increases the value of the search if I get a redirect under the title I searched for rather than a synonym for the term which I may or may not be familiar with.

But you *won't* get a search result under the title you searched for (at least not in Google). Google for "nonogram". The window title (and hence the Google search result) displays "Paint by numbers" because that is the title of the article. Except in the URL and in the article's first paragraph, you don't see the word "nonogram".

Timwi

From brion at pobox.com Thu Nov 24 20:30:56 2005 From: brion at pobox.com (Brion Vibber) Date: Thu, 24 Nov 2005 12:30:56 -0800 Subject: [Wikitech-l] Re: elements for interlanguage link information In-Reply-To: References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> Message-ID: <43862300.20000@pobox.com>

Timwi wrote:
>> It just makes it a lot harder to deal with such pages: if you HTTP-redirect straight to the target page you're missing the link back to the redirect page. (And that is *crucial* for editing work and vandalism cleanup. It is non-negotiable.)
>
> I feel I should point out an implicit fallacy in this. It is non-negotiable that we need a link to the redirect page. It is *not* non-negotiable that we can't return a 301 HTTP redirection.
The only fallacy is yours: you invented a claim that I said 301s can't be used. This is false; if you had read my message you'd have seen that my point was that using a 301 would require sending additional parameters to avoid making redirect cleanup work hugely annoying, and this would destroy any benefit that would allegedly come from using an HTTP redirect to the direct target URL (since you can't do it, you'd have to use a different URL).

-- brion vibber (brion @ pobox.com)

From node.ue at gmail.com Thu Nov 24 20:42:23 2005 From: node.ue at gmail.com (Mark Williamson) Date: Thu, 24 Nov 2005 13:42:23 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: References: Message-ID: <849f98ed0511241242t79c8d799s@mail.gmail.com>

No -- with 3 oppose votes, the Dutch LS Wiki should NOT be created. 2 votes was one thing; 3 is quite a different thing.

Mark

On 24/11/05, Servien Ilaino wrote:
> [snip]

-- "Take away their language, destroy their souls." -- Joseph Stalin

From gerard.meijssen at gmail.com Thu Nov 24 21:04:52 2005 From: gerard.meijssen at gmail.com (GerardM) Date: Thu, 24 Nov 2005 22:04:52 +0100 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <849f98ed0511241242t79c8d799s@mail.gmail.com> References: <849f98ed0511241242t79c8d799s@mail.gmail.com> Message-ID: <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com>

Mark,

2 anonymous votes that basically disqualify themselves... Great to know that you are there on your own, Mark. So there is one qualified vote and two unqualified votes; three is a different thing. As to the proposed ideas of Jimmy, both Stellingwerfs, Gronings and Veluws could get their own Wikipedias as well. I am sure you would support that as well. If you don't, then basically you agree with me that Jimmy's proposal has its fair share of problems.

There is no such thing as an objective way of saying what a language is. You can come close, but as long as people are willing to get on their hobby horse or fight from their ivory tower, you will have controversy for many languages over whether they might be a dialect.

For me, having new projects is not a problem as long as people put sincere effort into their project. Yes, I have moved my position more and more towards allowing for most efforts. I am also not really afraid of hoaxes: when we find them we delete them, if we are able and willing to come to such a decision. What is there to lose but the effort that a new community can put into a project that will be one of our projects?
As to shifting position: even when a dialect gets its wikipedia, it will reflect the culture of the people that speak it. When people of other dialects read this, they will find the peculiarities of these people and find the differences and similarities. It will be a kind of information that will be hard to get in any other way.

So yes, bring on nds-nl :)

Thanks, GerardM

On 11/24/05, Mark Williamson wrote:
> No -- with 3 oppose votes, the Dutch LS Wiki should NOT be created. 2 votes was one thing; 3 is quite a different thing.
>
> Mark
>
> On 24/11/05, Servien Ilaino wrote:
> > [snip]

From node.ue at gmail.com Thu Nov 24 21:21:59 2005 From: node.ue at gmail.com (Mark Williamson) Date: Thu, 24 Nov 2005 14:21:59 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> References: <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> Message-ID: <849f98ed0511241321w3da5a87ew@mail.gmail.com>

> 2 anonymous votes that basically disqualify themselves... Great to know that you are there on your own, Mark. So there is one qualified vote and two unqualified votes; three is a different thing. [snip]

How do anonymous votes disqualify themselves? Wikimedia voting policy is very clear. Anon voting is allowed unless local policy specifically dictates otherwise.

> There is no such thing as an objective way of saying what a language is. [snip]
>
> For me, having new projects is not a problem as long as people put sincere effort into their project. [snip] What is there to lose but the effort that a new community can put into a project that will be one of our projects?

I for one don't think that's particularly relevant.
What _is_ relevant is that even when the vote was at 15-4, Servien tried to shove it through even though by ALL definitions that is absolutely not consensus. > As to shifting position, even when a dialect gets its wikipedia, it will > reflect the culture that is part of the people who speak it. When people of > other dialects read this, they will find the peculiarities of these people and > find the differences and similarities. It will be a kind of information that > will be hard to get in any other way. I don't think it's relevant. 3 oppose votes. Mark -- "Take away their language, destroy their souls." -- Joseph Stalin From magnus.manske at web.de Thu Nov 24 21:36:21 2005 From: magnus.manske at web.de (Magnus Manske) Date: Thu, 24 Nov 2005 22:36:21 +0100 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <43862300.20000@pobox.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <43862300.20000@pobox.com> Message-ID: <43863255.4020009@web.de> Brion Vibber wrote: > Timwi wrote: > >>> It just makes it a lot harder to deal with such pages: if you >>> HTTP-redirect straight to the target page you're missing the link back >>> to the redirect page. (And that is *crucial* for editing work and >>> vandalism cleanup. It is non-negotiable.) >>> >> I feel I should point out an implicit fallacy in this. It is >> non-negotiable that we need a link to the redirect page. It is *not* >> non-negotiable that we can't return a 301 HTTP redirection. >> > > The only fallacy is yours, you invented some claim that I said 301s can't be > used. This is false; if you had read my message you'd have seen that my point > was using a 301 would require sending additional parameters to avoid making > redirect cleanup work hugely annoying, and this would destroy any benefit that > would allegedly come from using an HTTP redirect to the direct target URL (since > you can't do it, you'd have to use a different URL). > We're already using two different URLs to display REDIRECT pages. Following the redirect: .../wiki/The_Redirect shows the page The_Redirect redirects to. Showing the redirect: .../w/index.php?title=The_Redirect&redirect=no shows the page containing the redirect. Not that I have a strong opinion about using 301s, but couldn't we just turn the former link "style" into 301 and the latter not? Magnus From brion at pobox.com Thu Nov 24 21:53:47 2005 From: brion at pobox.com (Brion Vibber) Date: Thu, 24 Nov 2005 13:53:47 -0800 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <43863255.4020009@web.de> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <43862300.20000@pobox.com> <43863255.4020009@web.de> Message-ID: <4386366B.7090403@pobox.com> Magnus Manske wrote: > We're already using two different URLs to display REDIRECT pages. > > Following the redirect: .../wiki/The_Redirect > shows the page The_Redirect redirects to. > > Showing the redirect: .../w/index.php?title=The_Redirect&redirect=no > shows the page containing the redirect. > > Not that I have a strong opinion about using 301s, but couldn't we just > turn the former link "style" into 301 and the latter not? No, because then you have no idea how you got to that page, and have to dig through stuff to find out.
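For illustration, the two URL behaviours Magnus describes can be sketched in PHP. This is not MediaWiki code; lookupRedirectTarget() and renderPage() are hypothetical helpers, and only the 301 branch reflects Magnus's proposal:

    <?php
    // Sketch: serve a title, honouring the ?redirect=no parameter.
    function viewTitle( $title, $params ) {
        $target = lookupRedirectTarget( $title ); // null when not a redirect
        $follow = !isset( $params['redirect'] ) || $params['redirect'] !== 'no';
        if ( $target !== null && $follow ) {
            // Magnus's idea: answer /wiki/The_Redirect with a real
            // HTTP redirect to the target's canonical URL.
            header( 'HTTP/1.1 301 Moved Permanently' );
            header( 'Location: /wiki/' . urlencode( $target ) );
            return;
        }
        // A normal page, or the redirect page itself when ?redirect=no is set.
        renderPage( $title );
    }

In these terms, Brion's objection is that once the 301 is followed, the target page is rendered with no memory of $title, so a "Redirected from" link back would need an extra parameter on the target URL.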
-- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From mathias.schindler at gmail.com Thu Nov 24 22:13:35 2005 From: mathias.schindler at gmail.com (Mathias Schindler) Date: Thu, 24 Nov 2005 23:13:35 +0100 Subject: [Wikitech-l] Static HTML tree dumps for mirroring or CD distribution In-Reply-To: References: Message-ID: <48502b480511241413x2b48f8caof6670b3497fc54f2@mail.gmail.com> On 11/24/05, John Haselden wrote: > Is there an HTML tree dump project in the works that I could contribute to? > /maintenance/dumphtml.php and static.wikipedia.org Mathias From mathias.schindler at gmail.com Thu Nov 24 22:15:20 2005 From: mathias.schindler at gmail.com (Mathias Schindler) Date: Thu, 24 Nov 2005 23:15:20 +0100 Subject: [Wikitech-l] Re: Add namespace parameter to _What links here_ In-Reply-To: References: Message-ID: <48502b480511241415k629ef1f3y634bbba6e5186f69@mail.gmail.com> On 11/24/05, Phil Boswell wrote: > Did I miss an answer to this question I wrote back in October? I filed a mediazilla feature request for this. Mathias From leon at ch.tudelft.nl Thu Nov 24 22:47:43 2005 From: leon at ch.tudelft.nl (Leon Planken) Date: Thu, 24 Nov 2005 23:47:43 +0100 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <849f98ed0511241321w3da5a87ew@mail.gmail.com> References: <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> <849f98ed0511241321w3da5a87ew@mail.gmail.com> Message-ID: <20051124224743.GA25195@atlas.et.tudelft.nl> Hi, I'll step in here and try not to cause too much of a fuss. I just want to note here that every time this debate comes up, the only one who voices objections is Mark, the only non-anonymous voter against the new language Wikipedia. (I may remember this selectively though.) Of course, the people that most eloquently *support* the creation probably voted for it too. I wanted to write a conclusion here but couldn't really think of any. Just wanted to let you know that this struck me. The rest of the mailing list doesn't seem to want to get involved. Regards, Leon -- I really didn't foresee the Internet. But then, neither did the computer industry. Not that that tells us very much, of course - the computer industry didn't even foresee that the century was going to end. -- Douglas Adams (1952 - 2001) From node.ue at gmail.com Thu Nov 24 22:57:16 2005 From: node.ue at gmail.com (Mark Williamson) Date: Thu, 24 Nov 2005 15:57:16 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <20051124224743.GA25195@atlas.et.tudelft.nl> References: <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> <849f98ed0511241321w3da5a87ew@mail.gmail.com> <20051124224743.GA25195@atlas.et.tudelft.nl> Message-ID: <849f98ed0511241457v1a2c7604g@mail.gmail.com> On 24/11/05, Leon Planken wrote: > Hi, > > I'll step in here and try not to cause too much of a fuss. > > I just want to note here that every time this debate comes up, the > only one who voices objections is Mark, the only non-anonymous voter > against the new language Wikipedia. (I may remember this selectively > though.) Of course, the people that most eloquently *support* the > creation probably voted for it too. Yes, to a certain degree; but how many people asked for its creation on-list? So far, Servien and Arbeo and perhaps Gerard M.
So just as it's not fair to apply 3 to a full count of voters, I don't think it's fair to do the same for 1. Mark -- "Take away their language, destroy their souls." -- Joseph Stalin From astronouth7303 at gmail.com Fri Nov 25 00:45:31 2005 From: astronouth7303 at gmail.com (Jamie Bliss) Date: Thu, 24 Nov 2005 19:45:31 -0500 Subject: [Wikitech-l] Re: Add namespace parameter to _What links here_ In-Reply-To: References: Message-ID: Timwi wrote: > To find someone to code it may be the harder part. Last time I did > something like this it was reverted with a snide remark from Brion, so > I'm not doing this again. To get someone to _code_ it should be easy (just ask here and mediawiki-l). To get someone to commit it to CVS could be more challenging. You wouldn't believe the number of RFEs/bugs I've submitted to mediazilla with patches. The only one that was integrated into CVS was #1328, for eAccelerator shm support, and I had to really push for that one in IRC until Brion committed it. -- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From sbwoodside at yahoo.com Fri Nov 25 05:40:11 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Fri, 25 Nov 2005 00:40:11 -0500 Subject: [Wikitech-l] scalability design ideas Message-ID: I discussed some ideas a few days ago on #wikimedia-tech; here's a summary email about that. overall goals: - preserve all wikipedia functionality - make it more resistant to scaling forces - highly modular - reduce interdependence - identify core "must have" functionality - push everything else out to the edge as far as possible - cache as much as possible at every level - design caching into the system Five logical components I'm focusing on: - the article (all articles = the content) - a content cache - authentication server(s) - UI server(s) - squid cache(s) As I understand the architecture today, several of these functions are currently being performed monolithically by the appservers... so in part I'm proposing a refactoring where the core functions (article, content cache, authentication) are protected from traffic by a ring of "defenses" in the UI servers and squid caches.
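For illustration, the propagation rule behind this design can be sketched as follows; the layer-by-layer breakdown comes next, and every function name here is hypothetical, not a MediaWiki API:

    <?php
    // Sketch: an edit writes the article once; the content cache then
    // re-reads each affected article exactly once.
    function onArticleEdited( $title, $newWikitext ) {
        storeArticle( $title, $newWikitext );      // change my content
        $affected = array( $title );
        if ( isTemplate( $title ) ) {
            // a template edit touches every article that uses it
            $affected = array_merge( $affected, getTemplateUsers( $title ) );
        }
        foreach ( array_unique( $affected ) as $t ) {
            purgeContentCache( $t );               // one re-read per article
        }
    }

Gregory Maxwell's reply later in the thread points at the part this sketch glosses over: recolouring red links after a move or delete means finding and touching every referring article.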
Here's a breakdown of each layer: ARTICLE - for storing the content The unit of content is a 3-tuple: {wikitext, red coloured links, templates} - each time I am edited: --- change my content --- if I'm a new/moved article, change colour of links in articles that reference me --- if I'm a template, change each article that uses me - goal: --- when I'm edited, propagate those changes as *efficiently* as possible to my fellow articles --- insist that when I'm changed, directly or indirectly, I am only read *once* by the Content cache --- insist that I'm only changed by the authentication server CONTENT CACHE - for caching articles for browsing - goal: --- I only hit an article *once* for each change to that article --- no one else ever *reads* from the articles but me AUTHENTICATION SERVER - for authenticating users for editing - goal: --- I'm only involved when you have to be *certain* of a user's ID --- that is, first log-in, and when they submit an edit --- no one else ever *writes* to the articles but me (once I've ID'd the user) UI SERVER - for serving up HTML pages - goal: --- for browsing, I read from content cache, add user dressing, and serve --- for submitting edits, I send them to authentication server tricks: - could get into tricks with javascript, IFRAMEs, whatever to push work farther to the edge - could create a distributed UI server system that can be replicated and run by universities, etc. SQUID CACHE - especially for non-logged-in users - goal: --- remove browsing load from the UI server -- http://simonwoodside.com From sbwoodside at yahoo.com Fri Nov 25 05:46:13 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Fri, 25 Nov 2005 00:46:13 -0500 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> <43838179.9010704@venier.net> Message-ID: On Nov 22, 2005, at 3:44 PM, Anders Wegge Jakobsen wrote: > "Andrew" == Andrew Venier writes: > >> S. Woodside wrote: >>> Conjecture 4. That the development of a wikipedia article over time >>> occurs in a manner consistent with the biological evolution of a >>> species. >>> > >> I would be rather surprised to hear that our articles are >> reproducing. > > The original list of "Wikipedians who " has cloned itself > numerous times. You may argue that it's nothing more than a parasite > feeding on the host organism's desire to be part of as many groups as > possible, but vira are also a result of evolution. Exactly. There's a bunch of ways to approach it. One is the concept of Speciation. This is the process whereby a single species turns into multiple species. Perhaps there's a new uncross-able river that shows up between two populations of the species, they diverge over time, and eventually they become two species. The same thing might be seen to happen with a wikipedia article, where the article splits in two or more to cover variations on the topic in greater detail. --simon -- http://simonwoodside.com From sbwoodside at yahoo.com Fri Nov 25 05:54:23 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Fri, 25 Nov 2005 00:54:23 -0500 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> Message-ID: On Nov 22, 2005, at 2:35 AM, Gregory Maxwell wrote: > On 11/22/05, S. Woodside wrote: >> Conjecture 2. That wikipedia is sufficiently formal and complete that >> you could build a useful general purpose AI knowledge base using it.
> > On a more constructive note, if you're interested in AI taught using > wikipedia... you should take a look at the state of the art in English > parsers (such as [[Link Grammar]]) and how they fare on Wikipedia. > Impressive that it works at all, but I don't think we need to worry > about a hostile takeover of the planet by a self-aware encyclopedia > any time soon. I'm googling a little bit now for wiki-related AI projects... What stuck in my mind for conjecture 2 was the Loebner Prize, where you create these chatbots to try to win a "Turing Test" style competition. One of the biggest challenges they have is to come up with a good knowledge base, and since Cyc isn't free, well, that's a challenge. Could you build a good Loebner competitor using wiki? I have no clue. :-) But if someone put a gun to my head that's probably the track I'd go down. It's not like I think I'm some kind of scientific genius but I did run these conjectures by a friend of mine who I think is a genius and he seemed sufficiently interested that I thought I'd spam the world with them ;-) --simon -- http://simonwoodside.com From sbwoodside at yahoo.com Fri Nov 25 06:04:01 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Fri, 25 Nov 2005 01:04:01 -0500 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: <849f98ed0511212350q26751fd4p@mail.gmail.com> References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> <849f98ed0511212350q26751fd4p@mail.gmail.com> Message-ID: On Nov 22, 2005, at 2:50 AM, Mark Williamson wrote: > What about: > > Conjecture 8. Regardless of the final result, the inner workings of > Wikipedia are so chaotic and at times unpleasant that most readers > would rip all of their hair out if they watched a documentary about > the creative process behind the articles. > > Fair? but hard to prove :-) simon > > Mark > > On 22/11/05, Gregory Maxwell wrote: >> On 11/22/05, S. Woodside wrote: >>> Conjecture 1. That the distance between any two wikipedia pages, >>> randomly chosen, as measured by wikilinks, is on average 6. >>> >>> Conjecture 2. That wikipedia is sufficiently formal and complete >>> that >>> you could build a useful general purpose AI knowledge base using it. >>> >>> Conjecture 3. That wikipedia has low information entropy. >>> >>> Conjecture 4. That the development of a wikipedia article over time >>> occurs in a manner consistent with the biological evolution of a >>> species. >>> >>> Conjecture 5. That the relationship between the amount of >>> material in >>> wikipedia and the number of article views is exponential. >>> >>> Conjecture 6. That wikipedia is, on average, factually accurate. >> >> Ohoh ! can I produce one? >> >> Conjecture 7. The amount of wanky conjectures produced by bloggers is >> bound only by the number of sites that will allow them to spam their >> URLs. >> >> Do I get a prize? ;) >> >> On a more constructive note, if you're interested in AI taught using >> wikipedia... you should take a look at the state of the art in >> English >> parsers (such as [[Link Grammar]]) and how they fare on Wikipedia. >> Impressive that it works at all, but I don't think we need to worry >> about a hostile takeover of the planet by a self-aware encyclopedia >> any time soon. >> _______________________________________________ >> Wikitech-l mailing list >> Wikitech-l at wikimedia.org >> http://mail.wikipedia.org/mailman/listinfo/wikitech-l >> > > > -- > "Take away their language, destroy their souls."
-- Joseph Stalin > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l -- http://simonwoodside.com From smolensk at eunet.yu Fri Nov 25 07:18:07 2005 From: smolensk at eunet.yu (Nikola Smolenski) Date: Fri, 25 Nov 2005 08:18:07 +0100 Subject: [Wikitech-l] six wikipedia conjectures, and slowness In-Reply-To: References: <04EC2987-4790-4CFD-B0C0-50D591CB520D@yahoo.com> <849f98ed0511212350q26751fd4p@mail.gmail.com> Message-ID: <200511250818.07534.smolensk@eunet.yu> On Friday 25 November 2005 07:04, S. Woodside wrote: > On Nov 22, 2005, at 2:50 AM, Mark Williamson wrote: > > Conjecture 8. Regardless of the final result, the inner workings of > > Wikipedia are so chaotic and at times unpleasant that most readers > > would rip all of their hair out if they watched a documentary about > > the creative process behind the articles. > > > > Fair? > > but hard to prove :-) Why? Simply make a documentary about it and force some lab rats^H^H^H^H^H^H^H^Hreaders to watch it. From gerard.meijssen at gmail.com Fri Nov 25 09:13:40 2005 From: gerard.meijssen at gmail.com (GerardM) Date: Fri, 25 Nov 2005 10:13:40 +0100 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <849f98ed0511241457v1a2c7604g@mail.gmail.com> References: <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> <849f98ed0511241321w3da5a87ew@mail.gmail.com> <20051124224743.GA25195@atlas.et.tudelft.nl> <849f98ed0511241457v1a2c7604g@mail.gmail.com> Message-ID: <41a006820511250113v62a80aev4f915226c77b1eb1@mail.gmail.com> Hoi, There is one other thing wrong with voting for languages. You are supposed to vote for a project if you are willing to help with that project. Consequently I have not voted for nds-nl. For people who vote AGAINST the setup of a language there is no such barrier. There is nothing in there for them. It does not have consequences. There has been a lot of acrimony about this language, Mark has moved it out of the standard request page several times and it still just does not die. If anything it proves that people want this language and want it badly. I do not understand what is in there for Mark. I do know that a democracy and voting where the votes do not carry the same weight is bad. Thanks, GerardM On 11/24/05, Mark Williamson wrote: > > On 24/11/05, Leon Planken wrote: > > Hi, > > > > I'll step in here and try not to cause too much of a fuss. > > > > I just want to note here that every time this debate comes up, the > > only one who voices objections is Mark, the only non-anonymous voter > > against the new language Wikipedia. (I may remember this selectively > > though.) Of course, the people that most eloquently *support* the > > creation probably voted for it too. > > Yes, to a certain degree; but how many people asked for its creation > on-list? So far, Servien and Arbeo and perhaps Gerard M. So just as > it's not fair to apply 3 to a full count of voters, I don't think it's > fair to do the same for 1. > > Mark > > -- > "Take away their language, destroy their souls."
-- Joseph Stalin > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > From phil.boswell at gmail.com Fri Nov 25 09:43:37 2005 From: phil.boswell at gmail.com (Phil Boswell) Date: Fri, 25 Nov 2005 09:43:37 -0000 Subject: [Wikitech-l] Re: Suggestion for Category links References: Message-ID: "Jamie Bliss" wrote in message news:dm5mb8$8ls$1 at sea.gmane.org... > Phil Boswell wrote: >> [note cross-posting and prune as appropriate] >> How difficult would it be to add title="" attributes to the Category >> links which appear at the bottom of articles? >> I was thinking it might be a good idea to have some way to display the >> "sort key" without having to delve into edit mode, so I wondered if this >> could be displayed as a "tool-tip" when hovering over the link, in the >> same way as the destination article is shown for a regular wikilink. Is >> this an idea which might be helpful, or am I getting punch-drunk after a >> difficult day? > Shouldn't it be the other way? Display the sort key in the page itself and > put the real title in the title="" attribute (no humor intended). No, because you want the set of categories to which the article belongs to be instantly visible. You would also want it readily available for printing. The "sort key" is a secondary attribute which can safely be left hidden unless the viewer wants to see it, and is likely to be less relevant to a printed version. -- Phil [[en:User:Phil Boswell]] From node.ue at gmail.com Fri Nov 25 10:18:50 2005 From: node.ue at gmail.com (Mark Williamson) Date: Fri, 25 Nov 2005 03:18:50 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <41a006820511250113v62a80aev4f915226c77b1eb1@mail.gmail.com> References: <eb1ae9ed0511240434h55efb0f8x@mail.gmail.com> <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> <849f98ed0511241321w3da5a87ew@mail.gmail.com> <20051124224743.GA25195@atlas.et.tudelft.nl> <849f98ed0511241457v1a2c7604g@mail.gmail.com> <41a006820511250113v62a80aev4f915226c77b1eb1@mail.gmail.com> Message-ID: <849f98ed0511250218h4e652372w@mail.gmail.com> > There is one other thing wrong with voting for languages. You are supposed > to vote for a project if you are willing to help with that project. That's quite untrue. If it were true, it would say so in some official or quasi-official page. And it doesn't. > Consequently I have not voted for nds-nl. For people who vote AGAINST the > setup of a language there is no such barrier. There is nothing in there for > them. It does not have consequences. ... Then, why don't we have Zlatiborian WP? Or American English? If you requested a new language, "Meijssenish", and somehow managed to find 5 or 6 people willing to work on it, does that mean that opposition votes would not matter? The voting system is implemented to weed out requests that have issues with them or that might cause problems. Thus, Zlatiborian, American English, Brazilian Portuguese, DDR-Sprach, and the like, were not created, even though they had some supporters. > There has been a lot of acrimony about this language, Mark has moved it out > of the standard request page several times and it still just does not die. No -- you've got it backwards. I have moved it BACK to the standard request page; Servien has been moving it off of there to the Approved requests page. > If anything it proves that people want this language and want it badly.
I do No -- it proves that Servien is stubborn. He requested people to come vote against the Veluws WP because he was afraid the proposal might come through... even though that would've meant a Wikipedia in his language in some form or another. > not understand what is in there for Mark. I do know that a democracy and > voting where the votes do not carry the same weight is bad. 1) What there is for me is that I think this request is absurd. So far, only 2 (count them -- 2) native speakers have weighed in on this. All other votes are from people who either do not speak the variety in question at all, or who speak it not as their native language. Some votes are based on the premise that it's to separate LS from the Dutch WP, which Servien told some people, which is patently absurd. Now, I would not oppose this request if I hadn't made 100% sure that no such language exists. According to a dialect atlas of LS referenced for me by Arbeo, there are 4 dialects of LS: 1) North Lowlands Saxon 2) Westphalian 3) Eastphalian 4) Schleswigish. The first dialect is the dialect used in the Netherlands, as well as much of Northern Germany, and is the dialect used on the current nds.wiki. There are no significant isoglosses along the Dutch-German border. Even the spraak/taal difference does not correspond to national boundaries -- in Groningen, they say "spraak". Veluws has some interesting linguistic features that distinguish it from other speech of the Northern LS area, but Gronings doesn't. Gronings is barely different from what's spoken in neighbouring Germany. Also, the European Charter for Regional and Minority Languages recognises one language called "Low Saxon", as a regional language in Germany and in the Netherlands. It doesn't recognise a separate "Dutch Low Saxon" language or "German Low Saxon" language. Now, if somebody requested a Westphalian WP, I'd support it because Westphalian and Northern LS are not easily mutually intelligible. But most Northern LS dialects are. Wikipedias are created for separate languages, not separate countries. What Servien is trying to do here is pretend that there is a separate language, when in fact there is not. All pages about "Nedersaksisch" note that it's spoken in NL and DE both. I found 0 references to a so-called "Dutch Low Saxon" language, and no website or book which considered the national border a real linguistic boundary. The benefit to a Wikipedia in which people from different countries cooperate is that national POVs are balanced. If Servien has his way, he will have a WP of only Dutch people with largely conservative views. Many things could be written there that would be declared POV at nds.wiki. Now, to address the "democracy" part -- Wikipedia is not a "democracy". Majority does _not_ rule. We have a little thing called "consensus". That means that there is general agreement. 3 people disagreeing out of 18 is not "general agreement". Yes, the 18 people are in a majority, but Wikipedia is designed so that the minority doesn't have to bend over so the majority can do whatever it wants, and the majority is encouraged to seek a compromise that more people will agree on. Now certainly, if it were 1 oppose vote to 50, that would be a definite consensus. But 3 to 18 isn't. It means there are still outstanding issues that need to be resolved, or the WP shouldn't be created. Now, if the ndsnl.wiki is created, that sets the precedent for other potentially ridiculous WPs...
at any moment when they just happen to have a majority supporting (as Zlatiborian did for a few days), they can say "Oh our request is approved now create it". Mark PS Perhaps you want to know why I would vote against a Wikipedia, because you think its creation won't affect me. Well, if you think that, it's an incorrect assumption -- if a Wikipedia is created like that, for a national boundary rather than a genuine linguistic boundary, it degrades the status of the Foundation and of the project as a whole. I do not want to see this happen. Now, in all of this, it's also undeniable that Servien has not exactly had exemplary behaviour either. He started out by being rather rude, then moved on to trying to force his proposal through (at one point, the vote was 15-4, and yet he still said it should be considered "Approved" because of a majority, failing to understand the concept of consensus and unresolved issues). He was impatient with questions, did not provide full explanatory answers, and continued to be rude. More recently, he has resorted to name-calling as well. -- "Take away their language, destroy their souls." -- Joseph Stalin From node.ue at gmail.com Fri Nov 25 10:20:50 2005 From: node.ue at gmail.com (Mark Williamson) Date: Fri, 25 Nov 2005 03:20:50 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <849f98ed0511250218h4e652372w@mail.gmail.com> References: <eb1ae9ed0511240434h55efb0f8x@mail.gmail.com> <849f98ed0511241242t79c8d799s@mail.gmail.com> <41a006820511241304l7215e18ex9bb64e16bfa2d8d8@mail.gmail.com> <849f98ed0511241321w3da5a87ew@mail.gmail.com> <20051124224743.GA25195@atlas.et.tudelft.nl> <849f98ed0511241457v1a2c7604g@mail.gmail.com> <41a006820511250113v62a80aev4f915226c77b1eb1@mail.gmail.com> <849f98ed0511250218h4e652372w@mail.gmail.com> Message-ID: <849f98ed0511250220g1280838dn@mail.gmail.com> Ahh, and if you think Dutch spelling and German spelling of Lowlands Saxon are truly incompatible, check the archives of the mainpage at nds.wiki -- People have in the past posted messages there in Dutch-spelt Lowlands Saxon, received responses in German spelling, and then responded in turn to that in Dutch spelling. People from across the border exchange mails in LS every day on lowlands-l. Mark On 25/11/05, Mark Williamson <node.ue at gmail.com> wrote: > > There is one other thing wrong with voting for languages. You are supposed > > to vote for a project if you are willing to help with that project. > > That's quite untrue. If it were true, it would say so in some official > or quasi-official page. And it doesn't. > > > Consequently I have not voted for nds-nl. For people who vote AGAINST the > > setup of a language there is no such barrier. There is nothing in there for > > them. It does not have consequences. > > ... > > Then, why don't we have Zlatiborian WP? Or American English? > > If you requested a new language, "Meijssenish", and somehow managed to > find 5 or 6 people willing to work on it, does that mean that > opposition votes would not matter? > > The voting system is implemented to weed out requests that have issues > with them or that might cause problems. Thus, Zlatiborian, American > English, Brazilian Portuguese, DDR-Sprach, and the like, were not > created, even though they had some supporters. > > > There has been a lot of acrimony about this language, Mark has moved it out > > of the standard request page several times and it still just does not die. > > No -- you've got it backwards.
I have moved it BACK to the standard > request page; Servien has been moving it off of there to the Approved > requests page. > > > If anything it proves that people want this language and want it badly. I do > > No -- it proves that Servien is stubborn. He requested people to come > vote against the Veluws WP because he was afraid the proposal might > come through... even though that would've meant a Wikipedia in his > language in some form or another. > > > not understand what is in there for Mark. I do know that a democracy and > > voting where the votes do not carry the same weight is bad. > > 1) What there is for me is that I think this request is absurd. So > far, only 2 (count them -- 2) native speakers have weighed in on this. > All other votes are from people who either do not speak the variety in > question at all, or who speak it not as their native language. Some > votes are based on the premise that it's to separate LS from the Dutch > WP, which Servien told some people, which is patently absurd. Now, I > would not oppose this request if I hadn't made 100% sure that no such > language exists. > > According to a dialect atlas of LS referenced for me by Arbeo, there > are 4 dialects of LS: > > 1) North Lowlands Saxon > 2) Westphalian > 3) Eastphalian > 4) Schleswigish. > > The first dialect is the dialect used in the Netherlands, as well as > much of Northern Germany, and is the dialect used on the current > nds.wiki. > > There are no significant isoglosses along the Dutch-German border. > Even the spraak/taal difference does not correspond to national > boundaries -- in Groningen, they say "spraak". Veluws has some > interesting linguistic features that distinguish it from other speech > of the Northern LS area, but Gronings doesn't. Gronings is barely > different from what's spoken in neighbouring Germany. > > Also, the European Charter for Regional and Minority Languages > recognises one language called "Low Saxon", as a regional language in > Germany and in the Netherlands. It doesn't recognise a separate "Dutch > Low Saxon" language or "German Low Saxon" language. > > Now, if somebody requested a Westphalian WP, I'd support it because > Westphalian and Northern LS are not easily mutually intelligible. But > most Northern LS dialects are. > > Wikipedias are created for separate languages, not separate countries. > What Servien is trying to do here is pretend that there is a separate > language, when in fact there is not. All pages about "Nedersaksisch" > note that it's spoken in NL and DE both. I found 0 references to a > so-called "Dutch Low Saxon" language, and no website or book which > considered the national border a real linguistic boundary. > > The benefit to a Wikipedia in which people from different countries > cooperate is that national POVs are balanced. If Servien has his way, > he will have a WP of only Dutch people with largely conservative > views. Many things could be written there that would be declared POV > at nds.wiki. > > Now, to address the "democracy" part -- Wikipedia is not a > "democracy". Majority does _not_ rule. We have a little thing called > "consensus". That means that there is general agreement. 3 people > disagreeing out of 18 is not "general agreement". Yes, the 18 people > are in a majority, but Wikipedia is designed so that the minority > doesn't have to bend over so the majority can do whatever it wants, > and the majority is encouraged to seek a compromise that more people > will agree on.
> > Now certainly, if it were 1 oppose vote to 50, that would be a > definite consensus. But 3 to 18 isn't. It means there are still > outstanding issues that need to be resolved, or the WP shouldn't be > created. > > Now, if the ndsnl.wiki is created, that sets the precedent for other > potentially ridiculous WPs... at any moment when they just happen to > have a majority supporting (as Zlatiborian did for a few days), they > can say "Oh our request is approved now create it". > > Mark > > PS > > Perhaps you want to know why I would vote against a Wikipedia, because > you think its creation won't affect me. Well, if you think that, it's > an incorrect assumption -- if a Wikipedia is created like that, for a > national boundary rather than a genuine linguistic boundary, it > degrades the status of the Foundation and of the project as a whole. I > do not want to see this happen. > > Now, in all of this, it's also undeniable that Servien has not exactly > had exemplary behaviour either. He started out by being rather rude, > then moved on to trying to force his proposal through (at one point, > the vote was 15-4, and yet he still said it should be considered > "Approved" because of a majority, failing to understand the concept of > consensus and unresolved issues). He was impatient with questions, did > not provide full explanatory answers, and continued to be rude. More > recently, he has resorted to name-calling as well. > > -- > "Take away their language, destroy their souls." -- Joseph Stalin > -- "Take away their language, destroy their souls." -- Joseph Stalin From smolensk at eunet.yu Fri Nov 25 15:18:44 2005 From: smolensk at eunet.yu (Nikola Smolenski) Date: Fri, 25 Nov 2005 16:18:44 +0100 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <849f98ed0511250220g1280838dn@mail.gmail.com> References: <eb1ae9ed0511240434h55efb0f8x@mail.gmail.com> <849f98ed0511250218h4e652372w@mail.gmail.com> <849f98ed0511250220g1280838dn@mail.gmail.com> Message-ID: <200511251618.44894.smolensk@eunet.yu> On Friday 25 November 2005 11:20, Mark Williamson wrote: > Ahh, and if you think Dutch spelling and German spelling of Lowlands > Saxon are truly incompatible, check the archives of the mainpage at > nds.wiki -- People have in the past posted messages there in Dutch-spelt > Lowlands Saxon, received responses in German spelling, and then > responded in turn to that in Dutch spelling. > > People from across the border exchange mails in LS every day on lowlands-l. It's nice to know that there are people who are even worse than us :) Have you considered installing the extension which is being made for the Serbian Wikipedia, which would enable each article to be viewed in both spellings? From hashar at altern.org Fri Nov 25 17:43:16 2005 From: hashar at altern.org (Ashar Voultoiz) Date: Fri, 25 Nov 2005 18:43:16 +0100 Subject: [Wikitech-l] Re: wikipedia connection are slowed down: client side solutions? In-Reply-To: <87y83h3pjc.fsf@mat.ucm.es> References: <87y83h3pjc.fsf@mat.ucm.es> Message-ID: <dm7ifk$68f$1@sea.gmane.org> Uwe Brauer wrote: > I have noticed for a couple of days a slowdown when I try to submit a > contribution. Is there anything I could do, like enhancing the cache of > my browser? There can be several causes, most of them not related to client side caching :) When you submit a contribution, the software needs to write to the master database (which is handling all writes for all projects).
Then the software needs to reparse the whole page and send it back to you (and flush various caches). -- Ashar Voultoiz - WP++++ http://en.wikipedia.org/wiki/User:Hashar http://www.livejournal.com/community/wikitech/ IM: hashar at jabber.org ICQ: 15325080 From gmaxwell at gmail.com Fri Nov 25 18:24:55 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Fri, 25 Nov 2005 13:24:55 -0500 Subject: [Wikitech-l] scalability design ideas In-Reply-To: <A9C7D43A-0221-46A2-85EA-042DD7606A08@yahoo.com> References: <A9C7D43A-0221-46A2-85EA-042DD7606A08@yahoo.com> Message-ID: <e692861c0511251024r53dbed41x113305b71e0d20ff@mail.gmail.com> On 11/25/05, S. Woodside <sbwoodside at yahoo.com> wrote: > ARTICLE > - for storing the content > The unit of content is a 3-tuple: {wikitext, red coloured links, > templates} > - each time I am edited: > --- change my content > --- if I'm a new/moved article, change colour of links in articles > that reference me > --- if I'm a template, change each article that uses me > - goal: > --- when I'm edited, propagate those changes as *efficiently* as > possible to my fellow articles > --- insist that when I'm changed, directly or indirectly, I am only > read *once* by the Content cache > --- insist that I'm only changed by the authentication server And so you perform a linear scan of all articles' redlinks to find the ones you must remove and a reparse of all articles to find the redlinks you must add every time there is a move or delete? Moves only decrease redlinks, but deletes must be handled as well. We have enough people dreaming up ideas, myself included. Show us the code, and the benchmarks. From edwardzyang at thewritingpot.com Sat Nov 26 03:46:54 2005 From: edwardzyang at thewritingpot.com (Edward Z. Yang) Date: Fri, 25 Nov 2005 22:46:54 -0500 Subject: [Wikitech-l] WikiStatus - OpenFact's replacement Message-ID: <4387DAAE.9000004@thewritingpot.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 OpenFact's Wikipedia Status ( http://tinyurl.com/48a5m ) is the original Wikipedia Status page. However, it suffers from a huge problem: it is prohibitively unwieldy to use. The solution? Create a more streamlined application designed for the exact task of allowing the community to report on contingency issues. Yup... it's called WikiStatus for lack of a more creative name. You can view an online version of it here: http://tinyurl.com/a62lu and the source code is here: http://tinyurl.com/b9z8n I'm not sure when/how OpenFacts became the de facto standard for status reporting, but it would be cool if we could add a link to WikiStatus from the standard error page. Of course, it's really new software (I consider it alpha quality) and it probably needs to be audited, but when there's a problem, there should be multiple contingency sites. Please check it out! Thanks. :D - -- Edward Z. Yang Personal: edwardzyang at thewritingpot.com SN:Ambush Commander Website: http://www.thewritingpot.com/ GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (MingW32) iD8DBQFDh9quqTO+fYacSNoRAmKQAJ4654F+d8NrScr40cwdek8x7Mky8wCfa3RU beH7MlBjYFiqejEZChf8EnA= =S57w -----END PGP SIGNATURE----- From f-x.p at laposte.net Sat Nov 26 10:05:04 2005 From: f-x.p at laposte.net (FxParlant) Date: Sat, 26 Nov 2005 11:05:04 +0100 Subject: [Wikitech-l] MediaWiki: 1.6devel Message-ID: <dm9c5q$8bl$1@sea.gmane.org> Hi, I've seen that wikipedia runs on a 1.6devel version of mediawiki.
(see http://en.wikipedia.org/wiki/Special:Version ) Although I kept my eyes open, I couldn't find the files for any 1.6 version on http://sourceforge.net/project/showfiles.php?group_id=34373#files nor http://www.mediawiki.org/wiki/Download Can someone tell me what this version is? Thanks François From dirk at riehle.org Sat Nov 26 11:14:32 2005 From: dirk at riehle.org (Dirk Riehle) Date: Sat, 26 Nov 2005 12:14:32 +0100 Subject: Wikis and Wikipedia (was: Re: [Wikitech-l] WikiStatus - OpenFact's replacement In-Reply-To: <4387DAAE.9000004@thewritingpot.com> References: <4387DAAE.9000004@thewritingpot.com> Message-ID: <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> Re: choice of term "WikiStatus" I think it would be good to make sure we don't equate the world of wikis with Wikipedia. Wikipedia may be the most well-known and important wiki, but the world of wikis, as I'm sure everyone here knows, is much broader and more diverse than even Wikipedia. I'm making this point after seeing some (influential?!) bloggers start equating wikis with Wikipedia, which helps nobody. (Neither will Wikipedia benefit from being equated with the LA Times disaster, nor will wikis be helped by being viewed through Encyclopedia glasses.) I recognize that you use WikiStatus only as a shorthand for Wikipedia Status and spell it out most of the time, but sometimes acronyms take on a life of their own, say if you register a domain for it. Thanks, Dirk At 26.11.2005, Edward Z. Yang wrote: >-----BEGIN PGP SIGNED MESSAGE----- >Hash: SHA1 > >OpenFact's Wikipedia Status ( http://tinyurl.com/48a5m ) is the original >Wikipedia Status page. However, it suffers from a huge problem: it is >prohibitively unwieldy to use. The solution? Create a more streamlined >application designed for the exact task of allowing the community to >report on contingency issues. Yup... it's called WikiStatus for lack of >a more creative name. > >You can view an online version of it here: http://tinyurl.com/a62lu and >the source code is here: http://tinyurl.com/b9z8n > >I'm not sure when/how OpenFacts became the de facto standard for status >reporting, but it would be cool if we could add a link to WikiStatus >from the standard error page. Of course, it's really new software (I >consider it alpha quality) and it probably needs to be audited, but when >there's a problem, there should be multiple contingency sites. > >Please check it out! Thanks. :D > >- -- > Edward Z. Yang Personal: edwardzyang at thewritingpot.com > SN:Ambush Commander Website: http://www.thewritingpot.com/ > GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc > 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA >-----BEGIN PGP SIGNATURE----- >Version: GnuPG v1.4.1 (MingW32) > >iD8DBQFDh9quqTO+fYacSNoRAmKQAJ4654F+d8NrScr40cwdek8x7Mky8wCfa3RU >beH7MlBjYFiqejEZChf8EnA= >=S57w >-----END PGP SIGNATURE----- >_______________________________________________ >Wikitech-l mailing list >Wikitech-l at wikimedia.org >http://mail.wikipedia.org/mailman/listinfo/wikitech-l Dr. Dirk Riehle | Bayave Software GmbH | http://www.bayave.de Contact: dirk at riehle.org | +49 172 184 8755 | http://www.riehle.org Interested in wiki research? Please see http://www.wikisym.org!
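For reference, the CVS answers that follow in this thread can be made concrete. An anonymous checkout of the development code from SourceForge in that era would have looked roughly like this; the pserver form is SourceForge's standard one, and the exact cvsroot path is an assumption rather than a quote from the thread:

    # log in anonymously (press Enter at the empty password prompt)
    cvs -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/wikipedia login
    # 'phase3' is the core MediaWiki module named in Ivan Krstic's reply below
    cvs -z3 -d:pserver:anonymous@cvs.sourceforge.net:/cvsroot/wikipedia checkout phase3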
From timwi at gmx.net Sat Nov 26 11:50:04 2005 From: timwi at gmx.net (Timwi) Date: Sat, 26 Nov 2005 11:50:04 +0000 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <43862300.20000@pobox.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> Message-ID: <dm9iam$l0h$1@sea.gmane.org> Brion Vibber wrote: > Timwi wrote: > >>>It just makes it a lot harder to deal with such pages: if you >>>HTTP-redirect straight to the target page you're missing the link back >>>to the redirect page. (And that is *crucial* for editing work and >>>vandalism cleanup. It is non-negotiable.) >> >>I feel I should point out an implicit fallacy in this. It is >>non-negotiable that we need a link to the redirect page. It is *not* >>non-negotiable that we can't return a 301 HTTP redirection. > > The only fallacy is yours, you invented some claim that I said 301s can't be > used. Please stop getting so worked up over stuff, Brion. I didn't "invent" any "claim"; I pointed out a fallacy that people may make when reading your message, not necessarily a fallacy you made. > if you had read my message you'd have seen One of your most annoying habits is to allege that people didn't read your messages. I did, and I understood your point. I'm afraid you're the one that didn't understand mine - but maybe I wasn't quite clear enough, so I'll try to clarify: > my point was using a 301 would require sending additional parameters No, it wouldn't. _My_ point was that if we placed a link to the page you came from _somewhere else_ than a "redirected from" line (e.g. a list of "pages that redirect to here" on the Edit page), you would *not* need to send additional parameters. You could get back to the page a different way than you do currently. Of course, there's also someone else's suggestion of using the session variables to remember where you came from and still display a "redirected from" line. As far as I can see, you haven't replied to that idea. Timwi From krstic at fas.harvard.edu Sat Nov 26 12:07:24 2005 From: krstic at fas.harvard.edu (Ivan Krstic) Date: Sat, 26 Nov 2005 13:07:24 +0100 Subject: [Wikitech-l] MediaWiki: 1.6devel In-Reply-To: <dm9c5q$8bl$1@sea.gmane.org> References: <dm9c5q$8bl$1@sea.gmane.org> Message-ID: <43884FFC.80805@fas.harvard.edu> FxParlant wrote: > Can someone tell me what this version is ? It was decided Wikipedia would run from the latest development version in CVS. To make a checkout, see: http://sourceforge.net/cvs/?group_id=34373 -- Ivan Krstic <krstic at fas.harvard.edu> | 0x147C722D From timwi at gmx.net Sat Nov 26 12:21:47 2005 From: timwi at gmx.net (Timwi) Date: Sat, 26 Nov 2005 12:21:47 +0000 Subject: [Wikitech-l] Re: MediaWiki: 1.6devel In-Reply-To: <dm9c5q$8bl$1@sea.gmane.org> References: <dm9c5q$8bl$1@sea.gmane.org> Message-ID: <dm9k64$otr$1@sea.gmane.org> > I've seen that wikipedia runs on a 1.6devel version of mediawiki. 
> (see http://en.wikipedia.org/wiki/Special:Version ) > > Although I kept my eyes open, I couldn't find the files for any > 1.6 version on > http://sourceforge.net/project/showfiles.php?group_id=34373#files nor > http://www.mediawiki.org/wiki/Download It's in CVS :) From 2.718281828 at gmail.com Sat Nov 26 16:41:33 2005 From: 2.718281828 at gmail.com (SJ) Date: Sat, 26 Nov 2005 17:41:33 +0100 Subject: Wikis and Wikipedia (was: Re: [Wikitech-l] WikiStatus - OpenFact's replacement In-Reply-To: <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> References: <4387DAAE.9000004@thewritingpot.com> <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> Message-ID: <742dfd060511260841k355738f9n577f03d3c2e70236@mail.gmail.com> On 11/26/05, Dirk Riehle <dirk at riehle.org> wrote: > Re: choice of term "WikiStatus" > > I recognize that you use WikiStatus only as a shorthand for Wikipedia > Status and spell it out most of the time, but sometimes acronyms take > on a life of their own, say if you register a domain for it. So wikipstatus.com, with a silent p? From f-x.p at laposte.net Sat Nov 26 18:04:25 2005 From: f-x.p at laposte.net (FxParlant) Date: Sat, 26 Nov 2005 19:04:25 +0100 Subject: [Wikitech-l] Re: MediaWiki: 1.6devel In-Reply-To: <43884FFC.80805@fas.harvard.edu> References: <dm9c5q$8bl$1@sea.gmane.org> <43884FFC.80805@fas.harvard.edu> Message-ID: <dma88j$bmv$1@sea.gmane.org> Ivan Krstic wrote: > FxParlant wrote: > >>Can someone tell me what this version is? > > > It was decided Wikipedia would run from the latest development version > in CVS. To make a checkout, see: > > http://sourceforge.net/cvs/?group_id=34373 > Thanks for this URL. But as there isn't much explanation, can you tell me in which module I should look for the 1.6devel version? Thanks for your guidance. François From krstic at fas.harvard.edu Sat Nov 26 18:49:11 2005 From: krstic at fas.harvard.edu (Ivan Krstic) Date: Sat, 26 Nov 2005 19:49:11 +0100 Subject: [Wikitech-l] Re: MediaWiki: 1.6devel In-Reply-To: <dma88j$bmv$1@sea.gmane.org> References: <dm9c5q$8bl$1@sea.gmane.org> <43884FFC.80805@fas.harvard.edu> <dma88j$bmv$1@sea.gmane.org> Message-ID: <4388AE27.50802@fas.harvard.edu> FxParlant wrote: > Thanks for this URL. But as there isn't much explanation, can you > tell me in which module I should look for the 1.6devel version? The module with core MediaWiki code is 'phase3'. That said, since you weren't able to figure it out on your own, you probably *really* don't want to be running code from CVS. I suggest you grab the latest 1.5 release from here, instead: http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.5.2.tar.gz?download -- Ivan Krstic <krstic at fas.harvard.edu> | 0x147C722D From brion at pobox.com Sat Nov 26 19:09:38 2005 From: brion at pobox.com (Brion Vibber) Date: Sat, 26 Nov 2005 11:09:38 -0800 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dm9iam$l0h$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> Message-ID: <4388B2F2.1020108@pobox.com> Timwi wrote: > No, it wouldn't. _My_ point was that if we placed a link to the page you > came from _somewhere else_ than a "redirected from" line (e.g. a list of > "pages that redirect to here" on the Edit page), you would *not* need to > send additional parameters.
You could get back to the page a different > way than you do currently. If you have to jump through a bunch of extra hoops to get to it, then we've already lost the ability to easily get back where we came from. A 'pages that redirect here' would be only slightly more helpful than Whatlinkshere, as it doesn't tell you which one you actually came through. > Of course, there's also someone else's suggestion of using the session > variables to remember where you came from and still display a > "redirected from" line. As far as I can see, you haven't replied to that > idea. That would be a) extremely unreliable, b) harmful to caching. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051126/3234fd8b/attachment.pgp> From brion at pobox.com Sat Nov 26 19:10:43 2005 From: brion at pobox.com (Brion Vibber) Date: Sat, 26 Nov 2005 11:10:43 -0800 Subject: [Wikitech-l] WikiStatus - OpenFact's replacement In-Reply-To: <4387DAAE.9000004@thewritingpot.com> References: <4387DAAE.9000004@thewritingpot.com> Message-ID: <4388B333.6040905@pobox.com> Edward Z. Yang wrote: > OpenFact's Wikipedia Status ( http://tinyurl.com/48a5m ) is the original > Wikipedia Status page. However, it suffers from a huge problem: it is > prohibitively unwieldy to use. More importantly, it's on an unreliable, buggy, insecure old version of MediaWiki on somebody else's site which is less reliable than ours. That's why the links to it were removed some time ago. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051126/74358dee/attachment.pgp> From node.ue at gmail.com Sat Nov 26 19:24:28 2005 From: node.ue at gmail.com (Mark Williamson) Date: Sat, 26 Nov 2005 12:24:28 -0700 Subject: [Wikitech-l] Immediate request for wiki set-ups In-Reply-To: <200511251618.44894.smolensk@eunet.yu> References: <eb1ae9ed0511240434h55efb0f8x@mail.gmail.com> <849f98ed0511250218h4e652372w@mail.gmail.com> <849f98ed0511250220g1280838dn@mail.gmail.com> <200511251618.44894.smolensk@eunet.yu> Message-ID: <849f98ed0511261124y37af0f6bo@mail.gmail.com> Yes, but Servien and Heiko Evermann were rude about it, and at the moment it doesn't look like it's going to happen. Mark On 25/11/05, Nikola Smolenski <smolensk at eunet.yu> wrote: > On Friday 25 November 2005 11:20, Mark Williamson wrote: > > Ahh, and if you think Dutch spelling and German spelling of Lowlands > > Saxon are truly incompatible, check the archives of the mainpage at > > nds.wiki -- People have in the past posted messages there in Dutch-spelt > > Lowlands Saxon, received responses in German spelling, and then > > responded in turn to that in Dutch spelling. > > > > People from across the border exchange mails in LS every day on lowlands-l. > > It's nice to know that there are people who are even worse than us :) > > Have you considered installing the extension which is being made for the Serbian > Wikipedia, which would enable each article to be viewed in both > spellings?
> _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- "Take away their language, destroy their souls." -- Joseph Stalin From f-x.p at laposte.net Sat Nov 26 19:27:52 2005 From: f-x.p at laposte.net (FxParlant) Date: Sat, 26 Nov 2005 20:27:52 +0100 Subject: [Wikitech-l] Re: MediaWiki: 1.6devel In-Reply-To: <4388AE27.50802@fas.harvard.edu> References: <dm9c5q$8bl$1@sea.gmane.org> <43884FFC.80805@fas.harvard.edu> <dma88j$bmv$1@sea.gmane.org> <4388AE27.50802@fas.harvard.edu> Message-ID: <dmad52$nhj$1@sea.gmane.org> Hi Ivan, Yes, I looked pretty dumb on this one :-) I made a few extensions for mediawiki, always with a simple text editor. I've never used CVS, because it is not much in use in human resources lessons :-) So today I had to install a CVS client, find out how it works, and discover that it required a name for a module... which I didn't have. If you think that I'm a bit of a lost cause, you might be right :-) Thanks for your advice, which is indeed truly sensible. François www.fxparlant.net Ivan Krstic wrote: > FxParlant wrote: > >>Thanks for this URL. But as there isn't much explanation, can you >>tell me in which module I should look for the 1.6devel version? > > > The module with core MediaWiki code is 'phase3'. That said, since you > weren't able to figure it out on your own, you probably *really* don't > want to be running code from CVS. I suggest you grab the latest 1.5 > release from here, instead: > > http://prdownloads.sourceforge.net/wikipedia/mediawiki-1.5.2.tar.gz?download > From f-x.p at laposte.net Sat Nov 26 19:34:44 2005 From: f-x.p at laposte.net (FxParlant) Date: Sat, 26 Nov 2005 20:34:44 +0100 Subject: [Wikitech-l] Re: MediaWiki: 1.6devel In-Reply-To: <4388AE27.50802@fas.harvard.edu> References: <dm9c5q$8bl$1@sea.gmane.org> <43884FFC.80805@fas.harvard.edu> <dma88j$bmv$1@sea.gmane.org> <4388AE27.50802@fas.harvard.edu> Message-ID: <dmadhu$oee$1@sea.gmane.org> Super Ivan, It worked fine!! Someone recommended Eclipse to me, which contains a CVS client. It downloaded phase3 and it seems to be indeed the same as my usual mediawiki folder. Thanks for your help. Next step, finding out how to play with a diff file ... François From astronouth7303 at gmail.com Sat Nov 26 19:52:11 2005 From: astronouth7303 at gmail.com (Jamie Bliss) Date: Sat, 26 Nov 2005 14:52:11 -0500 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dm9iam$l0h$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> Message-ID: <dmaedn$qpm$1@sea.gmane.org> Timwi wrote: > Brion Vibber wrote: > >> Timwi wrote: >> >>>> It just makes it a lot harder to deal with such pages: if you >>>> HTTP-redirect straight to the target page you're missing the link back >>>> to the redirect page. (And that is *crucial* for editing work and >>>> vandalism cleanup. It is non-negotiable.) >>> >>> >>> I feel I should point out an implicit fallacy in this. It is >>> non-negotiable that we need a link to the redirect page. It is *not* >>> non-negotiable that we can't return a 301 HTTP redirection. >> >> >> The only fallacy is yours, you invented some claim that I said 301s >> can't be >> used. > > > Please stop getting so worked up over stuff, Brion.
I didn't "invent" > any "claim"; I pointed out a fallacy that people may make when reading > your message, not necessarily a fallacy you made. > >> if you had read my message you'd have seen > > > One of your most annoying habits is to allege that people didn't read > your messages. I did, and I understood your point. I'm afraid you're the > one that didn't understand mine - but maybe I wasn't quite clear enough, > so I'll try to clarify: > >> my point was using a 301 would require sending additional parameters > > > No, it wouldn't. _My_ point was that if we placed a link to the page you > came from _somewhere else_ than a "redirected from" line (e.g. a list of > "pages that redirect to here" on the Edit page), you would *not* need to > send additional parameters. You could get back to the page a different > way than you do currently. Caching. The browser can cache anything based on the URL and the headers given in Vary. (Within the bounds defined by the cache headers.) If you were to use HTTP redirects in this manner (and not add a CGI paramater), then Referer would have to be added to Vary, causing the browser to cache a seperate (but generally identical) version of the page _for every page it is linked from._ Using sessions would be impossible in this manner. The browser can not determine if what has been saved with the session has changed. As for telling the browser to not cache redirected pages, then you come up with this issue: - User ("Joe")navigates to [[Article A]]. - [[Article A]] links to [[Article B]] and [[Article C]]. - Joe navigates to [[Article B]], and his browser caches the copy. - Joe then navigates to [[Article C]] (from [[Article A]]), which links to [[Article D]] - [[Article D]] is a redirect to [[Article B]] - Joe navigates to [[Article D]] from [[Article C]] - The server sends a 301 Moved Permanently, and the browser loads the cached copy--The cached copy doesn't have the redirected from line! As for this shouting match, I suggest you two explain yourselves in detail so there is mutual understanding. Of course, you're both qualified adults and can take care of yourselves, so why am I even bothering to mention it? I'm sure you can logically come to an agreeable solution. -- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From brion at pobox.com Sat Nov 26 21:08:19 2005 From: brion at pobox.com (Brion Vibber) Date: Sat, 26 Nov 2005 13:08:19 -0800 Subject: [Wikitech-l] PHP 5 comin' up soon Message-ID: <4388CEC3.70407@pobox.com> PHP 5.1.0 has finally come out of release-candidate status. At some point next week we're going to try migrating; at that point we'll be able to make use of new PHP 5 features in MediaWiki code, such as exceptions to simplify error-handling, the improved XML reading support, etc. This does mean that MediaWiki 1.6 will not run on PHP 4; it'll require PHP 5 or 5.1. Sorry for any inconvenience this may cause to third-party users on limited hosts still running older versions. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... 
As for this shouting match, I suggest you two explain yourselves in detail so there is mutual understanding. Of course, you're both qualified adults and can take care of yourselves, so why am I even bothering to mention it? I'm sure you can logically come to an agreeable solution. -- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From brion at pobox.com Sat Nov 26 21:08:19 2005 From: brion at pobox.com (Brion Vibber) Date: Sat, 26 Nov 2005 13:08:19 -0800 Subject: [Wikitech-l] PHP 5 comin' up soon Message-ID: <4388CEC3.70407@pobox.com> PHP 5.1.0 has finally come out of release-candidate status. At some point next week we're going to try migrating; at that point we'll be able to make use of new PHP 5 features in MediaWiki code, such as exceptions to simplify error-handling, the improved XML reading support, etc. This does mean that MediaWiki 1.6 will not run on PHP 4; it'll require PHP 5.0 or 5.1. Sorry for any inconvenience this may cause to third-party users on limited hosts still running older versions. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051126/e8d9f075/attachment.pgp> From timwi at gmx.net Sat Nov 26 21:11:30 2005 From: timwi at gmx.net (Timwi) Date: Sat, 26 Nov 2005 21:11:30 +0000 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dmaedn$qpm$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <dmaedn$qpm$1@sea.gmane.org> Message-ID: <dmaj7b$7f5$1@sea.gmane.org> Jamie Bliss wrote: > >> No, it wouldn't. _My_ point was that if we placed a link to the page >> you came from _somewhere else_ than a "redirected from" line (e.g. a >> list of "pages that redirect to here" on the Edit page), you would >> *not* need to send additional parameters. You could get back to the >> page a different way than you do currently. > > Caching. > > The browser can cache anything based on the URL and the headers given in > Vary. (Within the bounds defined by the cache headers.) > > If you were to use HTTP redirects in this manner (and not add a CGI > parameter), then Referer would have to be added to Vary, causing the > browser to cache a separate (but generally identical) version of the > page _for every page it is linked from._ How many more times do I have to state that my suggestion does away with the "Redirected from" line? Everyone keeps assuming that. From timwi at gmx.net Sat Nov 26 21:17:55 2005 From: timwi at gmx.net (Timwi) Date: Sat, 26 Nov 2005 21:17:55 +0000 Subject: [Wikitech-l] "What redirects here" list on Edit page In-Reply-To: <4388B2F2.1020108@pobox.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <4388B2F2.1020108@pobox.com> Message-ID: <dmajjc$8v7$1@sea.gmane.org> Brion Vibber wrote: > Timwi wrote: > >>No, it wouldn't. _My_ point was that if we placed a link to the page you >>came from _somewhere else_ than a "redirected from" line (e.g. a list of >>"pages that redirect to here" on the Edit page), you would *not* need to >>send additional parameters. You could get back to the page a different >>way than you do currently. > > If you have to jump through a bunch of extra hoops to get to it, then > we've already lost the ability to easily get back where we came from. > A 'pages that redirect here' would be only slightly more helpful than > Whatlinkshere, as it doesn't tell you which one you actually came > through. I'm not convinced that you need to know which one you actually came through, especially if you have a list of all of them. If you follow a link in an article and are redirected to a place you didn't expect to be redirected to, then looking at a list of "articles that redirect here" is doubly useful because you can fix other inappropriate redirects as well, not just the one you stumbled upon. You mentioned fighting vandalism; but you normally notice vandalism on Recent Changes or Watchlists, where you generally follow History or Diff links, which are unaffected by my suggestion. >>[using session variables] > That would be a) extremely unreliable, b) harmful to caching. True.
I didn't like that idea either. :) Timwi From astronouth7303 at gmail.com Sat Nov 26 22:17:22 2005 From: astronouth7303 at gmail.com (Jamie Bliss) Date: Sat, 26 Nov 2005 17:17:22 -0500 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dmaj7b$7f5$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <dmaedn$qpm$1@sea.gmane.org> <dmaj7b$7f5$1@sea.gmane.org> Message-ID: <dmamu0$ibd$1@sea.gmane.org> Timwi wrote: > How many more times do I have to state that my suggestion does away with > the "Redirected from" line? Everyone keeps assuming that. Ah. I'd say (although I am not on the dev team) write the code and make it a user option. -- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From edwardzyang at thewritingpot.com Sat Nov 26 23:18:16 2005 From: edwardzyang at thewritingpot.com (Edward Z. Yang) Date: Sat, 26 Nov 2005 18:18:16 -0500 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement Message-ID: <4388ED38.7020906@thewritingpot.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Eh... no one has actually commented on the program itself... :( Dirk Riehle wrote: > I think it would be good to make sure we don't equate the world of > wikis with Wikipedia. [snip] > > I recognize that you use WikiStatus only as a shorthand for Wikipedia > Status and spell it out most of the time, but sometimes acronyms > take on a life of their own, say if you register a domain for it. On second thought, the name doesn't seem that good. WikiStatus implies that it is a wiki about status (which is incorrect). The next interpretation is that it's the status of a wiki, Wikipedia, which is not true either: the software should be usable for any service out there on the web even though it was originally designed for Wikipedia. SJ <2.718281828 at gmail.com> wrote: > So wikipstatus.com, with a silent p? Well, it would be a .net, but no. :o) Any other suggestions? Brion Vibber wrote: > More importantly, it's on an unreliable, buggy, insecure old version > of MediaWiki on somebody else's site which is less reliable than > ours. That's why the links to it were removed some time ago. Here's the old message: Mark Ryan wrote: > Note that I have removed the link to OpenFacts.berlios.de, which was > getting overloaded as soon as Wikipedia went down. I have also removed > the link to the "offsite donation page" at Angela's request. The > French, German and Japanese messages direct those people to that > language's IRC channel. It's a pity, because pages like OpenFacts are useful precisely because they're not as fast paced as IRC (IRC channel gets flooded anyway), and have loads of useful links. What I'd really like to do is have Wikipedia set up an official contingency site and re-add the link to the error page. That's why I did this. So I'm sort of waiting for your blessing. ;) - -- Edward Z. 
Yang Personal: edwardzyang at thewritingpot.com SN:Ambush Commander Website: http://www.thewritingpot.com/ GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (MingW32) iD8DBQFDiO04qTO+fYacSNoRAm/LAKCBBmUJCPtzIgphIM3Cee7+/6eFWACeKtQi byb877FnjZsqn7eGy8q2L38= =Xexu -----END PGP SIGNATURE----- From t.starling at physics.unimelb.edu.au Sat Nov 26 23:49:19 2005 From: t.starling at physics.unimelb.edu.au (Tim Starling) Date: Sun, 27 Nov 2005 10:49:19 +1100 Subject: [Wikitech-l] Timeout problem fixed Message-ID: <dmas82$uob$1@sea.gmane.org> The site was randomly slow for the last week or two, maybe 10% of requests would time out. And for a couple of days, requests through the Amsterdam squids were timing out very regularly. Both problems are fixed now. I apologise that it took us so long to figure out -- it was a very frustrating problem that took a while despite the best efforts of the sysadmin team. In both cases it was subtle LVS misconfiguration, we're developing automated tools to test for similar problems occurring in the future. After that was fixed, I finished setting up the 20 new dual opterons, and put them into apache service. So service times for CPU-intensive operations should be quite good. And of course Brion put amane into service last week, so the problem with image load times should be fixed. The Florida squids might still be a bit slow, we have more on order. -- Tim Starling From doc at javastaff.com Sun Nov 27 00:41:26 2005 From: doc at javastaff.com (Federico Paparoni) Date: Sun, 27 Nov 2005 01:41:26 +0100 Subject: [Wikitech-l] J2ME Client for Wikipedia Message-ID: <dmavbp$5al$1@sea.gmane.org> Hi, I'm the administrator of an Italian Java website, javastaff.com. We are a group of developers and we would like to create a J2ME client for Wikipedia, which would be interesting software for a cellular phone. Of course we will do it only to donate good software to the users. I have already talked with Jimmy Wales and he is enthusiastic about this project. We have already studied the structure of Wikipedia, and we have done experiments parsing the information from a search on your website, but I would like to know if there is some page we can access to perform a better search (only text for example, or a web service, or something else). I hope you like this idea :) Best regards --------------------------------------------------- Federico Paparoni JavaStaff.com Admin <doc>In girum imus nocte et consumimur ign</doc> --------------------------------------------------- From jama at debianlinux.net Sun Nov 27 02:04:48 2005 From: jama at debianlinux.net (Jama Poulsen) Date: Sun, 27 Nov 2005 03:04:48 +0100 Subject: [Wikitech-l] J2ME Client for Wikipedia In-Reply-To: <dmavbp$5al$1@sea.gmane.org> References: <dmavbp$5al$1@sea.gmane.org> Message-ID: <20051127020448.GA21542@conuropsis.org> On Sun, Nov 27, 2005 at 01:41:26AM +0100, Federico Paparoni wrote: > I'm the administrator of an Italian Java website, javastaff.com. We are > a group of developers and we would like to create a J2ME client for > Wikipedia, which would be interesting software for a cellular phone. > Of course we will do it only to donate good software to the > users. See: http://www.en.wapedia.org I asked the author of that site (Florian Amrhein) if he wanted to release his parser tool as free software, but he wanted to clean it up first.
An interesting quote from our discussion several months ago: "The problem is that in MediaWiki there is no separation between layout and content. Tables are used everywhere for layout. As long as this is not repaired - I think this is a major error, because CMS systems should provide such a separation and stand on it - I cannot display tables." So there are some major challenges to viewing Wikipedia articles in other ways. Using a tableless infobox rendering system (with stylesheet support) would be a good start. Jama Poulsen http://wikicompany.org http://debianlinux.net From brion at pobox.com Sun Nov 27 02:39:23 2005 From: brion at pobox.com (Brion Vibber) Date: Sat, 26 Nov 2005 18:39:23 -0800 Subject: [Wikitech-l] J2ME Client for Wikipedia In-Reply-To: <20051127020448.GA21542@conuropsis.org> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> Message-ID: <43891C5B.8090405@pobox.com> Jama Poulsen wrote: > An interesting quote from our discussion several months ago: > > "The problem is that in MediaWiki there is no separation between > layout and content. Tables are used everywhere for layout." MediaWiki is pretty sparse about tables, last I checked. The MonoBook skin is CSS-based and pretty table-free. There might be a couple still hiding around, please let us know if so. If you're talking about what *people on Wikipedia* do in spicing up pages with unnecessary, wasteful, stupid, non-portable markup, that's a different matter. Tables should never be abused like that, and if you see it happening you should speak up in that place. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051126/180ba4e0/attachment.pgp> From sbwoodside at yahoo.com Sun Nov 27 07:25:21 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Sun, 27 Nov 2005 02:25:21 -0500 Subject: Wikis and Wikipedia (was: Re: [Wikitech-l] WikiStatus - OpenFact's replacement In-Reply-To: <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> References: <4387DAAE.9000004@thewritingpot.com> <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> Message-ID: <E9D2BDAE-9246-4E85-98CB-CD9FE92F6312@yahoo.com> You mean the same way that people assumed that "Semapedia" was my work? (It's not) :-) Or the way so many programs are called "WinSomething" if they run on Windows and MacSomething if they run on Mac, and LinThis and LinThat? --simon On Nov 26, 2005, at 6:14 AM, Dirk Riehle wrote: > Re: choice of term "WikiStatus" > > I think it would be good to make sure we don't equate the world of > wikis with Wikipedia. Wikipedia may be the most well-known and > important wiki, but the world of wikis as I'm sure everyone here > knows is much broader and more diverse than even Wikipedia. > > I'm making this point after seeing some (influential?!) bloggers > start equating wikis with Wikipedia, which helps nobody. (Neither > will Wikipedia benefit from being equated with the LA Times > disaster, nor will wikis be helped by being viewed through > Encyclopedia glasses.) > > I recognize that you use WikiStatus only as a shorthand for > Wikipedia Status and spell it out most of the time, but sometimes > acronyms take on a life of their own, say if you register a domain > for it.
> > Thanks, > Dirk -- http://simonwoodside.com From sbwoodside at yahoo.com Sun Nov 27 07:39:00 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Sun, 27 Nov 2005 02:39:00 -0500 Subject: [Wikitech-l] J2ME Client for Wikipedia In-Reply-To: <20051127020448.GA21542@conuropsis.org> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> Message-ID: <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> On Nov 26, 2005, at 9:04 PM, Jama Poulsen wrote: > An interesting quote from our discussion several months ago: > > "The problem is that in MediaWiki there is no separation between > layout and content. Tables are used everywhere for layout. > > As long as this is not repaired - I think this is a major error, > because CMS systems should provide such a separation and stand on it - > I cannot display tables." > > So there are some major challenges to viewing Wikipedia articles in > other ways. Using a tableless infobox rendering system (with > stylesheet support) would be a good start. Wikipedia actually displays pretty well in mobile browsers. These are some of the standard problems:

Too much chrome - mobiles have small screens and the users want to get straight to the content instead of scroll, scroll, scroll past the same chrome all the time. WP could use CSS display:none to get rid of this though (in combination with the media="handheld" include directive for CSS).

Long articles - basically your average mobile user is going to be unhappy if they download a 50KB+ article. The ideal way to solve this in my opinion would be some kind of automatic pagination of long articles.

Large images - again it's a big download. If WP could create smaller thumbs of the images for mobiles that would be sweet. Again some kind of media="handheld" hacking could maybe be used for this. (Or browser detection.....)

It would be really cool if wikipedia took the forefront in developing pages that used CSS handheld and other techniques properly to look good in both regular browsers and also on mobile phones. You can use Opera's "Small Screen" option to emulate a mobile that has proper CSS support.
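For example, the skin could emit something like this (a rough PHP sketch only; the file name and the element ids are made up and would vary by skin):

<?php
// Give handheld browsers their own slimmed-down stylesheet
// (handheld.css is a hypothetical file name):
echo '<link rel="stylesheet" type="text/css" media="handheld"' .
     ' href="/skins/handheld.css" />';
// ...where handheld.css simply hides the chrome, e.g.:
//   #sidebar, #footer { display: none; }
?>

Desktop browsers ignore the handheld stylesheet, so the normal layout is untouched.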
You can see some of the techniques I described above in action at my site ( http://semacode.org/ )... --simon From nospam-abuse at bloodgate.com Sun Nov 27 09:36:45 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Sun, 27 Nov 2005 10:36:45 +0100 Subject: [Wikitech-l] Timeout problem fixed In-Reply-To: <dmas82$uob$1@sea.gmane.org> References: <dmas82$uob$1@sea.gmane.org> Message-ID: <200511271036.46868@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Sunday 27 November 2005 00:49, Tim Starling wrote: > The site was randomly slow for the last week or two, maybe 10% of > requests would time out. And for a couple of days, requests through the > Amsterdam squids were timing out very regularly. Both problems are > fixed now. I apologise that it took us so long to figure out -- it was > a very frustrating problem that took a while despite the best efforts > of the sysadmin team. In both cases it was subtle LVS misconfiguration, > we're developing automated tools to test for similar problems occurring > in the future. > > After that was fixed, I finished setting up the 20 new dual opterons, > and put them into apache service. So service times for CPU-intensive > operations should be quite good. And of course Brion put amane into > service last week, so the problem with image load times should be > fixed. The Florida squids might still be a bit slow, we have more on > order. Just let me say:

+---+---+---+---+---+   +---+---+---+---+
| T | H | A | N | K |   | Y | O | U | ! |
+---+---+---+---+---+   +---+---+---+---+

Best wishes, Tels - -- Signed on Sun Nov 27 10:34:36 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "helft den armen vögeln" -- gegen kleinschreibung -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ4l+LXcLPEOTuEwVAQEk2Qf/UdrBYyWeOJ/9NnrWDJFSFnBZcQ4hWRxA dr4kLfQYodXowQk0ggEYYqNSm+f3VKT+c1v1D0lRIeEh/2CHT4Ko56+/9nibuFo2 2bkvUWoooSC4S/c7N5YskVVeqWgk/a0Ucn1rge7XUhHdBtdc30oKFqlQhgfzrk0Z luwbnh3iVfxLcyR+8mK8+dQaUOBDALl1LTCyftoGVttg9JxgaUB8VrbY4s+uTjTP YjuEdV+b8/ZpjNHK3pc+9QgIEQWpQvkI6oYjEHYuHkJJ0qsHR7zNeeSBeQB8teS0 W3u9y+kotE3Wj4ZTe2DNqzOER9FJagvMHCis4YvNvo76WGp+6vMy5Q== =RTxP -----END PGP SIGNATURE----- From nospam-abuse at bloodgate.com Sun Nov 27 09:41:46 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Sun, 27 Nov 2005 10:41:46 +0100 Subject: [Wikitech-l] J2ME Client for Wikipedia In-Reply-To: <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> Message-ID: <200511271041.48364@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Sunday 27 November 2005 08:39, S. Woodside wrote: > On Nov 26, 2005, at 9:04 PM, Jama Poulsen wrote: > > An interesting quote from our discussion several months ago: > > > > "The problem is that in MediaWiki there is no separation between > > layout and content. Tables are used everywhere for layout. > > > > As long as this is not repaired - I think this is a major error, > > because CMS systems should provide such a separation and stand on it - > > I cannot display tables." > > > > So there are some major challenges to viewing Wikipedia articles in > > other ways. Using a tableless infobox rendering system (with > > stylesheet support) would be a good start. > > Wikipedia actually displays pretty well in mobile browsers. These are > some of the standard problems: [snipabit] Just to shamelessly plug one of my projects: http://bloodgate.com/perl/graph/ When graphs/flowcharts/relationships are rendered not as PNG but as HTML or ASCII, like so:

# echo '[ Server ] -- HTTP --> [ Mobile\nClient ]' | perl as_ascii

+--------+          +--------+
| Server |   HTTP   | Mobile |
|        |  ------> | Client |
+--------+          +--------+

then the mobile user would have an easier time zooming the "image". The download size might even be smaller: the example above is 804 bytes as a compressed PNG, but only about 100 bytes as uncompressed text. Best wishes, Tels - -- Signed on Sun Nov 27 10:36:50 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "Laugh and the world laughs with you, snore and you sleep alone."
-- Unknown -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ4l/WncLPEOTuEwVAQEYIQf/U/ccl65IxZWVb3OS+Neb1crh+ccvO5xh UVwLRzlRz0fOnZdtGzMm2fxMR4LuH3VzcaDGj3ODpkL8DK0lwEGPJc3EtMRMj/aq EdUHOJBjGtyzlV1UCqA9l6Aa72hovBanh3BxYMYieRQD/a1YVT+CKREUyl64o4r+ 08GYkmrQvKGzRhMKqDLBxrzhpZoNEQoz3VHf2NlY8LgzUmkWu23fNeY/lST1YrEF P/TKP311aSg/Y6uq/2XIMxP6m2kR+5C81y6ewrgybn3xNthw+aUnJXF8EevGYpou CQtHk5oIwb3LYONlsFdoTvNnFcpHc3SRsYwn9iWWQ2opWtOvplctJA== =Iz8V -----END PGP SIGNATURE----- From nospam-abuse at bloodgate.com Sun Nov 27 09:44:09 2005 From: nospam-abuse at bloodgate.com (Tels) Date: Sun, 27 Nov 2005 10:44:09 +0100 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement In-Reply-To: <4388ED38.7020906@thewritingpot.com> References: <4388ED38.7020906@thewritingpot.com> Message-ID: <200511271044.10754@bloodgate.com> -----BEGIN PGP SIGNED MESSAGE----- Moin, On Sunday 27 November 2005 00:18, Edward Z. Yang wrote: > Eh... no one has actually commented on the program itself... :( > > Dirk Riehle wrote: > > I think it would be good to make sure we don't equate the world of > > wikis with Wikipedia. [snip] > > > > I recognize that you use WikiStatus only as a shorthand for Wikipedia > > Status and spell it out most of the time, but sometimes acronyms > > take on a life of their own, say if you register a domain for it. > > On second thought, the name doesn't seem that good. WikiStatus implies > that it is a wiki about status (which is incorrect). The next > interpretation is that it's the status of a wiki, Wikipedia, which is > not true either: the software should be usable for any service out > there on the web even though it was originally designed for Wikipedia. > > SJ <2.718281828 at gmail.com> wrote: > > So wikipstatus.com, with a silent p? > > Well, it would be a .net, but no. :o) Any other suggestions? Heartbeat? Or, on a less serious note: CanYouHearMeNow? :) I suck at namefinding, obviously :) Best wishes, Tels - -- Signed on Sun Nov 27 10:42:45 2005 with key 0x93B84C15. Visit my photo gallery at http://bloodgate.com/photos/ PGP key on http://bloodgate.com/tels.asc or per email. "My wife is just a slow brain, running up the bill.." -- Often misheard song lyrics #149 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.4 (GNU/Linux) iQEVAwUBQ4l/6XcLPEOTuEwVAQF0nAf7BWsSef53d+dAUCQslr0Qr+vlkHCmx5pd nRa7bkQL+wB3mWpC9rMP3coQlI+dlyLpKX7QHrD3ozhw1vyeISCTtRLVETgd20kD GRbSb1gE/AHgeB5FTqBxjEbOeeiuCNOoZ9hPuhl1JrGhvkrVObi7YvTDixpCATAz thGdKrH3fuORHHWYWlcw2z8ftPO5Oq1ukKSDxMVDxmXCgsTDtgcrb1wAZf6UZZoU 7j3GDVcPch8FA8xqtnMux1ksVr1Pp9w7/gczzBaNDLqi9gDHEUkTttG0QOL68NXU 3/RlwQLU3taGXuRxYiBO3LSDucd9SD9p+QQcN+WMtOj1YmLZxwG6gQ== =GbM/ -----END PGP SIGNATURE----- From midom.lists at gmail.com Sun Nov 27 12:58:21 2005 From: midom.lists at gmail.com (Domas Mituzas) Date: Sun, 27 Nov 2005 14:58:21 +0200 Subject: [Wikitech-l] Cluster report, September-November, 2005 Message-ID: <3F923789-BC6E-44EB-ADC0-08257BAFE303@gmail.com> Hello, just a shameless copy-paste from meta (http://meta.wikimedia.org/wiki/Cluster_report%2C_September-November%2C_2005). These months were yet again amazing in Wikimedia's growth history. Since September request rates have doubled; lots of information was added, modified and expanded, and more users came. To deal with that, the site had to improve both its software and hardware platforms again. Of course, more hardware was thrown at the problem. In mid-September three new database servers (thistle, ixia, lomaria) were added to the pool, removing the most ancient type of hardware from service.
With current data growth rates, the 'old' 4GB-RAM boxes could not keep up with operation, except in quite a limited role. 40 dual-opteron application servers have been deployed, conserving our limited colocation space as well as providing lots of performance for the buck. One batch of them (20) was deployed just this week. They're equipped with larger drives and more memory, thus allowing us to place various unplanned services on them (9 apache servers are storing old revisions as well), and some servers participate in the shared memory pool, running memcached.

One of the really efficient purchases was the 12k$ image server 'amane', providing us with storage space and even the ability to do backups at current loads. It now runs a highly efficient and lightweight HTTP server - lighttpd. So far it serves images, but the growth of Wikimedia Commons will force us to find a really scalable and reliable way to handle lots of media. Additionally, 10 more application servers have been ordered together with a new Squid cache server batch. These 10 single-opteron boxes will have 4 small and fast disks and should enable efficient caching of content. As all this gear was bought with donated money, we really appreciate the community's help here, thank you!

The Yahoo-supplied cluster in Seoul, Korea has finally got into action, bringing cached content closer to Asian locations, as well as hosting the master databases and application cluster for the Japanese, Thai, Korean and Malaysian Wikipedias.

For internal load balancing Perlbal was replaced by LVS, and we've got a nice flashy donated load balancing device that may be deployed into operation soon as well. LVS has to be handled with care, and several tiny misconfiguration incidents seriously affected site performance. Lately the cluster has become quite big and complex, and now we need more sophisticated and extensive sanity checks and test cases. There is lots of work in establishing more failover capabilities - we will be having two active links to our main ISP in Florida. The static HTML dump is (becoming) nice and usable and may help us in case of serious crashes. It can be served from the Amsterdam cluster as well!

Over the last several days we managed to bring the cluster into quite proper working shape; now it's important to fix everything and prepare for more load, more growth and yet another expansion. We hope that, with the help of the community, we will be able to solve all our performance and stability issues and avoid being Lohipedia :) Lots of various problems were solved so far in order to achieve what we have now, and lots of low-hanging fruit has been picked. What we are dealing with now is complex and needs manpower and fresh ideas as well. Discussions are always welcome on #wikimedia-tech on Freenode (except during serious downtimes :). And, of course, Thanks Team (or rather, Family)! It is amazing to work together!
Cheers, Domas From timwi at gmx.net Sun Nov 27 13:46:12 2005 From: timwi at gmx.net (Timwi) Date: Sun, 27 Nov 2005 13:46:12 +0000 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dmamu0$ibd$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <dmaedn$qpm$1@sea.gmane.org> <dmaj7b$7f5$1@sea.gmane.org> <dmamu0$ibd$1@sea.gmane.org> Message-ID: <dmcdas$tgp$1@sea.gmane.org> Jamie Bliss wrote: > > I'd say (although I am not on the dev team) write the code and make it a > user option. Making it a user option is a good suggestion, however I hope everyone realises that the new behaviour would have to be default for the change to actually fulfill its intended purpose (improve search engine ranking). From timwi at gmx.net Sun Nov 27 14:33:31 2005 From: timwi at gmx.net (Timwi) Date: Sun, 27 Nov 2005 14:33:31 +0000 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement In-Reply-To: <4388ED38.7020906@thewritingpot.com> References: <4388ED38.7020906@thewritingpot.com> Message-ID: <dmcg3j$4g2$1@sea.gmane.org> > Eh... no one has actually commented on the program itself... :( It's called "silent praise"! :-) It means that it's so good that it doesn't warrant any criticism. :-) > On second thought, the name doesn't seem that good. WikiStatus implies > that it is a wiki about status (which is incorrect). No, it doesn't; Wikipedia isn't "a wiki about (encyclo)pedia(s)", Wiktionary isn't "a wiki about dictionary", etc. It's still incorrect though because it's not a wiki :) > The next interpretation is that it's the status of a wiki, Wikipedia, > which is not true either: the software should be usable for any > service out there on the web even though it was originally designed > for Wikipedia. In that case, call it "SiteStatus"? Timwi From timwi at gmx.net Sun Nov 27 14:40:45 2005 From: timwi at gmx.net (Timwi) Date: Sun, 27 Nov 2005 14:40:45 +0000 Subject: [Wikitech-l] Re: J2ME Client for Wikipedia In-Reply-To: <43891C5B.8090405@pobox.com> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <43891C5B.8090405@pobox.com> Message-ID: <dmcgh5$4g2$2@sea.gmane.org> Brion Vibber wrote: > > If you're talking about what *people on Wikipedia* do in spicing up pages with > unnecessary, wasteful, stupid, non-portable markup, that's a different matter. MediaWiki is not entirely innocent here because it encourages this behaviour. By far the easiest way to achieve the layouts people want is by using stupid, non-portable mark-up. Wanting to achieve a good layout, however, is neither unnecessary nor wasteful. Timwi From timwi at gmx.net Sun Nov 27 14:44:40 2005 From: timwi at gmx.net (Timwi) Date: Sun, 27 Nov 2005 14:44:40 +0000 Subject: [Wikitech-l] Re: J2ME Client for Wikipedia In-Reply-To: <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> Message-ID: <dmcgoh$5vo$1@sea.gmane.org> S. Woodside wrote: > > Long articles - basically your average mobile user is going to be > unhappy if they download a 50KB + article. Why? Do those phones have such little memory that they can't cope with that? Or are the browsers so crap they can't display the beginning of the page until all 50KB are loaded? 
> The ideal way to solve this in my opinion would be some kind of > automatic pagination of long articles. I would find *that* annoying - I would have to keep clicking and loading more and more pages to get to different parts of an article. Timwi From alfio.puglisi at gmail.com Sun Nov 27 14:56:33 2005 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 27 Nov 2005 15:56:33 +0100 Subject: [Wikitech-l] Re: J2ME Client for Wikipedia In-Reply-To: <dmcgoh$5vo$1@sea.gmane.org> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> <dmcgoh$5vo$1@sea.gmane.org> Message-ID: <4902d9990511270656g44887bf0m9071138608b8e69c@mail.gmail.com> On 11/27/05, Timwi <timwi at gmx.net> wrote: > S. Woodside wrote: > > > > Long articles - basically your average mobile user is going to be > > unhappy if they download a 50KB + article. > > Why? Do those phones have such little memory that they can't cope with > that? Or are the browsers so crap they can't display the beginning of > the page until all 50KB are loaded? Some mobile phone users pay for their data connection according to the number of KB transferred. Good for reading text-only email, but not for much else. Alfio From h-j.luecking at t-online.de Sun Nov 27 15:04:21 2005 From: h-j.luecking at t-online.de (Heinz) Date: Sun, 27 Nov 2005 16:04:21 +0100 Subject: [Wikitech-l] Re: Wikis and Wikipedia In-Reply-To: <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> References: <4387DAAE.9000004@thewritingpot.com> <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> Message-ID: <dmchqc$8le$1@sea.gmane.org> If a Wiki is a site within the WWW USING wiki software, then wikipedia is not even a wiki. Then MediaWiki is (?) the most well-known wiki. Heinz Dirk Riehle wrote: > Re: choice of term "WikiStatus" > > I think it would be good to make sure we don't equate the world of wikis > with Wikipedia. Wikipedia may be the most well-known and important wiki, > but the world of wikis as I'm sure everyone here knows is much broader > and more diverse than even Wikipedia. > From mathias.schindler at gmail.com Sun Nov 27 15:07:24 2005 From: mathias.schindler at gmail.com (Mathias Schindler) Date: Sun, 27 Nov 2005 16:07:24 +0100 Subject: [Wikitech-l] Re: Wikis and Wikipedia In-Reply-To: <dmchqc$8le$1@sea.gmane.org> References: <4387DAAE.9000004@thewritingpot.com> <6.2.3.4.2.20051126120731.04344008@pop.gmail.com> <dmchqc$8le$1@sea.gmane.org> Message-ID: <48502b480511270707g267755e1o4267ef5764692ccf@mail.gmail.com> On 11/27/05, Heinz <h-j.luecking at t-online.de> wrote: > If a Wiki is a site within the WWW USING wiki software, then wikipedia is not > even a wiki. Then MediaWiki is (?) the most well-known wiki. mediawiki is not a wiki, except mediawiki.org :) I'm glad we found a debate in which anyone can contribute anything without risking moving this debate off-topic... follow up on policy-l instead of this list, anyone? Mathias From jherz at myrealbox.com Sun Nov 27 19:50:06 2005 From: jherz at myrealbox.com (Jürgen Herz) Date: Sun, 27 Nov 2005 20:50:06 +0100 Subject: [Wikitech-l] Re: Cluster report, September-November, 2005 In-Reply-To: <dmcfag$2kk$1@sea.gmane.org> References: <3F923789-BC6E-44EB-ADC0-08257BAFE303@gmail.com> <dmcfag$2kk$1@sea.gmane.org> Message-ID: <438A0DEE.6040700@myrealbox.com> Anthere wrote: > This does not show the daily work of maintenance, install and improvement > though... True, but if you wanna take a closer look at their daily work, visit
True, but if you wanna take a closer look at their daily work, visit http://wikitech.leuksman.com/view/Server_admin_log J?rgen From astronouth7303 at gmail.com Sun Nov 27 20:05:44 2005 From: astronouth7303 at gmail.com (Jamie Bliss) Date: Sun, 27 Nov 2005 15:05:44 -0500 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dmcdas$tgp$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <dmaedn$qpm$1@sea.gmane.org> <dmaj7b$7f5$1@sea.gmane.org> <dmamu0$ibd$1@sea.gmane.org> <dmcdas$tgp$1@sea.gmane.org> Message-ID: <dmd3jf$o8j$1@sea.gmane.org> Timwi wrote: > Jamie Bliss wrote: > Making it a user option is a good suggestion, however I hope everyone > realises that the new behaviour would have to be default for the change > to actually fulfill its intended purpose (improve search engine ranking). Does that line provide the inexperianced/ignorant user with any information of value? (ignorant == "unaware of how MW works in this regard") A casual reader will not care if they are redirected between pages. An inexperianced editor _may_ wonder why they link to [[Foo]] and end up at [[Bar]]; some help entries can help on this if they don't already exist. IMHO, the information is rarely of value. Situations in which it is almost always involves editing and maintenance. I consider it unlikely of an anonymous and/or inexperianced editor wondering about this. (The true test is the various support channels.) My request is that the status "301 Moved Permanently" not be used, and instead use "302 Found", "303 See Other", or "307 Temporary Redirect". (Only 302 is compatible with HTTP/1.0 agents.) -- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From brian0918 at gmail.com Sun Nov 27 20:11:15 2005 From: brian0918 at gmail.com (Brian) Date: Sun, 27 Nov 2005 15:11:15 -0500 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias Message-ID: <438A12E3.6010002@gmail.com> There have been numerous reports recently of finding rampant vandalism on small Wikipedias/Wiktionaries. It might be useful to create an RC page and/or IRC channel which brings together the RC's of the ~50 smallest Wikipedias. brian0918 From astronouth7303 at gmail.com Sun Nov 27 20:09:36 2005 From: astronouth7303 at gmail.com (Jamie Bliss) Date: Sun, 27 Nov 2005 15:09:36 -0500 Subject: [Wikitech-l] Re: J2ME Client for Wikipedia In-Reply-To: <dmcgh5$4g2$2@sea.gmane.org> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <43891C5B.8090405@pobox.com> <dmcgh5$4g2$2@sea.gmane.org> Message-ID: <dmd3qn$o8j$2@sea.gmane.org> Timwi wrote: > Brion Vibber wrote: >> If you're talking about what *people on Wikipedia* do in spicing up >> pages with >> unnecessary, wasteful, stupid, non-portable markup, that's a different >> matter. > > MediaWiki is not entirely innocent here because it encourages this > behaviour. By far the easiest way to achieve the layouts people want is > by using stupid, non-portable mark-up. Wanting to achieve a good layout, > however, is neither unnecessary nor wasteful. Indeed. It should be remembered that most (if not almost all) contributors in Wikipedia are not knowledgable or even aware of HTML. 
-- Jamie ------------------------------------------------------------------- http://endeavour.zapto.org/astro73/ Thank you to JosephM for inviting me to Gmail! Have lots of invites. Gmail now has 2GB. From sbwoodside at yahoo.com Sun Nov 27 21:19:53 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Sun, 27 Nov 2005 16:19:53 -0500 Subject: [Wikitech-l] Re: J2ME Client for Wikipedia & low income nations In-Reply-To: <4902d9990511270656g44887bf0m9071138608b8e69c@mail.gmail.com> References: <dmavbp$5al$1@sea.gmane.org> <20051127020448.GA21542@conuropsis.org> <71B1758F-ECAB-4C85-9863-C79824282236@yahoo.com> <dmcgoh$5vo$1@sea.gmane.org> <4902d9990511270656g44887bf0m9071138608b8e69c@mail.gmail.com> Message-ID: <478C870E-B51A-471E-96DC-D0280C55B050@yahoo.com> On Nov 27, 2005, at 9:56 AM, Alfio Puglisi wrote: > On 11/27/05, Timwi <timwi at gmx.net> wrote: >> S. Woodside wrote: >>> >>> Long articles - basically your average mobile user is going to be >>> unhappy if they download a 50KB + article. >> >> Why? Do those phones have such little memory that they can't cope >> with >> that? Or are the browsers so crap they can't display the beginning of >> the page until all 50KB are loaded? > > Some mobile phone users pay for their data connection according to the > number of KB transferred. Good for reading text-only email, but not > for much else. I think that it's more accurate to say "most" mobile phone users. That includes a lot of users who have "unlimited" plans that do actually have a cap. The other reason is that mobile data speeds are currently roughly on par with a 14.4 modem, so size matters for speed. 3G is faster, but it's not coming in very quickly. Here's another thing: in low-income nations, mobile clients might actually be WAY more available to the average person than a desktop client! The trend for mobile data in the developing nations is HUGE. --simon From edwardzyang at thewritingpot.com Sun Nov 27 22:00:59 2005 From: edwardzyang at thewritingpot.com (Edward Z. Yang) Date: Sun, 27 Nov 2005 17:00:59 -0500 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement Message-ID: <438A2C9B.1050102@thewritingpot.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timwi wrote: > No, it doesn't; Wikipedia isn't "a wiki about (encyclo)pedia(s)", > Wiktionary isn't "a wiki about dictionary", etc. It's still incorrect > though because it's not a wiki :) You're right! Wikipedia is an "encyclopedia run on a wiki." So Wikistatus should be a "status run on a wiki." If you interpret wiki liberally and say "a collaborative Web site set up to allow user editing", WikiStatus isn't that far off the mark... but you're right, it's not a wiki. It's... hmm... a page with user-switchable status flags and an imageboard-style comment system. Now, to compress that into a catchy two-word phrase... Tels wrote: > Heartbeat? Or, on a less serious note: CanYouHearMeNow? :) Timwi wrote: > In that case, call it "SiteStatus"? Mmm... they're a bit too generic. (And CanYouHearMeNow is based on an ad, if I'm not mistaken?) I like SiteStatus, but it doesn't indicate that it is community run... I'm not good at naming either, as you can see. :o) - -- Edward Z.
Yang Personal: edwardzyang at thewritingpot.com SN:Ambush Commander Website: http://www.thewritingpot.com/ GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (MingW32) iD8DBQFDiiybqTO+fYacSNoRAh8pAJwIj2tOtrzF6wxNbRa+zJlsIEDVfgCdFaBv Vv2dwxkgAxCQw1F92qFfI1E= =VU9k -----END PGP SIGNATURE----- From node.ue at gmail.com Sun Nov 27 23:36:03 2005 From: node.ue at gmail.com (Mark Williamson) Date: Sun, 27 Nov 2005 16:36:03 -0700 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <438A12E3.6010002@gmail.com> References: <438A12E3.6010002@gmail.com> Message-ID: <849f98ed0511271536v4d9f017bw@mail.gmail.com> ... There already is such a thing, just not in an IRC channel Mark On 27/11/05, Brian <brian0918 at gmail.com> wrote: > There have been numerous reports recently of finding rampant vandalism > on small Wikipedias/Wiktionaries. It might be useful to create an RC > page and/or IRC channel which brings together the RC's of the ~50 > smallest Wikipedias. > > brian0918 > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- "Take away their language, destroy their souls." -- Joseph Stalin From beesley at gmail.com Mon Nov 28 00:18:56 2005 From: beesley at gmail.com (Angela) Date: Mon, 28 Nov 2005 11:18:56 +1100 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <849f98ed0511271536v4d9f017bw@mail.gmail.com> References: <438A12E3.6010002@gmail.com> <849f98ed0511271536v4d9f017bw@mail.gmail.com> Message-ID: <8b722b800511271618r3b924d49h571c2f0c4c9572e4@mail.gmail.com> > On 27/11/05, Brian <brian0918 at gmail.com> wrote: > > There have been numerous reports recently of finding rampant vandalism > > on small Wikipedias/Wiktionaries. It might be useful to create an RC > > page and/or IRC channel which brings together the RC's of the ~50 > > smallest Wikipedias. See http://meta.wikimedia.org/wiki/SWMT The bloglines feed at http://www.bloglines.com/public/inactivewikipedias collects recent changes from small Wikipedias, Wiktionaries, Wikibooks and Wikiquotes. Angela From node.ue at gmail.com Mon Nov 28 01:08:21 2005 From: node.ue at gmail.com (Mark Williamson) Date: Sun, 27 Nov 2005 18:08:21 -0700 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <8b722b800511271618r3b924d49h571c2f0c4c9572e4@mail.gmail.com> References: <438A12E3.6010002@gmail.com> <849f98ed0511271536v4d9f017bw@mail.gmail.com> <8b722b800511271618r3b924d49h571c2f0c4c9572e4@mail.gmail.com> Message-ID: <849f98ed0511271708s5f8a7e7fi@mail.gmail.com> Speaking of which, I've recently gone on a drive to try to eliminate active Wikis from the feed. I found a few actually, including os.wiki, cv.wiki, ceb.wiki, and ar.books, IIRC Cheers Mark On 27/11/05, Angela <beesley at gmail.com> wrote: > > On 27/11/05, Brian <brian0918 at gmail.com> wrote: > > > There have been numerous reports recently of finding rampant vandalism > > > on small Wikipedias/Wiktionaries. It might be useful to create an RC > > > page and/or IRC channel which brings together the RC's of the ~50 > > > smallest Wikipedias. > > See http://meta.wikimedia.org/wiki/SWMT > The bloglines feed at > http://www.bloglines.com/public/inactivewikipedias collects recent > changes from small Wikipedias, Wiktionaries, Wikibooks and Wikiquotes. 
> > Angela > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- "Take away their language, destroy their souls." -- Joseph Stalin From robla at robla.net Mon Nov 28 02:03:40 2005 From: robla at robla.net (Rob Lanphier) Date: Sun, 27 Nov 2005 18:03:40 -0800 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <dmd3jf$o8j$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <dmaedn$qpm$1@sea.gmane.org> <dmaj7b$7f5$1@sea.gmane.org> <dmamu0$ibd$1@sea.gmane.org> <dmcdas$tgp$1@sea.gmane.org> <dmd3jf$o8j$1@sea.gmane.org> Message-ID: <1133143421.24978.85.camel@localhost.localdomain> On Sun, 2005-11-27 at 15:05 -0500, Jamie Bliss wrote: > Timwi wrote: > > Jamie Bliss wrote: > > Making it a user option is a good suggestion, however I hope everyone > > realises that the new behaviour would have to be default for the change > > to actually fulfill its intended purpose (improve search engine ranking). [...] > My request is that the status "301 Moved Permanently" not be used, and > instead use "302 Found", "303 See Other", or "307 Temporary Redirect". > (Only 302 is compatible with HTTP/1.0 agents.) 302 doesn't achieve anything with respect to search engines, or at least Google: http://www.google.com/intl/en/webmasters/3.html It /shouldn't/ work for the others either. If it does, it's because those search engines are broken. 302 is a temporary redirect. 301 was part of HTTP/1.0: http://www.w3.org/Protocols/rfc1945/rfc1945 Rob From edwardzyang at thewritingpot.com Mon Nov 28 02:31:58 2005 From: edwardzyang at thewritingpot.com (Edward Z. Yang) Date: Sun, 27 Nov 2005 21:31:58 -0500 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement Message-ID: <438A6C1E.7090900@thewritingpot.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hmmm... I think I'll rename it CoLocus, short for Community Contingency Locus. What do you think? - -- Edward Z. Yang Personal: edwardzyang at thewritingpot.com SN:Ambush Commander Website: http://www.thewritingpot.com/ GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (MingW32) iD8DBQFDimweqTO+fYacSNoRAgxkAJ0bBzt/LPy6pNmU/UpTUNsn3YQ8/ACfRyZp XmaX0HwsgPe+0s9PJKSrgWc= =qlm4 -----END PGP SIGNATURE----- From vilerage at gmail.com Sun Nov 27 23:04:18 2005 From: vilerage at gmail.com (Matthew R. Howard, Sr.) Date: Sun, 27 Nov 2005 18:04:18 -0500 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <438A12E3.6010002@gmail.com> References: <438A12E3.6010002@gmail.com> Message-ID: <438A3B72.7060300@vilerage.us> Brian wrote: > There have been numerous reports recently of finding rampant vandalism > on small Wikipedias/Wiktionaries. It might be useful to create an RC > page and/or IRC channel which brings together the RC's of the ~50 > smallest Wikipedias. > > brian0918 > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > > > That does sound like a good idea. Do the smaller wikis have something similar to the #en.wikipedia irc channel that streams RC data? 
if so, I should be able to mod cool_cat's bot (possibly just run 1 bot per wiki) and use a single channel for the smaller wikis. My only issue with doing that would be, would the channel scroll so fast that no one would be able to keep track...? Maybe something like a trouble-ticket-like system, fed by the IRC bot, so that users may 'take the case' of specific vandalisms, and then 'close the case' when they're done. That too would entail a lot of extra work on the part of the RC Patroller. If you have any specific ideas of how you think this could be implemented, let me know; I'm off work for the next 2 days, so I should be able to bash something out ;] Vilerage (on irc and en.wikipedia ;]) From ccanddyy at yahoo.com Mon Nov 28 02:45:28 2005 From: ccanddyy at yahoo.com (candy) Date: Sun, 27 Nov 2005 18:45:28 -0800 Subject: [Wikitech-l] database dump compatibility issue Message-ID: <dmdr0d$f73$1@sea.gmane.org> Hi all, I imported the English database dump (pages_current.xml.bz2) in MediaWiki 1.5 and it worked fine. Does the same dump work well with MediaWiki 1.4.5 as well? I am a bit worried as the database schemas of the two MediaWikis are different. Another small problem. If you go to the following URL: http://download.wikimedia.org/wikipedia/en/ the full Wikipedia page dump (pages_full.xml.bz2) seems to have a size of 14.1 GB, but when I download it its size is only 2GB. Any clue what went wrong? A prompt reply will be highly appreciated! Candy From brion at pobox.com Mon Nov 28 03:26:33 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 27 Nov 2005 19:26:33 -0800 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <438A3B72.7060300@vilerage.us> References: <438A12E3.6010002@gmail.com> <438A3B72.7060300@vilerage.us> Message-ID: <438A78E9.1070406@pobox.com> Matthew R. Howard, Sr. wrote: > Brian wrote: >> There have been numerous reports recently of finding rampant vandalism >> on small Wikipedias/Wiktionaries. It might be useful to create an RC >> page and/or IRC channel which brings together the RC's of the ~50 >> smallest Wikipedias. >> > That does sound like a good idea. Do the smaller wikis have something > similar to the #en.wikipedia irc channel that streams RC data? All of our wikis do. > if so, I > should be able to mod cool_cat's bot (possibly just run 1 bot per wiki) > and use a single channel for the smaller wikis. Could do. Alternatively it's probably possible to adjust our RC bots to have some send a second stream (but I'm not sure offhand how to do this, Tim can you comment?) > My only issue with doing > that would be, would the channel scroll so fast that no one would be > able to keep track...? In general it would hardly ever move. By definition they have low traffic.
Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051127/4e00f935/attachment.pgp> From gmaxwell at gmail.com Mon Nov 28 03:32:40 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Sun, 27 Nov 2005 22:32:40 -0500 Subject: [Wikitech-l] database dump compatibility issue In-Reply-To: <dmdr0d$f73$1@sea.gmane.org> References: <dmdr0d$f73$1@sea.gmane.org> Message-ID: <e692861c0511271932w165bfd69qe7e3556ce32d215@mail.gmail.com> On 11/27/05, candy <ccanddyy at yahoo.com> wrote: > I imported the english database dump (pages_current.xml.bz2) in > mediawiki 1.5 and it worked fine. Does the same dump works well with > mediawiki 1.4.5 as well. I am a bit worried as the database schema of > the 2 mediawikis are different. The import process converts the XML to whatever format your version needs on the back end. This (among other reasons) is why we only provide the XML dump for this data now. > Another small problem. If you go to the following url: > http://download.wikimedia.org/wikipedia/en/ > > the full wikipedia page dump(pages_full.xml.bz2) seems to have a size of > 14.1 GB but when I download it its size is only 2GB. Any clue what went > wrong? Sounds like the software you used to download only supports files as large as two gigs. In Linux I use wget, if you run windows I'm not sure what to suggest. From brian0918 at gmail.com Mon Nov 28 03:55:22 2005 From: brian0918 at gmail.com (Brian) Date: Sun, 27 Nov 2005 22:55:22 -0500 Subject: [Wikitech-l] Interwiki On Wheels? Message-ID: <438A7FAA.1070207@gmail.com> Would it be possible to write a bot which simply goes through all the pages, following all the interwiki links that currently exist and placing them where they don't exist? This seems like it would be very easy for someone to write, easy to implement, and very effective. For example, if an article on EN links to an article on ES, but that one links to an article on DE, then the bot will go back through and add es/de to en, en/de to es, and es/en to de, etc. This would eliminate a lot of the problems encountered by the Interwiki Link Checker, for example, which can't deal with comparing English and Chinese (provided that at least one page somewhere in the chain links to the Chinese version). Of course, it isn't a complete solution, but it seems like it would be a very effective one. For any chains where there are discrepancies (en -> de -> a different en), it can just not bother with that chain. brian0918 From jdunck at gmail.com Mon Nov 28 04:05:53 2005 From: jdunck at gmail.com (Jeremy Dunck) Date: Sun, 27 Nov 2005 22:05:53 -0600 Subject: [Wikitech-l] database dump compatibility issue In-Reply-To: <e692861c0511271932w165bfd69qe7e3556ce32d215@mail.gmail.com> References: <dmdr0d$f73$1@sea.gmane.org> <e692861c0511271932w165bfd69qe7e3556ce32d215@mail.gmail.com> Message-ID: <2545a92c0511272005w543a2e30j61d93c07f3861b35@mail.gmail.com> On 11/27/05, Gregory Maxwell <gmaxwell at gmail.com> wrote: > Sounds like the software you used to download only supports files as > large as two gigs. > In Linux I use wget, if you run windows I'm not sure what to suggest. wget under cygwin works for me. 
brian0918 From jdunck at gmail.com Mon Nov 28 04:05:53 2005 From: jdunck at gmail.com (Jeremy Dunck) Date: Sun, 27 Nov 2005 22:05:53 -0600 Subject: [Wikitech-l] database dump compatibility issue In-Reply-To: <e692861c0511271932w165bfd69qe7e3556ce32d215@mail.gmail.com> References: <dmdr0d$f73$1@sea.gmane.org> <e692861c0511271932w165bfd69qe7e3556ce32d215@mail.gmail.com> Message-ID: <2545a92c0511272005w543a2e30j61d93c07f3861b35@mail.gmail.com> On 11/27/05, Gregory Maxwell <gmaxwell at gmail.com> wrote: > Sounds like the software you used to download only supports files as > large as two gigs. > In Linux I use wget; if you run Windows I'm not sure what to suggest. wget under cygwin works for me. http://www.cygwin.com/ From brion at pobox.com Mon Nov 28 04:57:42 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 27 Nov 2005 20:57:42 -0800 Subject: [Wikitech-l] CVS mail changes Message-ID: <438A8E46.3060603@pobox.com> I've made two tweaks to the CVS commit messages that get sent to the mediawiki-cvs list: * Unified diff (-u) should be easier to read in most cases * Mail headers include a charset specifier, so UTF-8 text should display correctly in your mail reader To support the latter, we now have a slightly customized version of the syncmail script in our local CVSROOT, rather than using SF's global copy. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051127/c8398728/attachment.pgp> From brion at pobox.com Mon Nov 28 04:59:00 2005 From: brion at pobox.com (Brion Vibber) Date: Sun, 27 Nov 2005 20:59:00 -0800 Subject: [Wikitech-l] Interwiki On Wheels? In-Reply-To: <438A7FAA.1070207@gmail.com> References: <438A7FAA.1070207@gmail.com> Message-ID: <438A8E94.9000201@pobox.com> Brian wrote: > Would it be possible to write a bot which simply goes through all the > pages, following all the interwiki links that currently exist and > placing them where they don't exist? This seems like it would be very > easy for someone to write, easy to implement, and very effective. Don't the pywikipediabot folks already do this? -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051127/d2a80981/attachment.pgp> From node.ue at gmail.com Mon Nov 28 07:36:58 2005 From: node.ue at gmail.com (Mark Williamson) Date: Mon, 28 Nov 2005 00:36:58 -0700 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <438A78E9.1070406@pobox.com> References: <438A12E3.6010002@gmail.com> <438A3B72.7060300@vilerage.us> <438A78E9.1070406@pobox.com> Message-ID: <849f98ed0511272336r7e6d1312w@mail.gmail.com> I'm not sure it's necessary. As both Angela and I have already noted in this thread, there is already a group dedicated to monitoring small Wikis for spam and vandalism. Mark On 27/11/05, Brion Vibber <brion at pobox.com> wrote: > Matthew R. Howard, Sr. wrote: > > Brian wrote: > >> There have been numerous reports recently of finding rampant vandalism > >> on small Wikipedias/Wiktionaries. It might be useful to create an RC > >> page and/or IRC channel which brings together the RC's of the ~50 > >> smallest Wikipedias. > >> > > That does sound like a good idea. Do the smaller wikis have something > > similar to the #en.wikipedia irc channel that streams RC data? > > All of our wikis do. > > > if so, I > > should be able to mod cool_cat's bot (possibly just run 1 bot per wiki) > > and use a single channel for the smaller wikis. > > Could do. Alternatively it's probably possible to adjust our RC bots to have > some send a second stream (but I'm not sure offhand how to do this, Tim can you > comment?) > > > My only issue with doing > > that would be, would the channel scroll so fast that no one would be > > able to keep track...? > > In general it would hardly ever move. By definition they have low traffic.
;) > > -- brion vibber (brion @ pobox.com) > > > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > > > -- "Take away their language, destroy their souls." -- Joseph Stalin From gmaxwell at gmail.com Mon Nov 28 07:59:42 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Mon, 28 Nov 2005 02:59:42 -0500 Subject: [Wikitech-l] Interwiki On Wheels? In-Reply-To: <438A8E94.9000201@pobox.com> References: <438A7FAA.1070207@gmail.com> <438A8E94.9000201@pobox.com> Message-ID: <e692861c0511272359p493d9989vf96bb11970365d8e@mail.gmail.com> On 11/27/05, Brion Vibber <brion at pobox.com> wrote: > Brian wrote: > > Would it be possible to write a bot which simply goes through all the > > pages, following all the interwiki links that currently exist and > > placing them where they don't exist? This seems like it would be very > > easy for someone to write, easy to implement, and very effective. > > Don't the pywikipediabot folks already do this? Yes, and you can supervise it... which is somewhat important because the mappings are not 1:1, owing to different styles of breaking ideas into articles, so following multiple layers of interwiki links can leave you on a page which isn't quite what you thought it was... From oub at mat.ucm.es Mon Nov 28 11:03:26 2005 From: oub at mat.ucm.es (Uwe Brauer) Date: Mon, 28 Nov 2005 12:03:26 +0100 Subject: [Wikitech-l] arbitrary Anchors and links within the same page Message-ID: <87fyphnif5.fsf@mat.ucm.es> Hello, I am just reading the handbook and would like to have a link to an arbitrary anchor within a page. It is not clear to me how to set that anchor. The handbook says to use <span id=".." />, so I understood that <span id="test" /> [[#test]] would be the trick. But it does not seem to work. Can anybody please give me a full example? Thanks, Uwe Brauer From gerard.meijssen at gmail.com Mon Nov 28 13:05:43 2005 From: gerard.meijssen at gmail.com (Gerard Meijssen) Date: Mon, 28 Nov 2005 14:05:43 +0100 Subject: [Wikitech-l] Single login Message-ID: <438B00A7.6050000@gmail.com> Hoi, We have discussed the subject of single login many times. There are many scenarios that we can take to get to a solution. There is also the potential to do some "future proofing". At this moment in time all our security for users is pretty minimal; it relies on knowing a password or having a cookie on your system. For gaining read-only access we do not require any authentication. There are several scenarios where (technically available) additional authentication possibilities will help us. * When a range of IP numbers is blocked because of frequent vandalism, we want to allow access for authenticated editors. These can be schools or proxies. * When we host educational content, we want to ensure that it is only the student who accesses his material * When we host educational content, we want to give access to a subset of data to a teacher of a student * When we collaborate with other web services like Kennisnet, we allow users authenticated by such an organisation to use our resources as an authenticated editor The point that I am trying to make is that future proofing makes sense. When we have the potential to do this and make use of proven open source technology, we should consider this as an option instead of "rolling our own". A-Select http://a-select.surfnet.nl/ is a project run by "Surfnet"; it is available under a BSD license.
Scalability has been very much part of their existing projects. It is used as the engine for many big projects; DigiD http://www.digid.nl/ is a project to give people living in the Netherlands access to their personal information. Strong authentication, as used by banks for on-line transactions, is provided for. The Dutch library system and Dutch education use it. I will make sure that material about all this will become available on Meta. I start by posting here because there is a need for discussing the issues that come up when you introduce the potential for more authentication to our growing list of services. Thanks, GerardM From netocrat at dodo.com.au Mon Nov 28 13:09:04 2005 From: netocrat at dodo.com.au (Netocrat) Date: Tue, 29 Nov 2005 00:09:04 +1100 Subject: [Wikitech-l] Ampersand symbol in URLs Message-ID: <438B0170.60305@dodo.com.au> I noticed while validating the HTML output of an extension that some urls are being generated with a plain & instead of encoded as &amp;, and the W3C validator complains about this as an error. This patch fixes the line of code that was the source of the unencoded ampersand (and another line I noticed) if anyone with CVS access chooses to apply it. -- http://members.dodo.com.au/~netocrat From avarab at gmail.com Mon Nov 28 14:08:18 2005 From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=) Date: Mon, 28 Nov 2005 14:08:18 +0000 Subject: [Wikitech-l] <link> elements for interlanguage link information In-Reply-To: <1131981004.29814.52.camel@zhora.1481ruerachel.net> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> Message-ID: <51dd1af80511280608x4c03fb5n6efd231c582e0de1@mail.gmail.com> Could this thread please be kept on topic? It's about adding link elements for interwiki links, not about changing the behaviour of redirects. From usenet at tonal.clara.co.uk Mon Nov 28 14:48:23 2005 From: usenet at tonal.clara.co.uk (Neil Harris) Date: Mon, 28 Nov 2005 14:48:23 +0000 Subject: [Wikitech-l] PHP compilation Message-ID: <438B18B7.7040406@tonal.clara.co.uk> The Roadsend compiler (a non-Free commercial product) appears to be a PHP-in-Scheme implementation that then uses the Bigloo Scheme compiler to generate binary executables. Its authors' benchmarks appear to show a 10% to 150% improvement compared to standard PHP bytecode execution. Since I'm just about to set to work on a largish Scheme-driven project, and once wrote a (horrible) compiler using Lisp as a target language some time ago, I find this quite interesting. This kind of improvement in PHP execution performance across the Wikipedia cluster, if available using Free software, would be rather useful. Does anyone know of any free-as-in-freedom work on anything similar? I wonder if this would make an interesting MSc project for someone?
-- Neil From rowan.collins at gmail.com Mon Nov 28 15:48:41 2005 From: rowan.collins at gmail.com (Rowan Collins) Date: Mon, 28 Nov 2005 15:48:41 +0000 Subject: [Wikitech-l] "What redirects here" list on Edit page In-Reply-To: <dmajjc$8v7$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <4388B2F2.1020108@pobox.com> <dmajjc$8v7$1@sea.gmane.org> Message-ID: <9f02ca4c0511280748g6d960b4fy@mail.gmail.com> [Moving responses to what seems to be a new thread - though it may just be a new subject line, GMail's broken at telling the difference.] On 27/11/05, Jamie Bliss <astronouth7303 at gmail.com> wrote: > Does that line provide the inexperienced/ignorant user with any > information of value? (ignorant == "unaware of how MW works in this regard") > > A casual reader will not care if they are redirected between pages. An > inexperienced editor _may_ wonder why they link to [[Foo]] and end up at > [[Bar]]; some help entries can help on this if they don't already exist. Actually, I would disagree with this rather strongly - a casual reader (or newbie editor, which we encourage to be the same thing) will generally be *very* surprised if they end up at a different page from the one they expected, precisely because they *don't* know what a redirect is. They may well not care that they're redirected between pages, but if they're not told that that's the case, *they will be confused*. The chances of them finding a help page explaining this also seem rather slim - what would they search for? "Help:Pages which aren't called what you expect"? > IMHO, the information is rarely of value. Situations in which it is > almost always involve editing and maintenance. I consider it unlikely > that an anonymous and/or inexperienced editor would wonder about this. (The > true test is the various support channels.) Well, you've hit the nail on the head there - the reason I am so confident in how new users will react is that I've seen their questions on the en.Wikipedia Help desk, things like "Whenever I try and look at page X, I end up at page Y instead! What am I doing wrong!?" Indeed, this suggests that even our current label isn't doing its job well enough; for one thing - IMHO - Monobook makes it far too small and faint, as though we're ashamed of it and want to hide it among the "mechanics" of the UI. On 26/11/05, Timwi <timwi at gmx.net> wrote: > I'm not convinced that you need to know which one you actually came > through, especially if you have a list of all of them. Well, I think it's important firstly to let users know that they *have* been redirected. Otherwise the wiki appears to be doing "magic" with links and page names, rather than being really straightforward and easy to understand (even a piped link has the exact name of the target page written into it). And secondly, the combination of piped links and redirects may mean you don't know *exactly* what page you *should* have ended up at, and it might not jump out of the list at you.
So on a page with lots of redirects, you'd have to: 1) go "back" a page and find the link you clicked, hovering over it to discover the actual target in the tooltip 2) go "forward" (or click it again) 3) click "what links here" (or "edit this page", though I don't see why it would belong there) and possibly an extra button to get the "what redirects here" display 4) find the link back to the title discovered at step 1 to get at the redirect to edit/view comments in history/etc... Or, of course, you could manually create a "redirect=no" URL after step 1, but that's hardly a user-friendly interface. Maybe I'm making a meal of this, but it seems to me that's not all that unlikely a situation. > If you follow a link in an article and are redirected to a place you > didn't expect to be redirected to, then looking at a list of "articles > that redirect here" is doubly useful because you can fix other > inappropriate redirects as well, not just the one you stumbled upon. This is a good argument for power users, who understand how redirects work, and may even spot a redirect without any "redirected from" message - though I think even I would be slightly confused if the redirect was instant and invisible. But I'm not sure these are the only users who benefit from the current message, as explained above. The whatlinkshere page could do with some improvement, though, and perhaps being able to separate the redirects from the "normal" links would be a useful feature to add to it. -- Rowan Collins BSc [IMSoP] From netocrat at dodo.com.au Mon Nov 28 15:37:44 2005 From: netocrat at dodo.com.au (Netocrat) Date: Tue, 29 Nov 2005 02:37:44 +1100 Subject: [Wikitech-l] Re: Ampersand symbol in URLs References: <438B0170.60305@dodo.com.au> Message-ID: <pan.2005.11.28.15.37.42.31146@dodo.com.au> On Tue, 29 Nov 2005 00:09:04 +1100, Netocrat wrote: > I noticed while validating the HTML output of an extension that some > urls are being generated with a plain & instead of encoded as &amp; and > the W3C validator complains about this as an error. This patch fixes > the line of code that was the source of the unencoded ampersand (and > another line I noticed) if anyone with CVS access chooses to apply it. Looks like the attachment's been stripped on gmane. Here it is inline.
Index: Title.php
===================================================================
RCS file: /cvsroot/wikipedia/phase3/includes/Title.php,v
retrieving revision 1.243
diff -u -r1.243 Title.php
--- Title.php	13 Nov 2005 04:09:06 -0000	1.243
+++ Title.php	28 Nov 2005 11:30:53 -0000
@@ -678,7 +678,7 @@
 		if( false === strpos( $url, '?' ) ) {
 			$url .= '?';
 		} else {
-			$url .= '&';
+			$url .= '&amp;';
 		}
 		$url .= $query;
 	}
@@ -725,7 +725,7 @@
 			if ( $query == '-' ) {
 				$query = '';
 			}
-			$url = "{$wgScript}?title={$dbkey}&{$query}";
+			$url = "{$wgScript}?title={$dbkey}&amp;{$query}";
 		}
 	}
-- http://members.dodo.com.au/~netocrat From martina.greiner at gmx.de Mon Nov 28 17:23:50 2005 From: martina.greiner at gmx.de (Martina Greiner) Date: Mon, 28 Nov 2005 17:23:50 +0000 (UTC) Subject: [Wikitech-l] importDump.php runs and runs and does not do anything Message-ID: <loom.20051128T181357-342@post.gmane.org> I am trying to upload the English Wikipedia XML dump from 20051105. I have MySQL 5, MediaWiki 1.6 (phase 3), and a Mac with OS X 10.3. I started the loading process six days ago using importDump.php and the process list shows me that php, bzip2, and mysql are still working. However, when I go into MySQL the row counts of the tables do not go up.
For example, the user table has one row, and the text table has approx. 3500 rows. If I call show processes in MySQL, then I can see that queries are running (the admin tells me that approx. 200 queries/second are executed) but they seem not to be executed. My questions: Is it normal to take six days to upload the Wikipedia dump? Is it normal that the row count does not go up? If not, I would highly appreciate any help. Unfortunately I cannot use mwdumper, because OSX 10.3 only allows Java 1.4, and with Java 1.4 I get exception errors. Thank you! From brion at pobox.com Mon Nov 28 20:39:00 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 28 Nov 2005 12:39:00 -0800 Subject: [Wikitech-l] PHP compilation In-Reply-To: <438B18B7.7040406@tonal.clara.co.uk> References: <438B18B7.7040406@tonal.clara.co.uk> Message-ID: <438B6AE4.7000302@pobox.com> Neil Harris wrote: > The Roadsend compiler (a non-Free commercial product) appears to be a > PHP-in-Scheme implementation that then uses the Bigloo Scheme compiler > to generate binary executables. Its authors' benchmarks appear to show a > 10% to 150% improvement compared to standard PHP bytecode execution. > > Since I'm just about to set to work on a largish Scheme-driven project, > and once wrote a (horrible) compiler using Lisp as a target language > some time ago, I find this quite interesting. This kind of improvement > in PHP execution performance across the Wikipedia cluster, if available > using Free software, would be rather useful. > > Does anyone know of any free-as-in-freedom work on anything similar? There's an in-progress PHP compiler for the CLI (.NET/Mono) floating about somewhere, but it's not ready for prime time last I looked. http://php4mono.sourceforge.net/ -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051128/b172d87e/attachment.pgp> From brion at pobox.com Mon Nov 28 20:40:56 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 28 Nov 2005 12:40:56 -0800 Subject: [Wikitech-l] Re: Ampersand symbol in URLs In-Reply-To: <pan.2005.11.28.15.37.42.31146@dodo.com.au> References: <438B0170.60305@dodo.com.au> <pan.2005.11.28.15.37.42.31146@dodo.com.au> Message-ID: <438B6B58.1090302@pobox.com> Netocrat wrote: > On Tue, 29 Nov 2005 00:09:04 +1100, Netocrat wrote: >> I noticed while validating the HTML output of an extension that some >> urls are being generated with a plain & instead of encoded as &amp; and >> the W3C validator complains about this as an error. This patch fixes >> the line of code that was the source of the unencoded ampersand (and >> another line I noticed) if anyone with CVS access chooses to apply it. > > Looks like the attachment's been stripped on gmane. Here it is inline. [snip]
> -			$url = "{$wgScript}?title={$dbkey}&{$query}";
> +			$url = "{$wgScript}?title={$dbkey}&amp;{$query}";
This patch is incorrect, and will cause broken URLs to be output throughout the wiki. Instead, you should locate the individual *output* of the bad URL that you found and patch *that* to properly HTML-encode its output. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051128/1f7dcea5/attachment.pgp> From brion at pobox.com Mon Nov 28 20:44:51 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 28 Nov 2005 12:44:51 -0800 Subject: [Wikitech-l] importDump.php runs and runs and does not do anything In-Reply-To: <loom.20051128T181357-342@post.gmane.org> References: <loom.20051128T181357-342@post.gmane.org> Message-ID: <438B6C43.4010308@pobox.com> Martina Greiner wrote: > I am trying to upload the English Wikipedia XML dump from 20051105. I have > MySQL 5, MediaWiki 1.6 (phase 3), and a Mac with OS X 10.3. > > I started the loading process six days ago using importDump.php and the process > list shows me that php, bzip2, and mysql are still working. However, when I go into > MySQL the row counts of the tables do not go up. 1) Have you checked for an open transaction? 2) Are you using read-only mode on the wiki? This will cause all changes to the database to silently fail to run. 3) For large imports like this, use mwdumper if you want decent speed. Please see http://meta.wikimedia.org/wiki/ > For example, the user table has > one row, and the text table has approx. 3500 rows. If I call show processes in > MySQL, then I can see that queries are running (the admin tells me that approx. 200 > queries/second are executed) but they seem not to be executed. They may all be in a transaction then, in which case the modifications won't be visible to your other process until committed. > My questions: Is it normal to take six days to upload the Wikipedia dump? Is it > normal that the row count does not go up? importDump.php is relatively inefficient, and is generally expected to be used with smallish data sets being copied in from another wiki. For bulk imports you'll get much, much better performance with mwdumper. > If not, I would highly appreciate any help. Unfortunately I cannot use > mwdumper, because OSX 10.3 only allows Java 1.4, and with Java 1.4 I get > exception errors. Java 1.5 for Mac OS X can be downloaded from www.apple.com. 1.4 will still be the default JVM, but you can run 1.5 specifically with: /System/Library/Frameworks/JavaVM.framework/Versions/1.5/Commands/java -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051128/8d3f1985/attachment.pgp> From smolensk at eunet.yu Mon Nov 28 21:40:38 2005 From: smolensk at eunet.yu (Nikola Smolenski) Date: Mon, 28 Nov 2005 22:40:38 +0100 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement In-Reply-To: <438A6C1E.7090900@thewritingpot.com> References: <438A6C1E.7090900@thewritingpot.com> Message-ID: <200511282240.38994.smolensk@eunet.yu> On Monday 28 November 2005 03:31, Edward Z. Yang wrote: > Hmmm... I think I'll rename it CoLocus, short for Community Contingency > Locus. What do you think? CoCoLoco? ;) From dorozynskij at poczta.onet.pl Mon Nov 28 22:09:34 2005 From: dorozynskij at poczta.onet.pl (=?iso-8859-2?Q?Doro=BFy=F1ski_Janusz?=) Date: Mon, 28 Nov 2005 23:09:34 +0100 Subject: [Wikitech-l] importDump.php runs and runs and does not do anything In-Reply-To: <438B6C43.4010308@pobox.com> Message-ID: <20051128220959Z6431733-11696+94121@ps1.test.onet.pl> | -----Original Message----- | From: ...
Brion Vibber | Sent: Monday, November 28, 2005 9:45 PM / | importDump.php is relatively inefficient, and is generally | expected to be used with smallish data sets being copied in | from another wiki. For bulk imports you'll get much, much | better performance with mwdumper. Well, it's true. mwdumper is excellent. I got this result for processing pages_current of the Polish wiki - 233491 pages: XML->SQL conversion in 201 sec., and the whole process (unpack, conversion, load) finished in 30 min. Thx, Brion Btw, can you change the mwdumper output? Currently I get
10 000 pages (292,235/sec), 10 000 revs (292,235/sec)
...
233 491 pages (1 186,643/sec), 233 491 revs (1 186,643/sec)
Could it be
10.000 pages ( 292/sec), 10.000 revs ( 292/sec)
...
233.491 pages (1.186/sec), 233.491 revs (1.186/sec)
or (without thousand separator)
10000 pages ( 292/sec), 10000 revs ( 292/sec)
...
233491 pages (1186/sec), 233491 revs (1186/sec)
Reg., Janusz 'Ency' Dorozynski | | > If not, I would highly appreciate any help. Unfortunately I cannot | > use mwdumper, because OSX 10.3 only allows Java 1.4, and with Java 1.4 | > I get exception errors. | | Java 1.5 for Mac OS X can be downloaded from www.apple.com. | | 1.4 will still be the default JVM, but you can run 1.5 | specifically with: | /System/Library/Frameworks/JavaVM.framework/Versions/1.5/Commands/java | | -- brion vibber (brion @ pobox.com) | | From timwi at gmx.net Mon Nov 28 23:14:41 2005 From: timwi at gmx.net (Timwi) Date: Mon, 28 Nov 2005 23:14:41 +0000 Subject: [Wikitech-l] Re: "What redirects here" list on Edit page In-Reply-To: <9f02ca4c0511280748g6d960b4fy@mail.gmail.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <dll8u3$oua$1@sea.gmane.org> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <4388B2F2.1020108@pobox.com> <dmajjc$8v7$1@sea.gmane.org> <9f02ca4c0511280748g6d960b4fy@mail.gmail.com> Message-ID: <dmg30q$env$1@sea.gmane.org> Rowan Collins wrote: > >>A casual reader will not care if they are redirected between pages. An >>inexperienced editor _may_ wonder why they link to [[Foo]] and end up at >>[[Bar]]; some help entries can help on this if they don't already exist. > > Actually, I would disagree with this rather strongly - a casual reader > (or newbie editor, which we encourage to be the same thing) will > generally be *very* surprised if they end up at a different page from > the one they expected, precisely because they *don't* know what a > redirect is. They may well not care that they're redirected between > pages, but if they're not told that that's the case, *they will be > confused*. I think you're basing this on the helpdesk questions you've seen. Myself, I haven't ever seen such a question from a confused user, but then again, I don't monitor the various helpdesks we have regularly. I can't really imagine that particularly many people would be surprised at redirects. Almost all redirects make sense. It makes sense for a misspelling to redirect to the correct spelling. It makes sense for synonyms to redirect to each other. And it makes sense for too specific a topic (e.g. a minor fictional character) to redirect to a general article (clearly titled "list of minor characters", in this example).
I think the only people who are "surprised" are those who are actually trying to figure out how things work; those who wonder how Wikipedians told the system that a link to X should redirect the user to Y, as there's no obvious way of doing that. In passing, I would also challenge your view that "we encourage casual readers and newbie editors to be the same thing" -- although we encourage everyone to join Wikipedia, it is still a product aimed at end-users who generally don't want to participate, so it makes sense to distinguish between the two. At this point I was going to make a new suggestion, namely to have a textbox with "titles that redirect to here" on the edit page, rather than a list of links. Redirects would then be edited at the target title, not the redirect's title. I've realised, though, that this would make a lot of things more complicated. In particular, if you add a redirect and then the system tells you "Sorry, that title already redirects to [[somewhere else]]", you would then have to go to [[somewhere else]], remove the redirect and come back. You would also have to do that if you wanted to turn the redirect into a disambiguation page. Maybe the way it's done now is the best after all. Timwi From timwi at gmx.net Mon Nov 28 23:24:52 2005 From: timwi at gmx.net (Timwi) Date: Mon, 28 Nov 2005 23:24:52 +0000 Subject: [Wikitech-l] Re: <link> elements for interlanguage link information In-Reply-To: <51dd1af80511280608x4c03fb5n6efd231c582e0de1@mail.gmail.com> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <51dd1af80511280608x4c03fb5n6efd231c582e0de1@mail.gmail.com> Message-ID: <dmg3jt$gei$1@sea.gmane.org> Ævar Arnfjörð Bjarmason wrote: > Could this thread please be kept on topic? You're (probably) assuming that if people hadn't changed the topic, more traffic would be generated on your original topic. I sincerely doubt this is true. I don't think anyone is discouraged from replying to your original topic by the mere presence of a diverging thread. The new topic is perfectly on-topic on this mailing list, so you're unlikely to succeed in prohibiting it. But even if it were off-topic, you would merely force it to be moved to another list, but you would still not encourage increased discussion of your original topic. Timwi From timwi at gmx.net Tue Nov 29 00:02:25 2005 From: timwi at gmx.net (Timwi) Date: Tue, 29 Nov 2005 00:02:25 +0000 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement In-Reply-To: <200511282240.38994.smolensk@eunet.yu> References: <438A6C1E.7090900@thewritingpot.com> <200511282240.38994.smolensk@eunet.yu> Message-ID: <dmg5qa$mab$1@sea.gmane.org> Nikola Smolenski wrote: > On Monday 28 November 2005 03:31, Edward Z. Yang wrote: > >>Hmmm... I think I'll rename it CoLocus, short for Community Contingency >>Locus. What do you think? > > CoCoLoco? ;) Not wanting to smash every idea here, but "Lokus" in German means loo (toilet), and "CoCoLoco" sounds suspiciously like "Kokolores", which means nonsense. :-) Timwi From timwi at gmx.net Tue Nov 29 00:07:07 2005 From: timwi at gmx.net (Timwi) Date: Tue, 29 Nov 2005 00:07:07 +0000 Subject: [Wikitech-l] Re: RC patrol for ~50 smallest wikipedias In-Reply-To: <849f98ed0511272336r7e6d1312w@mail.gmail.com> References: <438A12E3.6010002@gmail.com> <438A3B72.7060300@vilerage.us> <438A78E9.1070406@pobox.com> <849f98ed0511272336r7e6d1312w@mail.gmail.com> Message-ID: <dmg633$mab$2@sea.gmane.org> Mark Williamson wrote: > I'm not sure it's necessary.
As both Angela and I have already noted > in this thread, there is already a group dedicated to monitoring small > Wikis for spam and vandalism. "Not necessary" is not a reason to not do it. It is a help. And personally, I think this would be a *big* help. Timwi From rowan.collins at gmail.com Tue Nov 29 00:43:26 2005 From: rowan.collins at gmail.com (Rowan Collins) Date: Tue, 29 Nov 2005 00:43:26 +0000 Subject: [Wikitech-l] Re: "What redirects here" list on Edit page In-Reply-To: <dmg30q$env$1@sea.gmane.org> References: <1131981004.29814.52.camel@zhora.1481ruerachel.net> <1132538575.6605.35.camel@localhost.localdomain> <43813378.4060605@pobox.com> <dm4rkd$bl$1@sea.gmane.org> <43862300.20000@pobox.com> <dm9iam$l0h$1@sea.gmane.org> <4388B2F2.1020108@pobox.com> <dmajjc$8v7$1@sea.gmane.org> <9f02ca4c0511280748g6d960b4fy@mail.gmail.com> <dmg30q$env$1@sea.gmane.org> Message-ID: <9f02ca4c0511281643i6005d099t@mail.gmail.com> On 28/11/05, Timwi <timwi at gmx.net> wrote: > I think you're basing this on the helpdesk questions you've seen. > Myself, I haven't ever seen such a question from a confused user, but > then again, I don't monitor the various helpdesks we have regularly. Well, yes, I said I was; but I'm also basing it on the fact that those seemed very logical and reasonable confusions. > I can't really imagine that particularly many people would be surprised > at redirects. Almost all redirects make sense. It makes sense for a > misspelling to redirect to the correct spelling. It makes sense for > synonyms to redirect to each other. And it makes sense for too specific > a topic (e.g. a minor fictional character) to redirect to a general > article (clearly titled "list of minor characters", in this example). It's not just a case of redirects "making sense" - when people want to reverse a decision to redirect, it should be as easy as other edits. So if someone decides that two terms *aren't* synonyms, or that a character isn't really that minor at all, they should be able to edit the pages appropriately - I know this argument's much abused, but this does seem rather central to being "an encyclopedia that anyone can edit". And the first step in changing the redirect is to know that the redirect exists in the first place - I'm not convinced people would even find a "what redirects here" list if they were unaware of what a redirect was (though, I guess that's why you suggested putting it on the edit page - so that there was a higher chance of them discovering it "by accident". Still, seems a poor substitute for just announcing the redirection clearly in the first place.) > I think the only people who are "surprised" are those who are actually > trying to figure out how things work; those who wonder how Wikipedians > told the system that a link to X should redirect the user to Y, as > there's no obvious way of doing that. I guess the users I'm thinking of are those who are just about getting used to the basic principles of the wiki, and understand that [[foo]] should link to an article called "foo" - "how remarkably simple", they think... Then when they are presented with an article called "Meta-syntactic variable" instead, they are surprised - suddenly, the wiki is doing some kind of voodoo and interpreting what they say, rather than obeying it. They may not think of it as "a link to X redirecting the user to Y" at all - they'll just think they've misunderstood the basic "link by typing the title of the target page" philosophy.
So yeah, I guess they're the ones trying to figure out "how things work" - but only in as much as they're trying to participate, as they've been constantly invited to do since they first visited the site. Maybe, though, I'm over-compensating for my own system-logical mind (and experience with MediaWiki), and most new users *would* actually come to the correct conclusion with no clues other than consistently arriving at the "wrong" page; I'm not convinced, though. > In passing, I would also challenge your view that "we encourage casual > readers and newbie editors to be the same thing" -- although we > encourage everyone to join Wikipedia, it is still a product aimed at > end-users who generally don't want to participate, so it makes sense to > distinguish between the two. Well, I sort of see your point, though I'd point out that you don't have to "join" anything in order to edit Wikipedia, you just click "edit this page", so the transition from reader to editor is a very subtle one in some senses. But yeah, I'll accept that "redirected from" notices aren't that important to readers - assuming editors are clever enough with their introductions that the redirects *do* make sense - but as soon as you turn editor, you absolutely need to know what's going on. > At this point I was going to make a new suggestion, namely to have a > textbox with "titles that redirect to here" on the edit page, rather > than a list of links. Redirects would then be edited at the target > title, not the redirect's title. There was a discussion along those lines a while ago (either on this list or mediawiki-l, I forget which), which got rather confusing, to the extent that I'm still not sure if there might be a feasible middle way on this... -- Rowan Collins BSc [IMSoP] From node.ue at gmail.com Tue Nov 29 01:42:06 2005 From: node.ue at gmail.com (Mark Williamson) Date: Mon, 28 Nov 2005 18:42:06 -0700 Subject: [Wikitech-l] Re: RC patrol for ~50 smallest wikipedias In-Reply-To: <dmg633$mab$2@sea.gmane.org> References: <438A12E3.6010002@gmail.com> <438A3B72.7060300@vilerage.us> <438A78E9.1070406@pobox.com> <849f98ed0511272336r7e6d1312w@mail.gmail.com> <dmg633$mab$2@sea.gmane.org> Message-ID: <849f98ed0511281742r54d44f5q@mail.gmail.com> How would it be a big help? Mark On 28/11/05, Timwi <timwi at gmx.net> wrote: > Mark Williamson wrote: > > I'm not sure it's necessary. As both Angela and I have already noted > > in this thread, there is already a group dedicated to monitoring small > > Wikis for spam and vandalism. > > "Not necessary" is not a reason to not do it. It is a help. And > personally, I think this would be a *big* help. > > Timwi > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > -- "Take away their language, destroy their souls." -- Joseph Stalin From edwardzyang at thewritingpot.com Tue Nov 29 02:27:09 2005 From: edwardzyang at thewritingpot.com (Edward Z. Yang) Date: Mon, 28 Nov 2005 21:27:09 -0500 Subject: [Wikitech-l] Re: WikiStatus - OpenFact's replacement Message-ID: <438BBC7D.1060607@thewritingpot.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Nikola Smolenski wrote: > On Monday 28 November 2005 03:31, Edward Z. Yang wrote: >>> Hmmm... I think I'll rename it CoLocus, short for Community >>> Contingency Locus. What do you think? > > CoCoLoco?
;) Timwi wrote: > Not wanting to smash every idea here, but "Lokus" in German means loo > (toilet), and "CoCoLoco" sounds suspiciously like "Kokolores", which > means nonsense. :-) No, no, that's absolutely fine. The worst thing that could happen is that no one replies to this extremely un-wikitech-l conversation and I end up settling for a subpar name. We probably want a meaningful name, I think. Unfortunately, I registered a SourceForge project under colocus, and it was approved, and now I can't change it. Shouldn't have been so trigger-happy. So, I'll either have to justify my name, or request my project is deleted and register a new one. >_< Locus means "the scene of any event or action (especially the place of a meeting)". Co doubles up meaning as community and contingency (the community was an afterthought). Is having a good name really that important? - -- Edward Z. Yang Personal: edwardzyang at thewritingpot.com SN:Ambush Commander Website: http://www.thewritingpot.com/ GPGKey:0x869C48DA http://www.thewritingpot.com/gpgpubkey.asc 3FA8 E9A9 7385 B691 A6FC B3CB A933 BE7D 869C 48DA -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.1 (MingW32) iD8DBQFDi7x8qTO+fYacSNoRAiBjAJ4+38zmYIzrwm7T36Y8vauD6UOVlwCeL3vb IDHacjqMM2884ZJPGvzrA44= =WrSS -----END PGP SIGNATURE----- From sbwoodside at yahoo.com Tue Nov 29 04:46:44 2005 From: sbwoodside at yahoo.com (S. Woodside) Date: Mon, 28 Nov 2005 23:46:44 -0500 Subject: [Wikitech-l] code gripes Message-ID: <334D6428-BA2E-4970-8901-66A7900F637A@yahoo.com> Hi, someone said "show me the code", so I'm diving in. I'm trying to make parserTests work without the database (without installing anything). I'm basically just going through the code running it, fixing an error, then running until the next error. Now I'm brand new to MediaWiki but I've done a lot of programming in lots of languages. Including a bit of PHP. Seems to me like there's an awful lot of global functions, which is a bit worrying. Maybe this is just the normal way to do things in PHP but it seems a bit worrying to me because it could mean spaghetti code. Umm... documentation isn't great but I've seen worse (like Mozilla). But... the biggest thing seems to be that the database kind of permeates the whole code... I can see why parserTests' author wrote separating it from the database as a @todo... and it's a bit tricky. I'm trying to do that now, writing fake classes and stuff in there. We'll see if it works out. A good long-term project might be to refactor so that the database is well isolated from the rest of the code. So that, for example, you could rip out the database and insert something else, or, like for parserTests, run some parts of the code without a database at all. That's especially useful for unit tests :-) (For example I tried to include "commandLine.inc" and it wouldn't go because there's no database. I made a "commandLineSimple.inc" and ripped out all the database code.) I've hit a point now where the Parser accesses User functions. With a command line test there's not going to be a user, so I could either fake one, or I could temporarily modify the parser. It seems like this might be a good place to insert an intermediate parsing format (something between the article{wikitext, ...} and the final HTML.
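To make "fake classes" concrete, the kind of stub I have in mind looks something like this - the class name and the exact method set are made up for the sketch, not real MediaWiki API; only whatever the parser actually calls during the test run would need to exist:

class FakeUser {
	# Stand-in for the real User object: answers the parser's
	# questions without ever touching the database.
	function getName() { return 'ParserTestDummy'; }
	function getOption( $name ) { return ''; }  # every preference unset
	function isLoggedIn() { return false; }
}

The command-line harness would hand this stub to the parser wherever a real User is expected, so parserTests never needs the user table.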
--simon -- http://simonwoodside.com From netocrat at dodo.com.au Tue Nov 29 05:12:13 2005 From: netocrat at dodo.com.au (Netocrat) Date: Tue, 29 Nov 2005 16:12:13 +1100 Subject: [Wikitech-l] Re: Re: Ampersand symbol in URLs References: <438B0170.60305@dodo.com.au> <pan.2005.11.28.15.37.42.31146@dodo.com.au> <438B6B58.1090302@pobox.com> Message-ID: <pan.2005.11.29.05.12.11.466905@dodo.com.au> On Mon, 28 Nov 2005 12:40:56 -0800, Brion Vibber wrote: > Netocrat wrote: [incorrectly patching Title.php/getLocalURL() and getFullURL() to encode ampersands] > This patch is incorrect, and will cause broken URLs to be output > throughout the wiki. Too hasty - I didn't notice that there were already escaped versions of these functions and that unescaped versions were necessary. > Instead, you should locate the individual *output* of the bad URL that > you found and patch *that* to properly HTML-encode its output. I based part of the extension on code from BoardVote.php, which doesn't use the escaped url function to generate the action of a form. So the patch is not very significant anyhow but this is what it should have been:
Index: BoardVote.php
===================================================================
RCS file: /cvsroot/wikipedia/extensions/BoardVote/BoardVote.php,v
retrieving revision 1.4
diff -u -r1.4 BoardVote.php
--- BoardVote.php	13 Sep 2005 14:12:09 -0000	1.4
+++ BoardVote.php	29 Nov 2005 05:05:56 -0000
@@ -155,7 +155,7 @@
 		global $wgBoardCandidates, $wgOut;
 		$thisTitle = Title::makeTitle( NS_SPECIAL, "Boardvote" );
-		$action = $thisTitle->getLocalURL( "action=vote" );
+		$action = $thisTitle->escapeLocalURL( "action=vote" );
 		if ( $this->mHasVoted ) {
 			$intro = wfMsg( "boardvote_intro_change" );
 		} else {
-- http://members.dodo.com.au/~netocrat From brion at pobox.com Tue Nov 29 05:53:44 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 28 Nov 2005 21:53:44 -0800 Subject: [Wikitech-l] Re: Re: Ampersand symbol in URLs In-Reply-To: <pan.2005.11.29.05.12.11.466905@dodo.com.au> References: <438B0170.60305@dodo.com.au> <pan.2005.11.28.15.37.42.31146@dodo.com.au> <438B6B58.1090302@pobox.com> <pan.2005.11.29.05.12.11.466905@dodo.com.au> Message-ID: <438BECE8.2080505@pobox.com> Netocrat wrote: > On Mon, 28 Nov 2005 12:40:56 -0800, Brion Vibber wrote: >> Netocrat wrote: > [incorrectly patching Title.php/getLocalURL() and getFullURL() to encode > ampersands] >> This patch is incorrect, and will cause broken URLs to be output >> throughout the wiki. > > Too hasty - I didn't notice that there were already escaped versions of > these functions and that unescaped versions were necessary. As a general rule, escaping should be done as close as possible to the actual use. It's part of the file format or communications protocol in your output, not a part of your data, so you want to keep them separate to avoid unnecessarily hobbling code that should be more flexible. URLs may go to different outputs, not always HTML; for instance HTTP headers, plaintext output, generated JavaScript code, etc, will all have different requirements from XML/HTML text. Early escaping is also dangerous, since additional processing or just forgetting where you got some value can end up producing either corrupt data (such as double-escaping) or security vulnerabilities from missing escaping (SQL injection, HTML/JavaScript injection, etc). >> Instead, you should locate the individual *output* of the bad URL that >> you found and patch *that* to properly HTML-encode its output.
> > I based part of the extension on code from BoardVote.php, which doesn't > use the escaped url function to generate the action of a form. So the > patch is not very significant anyhow but this is what it should have been: [patch snipped] Thanks, I've applied it to CVS. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051128/59ad2b61/attachment.pgp> From brion at pobox.com Tue Nov 29 05:57:24 2005 From: brion at pobox.com (Brion Vibber) Date: Mon, 28 Nov 2005 21:57:24 -0800 Subject: [Wikitech-l] code gripes In-Reply-To: <334D6428-BA2E-4970-8901-66A7900F637A@yahoo.com> References: <334D6428-BA2E-4970-8901-66A7900F637A@yahoo.com> Message-ID: <438BEDC4.4020009@pobox.com> S. Woodside wrote: > I've hit a point now where the Parser accesses User functions. With a > command line test there's not going to be a user, so I could either fake > one, or I could temporarily modify the parser. If there's something in Parser that accesses a user directly it perhaps should be migrated to ParserOptions, as that's what it's for. > It seems like this might be a good place to insert an intermediate > parsing format (something between the article{wikitext, ...} and the > final HTML. There are a couple of attempts at this floating around; not sure what the state is. -- brion vibber (brion @ pobox.com) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051128/b83a8283/attachment.pgp> From magnus.manske at web.de Tue Nov 29 08:14:44 2005 From: magnus.manske at web.de (Magnus Manske) Date: Tue, 29 Nov 2005 09:14:44 +0100 Subject: [Wikitech-l] code gripes In-Reply-To: <438BEDC4.4020009@pobox.com> References: <334D6428-BA2E-4970-8901-66A7900F637A@yahoo.com> <438BEDC4.4020009@pobox.com> Message-ID: <438C0DF4.8030604@web.de> Brion Vibber wrote: >S. Woodside wrote: > > >>It seems like this might be a good place to insert an intermediate >>parsing format (something between the article{wikitext, ...} and the >>final HTML. >> >> > >There are a couple of attempts at this floating around; not sure what the state is. > > > Perfect opportunity for yet another shameless plug of mine :-) http://magnusmanske.de/wiki2xml/w2x.php converts wikitext into XML. It can work directly off wikitext, or use an existing MediaWiki installation as source, in which case it can automatically resolve templates. It probably needs some testing for bugs, but generally seems to work quite well. Code is in CVS, module "wiki2xml", directory "php". Magnus From gordon.joly at pobox.com Tue Nov 29 11:30:18 2005 From: gordon.joly at pobox.com (Gordon Joly) Date: Tue, 29 Nov 2005 11:30:18 +0000 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <438A12E3.6010002@gmail.com> References: <438A12E3.6010002@gmail.com> Message-ID: <p06230910bfb1ec02138a@[192.168.116.8]> At 15:11 -0500 27/11/05, Brian wrote: >There have been numerous reports recently of finding rampant >vandalism on small Wikipedias/Wiktionaries. It might be useful to >create an RC page and/or IRC channel which brings together the RC's >of the ~50 smallest Wikipedias.
> >brian0918 I run about 5 or 6 wikis using MediaWiki and from time to time we have vandalism of various kinds. So I would welcome this! -- Gordo (aka LoopZilla) gordon.joly at pobox.com http://pobox.com/~gordo/ http://www.loopzilla.org/ From normannormal at gmail.com Tue Nov 29 13:33:23 2005 From: normannormal at gmail.com (normannormal at gmail.com) Date: Tue, 29 Nov 2005 10:33:23 -0300 Subject: [Wikitech-l] Introduction Message-ID: <294859716.20051129103323@gmail.com> Hello to everyone! My name's Luis, from Uruguay. I am new to the list, and hope I will be able to help in the development of the wonderful MediaWiki. I've been looking for some tasks in bugzilla to start (any suggestions?) Also, once I have familiarized myself with the development process, I am very interested in the MediaWiki 2 projects and the Complex tasks. Cheers, Luis From netocrat at dodo.com.au Tue Nov 29 14:32:45 2005 From: netocrat at dodo.com.au (Netocrat) Date: Wed, 30 Nov 2005 01:32:45 +1100 Subject: [Wikitech-l] Re: Patch: suppress auto-numbering of TOC and honour user prefs References: <pan.2005.09.24.13.07.02.452954@dodo.com.au> Message-ID: <pan.2005.11.29.14.32.41.484141@dodo.com.au> On Sat, 24 Sep 2005 23:07:07 +1000, Netocrat wrote: > This is a minimal change to add a new magic token that prevents > numbering of the TOC and to do the same when the user option "Auto-number > headings" under "Misc" in "Preferences" is set. ^^^ Should have been "not set", but since I misinterpreted the purpose of that preference, I'm not surprised the patch received no attention. Here's a new patch that deals solely with suppressing TOC/heading auto-numbering when a __NOTOCNUM__ directive is included in the page. Same reasoning as originally (e.g. for FAQ pages where deprecated answers are removed but new content should not reuse that answer's number): > The token directive is useful because some sections in a wiki may already > include numbering as part of the heading and suppressing auto-numbering is > useful in those cases. Let me know if there's a more appropriate place to submit this.
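To make the intended behaviour concrete, here is a sketch of a page that carries its own numbering and opts out of the automatic one (the heading text is invented for the example):

__NOTOCNUM__
== Q1. How do I set an anchor? ==
== Q4. Why was my answer renumbered? ==

With the directive present, the TOC and the headings keep only the authors' own "Q1."/"Q4." labels, instead of additionally gaining the automatic "1.", "2." prefixes. The patch: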
Index: includes/Parser.php
===================================================================
RCS file: /cvsroot/wikipedia/phase3/includes/Parser.php,v
retrieving revision 1.533
diff -u -r1.533 Parser.php
--- includes/Parser.php	27 Nov 2005 06:04:41 -0000	1.533
+++ includes/Parser.php	29 Nov 2005 14:09:35 -0000
@@ -2708,6 +2708,11 @@
 	function formatHeadings( $text, $isMain=true ) {
 		global $wgMaxTocLevel, $wgContLang, $wgLinkHolders, $wgInterwikiLinkHolders;
 
+		# do not number TOC entries or corresponding headings if the string
+		# __NOTOCNUM__ (not case-sensitive) occurs in the HTML
+		$mw =& MagicWord::get( MAG_NOTOCNUM );
+		$noTocNum = $mw->matchAndRemove( $text );
+
 		$doNumberHeadings = $this->mOptions->getNumberHeadings();
 		$doShowToc = true;
 		$forceTocHere = false;
@@ -2836,16 +2841,18 @@
 
 			$levelCount[$toclevel] = $level;
 
-			# count number of headlines for each level
-			@$sublevelCount[$toclevel]++;
-			$dot = 0;
-			for( $i = 1; $i <= $toclevel; $i++ ) {
-				if( !empty( $sublevelCount[$i] ) ) {
-					if( $dot ) {
-						$numbering .= '.';
+			if (! $noTocNum ) {
+				# count number of headlines for each level
+				@$sublevelCount[$toclevel]++;
+				$dot = 0;
+				for( $i = 1; $i <= $toclevel; $i++ ) {
+					if( !empty( $sublevelCount[$i] ) ) {
+						if( $dot ) {
+							$numbering .= '.';
+						}
+						$numbering .= $wgContLang->formatNum( $sublevelCount[$i] );
+						$dot = 1;
 					}
-					$numbering .= $wgContLang->formatNum( $sublevelCount[$i] );
-					$dot = 1;
 				}
 			}
 		}
Index: languages/Language.php
===================================================================
RCS file: /cvsroot/wikipedia/phase3/languages/Language.php,v
retrieving revision 1.741
diff -u -r1.741 Language.php
--- languages/Language.php	28 Nov 2005 23:56:35 -0000	1.741
+++ languages/Language.php	29 Nov 2005 14:09:56 -0000
@@ -203,6 +203,7 @@
 	# ID             CASE  SYNONYMS
 	MAG_REDIRECT     => array( 0, '#REDIRECT'         ),
 	MAG_NOTOC        => array( 0, '__NOTOC__'         ),
+	MAG_NOTOCNUM     => array( 0, '__NOTOCNUM__'      ),
 	MAG_FORCETOC     => array( 0, '__FORCETOC__'      ),
 	MAG_TOC          => array( 0, '__TOC__'           ),
 	MAG_NOEDITSECTION => array( 0, '__NOEDITSECTION__' ),
-- http://members.dodo.com.au/~netocrat From ubifrieda at gmail.com Tue Nov 29 15:33:35 2005 From: ubifrieda at gmail.com (Frieda Brioschi) Date: Tue, 29 Nov 2005 16:33:35 +0100 Subject: [Wikitech-l] Portale: and autore: namespaces Message-ID: <1b2348920511290733o7f49da26h@mail.gmail.com> Hi, I have two new namespace requests: one for it.wiki and one for it.source. For it.wiki: like the English, German and French Wikipedias, we'd like to have a "Portale" namespace too [here's the discussion on our village pump: <http://it.wikipedia.org/wiki/Wikipedia:Bar#Namespace_portali>]. For it.source: we are already using a pseudo-namespace "Autore" [take a look here: <http://it.wikisource.org/w/index.php?title=Speciale%3AAllpages&from=Autore%3A&namespace=0>]; is it possible to have a "real" namespace "Autore" instead? Thanks! Ciao, Frieda 2005/9/8, Ashar Voultoiz <hashar at altern.org>: > Guillaume Blanchard wrote: > > Hi, > > We requested a new 'portal' namespace about one year ago but this came > > to nothing. Some days ago, we discovered the English and German > > Wikipedias are now using this namespace (sic!), so we are requesting to be > > able to do the same [1]. The French word for portal is 'portail'. > > Regards, > > > > Aoineko > > > > [1] > > http://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Le_Bistro/30_ao%C3%BBt_2005#Un_espace_de_nom_pour_nos_portails > > Done: > namespace 100 : Portail > namespace 101 : Discussion_Portail > > Announcing it on the local village pump.
> > > -- > Ashar Voultoiz - WP++++ > http://en.wikipedia.org/wiki/User:Hashar > http://www.livejournal.com/community/wikitech/ > IM: hashar at jabber.org ICQ: 15325080 > > _______________________________________________ > Wikitech-l mailing list > Wikitech-l at wikimedia.org > http://mail.wikipedia.org/mailman/listinfo/wikitech-l > ___________________________________________ http://it.wikipedia.org/wiki/Utente:Frieda From anthere9 at yahoo.com Tue Nov 29 16:47:43 2005 From: anthere9 at yahoo.com (Anthere) Date: Tue, 29 Nov 2005 08:47:43 -0800 (PST) Subject: [Wikitech-l] pages update call :-) Message-ID: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> http://meta.wikimedia.org/wiki/Wikimedia_partners_and_hosts This page is very badly outdated. Would it be possible for it to be cleaned up and straightened out before the beginning of the fund drive (roughly a good week away)? Thanks Ant __________________________________ Yahoo! Music Unlimited Access over 1 million songs. Try it free. http://music.yahoo.com/unlimited/ From usenet at tonal.clara.co.uk Tue Nov 29 17:34:33 2005 From: usenet at tonal.clara.co.uk (Neil Harris) Date: Tue, 29 Nov 2005 17:34:33 +0000 Subject: Credit card processing for forthcoming fundraiser: Was: Re: [Wikitech-l] pages update call :-) In-Reply-To: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> References: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> Message-ID: <438C9129.4010509@tonal.clara.co.uk> Anthere wrote: > http://meta.wikimedia.org/wiki/Wikimedia_partners_and_hosts > > This page is very badly outdated. Would it be possible > for it to be cleaned up and straightened out before the > beginning of the fund drive (roughly a good week away)? > > Thanks > > Ant > > If there's going to be another fundraiser soon, does anyone know if any progress has been made towards accepting credit card donations? I believe that this could significantly improve the amount of money donated, by removing a barrier to donation for people who would like to donate but do not have PayPal-style accounts, and do not want the hassle of money orders, inter-bank-transfers and so on. This has been promised for the last two fundraisers, and has never happened; it might even be worth delaying the forthcoming fundraiser for a week or so if this made accepting credit cards possible. Googling for "credit card processing" unearths large numbers of companies offering to do nearly all the work for you on this, for a fixed fee per transaction (a few tens of cents) and a small percentage of the value of the transaction. I seem to recall that the setup is generally along the lines of: * setting up a CC processing account with the vendor, which generally requires a small amount of checking of your credentials * registering an SSL certificate for your domain [wikimedia.org, presumably] (takes a few hours, and a fax or two if you're in the phone book), and serving your payment page via HTTPS * creating/installing a CGI / web form script for that page with a bit of prevalidation logic (all needed fields filled in, right number of CC# digits, Luhn checksum and other sanity checks - see the sketch after this list) that then proxies the finished CC payment requests to the processor's actual online CC acceptance server, also via HTTPS, for a final go/no go decision. Once it's set up, you can then accept CC payments from anywhere in the world, and you can of course change CC processors at a later date with much less hassle.
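As an illustration of the Luhn prevalidation step, a few lines of PHP suffice - this is a sketch only, not code from any Wikimedia repository, and the function name is invented:

function luhnChecksumOk( $cardNumber ) {
	# Strip spaces and dashes; any other non-digit fails validation.
	$digits = preg_replace( '/[ -]/', '', $cardNumber );
	if ( !preg_match( '/^\d+$/', $digits ) ) {
		return false;
	}
	$sum = 0;
	$double = false;  # double every second digit, starting from the right
	for ( $i = strlen( $digits ) - 1; $i >= 0; $i-- ) {
		$d = (int)$digits[$i];
		if ( $double ) {
			$d *= 2;
			if ( $d > 9 ) {
				$d -= 9;
			}
		}
		$sum += $d;
		$double = !$double;
	}
	return $sum % 10 == 0;
}

A number that fails this (or the length/prefix checks) can be rejected in the web form script before the request is ever proxied to the CC processor.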
Presumably Bomis and/or Jimbo know everything there is to know about this; perhaps they might be willing to lend some technical / financial know-how? -- Neil From darwin.sadeli at gmail.com Tue Nov 29 15:58:08 2005 From: darwin.sadeli at gmail.com (Darwin Sadeli) Date: Tue, 29 Nov 2005 23:58:08 +0800 Subject: [Wikitech-l] Image Thumbnails Message-ID: <4b6eb6e40511290758i11efe7dfl247aef836c282106@mail.gmail.com> Hi, Currently I'm trying to replicate Wikipedia contents into a local system for a research project. I've managed to download the whole article database and image dumps, but the image thumbnails are still missing. Are the image thumbnails available for download, and if yes, where can I download them? Thanks. Regards, Darwin S. From usenet at tonal.clara.co.uk Tue Nov 29 17:45:11 2005 From: usenet at tonal.clara.co.uk (Neil Harris) Date: Tue, 29 Nov 2005 17:45:11 +0000 Subject: [Wikitech-l] Re: Credit card processing for forthcoming fundraiser In-Reply-To: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> References: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> Message-ID: <438C93A7.5020205@tonal.clara.co.uk> Before anyone asks about internationalization, I would imagine that even an English-only CC payment page would be vastly superior to none at all, and there would be no shortage of volunteers to rapidly provide translations for the very limited amount of text needed in such a page. -- Neil From gmaxwell at gmail.com Tue Nov 29 17:57:29 2005 From: gmaxwell at gmail.com (Gregory Maxwell) Date: Tue, 29 Nov 2005 12:57:29 -0500 Subject: [Wikitech-l] Image Thumbnails In-Reply-To: <4b6eb6e40511290758i11efe7dfl247aef836c282106@mail.gmail.com> References: <4b6eb6e40511290758i11efe7dfl247aef836c282106@mail.gmail.com> Message-ID: <e692861c0511290957u47235db7jb10c8e387c68f982@mail.gmail.com> On 11/29/05, Darwin Sadeli <darwin.sadeli at gmail.com> wrote: > Currently I'm trying to replicate Wikipedia contents into a local > system for a research project. I've managed to download the whole > article database and image dumps, but the image thumbnails are still > missing. Are the image thumbnails available for download, and if yes, > where can I download them? Thanks. They should be generated automatically on demand. From mail at tgries.de Tue Nov 29 18:03:10 2005 From: mail at tgries.de (Thomas Gries) Date: Tue, 29 Nov 2005 19:03:10 +0100 Subject: [Wikitech-l] RC patrol for ~50 smallest wikipedias In-Reply-To: <p06230910bfb1ec02138a@[192.168.116.8]> References: <438A12E3.6010002@gmail.com> <p06230910bfb1ec02138a@[192.168.116.8]> Message-ID: <438C97DE.5030504@tgries.de> brian0918, you could alternatively consider using "EnotifWiki" (see http://www.enotif.org ) and configure that to have e-mail notifications sent out * on (watched) page changes * on changes to your user- and user_talk page * and on new page creations (a minimal configuration sketch follows at the end of this message). EnotifWiki is currently based on MediaWiki 1.5rc4 and will soon be upgraded to 1.5.2 or HEAD. Gordon Joly schrieb: > At 15:11 -0500 27/11/05, Brian wrote: > >> There have been numerous reports recently of finding rampant >> vandalism on small Wikipedias/Wiktionaries. It might be useful to >> create an RC page and/or IRC channel which brings together the RC's >> of the ~50 smallest Wikipedias. >> >> brian0918 > > > I run about 5 or 6 wikis using MediaWiki and from time to time we have > vandalism of various kinds. So I would welcome this!
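For reference, a minimal LocalSettings.php sketch for the notifications listed above - $wgEnableEmail, $wgEnotifWatchlist and $wgEnotifUserTalk are the 1.5-era core option names; whether EnotifWiki adds a separate switch for new-page notifications is an assumption here, so check its own documentation:

$wgEnableEmail     = true;   # master switch for outgoing e-mail
$wgEnotifWatchlist = true;   # notify watchers when a watched page changes
$wgEnotifUserTalk  = true;   # notify a user when their user_talk page changes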
>

From hashar at altern.org Tue Nov 29 20:33:01 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Tue, 29 Nov 2005 21:33:01 +0100
Subject: [Wikitech-l] new privacy link in footer
Message-ID: <dmidtu$e6a$1@sea.gmane.org>

Hello,

To fix bug 4048, I added a new link in the footer. It can be customized
using [[MediaWiki:Privacy]] and [[MediaWiki:Privacypage]]. You can
disable it by putting '-' (without quotes) as the text of
[[MediaWiki:Privacy]].

cheers,

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From hashar at altern.org Tue Nov 29 21:22:24 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Tue, 29 Nov 2005 22:22:24 +0100
Subject: [Wikitech-l] server for nagios
Message-ID: <dmigqh$op4$1@sea.gmane.org>

Hello,

I used larousse some time ago, but it's pending a Fedora Core upgrade.
Someone also told me that this server will probably get removed; I am
personally fine with that: Nagios doesn't need a lot of CPU or RAM, so
one of the small 512MB apaches will be fine (tingxi / rose?).

If larousse is not a choice, would it be possible to get one of the
idle servers to install nagios on? Also, I will most probably need root
access to be able to install nagios; or else add me to a nagios group
and add a /opt/nagios/ directory with write access for the nagios group
(that should be enough).

I coded a nagios plugin that pulls data from gmetad and uses a caching
system to avoid hammering the gmetad server. The server on which nagios
will run will need to be added to the trusted server list for gmetad.

cheers,

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From krstic at fas.harvard.edu Tue Nov 29 21:38:01 2005
From: krstic at fas.harvard.edu (Ivan Krstic)
Date: Tue, 29 Nov 2005 22:38:01 +0100
Subject: [Wikitech-l] server for nagios
In-Reply-To: <dmigqh$op4$1@sea.gmane.org>
References: <dmigqh$op4$1@sea.gmane.org>
Message-ID: <438CCA39.4020204@fas.harvard.edu>

Ashar Voultoiz wrote:
> I coded a nagios plugin that pulls data from gmetad and uses a caching
> system to avoid hammering the gmetad server

I might have lost track of the discussion at some point, but could you
quickly reiterate what problem you're trying to solve with Nagios, and
why Ganglia proper isn't a good fit if you intend to be collecting data
from it?

Thanks,

--
Ivan Krstic <krstic at fas.harvard.edu> | 0x147C722D

From midom.lists at gmail.com Tue Nov 29 21:40:17 2005
From: midom.lists at gmail.com (Domas Mituzas)
Date: Tue, 29 Nov 2005 23:40:17 +0200
Subject: [Wikitech-l] server for nagios
In-Reply-To: <438CCA39.4020204@fas.harvard.edu>
References: <dmigqh$op4$1@sea.gmane.org> <438CCA39.4020204@fas.harvard.edu>
Message-ID: <783512F7-37D5-43EF-8B6B-5F1E9399E6AE@gmail.com>

>
> I might have lost track of the discussion at some point, but could you
> quickly reiterate what problem you're trying to solve with Nagios, and
> why Ganglia proper isn't a good fit if you intend to be collecting
> data from it?

Ganglia is for performance trends. Nagios is for monitoring events.
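[By way of illustration, since Ashar's plugin itself isn't posted in
the thread: a minimal PHP sketch of a Nagios check that reads gmetad's
XML dump through a file cache. The host name, metric and thresholds are
hypothetical, and gmetad's default xml_port of 8651 is assumed.]

#!/usr/bin/php
<?php
// Nagios exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
$cache  = '/tmp/gmetad-cache.xml';
$maxAge = 60; // reuse the cached dump for a minute to spare gmetad

if ( !file_exists( $cache ) || time() - filemtime( $cache ) > $maxAge ) {
    $fp = fsockopen( 'gmetad.example.org', 8651, $errno, $errstr, 5 );
    if ( !$fp ) {
        echo "UNKNOWN: cannot reach gmetad: $errstr\n";
        exit( 3 );
    }
    // gmetad writes its whole XML state dump to anyone who connects
    // (provided the connecting host is in its trusted list).
    $xml = '';
    while ( !feof( $fp ) ) {
        $xml .= fread( $fp, 8192 );
    }
    fclose( $fp );
    file_put_contents( $cache, $xml );
} else {
    $xml = file_get_contents( $cache );
}

// Pull one metric for one host out of the dump.
$re = '/<HOST NAME="web1\.example\.org".*?<METRIC NAME="load_one" VAL="([\d.]+)"/s';
if ( !preg_match( $re, $xml, $m ) ) {
    echo "UNKNOWN: metric not found in gmetad output\n";
    exit( 3 );
}
$load = (float)$m[1];

if ( $load > 20 ) { echo "CRITICAL: load_one=$load\n"; exit( 2 ); }
if ( $load > 10 ) { echo "WARNING: load_one=$load\n"; exit( 1 ); }
echo "OK: load_one=$load\n";
exit( 0 );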
Domas

From krstic at fas.harvard.edu Tue Nov 29 21:49:20 2005
From: krstic at fas.harvard.edu (Ivan Krstic)
Date: Tue, 29 Nov 2005 22:49:20 +0100
Subject: [Wikitech-l] server for nagios
In-Reply-To: <783512F7-37D5-43EF-8B6B-5F1E9399E6AE@gmail.com>
References: <dmigqh$op4$1@sea.gmane.org> <438CCA39.4020204@fas.harvard.edu> <783512F7-37D5-43EF-8B6B-5F1E9399E6AE@gmail.com>
Message-ID: <438CCCE0.8060108@fas.harvard.edu>

Domas Mituzas wrote:
> Ganglia is for performance trends. Nagios is for monitoring events.

The metric collection changes in Ganglia 3.0.0 were made with the
intention of adding a monitoring/alert mechanism to Ganglia mainline in
the short-term future. But if you need a solution now, or plan to
monitor events that Ganglia doesn't know about, then yes, Nagios makes
good sense.

--
Ivan Krstic <krstic at fas.harvard.edu> | 0x147C722D

From hashar at altern.org Tue Nov 29 23:42:38 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Wed, 30 Nov 2005 00:42:38 +0100
Subject: [Wikitech-l] Re: Image Thumbnails
In-Reply-To: <4b6eb6e40511290758i11efe7dfl247aef836c282106@mail.gmail.com>
References: <4b6eb6e40511290758i11efe7dfl247aef836c282106@mail.gmail.com>
Message-ID: <dmip1f$iq4$1@sea.gmane.org>

Darwin Sadeli wrote:
> Hi,
>
> Currently I'm trying to replicate Wikipedia contents into a local
> system for a research project. I've managed to download the whole
> article database and image dumps, but the image thumbnails are still
> missing. Are the image thumbnails available for download, and if yes,
> where can I download them? Thanks.

Make sure you have:

$wgUseImageResize = true;

Then make sure you have the gd extension in PHP (it's bundled in and
usually enabled). You can also use ImageMagick instead of gd:

$wgUseImageMagick = true;
$wgImageMagickConvertCommand = "/usr/bin/convert";

Note: thumbnails are generated when you view an article.

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From hashar at altern.org Tue Nov 29 23:52:05 2005
From: hashar at altern.org (Ashar Voultoiz)
Date: Wed, 30 Nov 2005 00:52:05 +0100
Subject: [Wikitech-l] Re: RC patrol for ~50 smallest wikipedias
In-Reply-To: <849f98ed0511281742r54d44f5q@mail.gmail.com>
References: <438A12E3.6010002@gmail.com> <438A3B72.7060300@vilerage.us> <438A78E9.1070406@pobox.com> <849f98ed0511272336r7e6d1312w@mail.gmail.com> <dmg633$mab$2@sea.gmane.org> <849f98ed0511281742r54d44f5q@mail.gmail.com>
Message-ID: <dmipj5$je4$1@sea.gmane.org>

Mark Williamson wrote:
> How would it be a big help?
>
> Mark
>
> On 28/11/05, Timwi <timwi at gmx.net> wrote:
>
>>Mark Williamson wrote:
>>
>>>I'm not sure it's necessary. As both Angela and I have already noted
>>>in this thread, there is already a group dedicated to monitoring small
>>>Wikis for spam and vandalism.
>>
>>"Not necessary" is not a reason to not do it. It is a help. And
>>personally, I think this would be a *big* help.

The idea is that we can get the inactive wikis to send their RC feed
into one IRC channel, for example #inactives. Then people can reuse
their bots on that channel, analyse the stream and report warnings.
There is a very good tool written in Java that could be used by the
team: http://en.wikipedia.org/wiki/WP:CDVF

cheers,

--
Ashar Voultoiz - WP++++
http://en.wikipedia.org/wiki/User:Hashar
http://www.livejournal.com/community/wikitech/
IM: hashar at jabber.org ICQ: 15325080

From sbwoodside at yahoo.com Wed Nov 30 02:31:06 2005
From: sbwoodside at yahoo.com (S. Woodside)
Date: Tue, 29 Nov 2005 21:31:06 -0500
Subject: [Wikitech-l] Re: Credit card processing for forthcoming fundraiser
In-Reply-To: <438C9129.4010509@tonal.clara.co.uk>
References: <20051129164743.57635.qmail@web32913.mail.mud.yahoo.com> <438C9129.4010509@tonal.clara.co.uk>
Message-ID: <7602E0E1-3269-407C-B6E3-5564646C136E@yahoo.com>

On Nov 29, 2005, at 12:34 PM, Neil Harris wrote:
> If there's going to be another fundraiser soon, does anyone know if
> any progress has been made towards accepting credit card donations?

You might consider going with an online clearinghouse for donations. I
just did some quick research online and found the following:

http://www.networkforgood.org -- seems to be US-centric but tops in Google
http://www.justgive.org/ -- more international ? 3% cut
http://www.guidestar.org/ - US non-profit database

I imagine that people might be happier, or have more peace of mind,
donating through such a system as opposed to PayPal or whatever.

--simon

--
http://simonwoodside.com

From avarab at gmail.com Wed Nov 30 05:55:54 2005
From: avarab at gmail.com (=?ISO-8859-1?Q?=C6var_Arnfj=F6r=F0_Bjarmason?=)
Date: Wed, 30 Nov 2005 05:55:54 +0000
Subject: [Wikitech-l] Introduction
In-Reply-To: <294859716.20051129103323@gmail.com>
References: <294859716.20051129103323@gmail.com>
Message-ID: <51dd1af80511292155l39ee0c42qbd513fda46674223@mail.gmail.com>

On 11/29/05, normannormal at gmail.com <normannormal at gmail.com> wrote:
> Hello to everyone!
>
> My name's Luis, from Uruguay.
> I am new to the list, and hope I will be able to help in the
> development of the wonderful MediaWiki. I've been looking for some
> tasks in bugzilla to start (any suggestions?)
>
> Also, once I have familiarized myself with the development process, I
> am very interested in the MediaWiki 2 projects and the Complex tasks.
>
> Cheers,
> Luis

Hi Luis, you might want to look at the list of bugs that are open and
are not enhancements in bugzilla:

http://bugzilla.wikimedia.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=MediaWiki&long_desc_type=substring&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&keywords_type=allwords&keywords=&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_severity=blocker&bug_severity=critical&bug_severity=major&bug_severity=normal&bug_severity=minor&bug_severity=trivial&emailassigned_to1=1&emailtype1=substring&email1=&emailassigned_to2=1&emailreporter2=1&emailcc2=1&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=noop&type0-0-0=noop&value0-0-0=

Or just pick something you want to work on and think needs to be done.

From brian0918 at gmail.com Wed Nov 30 09:06:07 2005
From: brian0918 at gmail.com (Brian)
Date: Wed, 30 Nov 2005 04:06:07 -0500
Subject: [Wikitech-l] "Templates used in this page"
Message-ID: <438D6B7F.5040008@case.edu>

Is there a way to move the less useful "Templates used in this page"
section to be below the more useful character-insert and editing
guidelines?

From brion at pobox.com Wed Nov 30 09:16:51 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 30 Nov 2005 01:16:51 -0800
Subject: [Wikitech-l] "Templates used in this page"
In-Reply-To: <438D6B7F.5040008@case.edu>
References: <438D6B7F.5040008@case.edu>
Message-ID: <438D6E03.70308@pobox.com>

Brian wrote:
> Is there a way to move the less useful "Templates used in this page"
> section to be below the more useful character-insert and editing
> guidelines?
Your site? Go in and edit EditPage.php.

Our site? Wait until I finish committing updates to EditPage.php.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051130/0cf45570/attachment.pgp>

From xed2507 at yahoo.co.uk Wed Nov 30 10:32:09 2005
From: xed2507 at yahoo.co.uk (Xed Mac)
Date: Wed, 30 Nov 2005 10:32:09 +0000 (GMT)
Subject: [Wikitech-l] Problem importing using importDump.php
Message-ID: <20051130103209.45457.qmail@web25810.mail.ukl.yahoo.com>

Hi all. Newbie here.

I've set up MediaWiki on OS X using the arcane instructions here:
http://meta.wikimedia.org/wiki/Help:Running_MediaWiki_on_Mac_OS_X

The Wiki works fine. Tested it by creating pages and uploading images
etc.

Then I wanted to import articles from Wikipedia. So I downloaded the
latest "articles only" dump (20051127_pages_articles.xml.bz2), and then
performed a bzip2 -d on it. Then I renamed it as "articles.xml".

Next I tried to import it. I went to my wiki's directory and renamed
AdminSettings.sample as AdminSettings.php, and changed $wgDBadminuser
and $wgDBadminpassword to what I had specified when setting up the wiki
(DB username and DB password on the Site Config screen).

Then, after going to the maintenance directory, I wrote this in the
terminal:

php importDump.php /Users/xed/Desktop/articles.xml

It spits out this error:

<h1><img src='/~xed/testwiki/skins/common/images/wiki.png' style='float:left;margin-right:1em' alt=''>Testpedia has a problem</h1><p><strong>Sorry! This site is experiencing technical difficulties.</strong></p><p>Try waiting a few minutes and reloading.</p><p><small>(Can't contact the database server: Client does not support authentication protocol requested by server; consider upgrading MySQL client (localhost))</small></p>

I thought it might have something to do with the size of the
"articles.xml" file (nearly 4GB), but when I tested it on a single
exported page from Wikipedia, it does the same thing.

I'm running:
MediaWiki: 1.5.2
PHP: 5.0.4 (apache)
MySQL: 5.0.16-standard

Thanks

From dgerard at gmail.com Wed Nov 30 14:51:13 2005
From: dgerard at gmail.com (David Gerard)
Date: Wed, 30 Nov 2005 14:51:13 +0000
Subject: [Wikitech-l] Edits from bayle.wikimedia.org??
Message-ID: <fbad4e140511300651m2dae8981m8cf1e2d8fbd70c7e@mail.gmail.com>

Doing a CheckUser, I saw someone coming from 207.142.131.239. That's
bayle.wikimedia.org ... how does that show up as the IP an edit's
coming from?

- d.

From normannormal at gmail.com Wed Nov 30 15:01:49 2005
From: normannormal at gmail.com (normannormal at gmail.com)
Date: Wed, 30 Nov 2005 12:01:49 -0300
Subject: [Wikitech-l] Introduction
Message-ID: <516585673.20051130120149@gmail.com>

Great!
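[A note on the importDump.php error above: "Client does not support
authentication protocol requested by server" is the well-known MySQL
4.1+ password-hashing change being rejected by an older client library,
such as the mysql extension in that PHP build. The usual cures are to
upgrade the client library, or to reset the wiki's database account to
the old hash format from the mysql shell, e.g.
SET PASSWORD FOR 'wikiuser'@'localhost' = OLD_PASSWORD('...');
where the account name is the one used later in this thread and the
password is elided.]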
Thanks, I'll pick a couple of bugs to start with :)

Luis

From arubin at atl.lmco.com Wed Nov 30 15:34:28 2005
From: arubin at atl.lmco.com (Aron Rubin)
Date: Wed, 30 Nov 2005 10:34:28 -0500
Subject: [Wikitech-l] Special:Math
Message-ID: <438DC684.5010203@atl.lmco.com>

== in includes/SpecialPage.php ==

'Math' => new IncludableSpecialPage( 'Math' ),

== in includes/SpecialMath.php ==

<?php
/**
 * @package MediaWiki
 * @subpackage SpecialPage
 */

/**
 * Entry point : initialise variables and call subfunctions.
 * @param string $par Becomes "FOO" when called like Special:Math/op,a,b (default NULL)
 */
function wfSpecialMath( $par=NULL, $specialPage ) {
	global $indexMaxperpage, $toplevelMaxperpage, $wgRequest, $wgOut, $wgContLang;

	# Config
	$indexMaxperpage = 960;
	$toplevelMaxperpage = 50;

	# GET values
	$oprnd_a = $wgRequest->getVal( 'a' );
	$oprnd_b = $wgRequest->getVal( 'b' );
	$oprtr = $wgRequest->getVal( 'op' ); # the operator is a word like "add", so read it as a string
	if( $par ) {
		list( $oprtr, $oprnd_a, $oprnd_b ) = explode( ",", $par, 3 );
	}

	switch( $oprtr ) {
		case "add":
			$result = $oprnd_a + $oprnd_b;
			break;
		case "sub":
		case "subtract":
			$result = $oprnd_a - $oprnd_b;
			break;
		case "mlt":
		case "multiply":
			$result = $oprnd_a * $oprnd_b;
			break;
		case "div":
		case "divide":
			$result = $oprnd_a / $oprnd_b;
			break;
	}

	if( isset($result) ) {
		//$wgOut->setArticleBodyOnly( true );
		$wgOut->addHtml( $result );
	}
}

== Problem ==

Why does this produce "<p>5</p>" instead of "5" when included in a page
like so: {{Special:Math/add,2,3}}

Aron

From rowan.collins at gmail.com Wed Nov 30 17:13:39 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Wed, 30 Nov 2005 17:13:39 +0000
Subject: [Wikitech-l] Re: Patch: suppress auto-numbering of TOC and honour user prefs
In-Reply-To: <pan.2005.11.29.14.32.41.484141@dodo.com.au>
References: <pan.2005.09.24.13.07.02.452954@dodo.com.au> <pan.2005.11.29.14.32.41.484141@dodo.com.au>
Message-ID: <9f02ca4c0511300913g7daec1a8q@mail.gmail.com>

On 29/11/05, Netocrat <netocrat at dodo.com.au> wrote:
> Here's a new patch that deals solely with suppressing TOC/heading
> auto-numbering when a __NOTOCNUM__ directive is included in the page.
> Same reasoning as originally (e.g. for FAQ pages where deprecated
> answers are removed but new content should not reuse that answer's
> number):

> Let me know if there's a more appropriate place to submit this.

Well, I think the officially preferred way is to attach to a [new] bug
on http://bugzilla.wikimedia.org and add the "patch" and "need-review"
keywords. I don't know for certain that that's any less likely to
disappear into the ether than here, though...

--
Rowan Collins BSc [IMSoP]

From dorozynskij at poczta.onet.pl Wed Nov 30 17:22:26 2005
From: dorozynskij at poczta.onet.pl (=?iso-8859-2?Q?Doro=BFy=F1ski_Janusz?=)
Date: Wed, 30 Nov 2005 18:22:26 +0100
Subject: [Wikitech-l] Problem importing using importDump.php
In-Reply-To: <20051130103209.45457.qmail@web25810.mail.ukl.yahoo.com>
Message-ID: <20051130172258Z1423737-29370+423954@ps2.test.onet.pl>

| -----Original Message-----
| From: wikitech-l-bounces at wikimedia.org
| [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Xed Mac
| Sent: Wednesday, November 30, 2005 11:32 AM
/
| Hi all. Newbie here.

Hi. :-)

You must use mwdumper, not importDump.php. It is more complex - you
need at least two steps if you don't use a pipe - but it is the only
right and quick way for very big wikis. importDump.php still has bugs,
and even when it works, it works as slow as a snail (it's the truth).
Look at http://meta.wikimedia.org/wiki/Data_dumps

Using mwdumper with --order & --format, you must preserve this
sequence. If you write, for example, ... --format=sql:1.5
--output=blabla ..., you will not get .sql but unconverted .xml :-))

Reg, Janusz 'Ency' Dorozynski

From xed2507 at yahoo.co.uk Wed Nov 30 18:14:47 2005
From: xed2507 at yahoo.co.uk (Xed Mac)
Date: Wed, 30 Nov 2005 18:14:47 +0000 (GMT)
Subject: [Wikitech-l] Problem importing using importDump.php
In-Reply-To: <20051130172258Z1423737-29370+423954@ps2.test.onet.pl>
Message-ID: <20051130181447.84192.qmail@web25811.mail.ukl.yahoo.com>

Hmm. I don't really understand. How do I use mwdumper? Double-clicking
it comes up with an error ("The jar file "mwdumper.jar" couldn't be
launched. Check the Console for possible error messages"). I'm almost
completely ignorant about unix stuff, so can you be clearer? Is there
one terminal command which will import the "articles.xml" file that I
mentioned into my Wiki?

--- Dorożyński Janusz <dorozynskij at poczta.onet.pl> wrote:

> | -----Original Message-----
> | From: wikitech-l-bounces at wikimedia.org
> | [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Xed Mac
> | Sent: Wednesday, November 30, 2005 11:32 AM
> /
> | Hi all. Newbie here.
>
> Hi. :-)
>
> You must use mwdumper, not importDump.php. It is more complex - you
> need at least two steps if you don't use a pipe - but it is the only
> right and quick way for very big wikis. importDump.php still has
> bugs, and even when it works, it works as slow as a snail (it's the
> truth).
>
> Look at http://meta.wikimedia.org/wiki/Data_dumps
>
> Using mwdumper with --order & --format, you must preserve this
> sequence. If you write, for example, ... --format=sql:1.5
> --output=blabla ..., you will not get .sql but unconverted .xml :-))
>
> Reg, Janusz 'Ency' Dorozynski
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>

From dorozynskij at poczta.onet.pl Wed Nov 30 19:07:22 2005
From: dorozynskij at poczta.onet.pl (=?iso-8859-2?Q?Doro=BFy=F1ski_Janusz?=)
Date: Wed, 30 Nov 2005 20:07:22 +0100
Subject: [Wikitech-l] Problem importing using importDump.php
In-Reply-To: <20051130181447.84192.qmail@web25811.mail.ukl.yahoo.com>
Message-ID: <20051130190736Z1025159-13241+141077@ps11.test.onet.pl>

| -----Original Message-----
| From: wikitech-l-bounces at wikimedia.org
| [mailto:wikitech-l-bounces at wikimedia.org] On Behalf Of Xed Mac
| Sent: Wednesday, November 30, 2005 7:15 PM
/
| Hmm. I don't really understand. How do I use mwdumper?
| Double-clicking it comes up with an error ("The jar file
| "mwdumper.jar" couldn't be launched. Check the Console for
| possible error messages"). I'm almost completely ignorant
| about unix stuff, so can you be clearer? Is there one
| terminal command which will import the "articles.xml" file
| that I mentioned into my Wiki?

Well, you must have Java 1.5. Read
http://download.wikimedia.org/tools/README.txt . Anybody here can
explain more, but not me - my unix experience is not good enough for me
to really help, sorry. I work with Windows.
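[For reference, the README's recipe reduces to a single pipeline of
roughly this shape - the file, user and database names below are
placeholders, and a later message in this thread shows a real
invocation:

java -jar mwdumper.jar --format=sql:1.5 pages_articles.xml.bz2 | mysql -u wikiuser -p wikidb

mwdumper reads the compressed dump directly, so there is no need to
bunzip2 it first.]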
Reg., J'E'D

From dorozynskij at poczta.onet.pl Wed Nov 30 19:57:27 2005
From: dorozynskij at poczta.onet.pl (=?iso-8859-2?Q?Doro=BFy=F1ski_Janusz?=)
Date: Wed, 30 Nov 2005 20:57:27 +0100
Subject: [Wikitech-l] Problem importing using importDump.php
In-Reply-To: <20051130172258Z1423737-29370+423954@ps2.test.onet.pl>
Message-ID: <20051130195750Z6475777-14835+600141@ps1.test.onet.pl>

| -----Original Message-----
| From: ... Dorożyński Janusz
| Sent: Wednesday, November 30, 2005 6:22 PM
/
| Using mwdumper with --order & --format, you must preserve ...

Of course mwdumper has an --output parameter, not --order. Sorry.

Reg. J'E'D

From fun at thingy.apana.org.au Wed Nov 30 21:47:22 2005
From: fun at thingy.apana.org.au (David Gerard)
Date: Wed, 30 Nov 2005 21:47:22 +0000
Subject: [Wikitech-l] Re: [WikiEN-l] Using Talk pages to influence Google results (Most read US newspaper blasts Wikipedia)
In-Reply-To: <1133368250.438dd3bab9834@webmail.fas.harvard.edu>
References: <438D203F.5090205@gmail.com> <2ed171fb0511292201p4524ef24j1efb1bf37f76f990@mail.gmail.com> <438D63C9.2000201@thingy.apana.org.au> <86a63f350511300103q3f16d7b3j31c5100902fa0454@mail.gmail.com> <E4E66286-1399-4A21-A238-9C11149EB294@ctelco.net> <4cc603b0511300508y6b347740j3031c3388d61bcc2@mail.gmail.com> <CCDBDB77-443E-49D1-B651-17546FDEFF16@ctelco.net> <4cc603b0511300649q78e106e9s43589960009494a3@mail.gmail.com> <1133368250.438dd3bab9834@webmail.fas.harvard.edu>
Message-ID: <438E1DEA.9090804@thingy.apana.org.au>

Would it be a good idea to tell our robots.txt not to index talk pages?

- d.

jkelly at fas.harvard.edu wrote:
> I wasn't aware that Google results were influenced by material on Talk
> pages. If this is true, it explains instances in which I have seen
> anons post some ideological screed in the article, have it removed,
> and then re-post it repeatedly into the article's Talk page. Is this
> actually that effective a tactic for using Wikipedia as a soapbox?
>
> Jason
>
> Quoting slimvirgin at gmail.com:
>
>>On 11/30/05, Fred Bauder <fredbaud at ctelco.net> wrote:
>>
>>>This case Slim Virgin mentions is in arbitration now and a blatant
>>>example of gaming Google by associating the name of the person with a
>>>lot of accusations he has only a marginal connection with ...
>>>At a minimum we need to not allow Google to index our talk pages. We
>>>talk about a lot of things. They may be about information but they
>>>are not encyclopedic.
>>
>>Fred, the case I was referring to isn't the one that's in arbitration,
>>though I know the one you mean, and it's quite similar. I'm starting
>>to wonder whether this is happening a lot: that troublemakers see our
>>talk pages as a sort of Trojan horse. They pretend to be having an
>>innocent conversation designed to sort out the good from the bad
>>material, whereas in fact the discussion is only a vehicle being used
>>to spread the bad stuff, which they know won't survive in our
>>articles.
>>
>>Sarah
>>_______________________________________________
>>WikiEN-l mailing list
>>WikiEN-l at Wikipedia.org
>>To unsubscribe from this mailing list, visit:
>>http://mail.wikipedia.org/mailman/listinfo/wikien-l
>>
>
>
>
>
> _______________________________________________
> WikiEN-l mailing list
> WikiEN-l at Wikipedia.org
> To unsubscribe from this mailing list, visit:
> http://mail.wikipedia.org/mailman/listinfo/wikien-l
>

From mathias.schindler at gmail.com Wed Nov 30 21:53:49 2005
From: mathias.schindler at gmail.com (Mathias Schindler)
Date: Wed, 30 Nov 2005 22:53:49 +0100
Subject: [Wikitech-l] Re: [WikiEN-l] Using Talk pages to influence Google results (Most read US newspaper blasts Wikipedia)
In-Reply-To: <438E1DEA.9090804@thingy.apana.org.au>
References: <438D203F.5090205@gmail.com> <2ed171fb0511292201p4524ef24j1efb1bf37f76f990@mail.gmail.com> <438D63C9.2000201@thingy.apana.org.au> <86a63f350511300103q3f16d7b3j31c5100902fa0454@mail.gmail.com> <E4E66286-1399-4A21-A238-9C11149EB294@ctelco.net> <4cc603b0511300508y6b347740j3031c3388d61bcc2@mail.gmail.com> <CCDBDB77-443E-49D1-B651-17546FDEFF16@ctelco.net> <4cc603b0511300649q78e106e9s43589960009494a3@mail.gmail.com> <1133368250.438dd3bab9834@webmail.fas.harvard.edu> <438E1DEA.9090804@thingy.apana.org.au>
Message-ID: <48502b480511301353n27dfbe17wfb88425e176aa269@mail.gmail.com>

On 11/30/05, David Gerard <fun at thingy.apana.org.au> wrote:
>
> Would it be a good idea to tell our robots.txt not to index talk pages?

Our Google sitemap file has given talk pages a far lower priority than
the actual articles.

From brion at pobox.com Wed Nov 30 22:37:43 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 30 Nov 2005 14:37:43 -0800
Subject: [Wikitech-l] Edits from bayle.wikimedia.org??
In-Reply-To: <fbad4e140511300651m2dae8981m8cf1e2d8fbd70c7e@mail.gmail.com>
References: <fbad4e140511300651m2dae8981m8cf1e2d8fbd70c7e@mail.gmail.com>
Message-ID: <438E29B7.5000009@pobox.com>

David Gerard wrote:
> Doing a CheckUser, I saw someone coming from 207.142.131.239. That's
> bayle.wikimedia.org ... how does that show up as the IP an edit's
> coming from?

Recently, a long time ago? There were known misconfigurations some time
in the past. Details are required, please.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051130/7a9f13ec/attachment.pgp>

From sabine_cretella at yahoo.it Wed Nov 30 23:21:42 2005
From: sabine_cretella at yahoo.it (Sabine Cretella)
Date: Thu, 01 Dec 2005 00:21:42 +0100
Subject: [Wikitech-l] How to connect a wikicities project to a wikipedia?
Message-ID: <438E3406.2070102@yahoo.it>

Let's say a wikicities project has a description of Rome in a certain
language - is there a possibility to insert an interwiki link to that
language version within Wikipedia?

Thanks for any hint!

Ciao, Sabine
From molasses-one at shaw.ca Wed Nov 30 23:40:42 2005
From: molasses-one at shaw.ca (edward molasses)
Date: Wed, 30 Nov 2005 15:40:42 -0800
Subject: [Wikitech-l] switched from 1.4.6 to 1.5: extension now gives error
Message-ID: <438E387A.2060505@shaw.ca>

Hello,

I had been using an extension with MediaWiki 1.4.6 but after installing
it in MediaWiki 1.5, I get the following error when using the extension:

Fatal error: Call to undefined function: fetchobject() in c:\program
files\easyphp1-7\www\latest\extensions\DocumentExport.php on line 460

It appears to be having trouble with the function fetchobject() in
Database.php. But I looked at that function in 1.4.6 and 1.5 and they
both look identical. I was wondering if anyone knew what has changed in
the way that function ( fetchobject() ) is used that might cause this
error? Any help would be much appreciated.

Thanks,
Andrew.

From xed2507 at yahoo.co.uk Wed Nov 30 23:45:27 2005
From: xed2507 at yahoo.co.uk (Xed Mac)
Date: Wed, 30 Nov 2005 23:45:27 +0000 (GMT)
Subject: [Wikitech-l] Problem importing using importDump.php
In-Reply-To: <20051130195750Z6475777-14835+600141@ps1.test.onet.pl>
Message-ID: <20051130234527.9479.qmail@web25803.mail.ukl.yahoo.com>

Thanks, I'll try to get mwdumper working for me. (Google shows 0
results for: "OS X" mwdumper.jar !)

--- Dorożyński Janusz <dorozynskij at poczta.onet.pl> wrote:

> | -----Original Message-----
> | From: ... Dorożyński Janusz
> | Sent: Wednesday, November 30, 2005 6:22 PM
> /
> | Using mwdumper with --order & --format, you must preserve ...
>
> Of course mwdumper has an --output parameter, not --order. Sorry.
>
> Reg. J'E'D
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l at wikimedia.org
> http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>

From brion at pobox.com Wed Nov 30 23:47:52 2005
From: brion at pobox.com (Brion Vibber)
Date: Wed, 30 Nov 2005 15:47:52 -0800
Subject: [Wikitech-l] switched from 1.4.6 to 1.5: extension now gives error
In-Reply-To: <438E387A.2060505@shaw.ca>
References: <438E387A.2060505@shaw.ca>
Message-ID: <438E3A28.6070006@pobox.com>

edward molasses wrote:
> I had been using an extension with MediaWiki 1.4.6 but after installing
> it in MediaWiki 1.5, I get the following error when using the extension:
>
> Fatal error: Call to undefined function: fetchobject() in c:\program
> files\easyphp1-7\www\latest\extensions\DocumentExport.php on line 460
>
> It appears to be having trouble with the function fetchobject() in
> Database.php. But I looked at that function in 1.4.6 and 1.5 and they
> both look identical. I was wondering if anyone knew what has changed in
> the way that function ( fetchobject() ) is used that might cause this
> error? Any help would be much appreciated.

You've probably got an object that isn't a database object. Make sure
you retrieved it from wfGetDB() and that it's not encountering some
error.

-- brion vibber (brion @ pobox.com)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 249 bytes
Desc: OpenPGP digital signature
URL: <http://lists.wikimedia.org/pipermail/wikitech-l/attachments/20051130/188c5de8/attachment.pgp>

From rowan.collins at gmail.com Wed Nov 30 23:54:17 2005
From: rowan.collins at gmail.com (Rowan Collins)
Date: Wed, 30 Nov 2005 23:54:17 +0000
Subject: [Wikitech-l] How to connect a wikicities project to a wikipedia?
In-Reply-To: <438E3406.2070102@yahoo.it>
References: <438E3406.2070102@yahoo.it>
Message-ID: <9f02ca4c0511301554od7da1d1p@mail.gmail.com>

On 30/11/05, Sabine Cretella <sabine_cretella at yahoo.it> wrote:
> Let's say a wikicities project has a description of Rome in a certain
> language - is there a possibility to insert an interwiki link to that
> language version within Wikipedia?

Typing [[Wikipedia:de:Rome]] should give you a link to
http://de.wikipedia.org/wiki/Rome (redirecting magically via
http://en.wikipedia.org/wiki/de:Rome), if that's the kind of thing you
were after. Or did you mean the other way around, in which case you
want something like [[Wikicities:c:Ancientcoins:Rome]] (note the extra
"c:" to activate the magic redirection)

--
Rowan Collins BSc [IMSoP]

From xed2507 at yahoo.co.uk Wed Nov 30 23:57:21 2005
From: xed2507 at yahoo.co.uk (Xed Mac)
Date: Wed, 30 Nov 2005 23:57:21 +0000 (GMT)
Subject: [Wikitech-l] Using mwdumper.jar on OS X
Message-ID: <20051130235721.50280.qmail@web25810.mail.ukl.yahoo.com>

(was "Problem importing using importDump.php")

Having given up on importDump.php, I'm now trying to import all
Wikipedia articles using mwdumper.jar. The command I typed into the
terminal was:

/System/Library/Frameworks/JavaVM.framework/Versions/1.5/Commands/java -jar /Users/xed/Desktop/mwdumper.jar --format=sql:1.5 /Users/xed/Desktop/20051127_pages_articles.xml.bz2 | /usr/local/mysql-standard-5.0.16-osx10.4-powerpc/bin/mysql -u wikiuser -p wikidb

After entering the database password it came up with this error:

ERROR 1146 (42S02) at line 31: Table 'wikidb.text' doesn't exist

...and then immediately started doing this:

1,000 pages (49.717/sec), 1,000 revs (49.717/sec)
2,000 pages (75.106/sec), 2,000 revs (75.106/sec)
3,000 pages (92.308/sec), 3,000 revs (92.308/sec)
4,000 pages (97.611/sec), 4,000 revs (97.611/sec)
...etc... now it's up to

408,000 pages (306.929/sec), 408,000 revs (306.929/sec)

...which is nice. But what is it doing? Is it actually going into the
Wiki I set up? Looking at the "All pages" Special page in my Wiki I
just see the couple of pages that I had already made. Are there any
other steps I have to take once mwdumper has done its job?

Thanks

X
From molasses-one at shaw.ca Wed Nov 30 23:59:57 2005
From: molasses-one at shaw.ca (edward molasses)
Date: Wed, 30 Nov 2005 15:59:57 -0800
Subject: [Wikitech-l] switched from 1.4.6 to 1.5: extension now gives error
In-Reply-To: <438E3A28.6070006@pobox.com>
References: <438E387A.2060505@shaw.ca> <438E3A28.6070006@pobox.com>
Message-ID: <438E3CFD.70603@shaw.ca>

Brion Vibber wrote:

>edward molasses wrote:
>
>>I had been using an extension with MediaWiki 1.4.6 but after installing
>>it in MediaWiki 1.5, I get the following error when using the extension:
>>
>>Fatal error: Call to undefined function: fetchobject() in c:\program
>>files\easyphp1-7\www\latest\extensions\DocumentExport.php on line 460
>>
>>It appears to be having trouble with the function fetchobject() in
>>Database.php. But I looked at that function in 1.4.6 and 1.5 and they
>>both look identical. I was wondering if anyone knew what has changed in
>>the way that function ( fetchobject() ) is used that might cause this
>>error? Any help would be much appreciated.
>
>You've probably got an object that isn't a database object. Make sure
>you retrieved it from wfGetDB() and that it's not encountering some
>error.
>
>-- brion vibber (brion @ pobox.com)
>
>------------------------------------------------------------------------
>
>_______________________________________________
>Wikitech-l mailing list
>Wikitech-l at wikimedia.org
>http://mail.wikipedia.org/mailman/listinfo/wikitech-l
>

Thanks for the help! I will see if I can hunt down the problem...
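[By way of illustration of Brion's advice: a minimal sketch of the
expected pattern in a MediaWiki 1.5 extension -- get the handle from
wfGetDB() and call fetchObject() on that same handle. The table, fields
and caller name below are just for illustration, not taken from
DocumentExport.php.]

<?php
$dbr = wfGetDB( DB_SLAVE );            // read connection from the pool
$res = $dbr->select(
    'page',                            // table
    array( 'page_id', 'page_title' ),  // fields to fetch
    array( 'page_namespace' => 0 ),    // conditions
    'DocumentExport::fetchPages'       // caller name, for profiling
);
while ( $row = $dbr->fetchObject( $res ) ) {
    // ... use $row->page_id and $row->page_title here ...
}
$dbr->freeResult( $res );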