An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r19840).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* TODO: Link containing double-single-quotes '' (bug 4598) [Has never passed]
* TODO: message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* TODO: message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* TODO: HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML nested bullet list, open tags (bug 5497) [Has never passed]
* TODO: HTML nested ordered list, open tags (bug 5497) [Has never passed]
* TODO: Inline HTML vs wiki block nesting [Has never passed]
* TODO: Mixing markup for italics and bold [Has never passed]
* TODO: 5 quotes, code coverage +1 line [Has never passed]
* TODO: dt/dd/dl test [Has never passed]
* TODO: Images with the "|" character in the comment [Has never passed]
* TODO: Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* TODO: Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
Hi,
I'm thinking about writing an extension, and I haven't really done
one totally from scratch yet, so please forgive the waste of
bandwidth if I'm missing something in the documentation, and the
length of this message.
Here's what I want to do and why. I'm running some wikis related to
biology/genomics. We use Cite.php, and I'd like to automatically
create pages for the references cited via Cite.php.
I've already modified Cite so that if someone enters "PMID:<id
number>" as either the input text or the name= part, a block of code
(essentially an extension of Cite where I've added a "hook" in
Cite.php) goes to Pubmed, retrieves the detailed information about
the citation and substitutes the reference text.
For example, if you have
<ref name='PMID:2167309'/>
as the third citation, the reference that shows up is
3. ↑ Rathod PK & Khatri A (1990) Synthesis and antiproliferative
activity of threo-5-fluoro-L-dihydroorotate. J Biol Chem 265:14242-9
PMID:2167311
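(A rough, untested sketch of what the PMID lookup inside such a hook could
look like, in plain PHP against NCBI's eUtils efetch service; the function
name and the exact citation formatting here are invented for illustration,
not taken from the actual modified Cite.php:)

function pmidToRefText( $pmid ) {
	# Fetch the PubMed record as XML via NCBI eUtils (requires
	# allow_url_fopen); caching and error handling are omitted.
	$url = 'http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'
		. '?db=pubmed&retmode=xml&id=' . urlencode( $pmid );
	$xml = @file_get_contents( $url );
	if ( $xml === false ) {
		return "PMID:$pmid"; # fall back to the bare identifier
	}
	$doc = simplexml_load_string( $xml );
	$art = $doc->PubmedArticle->MedlineCitation->Article;
	$authors = array();
	foreach ( $art->AuthorList->Author as $a ) {
		$authors[] = $a->LastName . ' ' . $a->Initials;
	}
	$year = (string)$art->Journal->JournalIssue->PubDate->Year;
	return implode( ' & ', $authors ) . " ($year) " . $art->ArticleTitle . ' '
		. $art->Journal->ISOAbbreviation . " PMID:$pmid";
}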
I like this because it reduces the markup clutter that is a common
criticism of Cite. The next thing I want to do is change the external link to
Pubmed to an internal link to a page in the wiki, where users can add
commentary about the reference. These pages will be stubbed with
more information from Pubmed, including a template, the abstract, and
the reference. I'm thinking of two possible strategies:
1) Run something that creates the page (if needed) when the parser
renders a page with a <ref> tag.
2) Run something that creates the page (if needed) when a user clicks
on the link in the references section.
These aren't mutually exclusive, of course. What I like about #2 is
that I don't create pages unless someone actually wants to look at
them (upon reflection, this may create pages when search engines hit
the links, but that may be OK). More importantly, I think this can be
done so that the page will be created if someone searches for the
reference and the citation doesn't already exist.
The way I'm thinking of doing this is to hook into AlternateEdit and
branch off to something that grabs the desired template from the
templates namespace, populates the wikitext with data, saves it, and
redirects to the saved page. Questions:
Q1) Am I nuts? (default=true) Is there anything that I'm forgetting
that will make this blow up?
Q2) Is that the right place to hook?
Q3) What's the best way to do the create/save/redirect step?
Q4) Has anyone already done something like this that I can adapt?
Q5) Would this be useful to anyone else?
For Q2 and Q3, I don't want the user to see the edit form. I want it
to look like the page was there all along. For Q3 I have a kludgy
way to do it: create the page as XML and run maintenance/importDump.
There must be a better way, right? Thanks for any advice.
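(For what it's worth, a minimal, untested sketch of what such an
AlternateEdit handler could look like. Article::doEdit() and the EDIT_NEW
flag exist in current versions; the stub-building helper and the PMID
page-naming convention below are purely hypothetical:)

$wgHooks['AlternateEdit'][] = 'efCreateReferenceStub';

function efCreateReferenceStub( $editPage ) {
	global $wgOut;
	$title = $editPage->mArticle->getTitle();
	# Only intercept reference pages that don't exist yet; everything
	# else gets the normal edit form.
	if ( $title->exists() || !preg_match( '/^PMID:\d+$/', $title->getText() ) ) {
		return true;
	}
	$text = buildStubFromPubmed( $title->getText() ); # hypothetical helper
	$article = new Article( $title );
	$article->doEdit( $text, 'Creating reference stub from PubMed', EDIT_NEW );
	$wgOut->redirect( $title->getFullURL() );
	return false; # returning false skips the edit form
}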
Jim
=====================================
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r19827).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* TODO: Link containing double-single-quotes '' (bug 4598) [Has never passed]
* TODO: message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* TODO: message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* TODO: HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML nested bullet list, open tags (bug 5497) [Has never passed]
* TODO: HTML nested ordered list, open tags (bug 5497) [Has never passed]
* TODO: Inline HTML vs wiki block nesting [Has never passed]
* TODO: Mixing markup for italics and bold [Has never passed]
* TODO: 5 quotes, code coverage +1 line [Has never passed]
* TODO: dt/dd/dl test [Has never passed]
* TODO: Images with the "|" character in the comment [Has never passed]
* TODO: Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* TODO: Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
Hi all
I don't know if this is the right place to ask, but I'm trying anyway...
I've tried to install a Wikipedia dump from download.wikipedia.org on my PC:
- I installed mediawiki-1.9.0rc2 on PHP 5.1.6 and MySQL 4.1.22
- I downloaded itwiki-20061021-pages-meta-history.xml.bz2 (I was interested in keeping the history; the 20061021 dump was made from MW 1.9) and imported it into MySQL with mwdumper.jar.
Everything seems to have finished correctly and I can see all the pages, but almost every page contains a lot of spurious characters and text, like "{{#if:" followed by what appear to be interwiki links or something similar.
What have I missed in the installation? Some option in LocalSettings.php?
I also had this problem with the older dump...
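(One likely cause, assuming the templates in the dump rely on the
ParserFunctions extension: "{{#if:" is parser-function syntax from that
extension, and it is left unexpanded as plain text unless the extension is
installed and loaded from LocalSettings.php, roughly like this:)

require_once( "$IP/extensions/ParserFunctions/ParserFunctions.php" );

The dump import itself does not pull in any extensions, so anything the
wiki's templates depend on has to be installed separately.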
Thanks,
--
E.Richiardone
Linux.Studenti staff
http://linux.studenti.polito.it
An automated run of parserTests.php showed the following failures:
This is MediaWiki version 1.10alpha (r19811).
Reading tests from "maintenance/parserTests.txt"...
Reading tests from "extensions/Cite/citeParserTests.txt"...
Reading tests from "extensions/Poem/poemParserTests.txt"...
3 previously failing test(s) now PASSING! :)
* Blank ref followed by ref with content [Fixed between 06-Feb-2007 08:15:27, 1.10alpha (r19802) and 07-Feb-2007 08:15:18, 1.10alpha (r19811)]
* Regression: non-blank ref "0" followed by ref with content [Fixed between 06-Feb-2007 08:15:27, 1.10alpha (r19802) and 07-Feb-2007 08:15:18, 1.10alpha (r19811)]
* Regression sanity check: non-blank ref "1" followed by ref with content [Fixed between 06-Feb-2007 08:15:27, 1.10alpha (r19802) and 07-Feb-2007 08:15:18, 1.10alpha (r19811)]
18 still FAILING test(s) :(
* URL-encoding in URL functions (single parameter) [Has never passed]
* URL-encoding in URL functions (multiple parameters) [Has never passed]
* TODO: Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html) [Has never passed]
* TODO: Link containing double-single-quotes '' (bug 4598) [Has never passed]
* TODO: message transform: <noinclude> in transcluded template (bug 4926) [Has never passed]
* TODO: message transform: <onlyinclude> in transcluded template (bug 4926) [Has never passed]
* BUG 1887, part 2: A <math> with a thumbnail- math enabled [Has never passed]
* TODO: HTML bullet list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML ordered list, unclosed tags (bug 5497) [Has never passed]
* TODO: HTML nested bullet list, open tags (bug 5497) [Has never passed]
* TODO: HTML nested ordered list, open tags (bug 5497) [Has never passed]
* TODO: Inline HTML vs wiki block nesting [Has never passed]
* TODO: Mixing markup for italics and bold [Has never passed]
* TODO: 5 quotes, code coverage +1 line [Has never passed]
* TODO: dt/dd/dl test [Has never passed]
* TODO: Images with the "|" character in the comment [Has never passed]
* TODO: Parents of subpages, two levels up, without trailing slash or name. [Has never passed]
* TODO: Parents of subpages, two levels up, with lots of extra trailing slashes. [Has never passed]
Passed 493 of 511 tests (96.48%)... 18 tests failed!
Hello,
I have investigated the history of the Low Saxon (nds) Wikipedia a bit. It was
updated to Phase III software in January 2004. Apparently lots of edits got
lost in this process (even the main page vanished). Looking at the statistics
under http://stats.wikimedia.org/EN/ChartsWikipediaNDS.htm you can see that
there were only a few edits in the six months before the conversion. But
after the conversion the edit numbers went up abruptly. Is this only an
effect of edits lost during this time?
Additionally, let me ask: why did the conversion script cause so many lost
edits? Phase III had already been in use by other wikis for six months, so
why was the conversion script still buggy in January 2004 when it was used
for the Low Saxon Wikipedia?
Aren't there any dumps or archived versions made before the conversion?
It would be nice if you could enlighten me a bit or point me to a page where
I can find information about this.
Thanks and
Schöne Gröten
Slomox
Marcus Buck
I've seen similar stuff as a bash.org moderator. It's a spambot.
Andrew Garrett
(werdna)
On 06/02/07, Mark Clements <gmane(a)kennel17.co.uk> wrote:
> On mediawiki.org we quite often seem to get new pages created similar to
> the following:
>
> Page name: /w/w/index.php?title=Extension:Guestbook/w/w/index.php
> Page text: Dear web-master ! I looked your site and I want to say that yor
> very well made it .All information on this site is represented for users. A
> site is made professionally. So to hold !
>
> The similarities are that the page titles always contain
> index.php?title=xx (with various depths of /w or /wiki) and that the
> message is congratulating us on our site. The user is always an anon, but
> without being able to search the deleted pages I don't know whether the IP
> address is always the same. There never seems to be any link spam or any
> of the other common vandalism/spamming traits in the page text.
>
> Have any other wikis experienced this? Some kind of spambot gone wrong, or
> a mischievous repeat visitor? It's happened too often for me to think it's
> a genuine congratulatory comment posted by someone with a screwy browser...
>
> Is there anything we can do about it?
>
> - Mark Clements (HappyDog)
On 2/5/07, aaron(a)svn.wikimedia.org <aaron(a)svn.wikimedia.org> wrote:
> Revision: 19797
> Author: aaron
> Date: 2007-02-05 15:26:26 -0800 (Mon, 05 Feb 2007)
> . . .
>
> + /**
> + * Fetch revision text if it's available to THIS user
> + * @return string
> + */
> + function revText() {
> + if( !$this->userCan( self::DELETED_TEXT ) ) {
> + return "";
> + } else {
> + return $this->getRawText();
> + }
> + }
1) You should try to mark any new functions or variables as private,
public, or protected as appropriate. This was probably intended to be
public.
2) As a more general comment, userCan should probably be deprecated in
favor of a method that works for non-current users too. Globals
should be avoided where possible, at the very least in lower-level
functions.
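A rough illustration of both points (not actual committed code; the field
and permission names in the second function are guesses for the sake of the
example, not the real Revision internals):

	// inside the Revision class

	/**
	 * Fetch revision text if it's available to THIS user
	 * @return string
	 */
	public function revText() {
		if( !$this->userCan( self::DELETED_TEXT ) ) {
			return "";
		}
		return $this->getRawText();
	}

	/**
	 * Hypothetical per-user variant: takes the user to check as a
	 * parameter instead of relying on the $wgUser global.
	 */
	public function userCanForUser( $field, $user ) {
		if( ( $this->mDeleted & $field ) != $field ) {
			return true; // nothing is hidden in this field
		}
		$permission = ( $field & self::DELETED_RESTRICTED )
			? 'hiderevision' : 'deleterevision';
		return $user->isAllowed( $permission );
	}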