Hello,
I'm trying to add a new JavaScript file to the header block. I want to use the
function "addScript(...)" of the file "OutputPage.php" in an extension which
uses the hook "ParserAfterStrip". All of this appears to work: while debugging,
I can see that the given text is stored in the member "$this->mScripts", and I
can read it back using "getScript()".
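For reference, here is roughly what the extension does (a simplified
sketch; the real file and function names differ):

$wgHooks['ParserAfterStrip'][] = 'wfAddMyHeadScript';

function wfAddMyHeadScript( &$parser, &$text, &$stripState ) {
	global $wgOut;
	// Ends up in OutputPage::$mScripts, as I can see in the debugger.
	$wgOut->addScript( '<script type="text/javascript" src="/mytest.js"></script>' );
	return true;
}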
The function "getScript(...)" is only used one time in the function
"outPage()" of the file "SkinTemplate.php" in this way:
$tpl->set('headscripts', $out->getScript());
But this is the last place the script appears; it is NOT added to the
output.
I have no idea how to handle this, since using "addScript(...)" seems to
be the right way for an extension to do it. Please help me find my
problem.
Best regards,
Kai Huener
I just stumbled upon this chat interface to Amazon and the BBC:
Simply add:
chat@insidemessenger.com to your MSN Messenger (or AOL) and start chatting.
Something like this might make a cool way to search Wikipedia -
especially if disambiguation pages could be integrated.
Paul
--
Yellowikis is to Yellow Pages, as Wikipedia is to The Encyclopedia Britannica
Hi All,
I'm trying to come up with a better 404 response for a small personal
homepage site that runs on MediaWiki 1.5.6.
At the moment in the .htaccess I have this:
=============================================
[holt]$ cat .htaccess
ErrorDocument 404 /index.php?title=404_Not_Found
[holt]$
=============================================
And in the "404_Not_Found" article I have this:
=============================================
Whatever you were looking for, it's not here any more.
To find it, please try the [[Main Page|homepage]], or see the
[[Special:Allpages|site map]] below, or use the search option on the
left.
----
{{Special:Allpages}}
=============================================
It's better than nothing, but it's not ideal, for two reasons:
* It also shows the "Display pages starting at:" and "Namespace:"
query parts of the "Special:Allpages" form, and I wish it wouldn't.
* It doesn't search on the terms of the 404 URL. Ideally it would behave
as if the user had searched for the missing page, and include the
"Article title matches" and "Page text matches" sections of the search
results (if any), but again without the "Search in namespaces" form.
So basically:
* A few sentences of blurb explaining that the page is missing.
* Search results based on the URL they requested (only if there are any
search results), without the search form.
* Then a list of Allpages (but without the form).
Without writing a new "Special" page, is there any way to do something
like this? Or would it have to be done via a special page? Or is there
something that would do one of the two things above (either Allpages
inclusion without the form, or search-results inclusion without the
form) without having to make a new special page?
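For what it's worth, one direction I've toyed with -- purely a sketch,
and assuming Apache sets the REDIRECT_URL variable when it serves an
ErrorDocument script -- is a tiny handler (pointed to by "ErrorDocument
404 /404.php") that forwards the missing title to the wiki's search:
=============================================
<?php
# Hypothetical 404.php -- not part of MediaWiki.
# Apache sets REDIRECT_URL to the path that was originally requested.
$missing = isset( $_SERVER['REDIRECT_URL'] ) ? $_SERVER['REDIRECT_URL'] : '';
$title   = trim( str_replace( '_', ' ', urldecode( basename( $missing ) ) ) );
# Hand the missing title to the wiki's search page.
header( 'Location: /index.php?title=Special:Search&search=' . urlencode( $title ) );
exit;
=============================================
That still shows the search form, of course, so it only covers part of
what I'm after.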
All the best,
Nick.
I already asked about this before, but nobody answered or acted on my request.
Here it is again:
Could somebody with the necessary rights please deactivate the captcha on the
new user registration page on nds.wikipedia.org, as it is deactivated on other
Wikipedias? There is activity on the wiki every day, and we can handle the
incoming spam and vandalism well. There is no need for this kind of barrier.
Thanks
Slomox
Marcus Buck
I've made a new diff extension, called wikidiff2. It uses the same diff algorithm that we've been
using in PHP, ported to C++. I've done some benchmarks on test sets which require lots of word-level
diffs. For the lines "a b c d" -> "a b c e" repeated many times, on srv31, the timings are:
PHP DifferenceEngine: 10230us per line
wikidiff (old C++ extension): 379us per line
wikidiff2 (new C++ extension): 11.5us per line
No doubt the ratios will be different under realistic conditions (here wikidiff2
is roughly 33 times faster than the old extension and nearly 900 times faster
than the PHP code), but I know where I'm putting my money.
We've been using the PHP version lately rather than wikidiff, because wikidiff wasn't finding diffs
as short as people were used to. Because the new extension uses exactly the same algorithm as the
PHP version, there should be no user-visible differences.
wikidiff2 can also be compiled as a standalone executable and used to diff files.
It's not tested to my satisfaction yet, but once it is, I imagine we'll put it live on the Wikimedia
cluster. Eventually I imagine we could ditch the original extension and rename my one to wikidiff,
but I wanted to keep both of them around for the moment so that I could compare them.
-- Tim Starling
An automated run of parserTests.php showed the following failures:
Running test BUG 361: URL within URL, not bracketed... FAILED!
Running test External links: invalid character... FAILED!
Running test Bug 2702: Mismatched <i> and <a> tags are invalid... FAILED!
Running test A table with no data.... FAILED!
Running test A table with nothing but a caption... FAILED!
Running test Link containing "#<" and "#>" % as a hex sequences... FAILED!
Running test Magic links: PMID incorrectly converts space to underscore... FAILED!
Running test Template with thumb image (wiht link in description)... FAILED!
Running test Link to image page... FAILED!
Running test BUG 1887: A ISBN with a thumbnail... FAILED!
Running test BUG 1887: A <math> with a thumbnail... FAILED!
Running test BUG 561: {{/Subpage}}... FAILED!
Running test Simple category... FAILED!
Running test Section headings with TOC... FAILED!
Running test Media link with nasty text... FAILED!
Running test Bug 2095: link with pipe and three closing brackets... FAILED!
Running test Sanitizer: Validating the contents of the id attribute (bug 4515)... FAILED!
Passed 264 of 281 tests (93.95%) FAILED!
I'm replying to this wikipedia-l post on wikitech-l, as it's more relevant
here.
Brion Vibber wrote:
> I'd been waiting on Tim's in-progress code to compare. Apparently there's not
> really anything much of that left (his work mostly transmogrified into the
> templatelinks temple) so I'm poking at Magnus's code now.
Salvatore's moderation feature was implemented in a similar way to
Magnus's, in that it used an extra revision ID field in the page
table to point to the relevant version. Salvatore's code used parameters
passed back to Revision to determine whether page_latest or
page_verified should be used, whereas Magnus's code operated mainly at
the UI level, redirecting to a page with an oldid parameter, IIRC.
Neither of them had the structure required for efficient caching, that
is, page/tag retrieval instead of page/revision retrieval. The basic
problem is that tugela, which we are now using instead of memcached, has
no efficient means for identifying and purging expired keys. In fact at
the moment, this garbage collection is not done at all. To limit the
growth of the cache under these circumstances, it's better to index the
parser cache by page and tag, rather than page and revision ID. I
thought that the best way to implement a tag concept, to merge Magnus's
and Salvatore's features while minimising MySQL index space, would be to
put the tag information in its own table.
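To make the idea concrete, here is a purely illustrative sketch (not code
from either branch, and not the real cache key format) of a page/tag
parser cache key compared to a page/revision one:

// Keyed by page and revision: every new revision leaves a stale key
// behind, which tugela currently never garbage-collects.
$key = "$wgDBname:pcache:page:$pageId:rev:$revId";

// Keyed by page and tag: saving a new "stable" revision simply
// overwrites the existing entry, so nothing is left to expire.
$key = "$wgDBname:pcache:page:$pageId:tag:stable";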
Then there's the problem of template and link colour changes. I posted
to wikitech-l about that before. Magnus's suggestion of storing the
wikitext with the templates expanded at save time is a quite reasonable
solution.
I stopped working on Salvatore's moderation feature when Brion
implemented the semi-protection proposal put forward by the English
Wikipedia. It was quite redundant -- Salvatore's feature was a form of
semi-protection, a more complicated one than the one that the
Wikipedians were supporting. I was even working on integrating it into
the protection UI when Brion rewrote that part of the code. At that
stage I still hadn't addressed the caching issue. So I salvaged what I
could of my code branch (mostly the templatelinks table), and abandoned
the feature. I wasn't interested enough in the stable version feature to
keep working on the backend.
Perhaps the simplest solution at the moment is to put Magnus's feature
live (after the necessary code cleanup), and put up with the lack of
caching for a while. We've still got a bit of spare hardware capacity,
haven't we? The request rate for stable versions should be lower than it
would have been for verified revisions. If I understand it correctly,
stable revisions are not displayed by default, whereas verified revisions
would have been.
-- Tim Starling
Greetings,
Very soon (maybe this afternoon), I'd like to submit a patch to add
OpenID login support to MediaWiki. Dan Libby has already contributed
such a patch:
http://bugzilla.wikimedia.org/show_bug.cgi?id=3060
Our patch (from JanRain, Inc.) is against CVS HEAD, extends Dan
Libby's original modifications, and uses the PHP OpenID library that
we built and maintain at
http://www.openidenabled.com/openid/libraries/php/
Here are some notes from the openid.txt file included in the patch:
- OpenID support works in *addition* to normal wiki logins, including
any external authentication plugin configured by the MediaWiki
administrator. If a username looks like a URL, OpenID auth is
tried; otherwise, the regular authentication rules apply (see the
sketch after these notes).
- If OpenID support cannot be verified (either because the library is
missing or because the store directory can't be initialized -- see
step (3) in Installation), MediaWiki will function normally even if
$wgUseOpenID is set to true.
- The account creation form cannot currently be used to create
accounts with OpenID identity URLs. If you want to create an
account with your OpenID, just log in. The account will be created
automatically.
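To illustrate the URL-vs-username dispatch described in the first note,
here is a rough sketch (simplified; the function names are hypothetical,
not the ones used in the patch):

# If the username looks like a URL, try OpenID; otherwise fall through
# to the normal MediaWiki / external-auth login path.
if ( preg_match( '!^https?://!i', $username ) ) {
	$result = attemptOpenIDLogin( $username );            # hypothetical
} else {
	$result = attemptNormalLogin( $username, $password ); # hypothetical
}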
Any thoughts or concerns? I'm busy preparing some things and then
I'll attach a .diff to the bug ticket above.
Lastly, provided that the patch is accepted at some point, I'll be
happy to be active in supporting and maintaining OpenID support in
MediaWiki.
Thanks!
--
Jonathan Daugherty
JanRain, Inc.
Greetings,
I am a contributor to the en:Wikinews project. For the past few months
we've been working on a country portal for Australia:
http://en.wikinews.org/wiki/Portal:Australia
We would like to start advertising this portal using pamphlets and
posters in the hope of increasing the amount of local original reporting
that we produce. Unfortunately the URL is a little inappropriate for
such a purpose. We were hoping to get this URL:
http://australia.wikinews.org
... to redirect to a multilingual portal page, that would link to the
Australian portal within each Wikinews language project. I've created a
quick example:
http://meta.wikimedia.org/wiki/Australia.wikinews.org_portal
... modeled on the one used at www.wikinews.org.
Any idea where I should go next with this request? We basically want it
to function the same as www.wikinews.org currently does.
Thanks,
Dave.
(Wikinews User:Borofkin)