Could someone please rename Special:OldReviewedPages to
Special:PendingChanges?
For starters, the current name is wrong: the page actually lists old
unreviewed pages, not old reviewed pages. Special:PendingChanges would have
better name recognition for the feature it implements, and would do what it
says on the box: list changes that are pending.
Sorry, I forgot a subject line; reposting the previous message:
I would like to extend the syntax of the <ref> tag (Cite extension), in
order to deal with footnotes that are spread across several transcluded
pages. Since the Cite extension is widely used, I figured I had better ask
here first.
Here is an illustration of the problem:
http://en.wikisource.org/wiki/Page:Robert_the_Bruce_and_the_struggle_for_Sc…
At the bottom of the scan you can see the second half of a footnote.
That footnote begins on the previous page:
http://en.wikisource.org/wiki/Page:Robert_the_Bruce_and_the_struggle_for_Sc…
Wikisourcers currently have no clean way to deal with these cases. I have
written a patch for this (the code is here:
http://dpaste.org/QOMH/ ). This patch extends the "ref" syntax by adding
a "follow" parameter, like this:
<ref follow="foo">bar</ref>
After two pages are transcluded, the wikitext passed to the parser will
look like this:
blah blah blah
blah blah blah<ref name="note1">beginning of note 1</ref>
blah blah blah
blah blah blah
blah blah blah<ref follow="note1">end of note</ref>
blah blah blah
This wikitext is rendered as a single footnote, located in the text at
the position of the parent <ref>. If the parent <ref> is not found (as
is the case when you render only the second page), then the text inside
the tag is rendered at the beginning of the list of references, with no
number and no link.
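In pseudo-code, the behaviour is roughly this (a simplified sketch of the
idea, not the actual patch; function and variable names are illustrative):

// Simplified sketch, not the actual patch; names are illustrative.
// $refs holds the references collected so far by Cite, keyed by name.
function handleFollowRef( array &$refs, array &$orphans, $follow, $text ) {
	if ( isset( $refs[$follow] ) ) {
		// The parent <ref name="..."> was seen on an earlier transcluded
		// page: append this fragment to the parent note's text.
		$refs[$follow]['text'] .= ' ' . $text;
	} else {
		// Parent not present (e.g. only the second page is rendered):
		// keep the fragment so it can be shown at the top of the
		// reference list, with no number and no backlink.
		$orphans[] = $text;
	}
}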
Does this make sense?
Thomas
Hey,
I'm looking for a way to change the values passed to a parser function when
it is executed, and have that change written back into the wikitext. The
idea is to turn the addresses in the Maps extension's parser functions into
coordinates, so the geocoding doesn't need to happen every time the parser
function is executed.
Example:
You enter:
{{display_map:foobar}}
Then save the page, and when you edit, it shows:
{{display_map:93.42, 12.34}}
What hook would be suitable for such a thing?
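One possibility would be to rewrite the wikitext at save time, roughly along
these lines (a rough sketch only; the hook name and full parameter list need
checking against the MediaWiki version in use, and the regex and the
geocoding helper are purely illustrative):

$wgHooks['ArticleSave'][] = 'efMapsGeocodeOnSave';

// Rough sketch: rewrite addresses inside {{display_map:...}} calls into
// coordinates before the text is saved, so later parses can skip geocoding.
// (Only the first three hook parameters are shown; the real hook passes more.)
function efMapsGeocodeOnSave( $article, $user, &$text ) {
	$text = preg_replace_callback(
		'/\{\{display_map:([^}|]+)/',
		'efMapsReplaceAddress',
		$text
	);
	return true; // let the save proceed as normal
}

function efMapsReplaceAddress( $matches ) {
	// efMapsGeocodeAddress() is a hypothetical geocoding helper.
	$coords = efMapsGeocodeAddress( trim( $matches[1] ) );
	return $coords === false ? $matches[0] : '{{display_map:' . $coords;
}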
Cheers
--
Jeroen De Dauw
* http://blog.bn2vs.com
* http://wiki.bn2vs.com
Don't panic. Don't be evil. 50 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69
66 65!
--
Tim Starling wrote:
> You don't need to store the original passwords in a recoverable form
> in order to rehash them. You can just apply extra hashing to the old
> hash. This is how the A->B transition worked, and it's how the B->C
> transition should work too, unless someone knows of some kind of
> cryptographic problem with it. It's a convenient method because it
> saves the cost of underground vaults, with no loss in security.
In that case you could always discard the private portion of the key-pair to
produce a strictly "one-way" function. And at least with this scheme you
always do have the option of moving to 'C' regardless of whether it can
accept the end-products of B as inputs. Plus I would wager that asymmetric
ciphers will stand up to attacks far longer than most hashing functions.
It's been said (e.g. [1]) that hashing passwords with two rounds of
MD5 is basically a waste of time these days, because brute-forcing
even relatively long passwords is now feasible with cheap hardware.
Indeed, you can buy software [2] which claims to be able to check 90
million MediaWiki passwords per second on an ordinary GPU. That would
let you crack a random 8-letter password in 20 minutes.
So the time has probably come for us to come up with a "C" type
password hashing scheme, to replace the B-type hashes that we use at
the moment. I've been thinking along the lines of the following goals:
1. Future-proof: should be adaptable to faster hardware.
2. Upgradeable: it should be possible to compute the C-type hash from
the B-type hash, to allow upgrades without bothering users.
3. Efficient in PHP, with default configure options.
4. MediaWiki-specific, so that generic software can't be used to crack
our hashes.
The problem with the standard key strengthening algorithms, e.g.
PBKDF1, is that they are not efficient in PHP. We don't want a C
implementation of our scheme to be orders of magnitude faster than our
PHP implementation, because that would allow brute-forcing to be more
feasible than is necessary.
The idea I came up with is to hash the output of str_repeat(). This
increases the number of rounds of the compression function, while
avoiding tight loops in PHP code.
PHP's hash extension has been available by default since PHP 5.1.2,
and we can always fall back to using B-type hashes if it's explicitly
disabled. The WHIRLPOOL hash is supported. It has no patent or
copyright restrictions so it's not going to be yanked out of Debian or
PHP for legal reasons. It has a 512-bit block size, the largest of any
hash function available in PHP, and its security goals state that it
can be truncated without compromising its properties.
My proposed hash function is a B-type MD5 salted hash, which is then
further hashed with a configurable number of invocations of WHIRLPOOL,
with a 256-bit substring taken from a MediaWiki-specific location. The
input to each WHIRLPOOL operation is expanded by a factor of 100 with
str_repeat().
The number of WHIRLPOOL iterations is specified in the output string
as a base-2 logarithm (whimsically padded out to 3 decimal digits to
allow for future universe-sized computers). This number can be
upgraded by taking the hash part of the output and applying more
rounds to it. A count of 2^7 = 128 gives a time of 55ms on my laptop,
and 12ms on one of our servers, so a reasonable default is probably
2^6 or 2^7.
Demo code: http://p.defau.lt/?udYa5CYhHFrgk4SBFiTpGA
Typical output:
:C:007:187aabf399e25aa1:9441ccffe8f1afd8c277f4d914ce03c6fcfe157457596709d846ff832022b037
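In rough PHP terms, the scheme described above looks something like the
following (a sketch only, not the demo code; the str_repeat() factor, the
substring offset and the salt handling are illustrative):

// Sketch of the proposed :C: hash, following the description above.
function wfCryptPasswordC( $password, $salt, $logRounds ) {
	// Start from the existing B-type salted MD5 hash.
	$hash = md5( $salt . '-' . md5( $password ) );

	// 2^$logRounds WHIRLPOOL invocations; str_repeat() expands the input
	// so the work goes into the compression function rather than into a
	// tight PHP loop.
	$rounds = pow( 2, $logRounds );
	for ( $i = 0; $i < $rounds; $i++ ) {
		$full = hash( 'whirlpool', str_repeat( $hash, 100 ) );
		// Keep a 256-bit (64 hex digit) substring of the 512-bit output,
		// taken from a MediaWiki-specific offset.
		$hash = substr( $full, 32, 64 );
	}

	// The iteration count is stored as a base-2 logarithm, padded to 3 digits.
	return sprintf( ':C:%03d:%s:%s', $logRounds, $salt, $hash );
}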
-- Tim Starling
[1] <http://www.theregister.co.uk/2010/08/16/password_security_analysis/>
[2] http://www.insidepro.com/eng/egb.shtml
Hi guys,
At the moment we are discussing the opportunity to create a full-scale,
true WYSIWYG client for MediaWiki. We already have a technology that should
allow us to implement it with good quality and fairly quickly.
Unfortunately, we are not sure whether there is a real need for, or interest
in, such a client in the MediaWiki world, nor what the actual needs of
MediaWiki users are. So we decided to write to this list. Any feedback or
suggestions will be very helpful.
P.S. A screencast demonstrating our experimental client for the Trac wiki:
http://www.screencast.com/t/MDkzYzM4
Regards,
Pavel
Vector is still a miserable failure for mobile phone users.
Is there any timescale for this being fixed, or, at the very least, for
graceful degradation being put into place?
Latest bug report (from a friend in a Facebook conversation):
"mazing. conservapedia still works on my mobile, wikipedia doesn't. "
"htc touch pro 2, windows mobile 6.1 pro, using internet explorer
(because opera is a steaming pile of shit). not sure what version of
IE...
yeah, wikipedia used to work marvelously, i've completely stopped
using it since the update - i only... ever used it to look things up
in the pub anyways, anywhere else i'd need better cites :P"
Please. Save our readers from having to use Conservapedia instead,
just because Monobook works and Vector doesn't!
- d.
Hey all. This message is primarily directed at those students involved with
this year's MediaWiki Google Summer of Code projects.
Firstly, I hope this email finds you and your projects well. As I recall,
the development phase is just ending, and, now that you have a little more
time, it would be great to give your projects some publicity in front of a
wider Wikimedia audience (all the projects appear to be applicable to at
least some WMF sites; forgive me if I'm wrong).
I write for *The Signpost* (specifically its weekly Technology Report, e.g. today's [1]).
If you're not familiar with it, it has a fairly sizeable readership,
particularly on the English Wikipedia. I'm confident many of the Tech Report
readers would be intrigued to know what you've been doing, how much success
you've had, and what it might mean for Wikimedia sites, as explained in your
own words (just a couple of paragraphs and a thumbnail where applicable
would be more than enough).
If you're interested, and I hope you will be, you can reply directly to the
list, reply to me, or reach me via my English Wikipedia talk page.
Thanks,
Jarry1250
p.s. I have okayed this with RobLa.
[1]
http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2010-08-16/Techno…
Hi all.
I have wanted to push RC messages via XMPP for a long time. I now have a working
demo on the toolserver: join enwiki@conference.jabber.toolserver.org with any
Jabber client to see the human-readable part.
The demo works by polling the API; for production use,
<http://www.mediawiki.org/wiki/Extension:XMLRC> should be enabled on the live sites.
The architecture is similar to the one used for the IRC channels: MediaWiki
emits UDP packets (in the case of XMLRC, containing XML - the same <rc> tags you
would get from the API). The packets are received by a standalone bridge process
(written in Python) that multiplexes the messages into the appropriate channels
(XMPP MUC rooms, in my case). Details can be found on the extension page.
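To give a feel for the wire format, here is a minimal sketch of a consumer
reading those UDP packets (the actual bridge is the Python process mentioned
above; the port number, and the assumption of one <rc> element per packet,
are illustrative):

// Minimal consumer for the UDP feed: listen for packets and print a line
// per change. Port and one-<rc>-per-packet are assumptions, not the
// extension's actual defaults.
$sock = socket_create( AF_INET, SOCK_DGRAM, SOL_UDP );
socket_bind( $sock, '0.0.0.0', 9390 );

while ( true ) {
	socket_recvfrom( $sock, $packet, 65535, 0, $from, $port );
	$rc = simplexml_load_string( $packet );
	if ( $rc === false ) {
		continue; // ignore anything that isn't well-formed XML
	}
	// The packet carries the same <rc> element the API would return, so
	// its attributes are available directly.
	echo $rc['title'] . ' edited by ' . $rc['user'] . "\n";
}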
I have also written a small client library that provides convenient access to the
RC properties enclosed in the XMPP message. See the extension page for links.
So, what do you think? What would it take to get this live?
-- daniel
PS: relevant tracking bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=17450