Hey,
I just got a bug report for the following code:
$realFunction = array( 'OutputPage', 'includeJQuery' );
if ( is_callable( $realFunction ) ) {
	// ...
}
The user is getting a strict warning because includeJQuery is not
static. Now I'm wondering why this check is done in such an odd way to
begin with; this would be a lot simpler:
if ( method_exists( $wgOut, 'includeJQuery' ) ) { ... }
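A quick sketch of the difference, assuming $wgOut is the usual OutputPage
instance (the array( 'ClassName', 'method' ) form treats the method as
static when invoked, which is where the strict warning comes from):

// passing the object instead of the class name avoids the static-call warning
if ( is_callable( array( $wgOut, 'includeJQuery' ) ) ) {
	$wgOut->includeJQuery();
}

// method_exists() only checks that the method is defined, static or not
if ( method_exists( $wgOut, 'includeJQuery' ) ) {
	$wgOut->includeJQuery();
}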
I vaguely remember someone saying something about this being needed for
HipHop, but I can't find this documented anywhere such checks are used.
So can someone enlighten me here? :)
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil.
--
On the Hebrew Wikipedia there have been some discussions about changing
the links in the sidebar. Is there a clever way to do it using
click statistics?
For example, can we get statistics about how many people click each
link in the sidebar and, if possible, what kind of users click them -
registered, anonymous, having more than 5 edits, etc.?
Of course, this may be useful for all projects.
--
Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
http://aharoni.wordpress.com
“We're living in pieces,
I want to live in peace.” – T. Moore
[posted to foundation-l and wikitech-l, thread fork of a discussion elsewhere]
THESIS: Our inadvertent monopoly is *bad*. We need to make it easy to
fork the projects, so as to preserve them.
This is the single point of failure problem. The reasons for it having
happened are obvious, but it's still a problem. Blog posts (please
excuse me linking these yet again):
* http://davidgerard.co.uk/notes/2007/04/10/disaster-recovery-planning/
* http://davidgerard.co.uk/notes/2011/01/19/single-point-of-failure/
I dream of the encyclopedia being meaningfully backed up. This will
require technical attention specifically to making the projects -
particularly that huge encyclopedia in English - meaningfully
forkable.
Yes, we should be making ourselves forkable. That way people don't
*have* to trust us.
We're digital natives - we know the most effective way to keep
something safe is to make sure there are lots of copies around.
How easy is it to set up a copy of English Wikipedia - all text, all
pictures, all software, all extensions and customisations to the
software? What bits are hard? If a sizable chunk of the community
wanted to fork, how can we make it *easy* for them to do so?
And I ask all this knowing that we don't have the paid tech resources
to look into it - tech is a huge chunk of the WMF budget and we're
still flat-out just keeping the lights on. But I do think it needs
serious consideration for long-term preservation of all this work.
- d.
If you have any FIXMEs sitting around in CodeReview, you'll be getting
another email from me tonight asking you to fix them.
As a reminder for our experienced developers and an introduction for our
new developers, there are two or three things that should be done when
you feel you've addressed a FIXME'd revision:
1. Make sure the FIXME'd revision is mentioned in your commit
summary. (For example: "re: rXXXX. Fixes whitespace problems").
2. Change the status of the revision from FIXME back to NEW.
3. Leave a comment in response to the comment that pointed out the
problem saying you've addressed it.
The third item is optional and redundant, but it helps the person
who marked the code FIXME to see that you have, indeed, fixed the code.
They can then change its status to "resolved".
DO NOT change your fixme'd revisions to "resolved"! Please, only change
them to "new".
And now, to send those emails I promised,
Mark.
> Message: 4
> Date: Mon, 15 Aug 2011 19:49:40 -0400
> From: mhershberger(a)wikimedia.org (Mark A. Hershberger)
> Subject: [Wikitech-l] Mobile bug Triage – non-WMF devs wanted!
> To: Wikitech List <wikitech-l(a)lists.wikimedia.org>
> Message-ID: <87k4aeme0b.fsf(a)everybody.org>
> Content-Type: text/plain; charset=utf-8
>
>
> What: “Mobile” bug triage
> When: Wednesday, August 17, 17:00 UTC
> Time zone conversion: http://hexm.de/5r
> Where: #wikimedia-dev on freenode
> Use http://webchat.freenode.net/ if you don't have an IRC
> client
>
> This Wednesday, I'll be conducting a bug triage of bugs related to
> mobile devices and the new MobileFrontend Extension. In order to get as
> much non-WMF interest as possible, I'm announcing the triage two days
> early.
>
> If you have a wiki that you would like to add mobile support to, then
> this is the triage for you! You'll have a chance to talk to developers
> about the issues and, if you can help with PHP development, you'll be
> able to flag which bugs are most important to you to get fixed and even
> get some hints to begin diving into the code yourself.
>
If we're trying to attract non-WMF devs, parts of the extension seem
very WMF-specific (or do you mean we're trying to attract volunteer
devs who only care about Wikipedia?).
For example, in one place there is a check hardcoding things specific
to the WMF server setup:
stripos( $_SERVER['HTTP_VIA'], '.wikimedia.org:3128' ) !== false )
There's another line doing stuff like:
$featuredArticle = $this->mainPage->getElementById( 'mp-tfa' );
$newsItems = $this->mainPage->getElementById( 'mp-itn' );
Which I assume is a reference to the element IDs used on the English
Wikipedia's main page.
I personally feel (without actually looking at the issues involved, so
take this with a grain of salt) that it's bad to hardcode such things
even if they're only going to be used by WMF, since they can change.
However, if we're aiming for third-party re-use, then hardcoding such
things is definitely a bad idea.
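A hypothetical sketch of what I mean (the setting names here are made
up, not MobileFrontend's actual ones): the WMF-specific literals could
become globals with the current values as defaults, overridable from
LocalSettings.php:

// hypothetical settings, defaulting to the current WMF values
$wgMFTrustedProxyVia = '.wikimedia.org:3128';
$wgMFMainPageIds = array( 'featured' => 'mp-tfa', 'news' => 'mp-itn' );

// inside the extension, read the settings instead of hardcoding:
global $wgMFTrustedProxyVia, $wgMFMainPageIds;
$via = isset( $_SERVER['HTTP_VIA'] ) ? $_SERVER['HTTP_VIA'] : '';
$isProxied = stripos( $via, $wgMFTrustedProxyVia ) !== false;
// $mainPage as in the extension's existing code
$featuredArticle = $mainPage->getElementById( $wgMFMainPageIds['featured'] );
$newsItems = $mainPage->getElementById( $wgMFMainPageIds['news'] );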
-bawolff
Let me retitle one of the topics nobody seems to touch.
On Fri, Aug 12, 2011 at 13:44, Brion Vibber <brion(a)pobox.com> wrote:
> * media files -- these are freely copiable but I'm not sure of the state of
> easily obtaining them in bulk. As the data set moved into the terabytes it
> became impractical to just build .tar dumps. There are batch downloader
> tools available, and the metadata's all in dumps and the API.
Right now it is basically locked: there is no way to bulk-copy the
media files, not even to simply back up one Wikipedia, or Commons.
I've tried, I've asked, and the answer was basically to contact a dev
and arrange it, which obviously could be done (I know many of the
folks), but that isn't the point.
Some explanations were offered, mostly that media and its metadata are
quite detached, and thus it's hard to enforce licensing quirks like
attribution, special licenses and such. I can see this is a relevant
point, since the text corpus is uniformly licensed under CC/GFDL while
the media files are at best non-homogeneous (like Commons, where
everything is free in some way) and complete chaos at worst (individual
Wikipedias, where there may be anything from leftover fair use to
content copyrighted by various entities to images to be deleted
"soon").
Still, I do not believe making it close to impossible to bulk-copy the
data is a good approach. I am not sure which technical means is best,
as there are many competing ones.
We could, for example, open up an API which would serve each media file
together with its metadata, possibly supporting mass operations. Still,
that's pretty inefficient.
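Something along these lines already half-exists in the imageinfo API; a
rough client sketch (the action/prop parameters are the standard api.php
ones, everything else is illustrative), which also shows why per-file
requests don't scale:

$api = 'http://commons.wikimedia.org/w/api.php?' . http_build_query( array(
	'action' => 'query',
	'prop'   => 'imageinfo',
	'iiprop' => 'url|user|comment|sha1|mime',
	'titles' => 'File:Example.jpg',
	'format' => 'php',
) );
$data = unserialize( file_get_contents( $api ) );
foreach ( $data['query']['pages'] as $page ) {
	$info = $page['imageinfo'][0];
	// store each file right next to its metadata
	file_put_contents( basename( $info['url'] ), file_get_contents( $info['url'] ) );
	file_put_contents( basename( $info['url'] ) . '.meta', serialize( $info ) );
}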
Or we could support zsync, rsync and such (and I again recommend
examining zsync's several interesting abilities to offload the work to
the client), but there ought to be some pointers to image metadata, at
least a one-liner file for every image linking to its license page.
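Generating such a manifest server-side would be cheap; a sketch (the
image table and wfGetDB() are standard MediaWiki, the output format is
just a suggestion):

$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select( 'image', array( 'img_name', 'img_sha1' ) );
$fh = fopen( 'media-manifest.txt', 'w' );
foreach ( $res as $row ) {
	// one line per image: name, hash, and its description/license page
	fwrite( $fh, $row->img_name . "\t" . $row->img_sha1 . "\t" .
		'http://commons.wikimedia.org/wiki/File:' .
		wfUrlencode( $row->img_name ) . "\n" );
}
fclose( $fh );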
Or we could restrict the bulk channel to established editor accounts, so
we could have at least a bit of assurance that s/he knows what s/he's
doing.
--
byte-byte,
grin
Hi,
I usually don't post to mailing lists, but Brion suggested I should do
this for the page content language.
I suppose most people know that I improved the RTL support.
Documentation of that is now at
http://www.mediawiki.org/wiki/Directionality_support
If it is incomplete or unclear about something, please ask so I can
improve the docs.
While doing that, I introduced a "page content language" that defines
the language in which a specific page is written. I added docs for
that as well, see http://www.mediawiki.org/wiki/Language_in_MediaWiki
For special pages it is $wgLang, for MediaWiki namespace pages it
depends on the subpage code, for other pages it is $wgContLang.
Extensions (like Translate) can change the language a page is supposed
to be written in.
This affects the direction of the content, the TOC, and (in theory) the grammar.
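For extension authors, the override point is the PageContentLanguage
hook (this is what Translate uses). A minimal sketch; the namespace
check is just an example:

$wgHooks['PageContentLanguage'][] = 'myPageContentLanguage';

function myPageContentLanguage( $title, &$pageLang, $userLang ) {
	// example: force Hebrew for pages in a hypothetical namespace 100
	if ( $title->getNamespace() === 100 ) {
		$pageLang = Language::factory( 'he' );
	}
	return true;
}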
Again, if the docs are missing something important, let me know.
But, now that I am writing this anyway, I have a question: should
magic words like CURRENTMONTH and NUMBEROFARTICLES use the page
content language rather than $wgContLang? It would be more logical (and
on Incubator even wanted:
http://incubator.wikimedia.org/wiki/Template:Wp/lkt/CURRENTMONTHNAMEI
) but I am not sure if it would break things, e.g. when used via a
template.
(And btw, another i18n thing that needs attention is LanguageConverter
(even if just for missing docs). I am looking into whether I can help
out there.)
Regards,
Robin aka SPQRobin
If you never intended to use the RSS extension, or if you do not use it ...
...then you can stop reading here.
http://www.mediawiki.org/wiki/Extension:RSS
While debugging E:RSS, I found some inconsistencies in how two templates
are used for rendering RSS feeds on MediaWiki pages, and fixed these and
other issues in a new version (not yet in SVN).
I would be interested in your _quick_ _feedback_ about the changes
(intended to be published today), which are listed in the following
!!preview of RELEASE-NOTES!!:
=== Version 1.90 2011-08-15 ===
* removed parsing of each single channel subelement (item)
* only the finally constructed feed is sent to the recursive parser;
  in pre-1.9 versions, each channel subelement (item) was sent to the parser
* the [[MediaWiki:Rss-item]] default now has the channel subelement
  <description> added; this was never present in previous versions
* the RSS template default name has been changed:
  until 1.8: [[Template:RSSPost]]
  1.9: [[MediaWiki:Rss-feed]]; an existing [[Template:RSSPost]]
  takes precedence, to stay compatible with pre-1.9 versions
* introduced [[MediaWiki:Rss-feed]] with a meaningful default as part
  of the release. The channel subelements which make up the feed are
  rendered in this standard layout:
  * <title>
  : <description>
  : <author> <date>
* There are several ways to customize the final layout of feed items:
  1. Admins can change the [[MediaWiki:Rss-feed]] default page
  2. Users can use the optional template= parameter to tell the extension
     to render the feed with a different layout:
     <rss template=Pagename>...</rss> uses the layout from [[Template:Pagename]]
  3. <rss template=Namespace:Pagename>...</rss> uses [[Namespace:Pagename]]
Tom
Is there any plan to supersede the old template system with built-in
software support, in core or in an extension, at least partially?
I mean there are several common templates that should be designed once,
professionally, and then used on all Wikipedias: amboxes, infoboxes,
navboxes, coordinate templates, portal templates, sister project templates,
and so on. And I don't mean a „template commons” with unchanged template
syntax, but real software support.
Users have become more and more „perverse” about creating ever more
complicated and resource-hungry templates, which only a few editors can
modify and understand correctly, because of their complexity.
The current practice is far from ideal: the templates I mentioned above
should look uniform and be informational. Currently they are targets of
bikeshedding: on the Hungarian Wikipedia there was even a vote about the
font size of infoboxes. Wikipedia is not a coloring book, and not about
constant redesigning of important parts of the articles. Just as we do
not change the default skin every half a year, we should not allow the
look of standard informational elements to change, at least not in that
amateur way („my favourite color/font is better than yours!”).
And I haven't even mentioned that a high percentage of the current
templates are full of invalid HTML code, because the average user does
not understand (and should not have to understand) HTML/HTML5/CSS/advanced
parser functions.
So, is there any plan or ongoing debate/development about this?
Farewell,
*Glanthor*