The Wikimedia Foundation is now an official liaison member of the
Unicode Consortium:
http://www.unicode.org/consortium/memblist.html#liais
Rick McGowan is the Unicode representative to Wikimedia, and I'll be
serving as Wikimedia's representative to Unicode until our new
localization engineers come on board.
Ryan Kaldari
Hi everyone,
We plan to rebranch 1.18 from trunk soon (as in, almost immediately).
What we've found is that many things in the current branch are harder
to get working than they should be, and that trunk seems to be in
better shape (in particular, all tests are passing now). That will
mean taking on more code review, but it's work we'll have to do one
day or another. Apologies to hashar for not making this decision
sooner, since he spent a good chunk of the weekend trying to get unit
tests fully operational in the 1.18 branch (but thank you for making
the effort!)
That's not to say that everything that has been checked into trunk
over the past couple of months is automatically going to be a feature
of 1.18. Anything that's too complicated or risky that's been checked
in will be backed out of the branch and saved for 1.19.
Chad Horohoe (^demon on IRC) is going to be the one rebranching.
Let us know if you have any questions.
Thanks!
Rob
Recently, there was a discussion on a bug (“UNIQ key exposed”
https://bugzilla.wikimedia.org/14562) about the priority setting I had
given the bug.
It was part of the problems I found in Bugzilla last December and
gathered into a tracking bug (https://bugzilla.wikimedia.org/26213).
It looks like I made the wrong decision on #14562 since it was part of
an extension that, while deployed on enwiki, wasn't likely to be
triggered.
When I was discussing this with Robla, he suggested I ask about this on
wikitech-l, so here goes:
There are at least four bugs live on Wikipedia that leave really ugly
UNIQ strings in the wikitext. I've created a demonstration of them on
my wiki page: http://hexm.de/4x
The bug numbers are on the page linked to their entry in Bugzilla.
I suppose these are all tied to the parser work that Brion & co are
currently doing, but with the arrival of the new parser six months to a
year or more away (http://www.mediawiki.org/wiki/Future/Parser_plan),
I'd like to get these sorts of parser issues sorted out now.
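For anyone unfamiliar with why "UNIQ" shows up at all, the mechanism can be sketched roughly as follows. This is a conceptual toy, not MediaWiki's actual implementation; the function names and exact marker format are illustrative:

```python
# Conceptual sketch (not MediaWiki's real code) of the parser's "strip
# marker" technique: extension-tag output is swapped for a unique
# placeholder token and substituted back in at the end of parsing. If
# any code path emits the text before the final unstrip pass runs, the
# raw UNIQ token is what readers see.
import secrets

def strip(state, content):
    """Replace raw content with a unique placeholder token."""
    marker = "\x7fUNIQ-%s-QINU\x7f" % secrets.token_hex(8)
    state[marker] = content
    return marker

def unstrip(state, text):
    """Substitute the saved content back in for every marker."""
    for marker, content in state.items():
        text = text.replace(marker, content)
    return text

state = {}
wikitext = "before " + strip(state, "<pre>raw</pre>") + " after"
assert "UNIQ" in wikitext  # mid-pipeline, the marker is visible
assert unstrip(state, wikitext) == "before <pre>raw</pre> after"
```

The bugs on the demonstration page are cases where some transformation sees the text while the marker is still in place, or where the unstrip pass never reaches it.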
For those more familiar with the current parser: how can those developers
who are less experienced start fixing the problem? How important are
these issues?
Mark.
MediaWiki.org is great for extension authors, as far as it goes. Today,
though, someone asked on #mediawiki how to create development branches
for their extension in their SVN repo. I told him I didn't think it
could be done — that he might have to use the SVN repo as a backend to
push to from git or bzr — but I'm not sure that answer was correct.
Anyway, as I was writing about the UNIQ tracking bug, I thought of some
documentation and support that we should try to get in place for
extension developers. Since Sumana is creating a lot of good
documentation about testing lately, that is where I started:
* What sort of things should they test?
* Can they have tests that will continue to work against the current
parser and the next one?
* How can they write parser tests and unit tests to try out their
code?
* How can they make sure that those tests are run on the test server?
(I think this actually requires some work on the test server, but…)
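On the parser-test question: MediaWiki's core test suite uses a plain-text parserTests.txt format that extensions can imitate. A minimal entry looks roughly like this (the exact file layout and registration an extension should use may vary):

```
!! test
Bold and italics
!! input
'''bold''' and ''italic''
!! result
<p><b>bold</b> and <i>italic</i>
</p>
!! end
```

Because these tests only pin down wikitext in and HTML out, they stand a decent chance of surviving a parser replacement unchanged.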
Of course, that documentation would help more than just the extension
writers who have “UNIQ” showing up in their output. What else could we
do to support extension authors?
Mark.
Gave Aaron Parecki access to extensions. He's been working on
extensions such as:
* MediaWiki-SEO-Title-Tag
* MediaWiki-Changelog-Graphs
* MediaWiki-Glossary-Extension
- Ryan
This week for our IRC bug triage, I decided to focus on problems
reported with caching. We focused on six bugs.
You can read the logs of the discussion: http://hexm.de/54
The etherpad: http://etherpad.wikimedia.org/BugTriage-2011-07

http://bugzilla.wikimedia.org/20468 — User::invalidateCache throws
1205: Lock wait timeout exceeded
These lock timeouts happen frequently enough that we can start to
track them down. As Tim said, to solve this: “We should reduce the
transaction time and number of locks in a transaction.”
Since these are showing up enough, we'll start to log the
backtrace, figure out where it is being called and add commit()
where necessary.
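The shape of that fix can be sketched like this. This is a stand-in using sqlite3, not MediaWiki's actual Database class; the table and function names are made up:

```python
# A toy version of the fix direction: do the one critical write, then
# commit immediately so row locks are released before any slow
# follow-up work runs in the same request, instead of holding the lock
# until the implicit end-of-request commit.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_touched (user_id INTEGER PRIMARY KEY, touched TEXT)"
)
conn.execute("INSERT INTO user_touched VALUES (1, '20110101000000')")
conn.commit()

def invalidate_cache(conn, user_id, now):
    conn.execute(
        "UPDATE user_touched SET touched = ? WHERE user_id = ?",
        (now, user_id),
    )
    conn.commit()  # release locks here, not at end of request

invalidate_cache(conn, 1, "20110704000000")
assert conn.execute(
    "SELECT touched FROM user_touched WHERE user_id = 1"
).fetchone()[0] == "20110704000000"
```

Shorter transactions also mean fewer locks held concurrently, which is the other half of Tim's suggestion.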
http://bugzilla.wikimedia.org/26338 — Wikimedia Javascript and CSS
files are getting an extra max-age cache-control param
This bug was filed back before ResourceLoader was deployed. After
Ryan confirmed that it is less of a problem now, he pointed to a
couple of places where files are still served without ResourceLoader
and would benefit from added Apache directives.
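The kind of directive meant here might look something like the following. This is illustrative only; the real paths and max-age values would come from Ops:

```apache
# Hypothetical httpd.conf fragment: give statically served JS/CSS one
# explicit Cache-Control header instead of an extra, conflicting
# max-age parameter. Requires mod_headers.
<FilesMatch "\.(js|css)$">
    Header set Cache-Control "public, max-age=2592000"
</FilesMatch>
```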
http://bugzilla.wikimedia.org/26360 — Disabling sessions in memcached
produces open() error
Before we got to this one in triage Chad was already busy
investigating it. He thinks this was broken way back in r49370.
Under “You broke it you buy it”, he is fixing the problem.
http://bugzilla.wikimedia.org/29223 — Querying for rvdiffto=prev fails
for many revids: "notcached"
Sam has reportedly been working on this one and may have already
fixed it in trunk. I’ll check with him.
All was not lost in the discussion of this bug, though. It
reminded Tim that there is a similar problem with action=parse.
It only fetches from the parser cache, it doesn't store to it.
This problem reduces our parser cache hit ratio significantly
since we have a growing number of action=parse hits due to Android
and iPhone apps.
I filed a new bug to fix the problem Tim mentioned:
http://bugzilla.wikimedia.org/29907

http://bugzilla.wikimedia.org/29384 — Load order of request in IE6
messes with dependency resolving (mediawiki.util not available in
time)
Krinkle has been looking into this one but doesn't yet know what
is causing it. Perhaps he and Trevor will have time to look at it
in this coming week when he is in San Francisco.
http://bugzilla.wikimedia.org/29552 — Squid cache of redirect pages
don't get purged when page it redirects to gets edited
Much of the discussion for this bug and the next one overlapped,
but Tim suggested that we should be seeing the same problems with
templatelinks as we are seeing with redirect pages.
Roan responded that he thought there frequently were problems with
templatelinks but that they were mis-attributed to the job queue
instead of squid problems.
http://bugzilla.wikimedia.org/28613 — Thumbnails of updated files fail
to purge on squids
There is lots of speculation as to *what* is causing these
problems. Initially, we thought the squid caching problem was a
symptom of a hardware issue that the new routers being installed
that week would fix.
With the new routers in place, though, it became clear that this
wasn't simply a matter of faulty hardware. After some discussion,
we thought packet loss (perhaps because MediaWiki does not
throttle the UDP packets it sends) might be a cause. I filed a
ticket in RT (http://rt.wikimedia.org/Ticket/Display.html?id=1174)
to get Ops to add listeners to the multicast group so that we
could see if there was any packet loss and, if so, where it was
coming from.
If it turns out that there is no packet loss (or other network
problems), then we'll have to look at MediaWiki itself.
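If unthrottled bursts do turn out to be the culprit, the MediaWiki-side fix would be rate-limiting the sends, roughly along these lines. This is a naive sketch: the multicast address, port, and rate are invented, and real HTCP purge packets have a binary format rather than bare URLs:

```python
# Naive sketch of throttling purge packets: space UDP sends out at a
# fixed rate instead of firing the whole batch in one burst that a
# router or NIC queue may drop.
import socket
import time

def send_purges(urls, addr=("239.128.0.112", 4827), per_second=1000.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / per_second
    try:
        for url in urls:
            # stand-in for building and sending an HTCP CLR packet
            sock.sendto(url.encode("utf-8"), addr)
            time.sleep(interval)  # crude rate limit between packets
    finally:
        sock.close()
```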
Thanks to everyone's participation, I felt like this week's triage was
especially productive.
Till next week,
Mark.
Hello,
Back in June, I added REL1_18 to the continuous integration test
suite. To make it work, I had to disable the 'Database' test group
which was severely broken at the time.
Demon fixed the Database group slowness with r88755. I backported
it to REL1_18 together with other test fixes (see the commit message).
The backport revision is r92239 [2]. I ran the tests on CC [1] for this
revision WITHOUT the Database group, then enabled the Database group
and triggered a build manually. End result:
The GOOD: CC is now really running tests for 1.18 (including parser)
The BAD: 1.18 is greatly broken (robla will make us fix it)
The Weird: my commit (r92239) is not the root cause :-))
From a quick look, the broken tests are related to the Block rewrite, some
weird API breakages, and parser test fixes that need backporting.
[1] http://ci.tesla.usability.wikimedia.org/cruisecontrol/
[2] http://www.mediawiki.org/wiki/Special:Code/MediaWiki/92239
--
Ashar Voultoiz