There are the things in test/ and t/, but they're really
outdated and horribly maintained (and some don't
even work).
-Chad
On Jul 16, 2009 8:18 PM, "Aryeh Gregor"
<Simetrical+wikilist(a)gmail.com> wrote:
On Thu, Jul 16, 2009 at 6:18 PM, Tim Landscheidt <tim(a)tim-landscheidt.de> wrote:
> Perl's take on TAP...
We don't use any standard testing framework. The parser tests were
written by us from the ground up, and they don't even attempt to cover
a large majority of the software's operation.
I have never been a QA engineer. However, it doesn't require great experience to see that the MW software development process is broken. I provide the following comments not in a destructive spirit. The success of the MW software is obvious. However, in my view unless the development process introduces some QA procedures, the code eventually will collapse and its reputation will degrade.
My interest in MW (the software, not the organization) is driven by a desire to provide an enhancement in the form of an extension. So, I began by building a small development environment on my machine (a work in progress). Having developed software for other organizations, I intuitively sought out what I needed in terms of testing in order to provide a good quality extension. This meant I needed to develop unit tests for my extension and also to perform regression testing on the main code base after installing it. Hence some of my previous questions to this email list.
It soon became apparent that the MW development process has little in the way of testing procedures. Sure, there are the parser tests, but I couldn't find any requirement that developers run them before submitting patches.
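(For concreteness, here is roughly what I mean by "running the parser tests". This is only a sketch: it assumes an SVN-style checkout where the maintenance scripts live under phase3/maintenance/, as in the git bisect commands quoted elsewhere in this thread, and the exact location and available flags may vary by version.)

# run the whole suite and note the "N tests failed!" summary
$ php phase3/maintenance/parserTests.php
# run only tests whose names match a pattern
$ php phase3/maintenance/parserTests.php --regex 'Section headings with TOC'
# record a baseline, then compare a later run against it to spot new breakage
$ php phase3/maintenance/parserTests.php --record
$ php phase3/maintenance/parserTests.php --compare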
Out of curiosity, I decided to download 1.16a (r52088), use the LocalSettings file from my local installation (1.14) and run some parser tests. This is not a scientific experiment, since the only justification for using these extensions in the tests is that I had them installed in my personal wiki. However, there is at least one thing to be learned from them. The results are:
MediaWiki r52088 parser tests
Extensions: 1) Nuke, 2) Renameuser, 3) Cite, 4) ParserFunctions, 5) CSS Style Sheets, 6) ExpandTemplates, 7) Gadgets, 8) Dynamic Page List, 9) Labeled Section Transclusion. The last extension has 3 require_once files: a) lst.php, b) lsth.php, and c) compat.php.
Test  Extensions enabled      Parser test failures
  1   1,2,3,4,5,6,7,8,9       19
  2   1                       14
  3   2                       14
  4   3                       14
  5   4                       14
  6   5                       14
  7   6                       14
  8   7                       14
  9   8                       14
 10   9 (abc)                 19
 11   9 (a)                   18
 12   9 (ab)                  19
 13   1,2,3,4,6,7             14
Note that the extension that introduces all of the unexpected parser test failures is Labeled Section Transclusion. According to its documentation, it is installed on *.wikisource.org, test.wikipedia.org, and en.wiktionary.org.
I am new to this development community, but my guess is that, since there are no testing requirements for extensions, its author did not run the parser tests before publishing it. (I don't mean to slander him, and I am open to the correction that it ran without unexpected errors on the MW version he tested against.)
This rather long preamble leads me to the point of this email. The MW software development process needs at least some rudimentary QA procedures. Here are some thoughts on this. I offer these to initiate debate on this issue, not as hard positions.
* Before a developer commits a patch to the code base, he must run parser tests against the change. The patch should not be committed if it increases the number of parser test failures. He should document the results in the bugzilla bug report.
* If a developer commits a patch without running parser tests, or commits a patch that increases the number of parser test failures, he should be warned. If he does it again within some time interval (say, 6 months), his commit privileges are revoked for some period of time (say, 6 months). The second time he becomes a candidate for commit privilege revocation, his privileges are revoked permanently.
* An extension developer also should run parser tests against a MW version with the extension installed. The results of this should be provided in the extension documentation. An extension should not be added to the extension matrix unless it provides this information.
* A test harness that performs regression tests (currently only parser tests) against every trunk revision committed in the last 24 hours should be run nightly (a rough sketch of such a harness follows this list). The installed extensions should be those used on the WMF machines. The results should be published on some page on the MediaWiki site. If any revision increases the number of parser test failures, the procedure described above for developers is initiated.
* A group of developers should have the responsibility of reviewing the nightly test results to implement this QA process.
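To make the nightly-harness point concrete, here is a rough sketch of what such a job might look like. Everything in it is illustrative: the checkout path, the baseline file, and the use of SVN and mail are assumptions rather than a description of any existing infrastructure; the only thing taken from this thread is that parserTests.php reports a count of failed tests.

#!/bin/sh
# Illustrative nightly parser-test harness (not real infrastructure):
# update trunk, run the parser tests, and compare the failure count
# against the previous night's baseline.
CHECKOUT=/srv/mediawiki-trunk            # hypothetical checkout location
BASELINE=/srv/parser-test-failure-count  # hypothetical baseline file

cd "$CHECKOUT" || exit 1
svn update

# parserTests.php ends its run with a summary like "45 tests failed!";
# pull the number out of that line.
FAILURES=$(php phase3/maintenance/parserTests.php 2>&1 \
    | grep -o '[0-9]* tests failed' | cut -d' ' -f1 | tail -n 1)
[ -n "$FAILURES" ] || exit 1

PREVIOUS=$(cat "$BASELINE" 2>/dev/null || echo "$FAILURES")

if [ "$FAILURES" -gt "$PREVIOUS" ]; then
    # Here the review group would be notified and the offending
    # revisions identified (e.g. by bisecting the day's commits).
    echo "Parser test failures rose from $PREVIOUS to $FAILURES" \
        | mail -s "Nightly parser test regression" wikitech-l@lists.wikimedia.org
fi

echo "$FAILURES" > "$BASELINE"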
I am sure there are a whole bunch of other things that might be done to improve MW QA. The point of this message is to initiate a discussion on what those might be.
Just a quick note --
Mail from Bugzilla is now being sent from the address
bugzilla-daemon(a)wikimedia.org instead of
bugzilla-daemon(a)mail.wikimedia.org to (hopefully) play nicer with spam
filters and such. You may need to update your local filters if you're
checking this exact address to put bugmail in a folder.
-- brion
Hello,
I need to use the Proofreadpage extension. To handle images, I have installed
the WebStore extension.
When I upload a DjVu image and try to download a JPEG page corresponding to
this file for the first time, I get a 404 error, followed by the JPEG image:
my browser (Lynx) first displays "Alerte! : HTTP/1.1 404 Not Found" and then
displays the message that lets me download the image. Subsequent downloads
don't produce the 404 error (I can get the image directly).
I would like to know if it is possible to get the image directly, without the
404 error message, even when the image is downloaded for the first time. On
http://upload.wikimedia.org/wikisource/ , it seems that this is possible,
right?
I don't know whether my problem is related to WebStore, MediaWiki, or Apache.
So if this is not the right place to ask this question, could you tell me
where the better place to ask would be?
Best regards,
Alex
Hello there, long time no see:)
In the last few days I've been working on the project of getting
OpenStreetMap onto Wikimedia as outlined here:
http://techblog.wikimedia.org/2009/04/openstreetmap-maps-will-be-added-to-w…
Unfortunately I wasn't able to hack on it sooner (but of course other
people have been working on it too!) and the project has been somewhat
held up by the WM-DE servers being delayed.
Anyway, one thing standing between us and world domination is
rendering those static maps. I'm going to implement this, but first I'd
like to get comments on *how* we'd like to do it, so I've written a
plan for doing it:
http://www.mediawiki.org/wiki/Extension:SlippyMap/Static_map_generation
Would generating static images like that be fine for Wikimedia
purposes or is this totally crazy? I think it would be just fine, but
then again I did write the Cite extension so take that with a grain of
salt:)
And to spam a bit: if getting pretty OpenStreetMap maps deployed is
something you'd like to happen sooner than later head over to our
development page:
http://www.mediawiki.org/wiki/Extension:SlippyMap#Development
I'm working off the Bugzilla tasklist which should be an approximate
indication of stuff that needs to be done.
Well, it's just an idea. I'm not going to bet my house on its acceptance. But here are some thoughts on why it might work.
MediaWiki powers an awful lot of wikis, some used by businesses that cannot afford instability in their operations. It is in their interest to ensure it remains maintainable, so they might be willing to provide some funding. In addition, I'm sure MediaWiki is used by parts of government (both in the US and in other countries), so there might be some funding available through those channels.
As to whether it is an interesting challenge, I agree that writing a new parser in and of itself isn't. But reengineering a heavily used software product that has to keep working during the process is a significant software reengineering headache. I once worked on a system that attempted to do that, and we failed. It took us 10 years to make the transition (we actually got it into production for a while), and by that time everything had changed. They ultimately threw it away. The grand challenge is to do "rapid" software reengineering.
In regards to the 2%, you could stipulate that the solution must provide tools to automatically convert the 2% (or the vast majority of them).
Anyway, it's only an idea. I think the biggest impediment is that it requires someone with both a commitment to it and significant juice to spearhead it. That is probably why it wouldn't work.
--- On Tue, 7/14/09, Aryeh Gregor <Simetrical+wikilist(a)gmail.com> wrote:
> From: Aryeh Gregor <Simetrical+wikilist(a)gmail.com>
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about parserTests
> I suspect nobody's going to stand a chance without funding.
>
> $ cat includes/parser/*.php | wc -l
> 11064
>
> That's not the kind of thing most people write for an interesting challenge.
>
> Also, you realize that 2% of pages would mean 350,000 pages on the
> English Wikipedia alone? Probably a million pages across all
> Wikimedia wikis? And who knows how many if you include third-party
> wikis?
Forwarded with permission of the author - he emailed this privately
after a comment on the whatwg list.
Summary: Theora video on iPhone is not going to be easy even with a
volunteer to write it - the first and second generation iPhone/iPod
Touch CPUs aren't up to the task. So, Theora fans have a new puzzle to
try: get an anaemic ARM to decode Theora fast enough to be useful!
When I asked if I could forward this here, he said to feel free and also noted:
"I've been hanging out in irc to improve ffmpeg's Theora decoder, but
arm11 is going to be especially hard for any Theora decoder due to the
lack of L2 cache and only 16k of L1 data cache."
- d.
---------- Forwarded message ----------
From: David Conrad <lessen42(a)gmail.com>
Date: 2009/7/14
Subject: Re: [whatwg] HTML 5 video tag questions
To: dgerard(a)gmail.com
Hi David,
On Jul 13, 2009, at 2:09 PM, David Gerard wrote:
>
> iPhone Safari users (does iPhone Safari support <video> yet?) are,
> unfortunately, out in the cold until someone writes a Wikimedia client
> app that does Theora for them. That won't be us unless a volunteer
> steps up.
First of all, iPhone Safari does indeed recognize the <video> tag, but
treats it essentially the same way as <object>, in that it uses its
own controls and plays the video completely separate from the web
page. Of course, not much else really makes sense on a small screen.
I recently investigated how feasible it would be to create a Theora
video player for the iPhone/iPod touch and found the following
shortcomings (targeting an iPod touch 1g):
- libtheora-thusnelda can only manage 23 fps decoding to /dev/null on the
640x272 Transformers trailer used in Dailymotion's HTML5 demo. Given
the weak SIMD capabilities of the ARM11, I doubt that this could be
sped up by more than 20%, and I think 10% is a more likely upper
bound.
- The only iPhone API for displaying frames that is fast enough for
video is OpenGL ES 1.1, which requires each frame to be converted to
RGB, padded to power-of-two dimensions, and then copied to video
memory with a blocking copy. All of this adds significant overhead.
All in all, I think it may not be possible to play Theora much larger
than CIF on current iPod touch or any iPhone other than the iPhone
3gs. The iPhone 3gs (and likely this year's iPod touch), however, has
a much more powerful Cortex-A8 and also supports OpenGL ES 2.0,
eliminating the need for a CPU YUV -> RGB conversion and padding to
power-of-two dimensions. This should be more than sufficient for SD
Theora; the Transformers clip currently decodes at 52 fps to /dev/null
on a BeagleBoard with some NEON optimizations.
So, I'm shelving this for now. I might pick it back up once an iPod
touch with a Cortex-A8 is released this September, but I thought you
might be interested in my findings anyway.
-David
Hm. Sounds like an opportunity. How about MediaWiki issuing a grand challenge? Create a well-documented, well-structured (open source) parser that produces the same results as the current parser on 98% of Wikipedia pages. The prize is bragging rights and a letter of commendation from someone or other. I suspect there are a bunch of graduate students out there who would find the challenge interesting.
Rationalizing the parser would help the development process. For the 2% of pages that fail, challenge others to fix them. The key is not getting stuck in the "we need a formal syntax" debate. If the challengers want to create a formal syntax, that is up to them. MediaWiki should only be interested in the final results.
--- On Tue, 7/14/09, Aryeh Gregor <Simetrical+wikilist(a)gmail.com> wrote:
> They're supposed to pass, in theory, but never have. Someone wrote
> the tests and the expected output at some point as a sort of to-do
> list. I don't know why we keep them, since they just confuse
> everything and make life difficult. (Using the --record and --compare
> options helps, but they're not that convenient.) All of them would
> require monkeying around with the parser that nobody's willing to do,
> since the parser is a hideous mess that no one understands or wants to
> deal with unless absolutely necessary.
Thanks. From your response I'm not sure whether these tests are "supposed" to fail (there are test suites that have tests like that) or whether they are supposed to succeed but bugs in the parser or other code cause them to fail. Can you clarify?
--- On Tue, 7/14/09, Aryeh Gregor <Simetrical+wikilist(a)gmail.com> wrote:
> From: Aryeh Gregor <Simetrical+wikilist(a)gmail.com>
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about parserTests
> To: "Wikimedia developers" <wikitech-l(a)lists.wikimedia.org>
> Date: Tuesday, July 14, 2009, 3:40 PM
> On Tue, Jul 14, 2009 at 5:16 PM, dan nessett <dnessett(a)yahoo.com> wrote:
> > Can anyone tell me which of the parser tests are supposed to fail?
> > Also, is there a trunk version for which only these tests fail?
>
> These are the perpetual failures:
>
> 13 still FAILING test(s) :(
> * Table security: embedded pipes
>   (http://lists.wikimedia.org/mailman/htdig/wikitech-l/2006-April/022293.html)
>   [Has never passed]
> * Link containing double-single-quotes '' (bug 4598) [Has never passed]
> * HTML bullet list, unclosed tags (bug 5497) [Has never passed]
> * HTML ordered list, unclosed tags (bug 5497) [Has never passed]
> * HTML nested bullet list, open tags (bug 5497) [Has never passed]
> * HTML nested ordered list, open tags (bug 5497) [Has never passed]
> * Inline HTML vs wiki block nesting [Has never passed]
> * dt/dd/dl test [Has never passed]
> * Images with the "|" character in the comment [Has never passed]
> * Bug 6200: paragraphs inside blockquotes (no extra line breaks) [Has never passed]
> * Bug 6200: paragraphs inside blockquotes (extra line break on open) [Has never passed]
> * Bug 6200: paragraphs inside blockquotes (extra line break on close) [Has never passed]
> * Bug 6200: paragraphs inside blockquotes (extra line break on open and close) [Has never passed]
>
> r51509 is a revision on which they're the only failures, but it's
> pretty old (there's probably a somewhat more recent one). The
> breakage looks like it occurred in r52213 and r52726, according to
>
> git bisect start trunk `git svn find-rev r51509` && git bisect run php phase3/maintenance/parserTests.php --regex 'Section headings with TOC'
> git bisect start trunk `git svn find-rev r51509` && git bisect run php phase3/maintenance/parserTests.php --regex '<references> after <gallery>'
>
> (yay git!).
Can anyone tell me which of the parser tests are supposed to fail? Also, is there a trunk version for which only these tests fail?
--- On Fri, 7/10/09, Aryeh Gregor <Simetrical+wikilist(a)gmail.com> wrote:
> From: Aryeh Gregor <Simetrical+wikilist(a)gmail.com>
> Subject: Re: [Wikitech-l] Is this the right list to ask questions about parserTests
> To: "Wikimedia developers" <wikitech-l(a)lists.wikimedia.org>
> Date: Friday, July 10, 2009, 3:49 PM
> On Fri, Jul 10, 2009 at 6:35 PM, dan nessett <dnessett(a)yahoo.com> wrote:
> > I don't want to irritate people by asking inappropriate questions on
> > this list. So please direct me to the right list if this is the wrong
> > one for this question.
> >
> > I ran parserTests and 45 tests failed. The result was:
> >
> > Passed 559 of 604 tests (92.55%)... 45 tests failed!
> >
> > I expect this indicates a problem, but sometimes test suites are set
> > up so certain tests fail. Is this result good or bad?
>
> We usually have about 14 failures. We should really be able to mark
> them as expected, but our testing framework doesn't support that at
> the moment. The current workaround is to use --record and --compare,
> but that's a pain for a few reasons.
>
> I get 49 test failures. It looks like someone broke a lot of stuff.
> It happens; frankly, we don't take testing too seriously right now.
> There are no real automated warnings. Brion used to have a bot post
> parser test results daily to wikitech-l, but that was discontinued.
> So people tend to break parser tests without noticing.