Hey,
I'm one of the GSoC students for Wikimedia Foundation this
year, and have just released the first versions of my extensions [0, 1].
I do not know how to add them to the SVN repository though. (I have never worked with SVN before.)
My
mentor pointed out that I should place a request here [2], which I have
done. Since he's not really familiar with Windows (which I'm using), he
was not able to help me with the following things:
- What's the easiest way to generate an SSH public key on Windows?
- What's a good SVN client to use for Windows?
Any help with those would be greatly appreciated.
[0] http://www.mediawiki.org/wiki/Extension:Maps
[1] http://www.mediawiki.org/wiki/Extension:Semantic_Maps
[2] http://www.mediawiki.org/wiki/Commit_access_requests#Current_requests
Cheers,
De Dauw '[RTS]BN+VS*' Jeroen
Forum: code.bn2vs.com
Blog: blog.bn2vs.com
Xfire: bn2vs ; Skype: rts.bn.vs
Don't panic. Don't be evil.
70 72 6F 67 72 61 6D 6D 69 6E 67 20 34 20 6C 69 66 65!
I have added an application option, ktf-to-fail, that when specified counts tests with known-to-fail status as failures. When the option is absent, failure statistics exclude known-to-fail results, and a summary at the end of parserTests reports how many known-to-fail tests were run (unless that number is zero). I have also modified parserTests to indicate known-to-fail status when the option is specified.
But there is still an issue: how should the per-test known-to-fail option interact with the compare and record application options? Should parserTests be modified to record and compare known-to-fail results? Or should these results be silent for recording purposes and treated as failures only when the ktf-to-fail application option is specified?
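The accumulation behaviour described above can be sketched in a few lines. This is a minimal Python sketch, not the real parserTests code; the option name ktf-to-fail comes from the message above, but the data shapes and function name are hypothetical:

```python
def summarize(results, ktf_to_fail=False):
    """Tally parser test results.

    Each result is a (name, passed, known_to_fail) tuple; this is a
    hypothetical shape, not the real parserTests data structure.
    """
    failed = []
    ktf_run = 0
    for name, passed, known_to_fail in results:
        if known_to_fail:
            ktf_run += 1
            # With ktf-to-fail, known-to-fail tests count as failures;
            # without it, they are excluded from failure statistics.
            if ktf_to_fail:
                failed.append(name)
        elif not passed:
            failed.append(name)
    summary = f"{len(failed)} failed"
    # Report how many known-to-fail tests ran, unless that number is zero.
    if ktf_run and not ktf_to_fail:
        summary += f"; {ktf_run} known-to-fail tests were run"
    return summary
```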
This is my first post and I think I've selected the appropriate list.
Please let me know if there is a better place to post my question.
I have a client with an installed search engine that they don't want to
part with. I have used it to index their installed instance of MW. Is
there a way to integrate searching of that index into MediaWiki's
built-in search?
Thanks.
Aryeh Gregor wrote:
> I'm CCing wikitech-l here for broader input, since I do think
> Wikipedia would be interested in adopting this but I can't really
> speak for Wikipedia myself. The history of this discussion can be
> found in the archives:
>
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/021133.html
Brandon Sterne's messages are not in that archive. When you reply to
them and CC to the list, you break threading, so it's not really
obvious what proposal you're both talking about. But I assume it's the
idea of allowing CSP to temporarily stop enforcing and complain only,
to simplify deployment.
<http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/021179.html>
> I think both whatwg and wikitech-l are configured to bounce messages
> by unregistered users. For wikitech-l members who want to comment,
> the registration link for whatwg is:
>
> http://lists.whatwg.org/listinfo.cgi/whatwg-whatwg.org
I was subscribed, but I was trolled there until I gave up and
unsubscribed, at which point they quietly implemented my proposal.
I don't know why you think more input is needed, it's a reasonable
proposal. Just flame everyone until you get your way.
-- Tim Starling
I think I have found the problem causing the continuous redirect on my test wiki. However, since I am new at this, I want to run this past someone with a better understanding of the code to make sure I have it right.
At line 145 of WebRequest::extractTitle() [r53551] is the following test:
if( substr( $path, 0, $baseLen ) == $base )
This checks that the string in $base is a prefix of $path. However, the code in this method does not take into account that the pathname in the URL may have spaces encoded as '%20' escapes.
The URL to my testwiki is:
'/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page'
This is the value in $path. However, the value in $base is:
'/MediawikiTest/Latest Trunk Version/phase3/index.php/'
So the call to substr fails, and the code that sets 'title' in $matches never executes (which means $_GET never gets a 'title' entry). The solution is either to convert the '%20' escapes in $path to spaces or to convert the spaces in $base to '%20' escapes. This could be fixed in extractTitle(); or, since $path is an argument of this method and $base is derived from an argument, perhaps it should be fixed elsewhere.
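The mismatch can be reproduced outside MediaWiki. Below is a minimal Python sketch of the first suggested fix (decoding $path before the prefix comparison); the function name and return convention are illustrative, not MediaWiki's:

```python
from urllib.parse import unquote

def title_from_path(path, base):
    """Extract the title suffix of path, tolerating %20 escapes.

    Mirrors the prefix check in WebRequest::extractTitle(), but decodes
    the path first so '%20' in the URL matches a literal space in base.
    """
    decoded = unquote(path)  # '%20' becomes ' '
    if decoded.startswith(base):
        return decoded[len(base):]
    return None

path = '/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page'
base = '/MediawikiTest/Latest Trunk Version/phase3/index.php/'
# The raw comparison fails, which is the bug described above:
assert not path.startswith(base)
# Decoding first makes the prefix test succeed:
assert title_from_path(path, base) == 'Main_Page'
```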
If someone confirms this is a bug, I will open a bug report in Bugzilla.
I have modified parserTests to take a "known to fail" switch so that tests which have always failed now count as passing. It was pretty simple: it only required 3 changes to parserTests.inc and some editing of parserTests.txt. I added a per-test option called flipresult. When this option is specified, the test succeeds when it fails and vice versa.
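The flipresult behaviour reduces to an exclusive-or on the raw outcome. A tiny Python sketch (names hypothetical, not the parserTests code):

```python
def effective_result(raw_passed, flipresult=False):
    """A test marked flipresult passes when it fails, and vice versa."""
    return raw_passed != flipresult
```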
I have tested the modified parserTests on 1.16a running over a 1.14 schema database. However, I have run into a problem attempting to install the latest trunk revision so I can test against it. Specifically, I added a database called wikitestdb so I would have both a production and a test wiki. However, when I checked out the latest trunk revision, ran the install script and update.php, and then accessed http://<wiki path>/index.php, the installation gets into an infinite redirect loop. When I attempted to debug this (using NetBeans 6.7 and Xdebug), the redirect doesn't appear; that is, Main_Page is rendered and displayed. The only difference between the two URLs is that the first uses http://<wiki path>/index.php (which redirects to http://<wiki path>/index.php/Main_Page), while the debug session specifies http://localhost/MediawikiTest/Latest%20Trunk%20Version/phase3/index.php?XD….
I need some help figuring out what is happening. I imagine using this list for that purpose would be inappropriate. So, if someone would volunteer to help me (email me at the from address in this email), I can get the parserTest changes tested against the newest revision. I can then open a bug (or use an already open bug) and attach the patch and edited parserTests.txt file to it.
Thanks! Here are the values that cause entry into the else-if statement:
$targetUrl === 'http://localhost/MediawikiTest/Latest Trunk Version/phase3/index.php/Main_Page'
$action === 'view'
$request->data === <null array>
$this->GET === <null array>
$title->mDbkeyform === 'Main_Page'
_SERVER[REQUEST_METHOD] === GET
I can see why the redirect else-if is entered (no title= parameter, $action === view), but the targetUrl looks OK to me. I'm not sure why the logic should analyze this case as a redirect.
P.S. When I previously dumped the GET request using HttpFox, it showed the following (I haven't figured out how to configure NetBeans to use Firefox, so this dump is from a separate run not using the debugger):
(Request-Line) GET /MediawikiTest/Latest%20Trunk%20Version/phase3/index.php/Main_Page HTTP/1.1
Host localhost
User-Agent Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.4; en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 300
Connection keep-alive
--- On Tue, 7/21/09, Aryeh Gregor <Simetrical+wikilist(a)gmail.com> wrote:
> From: Aryeh Gregor <Simetrical+wikilist(a)gmail.com>
> Subject: Re: [Wikitech-l] Continually falling through InitializeSpecialCases else-if
> To: "Wikimedia developers" <wikitech-l(a)lists.wikimedia.org>
> Date: Tuesday, July 21, 2009, 10:09 AM
> On Tue, Jul 21, 2009 at 12:34 PM, dan
> nessett<dnessett(a)yahoo.com>
> wrote:
> > else if( $action == 'view' && !$request->wasPosted() &&
> >     ( !isset($this->GET['title']) ||
> >       $title->getPrefixedDBKey() != $this->GET['title'] ) &&
> >     !count( array_diff( array_keys( $this->GET ), array( 'action', 'title' ) ) ) )
>
> $action == 'view': This is a normal page view (not edit, history, etc.)
>
> !$request->wasPosted(): This is a GET request, not POST.
>
> !isset($this->GET['title']) || $title->getPrefixedDBKey() != $this->GET['title']:
> Either the title= parameter in the URL is unset, or it's set but not
> to the same thing as $title.
>
> !count( array_diff( array_keys( $this->GET ), array( 'action', 'title' ) ) ):
> There is no URL query parameter other than "title" and "action"
> (e.g., no oldid=, diff=, . . .).
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
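The quoted term-by-term explanation can be restated as a commented sketch. This is Python-flavoured pseudocode of the PHP condition, not MediaWiki code; the function and parameter names are illustrative, and GET stands in for $this->GET:

```python
def is_view_redirect_case(action, was_posted, GET, prefixed_dbkey):
    """True for a plain GET page view whose title= parameter is missing
    or disagrees with the canonical title, with no query parameters
    beyond 'action' and 'title'."""
    return (
        action == 'view'                         # a normal page view
        and not was_posted                       # GET, not POST
        and GET.get('title') != prefixed_dbkey   # title= unset or mismatched
        and not (set(GET) - {'action', 'title'}) # no oldid=, diff=, ...
    )
```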
Can someone please help? I have only been working on this code for five days and I do not yet understand it. It turns out that the redirect happens in InitializeSpecialCases. The reason I get into a redirect loop is that the code continually falls into the else-if statement with the following conditions:
else if( $action == 'view' && !$request->wasPosted() &&
( !isset($this->GET['title']) || $title->getPrefixedDBKey() != $this->GET['title'] ) &&
!count( array_diff( array_keys( $this->GET ), array( 'action', 'title' ) ) ) )
This is a sufficiently complex expression that I have no idea what each term represents. There must be someone out there who understands it. I just need someone to explain it so I can figure out what is going wrong.
If making different namespaces per filetype isn't feasible, what about
making [[File:]] smarter, so it automatically returns the best way to
use the media: an <img> tag for images, video/audio tags (or fallbacks)
as appropriate? That way, if a file is replaced (e.g. an OGG over a
PNG), it still displays properly.
This is all dependent on stripping extensions from uploads, though.
-Chad
On Jul 20, 2009 6:15 PM, "Aryeh Gregor" <Simetrical+wikilist(a)gmail.com> wrote:
On Mon, Jul 20, 2009 at 6:20 AM, Dmitriy Sintsov<questpc(a)rambler.ru> wrote:
> I am not sure that the...
Maybe they don't retrieve the page in the first place, because they
don't want to waste bandwidth and processing time getting images. It
would be rather a waste to send dozens or hundreds of HEAD requests on
every Flickr page (or whatever) just to make sure that all those
things ending in a suffix universally accepted to designate images
really *are* images.
On Mon, Jul 20, 2009 at 9:45 AM, Nikola Smolenski<smolensk(a)eunet.yu> wrote:
> It's a necessary evil...
Well, that would make no difference if you actually downloaded the
content, or the first handful of bytes. It's easy to *very* reliably
distinguish binary image data from HTML if you get to look at the
first several bytes of the file.
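Sniffing the first few bytes is indeed enough for the common image formats. A minimal Python sketch of the idea (a real sniffer, such as PHP's finfo or python-magic, knows many more signatures):

```python
# Magic-number prefixes for a few common image formats.
MAGIC = {
    b'\x89PNG\r\n\x1a\n': 'png',
    b'\xff\xd8\xff': 'jpeg',
    b'GIF87a': 'gif',
    b'GIF89a': 'gif',
}

def sniff(first_bytes):
    """Guess a format from the opening bytes of a file."""
    for prefix, fmt in MAGIC.items():
        if first_bytes.startswith(prefix):
            return fmt
    # HTML/XML starts with markup text rather than binary image data.
    if first_bytes.lstrip()[:1] == b'<':
        return 'html-or-xml'
    return 'unknown'
```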
Anyway, I think the "right" way to do this would be to omit the suffix
from the page name entirely, treating the format as an implementation
detail. That way you could, for instance, upload an SVG over a PNG or
a PNG over a JPEG, and have all users be automatically updated without
manually changing the references. This does get a little confusing
when you consider totally different types of media, though, like audio
or video or PDF or whatnot. If NS_FILE (NS_IMAGE) weren't hardcoded
in thirty million places both in code and templates, I might suggest
different namespaces for different media types instead of one unified
File: namespace, but that seems impractical at this point.