Hi, is there any way to download the wiki source of the articles in a given category of a given Wikipedia? I would like to download the articles in 'Théorie des ensembles' on the French Wikipedia in order to translate them for the Lombard one, and afterwards to do the same for other categories. Downloading the entire Wikipedia is far too much, whereas copying and pasting each article one by one is rather slow. Many thanks. Sincerely yours,
Claudi
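One way to script this, assuming the wiki exposes the api.php query interface with list=categorymembers (present in newer MediaWiki installations; Special:Export is an alternative on older ones), is a small fetch loop along the following lines. This is only an untested sketch; the category name and output file naming are just examples.

<?php
// Untested sketch: list the articles in one category and save their raw
// wikitext to local files. Assumes api.php with list=categorymembers is
// available on the target wiki; needs PHP 5.2+ for json_decode().
$wiki     = 'http://fr.wikipedia.org/w';
$category = 'Catégorie:Théorie des ensembles';

$url = $wiki . '/api.php?action=query&list=categorymembers'
     . '&cmtitle=' . urlencode( $category )
     . '&cmlimit=500&format=json';
$data = json_decode( file_get_contents( $url ), true );

foreach ( $data['query']['categorymembers'] as $member ) {
	$title = $member['title'];
	// action=raw returns the unrendered wiki source of a page
	$wikitext = file_get_contents(
		$wiki . '/index.php?title=' . urlencode( $title ) . '&action=raw' );
	file_put_contents( str_replace( '/', '_', $title ) . '.wiki', $wikitext );
}
?>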
Hi,
I would like to create a simple parser extension which creates a new page
(similar to the inputbox of Eric Moeller), but the idea of my parser
extension is to take the current categories of the article the inputbox is
placed on and automatically apply the same categories to the new page you
enter into the inputbox. This way, if you create a new article from a page
that is listed in the categories "Visual Basic" and "Coding Tips", the form
would automatically propose those categories on the newly created page,
which would greatly simplify the process a user has to go through to create
a page.
Now the problem: I've thrown some code together (see below), but I'm stuck at
the very start of my idea. The problem is that $wgOut->mCategoryLinks is an
empty array at that point (I would then pass the value of the categories via
a hidden input in the form).
Does anyone know of a way to determine the categories of the current
article at this point? Any other possible solution?
All ideas are very welcome.
Cheers,
Peter.
my code:
<?php
$wgExtensionFunctions[] = 'wfAParserExtension';

function wfAParserExtension() {
	global $wgParser;
	# register the parser extension with the wikitext parser
	$wgParser->setHook( 'newarticle', 'render_newarticle' );
}

# render_newarticle
# renders the <newarticle></newarticle> tag as a "create article" form
function render_newarticle( $input, $argv ) {
	global $wgOut, $wgScript;
	print_r( $wgOut->mCategoryLinks ); // debug: mCategoryLinks is an empty array here :(
	$action = htmlspecialchars( $wgScript );
	$createform = <<<ENDFORM
<table border="0" width="100%" cellspacing="0" cellpadding="0">
<tr>
<td>
<form name="newarticle" action="$action" method="get" class="newarticle">
<input type="hidden" name="action" value="edit" />
<input type="hidden" name="preload" value="" />
<input class="newarticleInput" name="title" type="text" value="" />
<input type="submit" name="create" class="newarticleButton" value="Add Article" />
</form>
</td>
</tr>
</table>
ENDFORM;
	return $createform;
}
?>
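For what it is worth, one possible workaround for the question above: $wgOut->mCategoryLinks is still empty inside a tag hook because the parser output has not been handed to the OutputPage yet, but the categories recorded the last time the page was saved can be read straight from the categorylinks table. A minimal, untested sketch, assuming the global $wgTitle points at the article being viewed:

function wfGetCurrentCategories() {
	global $wgTitle;
	$categories = array();
	$dbr = wfGetDB( DB_SLAVE );
	// categorylinks stores one row per (page, category) pair
	$res = $dbr->select(
		'categorylinks',
		'cl_to',
		array( 'cl_from' => $wgTitle->getArticleID() ),
		__METHOD__
	);
	while ( $row = $dbr->fetchObject( $res ) ) {
		$categories[] = $row->cl_to; // category name with underscores, no namespace prefix
	}
	$dbr->freeResult( $res );
	return $categories;
}

The hidden "preload" input could then be filled from the returned array when building the form.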
An automated run of parserTests.php showed the following failures:
Running test Table security: embedded pipes (http://mail.wikipedia.org/pipermail/wikitech-l/2006-April/034637.html)... FAILED!
Running test Link containing double-single-quotes '' (bug 4598)... FAILED!
Running test Template with thumb image (wiht link in description)... FAILED!
Running test message transform: <noinclude> in transcluded template (bug 4926)... FAILED!
Running test message transform: <onlyinclude> in transcluded template (bug 4926)... FAILED!
Running test BUG 1887, part 2: A <math> with a thumbnail- math enabled... FAILED!
Running test Language converter: output gets cut off unexpectedly (bug 5757)... FAILED!
Running test HTML bullet list, unclosed tags (bug 5497)... FAILED!
Running test HTML ordered list, unclosed tags (bug 5497)... FAILED!
Running test HTML nested bullet list, open tags (bug 5497)... FAILED!
Running test HTML nested ordered list, open tags (bug 5497)... FAILED!
Running test Parsing optional HTML elements (Bug 6171)... FAILED!
Running test Inline HTML vs wiki block nesting... FAILED!
Running test Mixing markup for italics and bold... FAILED!
Running test 5 quotes, code coverage +1 line... FAILED!
Running test HTML Hex character encoding.... FAILED!
Running test dt/dd/dl test... FAILED!
Passed 409 of 426 tests (96.01%) FAILED!
Sorry for crossposting this. It felt like wikisource-l was the most appropriate
list, but since the topic has been discussed a lot on wikitech-l it seemed
reasonable to post it there, too.
I agree the poem tag makes life easier on Wikisource; it saves loads of time
when putting poems there. I wonder if it would be a good idea to add another
semantic tag, one for the "intro" text before the actual poem. Sometimes
there is none, but usually there is the name of the poem and/or the name of
the author, and sometimes a little extra info.
At small wikisources, this little intro - like most text on the wikis -
is often in plain text. When the poem tag is applied, it does not look so
good. The result is like this:
http://sv.wikisource.org/wiki/Till_min_far
With no difference in indentation or font, it is kind of difficult to see
where the intro text ends and the poem starts, especially if we imagine a
very short intro. It is not appealing to the eye. One could add extra blank
lines; that would work, but on most wikis that method seems to be frowned
upon. English Wikisource has a set of templates for formatting the "intro"
part - here is an example:
http://en.wikisource.org/wiki/That_Day
So now, at least at Swedish Wikisource, experiments with similar templates
have been started. That is an option, but adding a semantic tag that then
creates a need for templates seems a bit awkward to me.
What is the solution here for the wikis that do not already have these
elaborate templates? One could do some wiki-specific adaptation of the poem
tag, so that it adds blank space above the poem - that is however not so
nifty when there actually is no "intro". Should we ask to get another
semantic tag for the intro? Or are templates, like at English Wikisource, the
main solution?
/habj
Folks
I am working on a project that would add some really exciting capabilities to MediaWiki. I am looking for a developer who knows the source code base and is interested in consulting on this project.
I’ve been working on modeling threats to election systems, the value of having a paper trail as well as other means to prevent election fraud.
The results are here:
http://www.brennancenter.org/programs/dem_vr_hava_machineryofdemocracy.html
We have budget to add threat-modeling capabilities to MediaWiki, which could be used for lots of other modeling tasks.
The work involves taking our catalogs of attacks on voting systems and representing them in ways that are more understandable, editable, and augmentable, yet still easy to manipulate and analyze automatically.
Anyone out there want to get paid to add some cool capabilities to MediaWiki? You will get to work with, and perhaps publish with, some of the major names in computer security, and perhaps develop a new methodology for decision making in the IT security space.
Send an email if there is ANY chance you would like to help, whether you take the lead and get paid, or consult and support our work to protect democracy as a volunteer.
The ideal candidate would know something about computer security.
Please write to ericlewisii(a)aim.com directly on this.
Eric
Hello!
You are receiving this email because your project has been selected to
take part in a new effort by the PHP QA Team to make sure that your
project still works with PHP versions that are yet to be released. With
this we hope to make sure that you are aware of things that might
break, and that we don't introduce any strange regressions.
With this effort we hope to build a better relationship between the PHP
team and the major projects.
If you do not want to receive these heads-up emails, please reply to
me personally and I will remove you from the list; but we hope that
you want to actively help us make PHP a better and more stable tool.
The first release candidate of PHP 5.2.0 was released today; it can
be downloaded from http://downloads.php.net/ilia/. This release
incorporates a large number of changes and new features so don't be
surprised if you come across a few bugs. If you discover any (we hope
not) please notify PHP's QA team at "php-qa(a)lists.php.net".
In case you think that other projects should also receive these kinds
of emails, please let me know privately, and I will add them to the
list of projects to contact.
Best Regards,
Ilia Alshanetsky
5.2 Release Master
In case anyone hasn't noticed, the number of MediaWiki extensions in
existence has soared in the past year. They are scattered all around
the internet, and it is a chore to make sure all of your extensions are up to
date.
In an attempt to alleviate the confusion of managing extensions, I propose a
more formal extension system.
Step 1: Overhaul how MediaWiki deals with extensions. Loading an extension
via 'require_once' is silly and has all sorts of limitations (for example,
if your extension file which modifies $wgExtensionFunctions is loaded from
within a function, $wgExtensionFunctions won't actually get modified unless
it is brought into the scope of the calling function). In addition, there is
no easy way to tell whether an extension is a special page extension, a
parser hook extension, a combination, etc. In my proposed system, MediaWiki
extensions would all be derived from a base 'Extension' class. There would be
interfaces that would allow an extension to become a SpecialPage extension,
parser extension, hook extension, etc. Furthermore, if extensions were
packaged as classes, we could give the base extension class useful
variables, such as "sourceURL", which would allow developers to provide a URL
to the most up-to-date version of an extension. Of course, the ultimate
benefit of turning extensions into classes is that it would make developing
extensions easier, since OOP gives you a building block for your work, not a
clean slate. A rough sketch of what this might look like follows.
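To make that concrete, here is a purely hypothetical sketch of the shape such a system could take; none of these class or interface names exist in MediaWiki, they are invented for illustration only:

<?php
// Hypothetical sketch only; the names below are made up.
abstract class Extension {
	/** Human-readable name, version, and a URL to the newest release */
	public $name;
	public $version;
	public $sourceURL;

	/** Called once at startup, replacing the old $wgExtensionFunctions entry */
	abstract public function setup();
}

/** Implemented by extensions that add a parser tag such as <newarticle> */
interface ParserHookExtension {
	public function getTagName();
	public function renderTag( $input, $argv );
}

/** Example extension combining both roles */
class FooExtension extends Extension implements ParserHookExtension {
	public $name      = 'Foo';
	public $version   = '0.1';
	public $sourceURL = 'http://example.com/extensions/Foo';

	public function setup() { /* register hooks, messages, ... */ }
	public function getTagName() { return 'foo'; }
	public function renderTag( $input, $argv ) { return htmlspecialchars( $input ); }
}
?>

A manager could then enumerate the installed Extension objects, inspect sourceURL, and compare local versions against the remote copies.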
Step 2: Write a manager for MediaWiki that allows you to load and upgrade
extensions remotely. Want to upgrade an extension? Just go to a special
page, hit the button to check for updates, and click the checkbox
next to the extension you want to update.
Critics out there will retort that this will slow things down. Yes, it
won't be as fast as explicitly typing require_once in LocalSettings.php.
However, the system could also be designed with speed in mind. For example,
it would be possible to serialize all the loaded extension objects into a
file (or shared memory) which is loaded for every page request. I take this
approach with my new Farmer extension (
http://www.mediawiki.org/wiki/User:IndyGreg/Farmer), which allows you to
specify which extensions are loaded via a web interface. The performance
hit is negligible.
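The caching side of that could be as simple as the following sketch (the function name and cache format are invented for illustration; Farmer's actual implementation may differ):

// Build the extension objects once, cache them serialized in a file, and
// reuse the cached copy on subsequent requests.
function wfLoadExtensionRegistry( $cacheFile, $builderCallback ) {
	if ( file_exists( $cacheFile ) ) {
		$extensions = unserialize( file_get_contents( $cacheFile ) );
		if ( $extensions !== false ) {
			return $extensions;
		}
	}
	// Cache miss: do the expensive discovery/instantiation once
	$extensions = call_user_func( $builderCallback );
	file_put_contents( $cacheFile, serialize( $extensions ) );
	return $extensions;
}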
Thoughts?
Greg
Hello,
I'll apologize in advance if this is the wrong mailing list or if I've
done anything wrong. I've never used something like this before, and was
not going to before I wrote that patch :D
I've filed a bug in MediaZilla
(#6794 -> http://bugzilla.wikimedia.org/show_bug.cgi?id=6794) and
written a patch for it which is working pretty well and which, in my opinion,
needs no further work (except for one Hebrew language string or
whatever that is, I think). You might want to take a look at the patch;
it's uploaded as an attachment to the bug, and an explanation goes with it.
It's about the ParserFunctions, also used on Wikipedia (I honestly hope
this one will be used there :-) ): a new one, {{for: }}. This might be
very useful to clean up things like {{Babel-1}} {{Babel-2}} ... {{Babel-n}}.
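(The actual patch is attached to the bug; purely to illustrate the general mechanism, a ParserFunctions-style function hook is registered roughly as in the sketch below. The function name and behaviour are invented here and are not the {{for:}} implementation from the patch.)

<?php
// Illustration only, not the patch from bug 6794.
$wgExtensionFunctions[] = 'wfSetupExampleFunction';

function wfSetupExampleFunction() {
	global $wgParser;
	// A matching magic word also has to be registered (e.g. via the
	// LanguageGetMagic hook); omitted here for brevity.
	$wgParser->setFunctionHook( 'example', 'wfExampleFunctionRender' );
}

// {{#example: a | b | c }} -> "a, b, c"
function wfExampleFunctionRender( $parser /* , ...args */ ) {
	$args = array_slice( func_get_args(), 1 );
	return implode( ', ', $args );
}
?>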
By the way, can any of you maybe answer and send me a link, maybe
to a tutorial, or explain how that participation thing works? I'm
willing to do some development but honestly do not have the time to
learn all that CVS, NNTP and whatever stuff (although I do see that it
won't be possible to ignore it for much longer...) - just for
the beginning ;) - thank you in advance.
With kind regards, Warhog
http://en.wikipedia.org/wiki/Special:Statistics
The job queue length on the English Wikipedia is high and has been
rising continuously. I suspect that something is stuck.
The resulting inconsistencies are confusing my bot a bit; I'll just work
around the problem in the meantime.
Thanks!
Beland