Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
Kind Regards,
Hugo Vincent,
Bluewater Systems.
Hi!
I've read on the techblog that the new UI goes live in April. I have
some questions:
1) What version? Acai, Babaco, or Citron?
2) How/where can a wiki customize the special character insert menu
and the strings it inserts? Also, the embed file (picture) button inserts
this: "[[Example.jpg]]", without any "File:" or "Image:" prefix!
3) The search and replace button is available in Firefox, but does not
appear at all in Opera. Why?
4) Currently the new navigable TOC does not work in Firefox or Opera at
all (the two browsers I've tried).
Isn't it too early for live deployment?
Regards,
Akos Szabo (Glanthor Reviol)
Sorry for bugging the list about this, but can anyone please explain
the reason for not enabling the Interlanguage extension?
See bug 15607 -
https://bugzilla.wikimedia.org/show_bug.cgi?id=15607
I believe that enabling it would be very beneficial for many projects,
and many people have expressed their support for it. I am not saying
that there are no reasons not to enable it; maybe there is a good
reason, but I don't understand it. I also understand that there are
many other unsolved bugs, but this one seems to have a ready and
rather simple solution.
I am only sending this to raise the issue. If you know the answer,
please comment on the bug page.
Thanks in advance.
--
Amir Elisha Aharoni
heb: http://haharoni.wordpress.com | eng: http://aharoni.wordpress.com
cat: http://aprenent.wordpress.com | rus: http://amire80.livejournal.com
"We're living in pieces,
I want to live in peace." - T. Moore
Hello all,
I would really, really love to have the ability to use the algorithmic
package inside MediaWiki. I am writing up the details of a
bunch of algorithms, and so far I have found it extremely painful to
do. If there is a better way to make my algorithms look beautiful,
I'd love to hear it. For your convenience, here is a link that shows
some nice output of the algorithmic package:
http://en.wikibooks.org/wiki/LaTeX/Algorithms_and_Pseudocode
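To make it concrete, here is a small example of the kind of LaTeX source
I am writing (my own hypothetical binary-search snippet, using the standard
algorithm and algorithmic packages); this is what I would like to be able
to render on-wiki:

\usepackage{algorithm}
\usepackage{algorithmic}
...
\begin{algorithm}
\caption{Binary search in a sorted array $A$ for value $x$}
\begin{algorithmic}[1]
\STATE $lo \gets 1$; $hi \gets |A|$
\WHILE{$lo \le hi$}
    \STATE $mid \gets \lfloor (lo + hi)/2 \rfloor$
    \IF{$A[mid] = x$}
        \RETURN $mid$
    \ELSIF{$A[mid] < x$}
        \STATE $lo \gets mid + 1$
    \ELSE
        \STATE $hi \gets mid - 1$
    \ENDIF
\ENDWHILE
\RETURN $-1$ \COMMENT{not found}
\end{algorithmic}
\end{algorithm}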
What needs to be done to add support for this? Is this something that
can be done, or does it conflict with some design principle behind the
current solution?
Thanks!
AJ
Hello,
I'm a member of the German language Wikipedia community and have a
question that no one has been able to give me a definite answer to so
far. I hope someone here can answer it, or point me to where I should
go to get a definite answer.
The question is: what level of self-determination do the 260 language
versions of Wikipedia have as to the design of their user interfaces
(skins)? Can individual wikis independently choose modifications to
their skins, and which of the available skins to use as the default
for unregistered users, or is this controlled centrally by the
Foundation?
For background, this question arose after the German language Wikipedia
(de.wikipedia.org) was switched from Monobook to Vector as the default
skin on the 10th of June 2010, resulting in considerable criticism
from the community. On the more sober side of the debate, it was asked
whether it would be theoretically possible to return to Monobook as
the default skin, at least for some time until the biggest known
issues with Vector have been fixed. Under the theoretical scenario
that a majority voted for a return to Monobook as the default skin,
would it be possible at all to switch it back? Or would the Foundation
not permit that?
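(Purely as a technical aside, and only my assumption of how the site
configuration works, not a statement about what the Foundation would allow:
as far as I know the default skin is a single configuration setting, so
switching it back would be technically trivial for the sysadmins, e.g.:

# In LocalSettings.php, or whatever central settings file the Foundation uses:
$wgDefaultSkin = 'vector';    # the current situation
# switching back would just be:
$wgDefaultSkin = 'monobook';

The question is therefore about who gets to decide, not about feasibility.)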
The question seems to be a very fundamental one and I would also
appreciate insights into the big picture. How independent are the
language versions? To what degree can they govern themselves and to
what degree are they bound by decisions made centrally by the
Foundation?
Thanks,
Martin
Is it possible (and if so, how) to completely replace the output of a
special page (including sending the headers, etc.)?
That is, can I use a SpecialPage as a SPARQL endpoint and have it output
pure RDF/XML (or similar) when a SPARQL query is sent to it via POST,
while otherwise just displaying the special page, including a form
for submitting a SPARQL query?
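Something like the sketch below is what I have in mind (only a rough
outline; SpecialSparqlEndpoint and runSparqlQuery are placeholders of mine,
and I am assuming that $wgOut->disable() followed by header()/echo is an
acceptable way to bypass the normal skin output):

class SpecialSparqlEndpoint extends SpecialPage {

	public function __construct() {
		parent::__construct( 'SparqlEndpoint' );
	}

	public function execute( $par ) {
		global $wgOut, $wgRequest;

		if ( $wgRequest->wasPosted() ) {
			// A SPARQL query was POSTed: suppress the normal skin/HTML
			// output and answer with raw RDF/XML and our own headers.
			$wgOut->disable();
			header( 'Content-Type: application/rdf+xml; charset=UTF-8' );
			echo $this->runSparqlQuery( $wgRequest->getText( 'query' ) );
			return;
		}

		// Plain GET: show the special page as usual, with a query form.
		$this->setHeaders();
		$wgOut->addHTML(
			'<form method="post">' .
			'<textarea name="query" rows="10" cols="60"></textarea><br />' .
			'<input type="submit" value="Run SPARQL query" />' .
			'</form>'
		);
	}

	private function runSparqlQuery( $query ) {
		// Placeholder: would pass the query to the triple store
		// and return the result serialized as RDF/XML.
		return "<?xml version=\"1.0\"?>\n" .
			"<rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\"/>\n";
	}
}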
Samuel
--
Samuel Lampa
---------------------------------------
Biotech Student @ Uppsala University
GSoC Student @ Semantic MediaWiki
---------------------------------------
E-mail: samuel.lampa[at]gmail.com
Mobile: +46 (0)70-2073732
Blog: http://saml.rilspace.org
Twitter: http://twitter.com/samuellampa
---------------------------------------
Hi All,
Recently I had a thought about making trackback.php attempt to discard
spam, so I've made this minor alteration. Could it be reviewed and
perhaps included? Thanks.
The purpose of this is simply to reject trackback requests when the
source of the connection is IPv4 and listed in an RBL (I don't know how
well maintained IPv6 RBL databases are).
This requires $wgUseTrackbacksRBL to be set to true in LocalSettings.php.
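Concretely, enabling it would be a single line (nothing else is assumed):

# LocalSettings.php
$wgUseTrackbacksRBL = true;   # enable RBL checking of trackback sources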
Index: trackback.php
===================================================================
@@ -43,17 +43,33 @@
 $tburl = strval( $_POST['url'] );
 $tbname = strval( @$_POST['blog_name'] );
 $tbarticle = strval( $_REQUEST['article'] );
+$tbip = strval( $_SERVER['REMOTE_ADDR'] );
 $title = Title::newFromText($tbarticle);
 if( !$title || !$title->exists() )
 	XMLerror( "Specified article does not exist." );
+
+# Reject the trackback if the connecting IPv4 address is listed in any RBL.
+if( $wgUseTrackbacksRBL == true && substr_count( $tbip, ":" ) == 0 && substr_count( $tbip, "." ) > 0 ) {
+
+	$rbl_list = array( "zen.spamhaus.org", "dnsbl.njabl.org", "dnsbl.sorbs.net", "bl.spamcop.net" );
+
+	foreach( $rbl_list as $rbl_site ) {
+		# Reverse the octets and look up <reversed-ip>.<rbl>; if the
+		# name resolves (differs from the input), the address is listed.
+		$ip_arr = array_reverse( explode( '.', $tbip ) );
+		$lookup = implode( '.', $ip_arr ) . '.' . $rbl_site;
+		if( $lookup != gethostbyname( $lookup ) ) {
+			XMLerror( $tbip . " is listed in " . $rbl_site );
+		}
+	}
+}
+
Thanks for your time.
--
Best regards,
Ed http://www.s5h.net/
There is a lot of potential in Wikisource, but it depends
heavily on the ProofreadPage extension, which has several
bugs that are reported but don't get fixed.
ThomasV is the main developer, and perhaps he is the only
maintainer? It would be in the interest of the Wikimedia
Foundation to assign a salaried developer or two to
developing a more robust framework for Wikisource, either
by improving the existing extension or by integrating
some or all of its functionality into MediaWiki proper.
People everywhere have a need to make some PDF (or Djvu)
document available on a website, page by page, with the
ability to add categories and talk pages. This ability
is what the ProofreadPage extension adds to MediaWiki.
In my mind, it is as essential as the support for uploading
JPEG images and automatically generating thumbnails.
Adding multipage documents to a wiki should be a far more
common need than adding mathematical equations.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
On 06/28/2010 11:25 PM, Birgitte SB wrote:
> Are you thinking of some bugs in particular? I would like to see what
> has already been said at Bugzilla if you have the numbers.
Forced now to look, I'm surprised to find there really are
only a few reported bugs against the ProofreadPage extension.
Some of the bugs I've long been waiting to see fixed are
instead in the PDF/Djvu page extraction, which is not
part of ProofreadPage:
https://bugzilla.wikimedia.org/show_bug.cgi?id=21526
https://bugzilla.wikimedia.org/show_bug.cgi?id=23326
I have a long-standing frustration with how over-complicated
ProofreadPage is, with its extra namespaces, new tags
and dozens of parameters that each newcomer needs to
learn, and it spilled over yesterday when I filed
https://bugzilla.wikimedia.org/show_bug.cgi?id=24168
If there are more bugs, they should be reported in
Bugzilla. But what also really needs to be done is to
reconsider whether ProofreadPage really needs to be this
complicated. Maybe I should file that as a bug?
I wish to clarify that my first post should not be
read as a personal attack on ThomasV. We all want
Wikisource to improve and grow, and we're working
together towards that goal.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se