Hello techs :)
I would like to ask about a specific bug that prevents showing the
diff for new pages in the RC RSS feed. Can someone please look at it
and tell me something about it?
http://bugzilla.wikimedia.org/show_bug.cgi?id=3996
Thank you in advance
--
Palica
http://sk.wikipedia.org/User:Palica
Nezabudni si vziať svoje Wikamíny. / Don't forget to take your
Wikamins today. - Palica
http://sk.wikipedia.org - the free encyclopedia that anyone can edit -
EVEN YOU
Hello
Sorry, English is the common language for this newsgroup; please ignore
my previous posting.
I would like to import wiki entries created with DokuWiki
(http://wiki.splitbrain.org/wiki:dokuwiki) into the MediaWiki database.
The data are stored in UTF-8 text files. Is a converter already
available?
I think the easiest way could be a direct import into the MySQL
database. So I need a small script that uses the functions of MediaWiki,
like insertNewArticle. The script has to include LocalSettings.php,
connect to the database, log in as an admin and insert my
text as a new page (text, summary, ...).
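For the markup side, a first pass could look like this minimal sketch (Python, purely illustrative; it only handles bold, italics and headlines, the database insert is left out, and the headline mapping assumes DokuWiki's `====== x ======` is H1 while MediaWiki's `= x =` is H1):

```python
import re

def dokuwiki_to_mediawiki(text):
    # bold: DokuWiki **x** -> MediaWiki '''x'''
    text = re.sub(r'\*\*(.+?)\*\*', r"'''\1'''", text)
    # italics: //x// -> ''x''  ("(?<!:)" is a crude guard against URLs)
    text = re.sub(r'(?<!:)//(.+?)//', r"''\1''", text)

    # headlines: DokuWiki counts down (====== is H1), MediaWiki counts up
    def headline(match):
        level = 7 - len(match.group(1))      # ====== -> 1, ===== -> 2, ...
        return '=' * level + ' ' + match.group(2).strip() + ' ' + '=' * level

    text = re.sub(r'^(={2,6})(.+?)\1\s*$', headline, text, flags=re.MULTILINE)
    return text
```

Tables, links, lists and the rest of the DokuWiki syntax would need rules of their own; the point is only that a plain text-to-text translation is feasible before any database work.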
thanks in advance
andreas
Thanks Brion for reminding me that
http://foo is replaced with a link.
But, sorry for asking again: is there a proper way to get this kind of
link in a template to work (MediaWiki 1.5.0)?
<iframe charset="iso-8859-1" xml-encoding="iso-8859-1"
src="http://rcm-fr.amazon.fr/e/cm?t=httpwwwfxpa03-21&o=8&p=8&l=as1&asins={{{1}}}…"
style="width:120px;height:240px;float:left;" scrolling="no"
marginwidth="0" marginheight="0" frameborder="0"></iframe>
In MW 1.3, I hacked the regexp in Parser.php a bit so that anything
beginning with an '=' was left out of attribute replacement.
Here (MW 1.5.0), I tried simply surrounding it with <html></html>, but
with no success.
I'm really not sure there is a way to write this in wiki language, but
if someone knows...
Thanks for any solution
François
Moin,
my graph extension (see http://bloodgate.com/perl/graph) returns either
HTML or SVG. However, MediaWiki treats the returned text as wikitext,
i.e. it parses it, attempts to transform it into HTML and then sanitizes it.
The latter step should be done for security reasons - after all, my
extension processes user input and I wouldn't trust it completely to
always return well-formed HTML.
However, I would like to somehow announce that my extension already returns
HTML. At the moment the returned text cannot contain leading spaces or
empty lines, because these will be turned into "<p>" or "<pre>", completely
destroying the output.
From the comment in the example extension (on which I based mine) I read
that the extension should return HTML, but it seems the returned text
will be treated as wikitext and not just as HTML. Is it possible to
change this?
(I have the feeling that I already asked that, but couldn't find it
again :)
Another, related problem is SVG output; this too is treated first as
wikitext, then as HTML, and in this process the output is destroyed. Now
it would be possible to handle SVG output like <math> does, i.e.
producing an external file and then referencing it. However, I would
prefer to generate inline SVG. For this to work the output must be passed
through unaltered, because the HTML sanitizer seems to not like SVG at
all (no surprise :)
Has anybody done something in this regard?
The code for my extension is inlined below, you can find the complete
package at my site mentioned above.
If this isn't the right place to ask this question, please kindly redirect
me.
Best wishes,
Tels
<?php
# Graph MediaWiki extension
# (c) by Tels http://bloodgate.com 2004-2005
# Takes text between <graph> </graph> tags and runs it through the
# external script "graphcnv", which generates an ASCII, HTML or SVG
# graph from it.

$wgExtensionFunctions[] = "wfGraphExtension";

function wfGraphExtension() {
    global $wgParser;
    # Register the extension with the WikiText parser; the second
    # parameter is the callback function for processing the text
    # between the tags.
    $wgParser->setHook( "graph", "renderGraph" );
}

# for Special:Version:
$wgExtensionCredits['parserhook'][] = array(
    'name' => 'graph extension',
    'author' => 'Tels',
    'url' => 'http://bloodgate.com/perl/graph/',
    'version' => 'v0.13 using Graph::Easy v' .
        `perl -MGraph::Easy -e 'print \$Graph::Easy::VERSION'`,
);

# The callback function for converting the input text to HTML output
function renderGraph( $input ) {
    global $wgInputEncoding;
    if( !is_executable( "graph/graphcnv" ) ) {
        return "<strong class='error'><code>graph/graphcnv</code> is not executable</strong>";
    }
    $cmd = "graph/graphcnv " .
        escapeshellarg( $input ) . " " .
        escapeshellarg( $wgInputEncoding );
    $output = `$cmd`;
    if( strlen( $output ) == 0 ) {
        return "<strong class='error'>Couldn't execute <code>graph/graphcnv</code></strong>";
    }
    return $output;
}
?>
--
Signed on Thu Nov 17 11:52:40 2005 with key 0x93B84C15.
Visit my photo gallery at http://bloodgate.com/photos/
PGP key on http://bloodgate.com/tels.asc or per email.
"Some spammers have this warped idea that their freedom of speech is
guaranteed all the way into my hard drive, but it is my firm belief that
their rights end at my firewall." -- Nigel Featherston
Hello!
You are receiving this email because your project has been selected to
take part in a new effort by the PHP QA Team to make sure that your
project still works with to-be-released PHP versions. With this we hope
to make sure that you are aware of things that might break, and that we
don't introduce any strange regressions. With this effort we hope to
build a better relationship between the PHP team and the major
projects.
If you do not want to receive these heads-up emails, please reply to me
personally and I will remove you from the list; but we hope that you
want to actively help us make PHP a better and more stable tool.
The first release candidate of PHP 4.4.2 can be found at
http://downloads.php.net/derick/ . If everything goes well, we hope to
release PHP 4.4.2 next Tuesday. If you find any issues, please contact
the PHP QA team at "php-qa(a)lists.php.net".
The main things that this release addresses are:
- problems with mod_rewrite and Apache 2
- the key() and current() bug
Please test those cases extra carefully on as many platforms as
possible.
In case you think that other projects should also receive these kinds of
emails, please let me know privately, and I will add them to the list of
projects to contact.
regards,
Derick
--
Derick Rethans
http://derickrethans.nl | http://ez.no | http://xdebug.org
The Wikipedia clone site http://www.splammer.com/ appears to me to be
copying Wikipedia content in real time, apparently by reading the raw
Wikitext, rather than running from a dump.
Could someone take a look to see whether this is the case?
-- Neil
There are situations where ip based blocking is overbroad (many users
behind a proxy) and situations where it is ineffective (user can
change IP). As a result some people have thought it desirable to be
able to block users based on a cookie, which although not foolproof
itself would be a useful additional tool.
I'd like to propose we implement half of that, to gain something which
is useful right away but would require almost no work: cookie-based
sockcheck.
When a user edits, we request a cookie, "usertoken" or whatever. If
they do not have one, we generate a long random number and give them
one. Every edit made by that browser (no matter which user is logged
in) returns the cookie. We add an extra column to recentchanges
to store this value.
A new version of sockcheck is produced that finds users who share
revisions with the same token, much like we can already do with IPs.
Voilà, cookie-based sockcheck.
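As a sketch of the two pieces (Python, with all names invented for illustration; the real version would of course read the token pairs from the extra recentchanges column):

```python
import secrets
from collections import defaultdict

def new_usertoken():
    # Long random value handed out in the "usertoken" cookie on a
    # browser's first edit.
    return secrets.token_hex(16)

def shared_token_users(edits):
    # edits: iterable of (username, usertoken) pairs, one per revision.
    # Returns only the tokens that more than one account has edited
    # with, i.e. the sockpuppet candidates.
    by_token = defaultdict(set)
    for user, token in edits:
        by_token[token].add(user)
    return {t: users for t, users in by_token.items() if len(users) > 1}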
Thoughts?
Hi,
I would like to request the creation of Nedersaksisch (Low Saxon (NL))
as soon as possible. It has been discussed for over 5 months now, most
things have been cleared up, and most people agree the wiki should be
created soon, for the benefit of the Low Saxon community in the
Netherlands and, I guess, of Wikipedia. There are 2 oppose votes: one
from Node ue (who deletes the request from the approved page almost
daily; it has been put back again by various users), the second from
a non-active anonymous user at Meta-Wiki.
Romani (Vlax Romany) has 1 "temporary against vote", which is also from
a non-active anonymous user at Meta-Wiki. The majority, however, supports
the creation, and the wiki should be created as soon as possible so that
the Romani community can also start their own Wikipedia.
Summary:
1. Nedersaksisch
2. Romani
More info can be found at:
http://meta.wikimedia.org/wiki/Approved_requests_for_new_languages
or at: http://meta.wikimedia.org/wiki/Template:Requests_for_new_languages/nds-nl
and http://meta.wikimedia.org/wiki/Template:Requests_for_new_languages/Vlax_Romany.
Regards,
Servien Ilaino
Well, we are building the NAP Wikipedia, and of course there are parts
where one can easily transfer data by just translating it from one
Wikipedia to the other. In this case we already uploaded the calendar -
and now it would make sense to transfer the contents of the Italian
Wikipedia there by translating it - people and events stay the same. So
what I would now like to achieve is:
1) having a dump of the Italian Wikipedia
2) extracting all pages of the calendar
3) translating them with the help of OmegaT
Why OmegaT? Well, in the sections "Born" and "Died", after the name of
the person you very often find just "actor, actress, writer, politician"
or whatever - this means that there would be quite a lot of 100% matches,
and the translation would be much faster with the tool than without.
These lines are all created like this:
*[[name of the person]] description
Now to have the possibility to get 100% matches I need at least a line
break after
*[[name of the person]]
that needs to be taken out again after having translated the file.
Then in a second step, with the help of a bot, the translated parts can
be transferred into the articles.
Well, what I now need is some advice on how to get this done - and then
what can be done for Neapolitan can easily be repeated for other languages.
This means I need some help with this regular expression ... I mean
some code that runs through the data, inserts the line break and, after
translation, takes it out again.
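Such a pair of passes could look like this minimal sketch (Python; the `*[[name of the person]] description` pattern is the one shown above, everything else is an assumption):

```python
import re

def insert_breaks(text):
    # Put a line break after each "*[[...]]" link so OmegaT sees the
    # description as a segment of its own (and can find 100% matches).
    return re.sub(r'^(\*\[\[[^\]]+\]\])[ \t]*', r'\1\n',
                  text, flags=re.MULTILINE)

def remove_breaks(text):
    # After translation, re-join the link and its (now translated)
    # description onto one line again.
    return re.sub(r'^(\*\[\[[^\]]+\]\])\n', r'\1 ',
                  text, flags=re.MULTILINE)
```

The two functions are inverses of each other for the `*[[name]] description` lines, so the file can be split before translation and re-joined afterwards; lines that consist of a bare `*[[name]]` with no description would need an extra rule.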
Who can help me with this? Btw, the TMX (translation memory) is going to
be available under the GFDL for anyone - well, this should be obvious.
This text was originally posted here (in order to allow collecting
how-tos):
http://www.wesolveitnet.com/modules/newbb/viewtopic.php?topic_id=83&post_id…
Thank you!!!
Ciao, Sabine
*****
Sabine Cretella
http://www.wordsandmore.it
s.cretella(a)wordsandmore.it
skype: sabinecretella