Hi Matt,
I've done some extensive work with skinning MediaWiki over the last
couple of months: things like displaying different styles depending on
which wiki page was being shown (and making DB calls in some places to
figure that out...).
That said, I found I got the biggest bang for my buck (by far) by simply
changing the stylesheets. If you look at the page source for MediaWiki,
you'll find that pretty much everything has a CSS class associated with
it, which makes hiding/showing/moving things on the page very easy. I'm
continually impressed with this aspect (as well as others, of course) of
the software.
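For example, with the default MonoBook skin a few stylesheet rules go a
long way (these selectors are from memory, so double-check them against
your own page source):

    /* restyle the content area */
    #content { font-family: Georgia, serif; }
    /* hide the sidebar "toolbox" portlet */
    #p-tb { display: none; }
    /* hide the per-section [edit] links */
    .editsection { display: none; }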
Contact me offline if you have specific questions; I'd be happy to walk
you through some examples.
ry
> Hello,
>
> I was wondering if anyone here has any experience with, or any
> information on, skinning wikis.
> I am looking for white papers and/or a tutorial. I am surprised not to
> find more on the web about it, since it is no easy task for those who
> have not already been through it.
>
> Anyway, any info on skinning a wiki would be really appreciated - I am
> trying to find folks who want to share any lessons learned or
> information on the matter.
>
> Thank you.
I'm planning to parse the page text from the Wikipedia downloads.
Is there a document describing all the supported markup (past and
present), or is the PHP code all there is to go on?
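(For context, this is roughly how I intend to get at the raw wikitext,
using PHP's Expat bindings; the <text> element name is taken from the
pages-articles XML as I understand it:)

    <?php
    // Stream a pages-articles dump and print each page's raw wikitext.
    $inText = false;
    $buf = '';

    function startTag( $parser, $name, $attrs ) {
        global $inText;
        if ( $name == 'text' ) { $inText = true; }
    }
    function endTag( $parser, $name ) {
        global $inText, $buf;
        if ( $name == 'text' ) {
            print $buf . "\n----\n";  // one page's wikitext, then a separator
            $buf = '';
            $inText = false;
        }
    }
    function charData( $parser, $data ) {
        global $inText, $buf;
        if ( $inText ) { $buf .= $data; }
    }

    $xp = xml_parser_create( 'UTF-8' );
    xml_parser_set_option( $xp, XML_OPTION_CASE_FOLDING, 0 );
    xml_set_element_handler( $xp, 'startTag', 'endTag' );
    xml_set_character_data_handler( $xp, 'charData' );
    $fh = fopen( 'pages-articles.xml', 'r' );
    while ( !feof( $fh ) ) {
        xml_parse( $xp, fread( $fh, 65536 ), feof( $fh ) );
    }
    fclose( $fh );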
Hi all,
I'd like some hints on how I could configure a multi-language website
with MediaWiki in the same way it was set up at DevMo
(http://developer.mozilla.org).
The setup I want is to have URIs of the form:
* http://www.mywebsite.com/en/ for English
* http://www.mywebsite.com/fr/ for French
* http://www.mywebsite.com/es/ for Spanish
* etc.
I want to set it up on a GNU/Linux Debian server with Apache.
I'm afraid the only way to do this is to set up as many MediaWiki
installs as I have languages and then generate interwiki links.
Is that right?
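(Or could one codebase serve all of them? I imagine something like the
following in LocalSettings.php - a rough sketch, with made-up database
names and no error handling:)

    <?php
    # Sketch: pick the wiki by the language prefix in the URL.
    $knownLangs = array( 'en', 'fr', 'es' );
    $lang = 'en';  # fallback
    if ( preg_match( '!^/([a-z]{2})/!', $_SERVER['REQUEST_URI'], $m )
         && in_array( $m[1], $knownLangs ) ) {
        $lang = $m[1];
    }
    $wgLanguageCode = $lang;
    $wgScriptPath   = '/' . $lang;      # /fr/index.php, /es/index.php, ...
    $wgDBname       = 'wiki_' . $lang;  # one database per language
    $wgSitename     = 'MyWebsite';

Each prefix would still need an Apache Alias (or a symlink) pointing at
the same MediaWiki directory.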
Thanks.
Mathieu
Hello everyone,
First of all, a big thank-you for your help, which I appreciate very much.
I now have a question about searching for accented characters:
- The help pages say that a search for the word "église" or "eglise" should give the same result.
- In our setup, MediaWiki 1.4.5, a search for the word "créer" returns 17 articles, but a search for the word "creer" returns no results...
How can I make the results the same in both cases?
Which setting, in which configuration file?
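(My naive guess is that the index would have to fold accents away, both
when indexing and when searching - something like the helper below.
Whether MediaWiki 1.4.5 has a setting or hook for this is exactly what I
don't know; the function is my own invention:)

    <?php
    # Hypothetical accent-folding helper, not an existing
    # MediaWiki function.
    function foldAccents( $s ) {
        $map = array(
            'à' => 'a', 'â' => 'a', 'ä' => 'a',
            'é' => 'e', 'è' => 'e', 'ê' => 'e', 'ë' => 'e',
            'î' => 'i', 'ï' => 'i', 'ô' => 'o', 'ö' => 'o',
            'ù' => 'u', 'û' => 'u', 'ü' => 'u', 'ç' => 'c',
        );
        return strtr( $s, $map );
    }
    # foldAccents( 'créer' ) and foldAccents( 'creer' ) both
    # yield 'creer'.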
Thanks in advance, and see you soon,
Philippe Roth
It's a problem with plural forms in some languages. For example, Russian
has 3 plural forms, depending on the count modulo 10 (and modulo 100,
for 11-14). Some translations (transliterated):
120 articles = 120 statEY
121 articles = 121 statYA
122 articles = 122 statYI
123 articles = 123 statYI
125 articles = 125 statEY
126 articles = 126 statEY
See http://en.wikipedia.org/wiki/Plural
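The selection rule itself, including the 11-14 exception, is roughly
this (a sketch of what my function does):

    <?php
    # Pick one of three Russian plural word forms for $count.
    function convertPluralForm( $count, $form1, $form2, $form3 ) {
        if ( $count % 100 >= 11 && $count % 100 <= 14 ) {
            return $form3;      # 11..14 always take form 3
        }
        switch ( $count % 10 ) {
            case 1:
                return $form1;  # 1, 21, 31, ...   statYA
            case 2:
            case 3:
            case 4:
                return $form2;  # 2..4, 22..24, ... statYI
            default:
                return $form3;  # 0, 5..9, ...      statEY
        }
    }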
So I have written a function "convertPluralForm" in Language.php and
LanguageRu.php, invoked as:
{{NUMBEROFARTICLES}}
{{pluralform:{{NUMBEROFARTICLES}}|wordform1|wordform2|wordform3}}
and it works fine, but it doesn't work in MediaWiki messages. For
example, in "MediaWiki:categoryarticlecount" ("There are $1 articles in
this category."), with
$1 {{pluralform:$1|wordform1|wordform2|wordform3}}
the substitution of the number happens later than the function call
(pluralform receives not a number but the literal string "$1").
Any ideas?
--
meta:ajvol
Dear WikiTech ML,
I'd like to ask the wiki programmers for a wiki-software
improvement. Right now, if we have two words that mean the
same thing, we have to write the article under one of them
and redirect the other to it:
ex. States or US or USA redirect to United States of
America
That forces wiki writers to create tons of boring redirects
(one by one), for:
1) real redirects (different ways to say the same thing)
ex. Soya cheese redirects to Tofu
2) synonyms (different words that mean the same thing)
ex. Gauffres and Waffles
3) likely errors (wrong words whose meaning is guessable)
ex. State for States.
That means that "equivalent expressions", synonyms, and
errors all have to be handled in the same way.
I suggest it would be easier and more effective to have:
A) a REDIRECT tag
B) a SYNONYMS list to be written at the very end of the
article
C) a fuzzy system (I suppose that's the right name) that
grabs the correct meaning from your typing errors, like
Google's "Maybe you're looking for..."
That would help those who write, those who search, and
those who write later using correct synonyms; the last
group could link to an existing article in the easiest
possible way.
ex. If I write an article "Waffle" with an active synonyms
list containing "Gauffre", you can write your article using
[[Gauffre]] directly, instead of:
writing [[Gauffre]] in your article,
opening it,
writing #REDIRECT [[Waffle]],
and doing the same for the plural form.
Plus, in most languages you have to handle the singular and
plural forms of the feminine, neuter, or masculine gender
of words...
ex. I'm writing an Italian article for Beignets. I have to
write redirects for:
Bigné
Bignè
Bignole
Choux (all correct forms for the same thing).
Plus the singular forms
Beignet
Bignola
That is to say, 7 redirects... which could instead be a
simple active list at the end of the main article.
A fuzzy system would help to work out "bigne" and "bigne'"
for those who search and don't know which accented e to
use, or whether the wiki system will accept accents at all.
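To make the idea concrete: the software could expand such a
list into ordinary redirects behind the scenes. A toy sketch
(names invented by me, nothing to do with how MediaWiki
actually stores pages):

    <?php
    # Toy sketch: expand a synonyms list into redirect page texts.
    function makeRedirects( $mainTitle, $synonyms ) {
        $pages = array();
        foreach ( $synonyms as $syn ) {
            $pages[$syn] = '#REDIRECT [[' . $mainTitle . ']]';
        }
        return $pages;
    }
    # makeRedirects( 'Waffle', array( 'Gauffre', 'Gauffres', 'Waffles' ) )
    # returns three redirect pages, all pointing at [[Waffle]].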
Hoping it will be possible,
Best regards,
Valentina Faussone
Thanks Brion, very neat.
One more question: it would be nice if each project also wrote a line to
a global, publicly accessible log. People could monitor it to see that a
dump has started and how much progress has been made.
E.g. at http://download.wikimedia.org/dumpprogress.txt
2005-Sep-03 12:18:30 special commons / completed normally
2005-Sep-03 12:27:10 special sources / completed normally
2005-Sep-03 12:39:12 wikiquote ab / completed normally
2005-Sep-03 12:39:23 wikiquote ac / completed normally
2005-Sep-03 12:40:23 wikiquote ae
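(Implementation-wise it could be as small as one append per step - a
sketch, without knowing how the dump scripts are actually structured,
and with a made-up log path:)

    <?php
    # Append one progress line to the public log (sketch).
    function logDumpProgress( $project, $lang, $status = '' ) {
        $line = gmdate( 'Y-M-d H:i:s' ) . " $project $lang";
        if ( $status != '' ) {
            $line .= " / $status";
        }
        $fh = fopen( '/var/www/download/dumpprogress.txt', 'a' );
        fwrite( $fh, $line . "\n" );
        fclose( $fh );
    }
    # logDumpProgress( 'wikiquote', 'ab', 'completed normally' );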
Cheers, Erik Zachte
Dear ML,
I'm unsubscribing from wikitech because of overwork.
If somebody needs me, please use my e-mail address.
Thanks,
Valentina Faussone
We're now accepting XFF (X-Forwarded-For) headers from NTL proxies. This means that NTL
users will now appear to be editing from their home IP address, rather
than from a proxy. Blocks will be specific to a particular NTL
connection. This should be a big help in dealing with a couple of really
annoying NTL vandals who have been operating on en, e.g. MARMOT.
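For the curious, the logic amounts to walking the XFF chain backwards
while the connecting address is a trusted proxy. Roughly this (a
simplified sketch, not the production code):

    <?php
    # Simplified sketch of XFF trust-walking.
    # $xff is the X-Forwarded-For header: "client, proxy1, proxy2".
    function resolveClientIp( $remoteAddr, $xff, $trustedProxies ) {
        $ip = $remoteAddr;
        $hops = array_reverse( array_map( 'trim', explode( ',', $xff ) ) );
        foreach ( $hops as $hop ) {
            if ( !in_array( $ip, $trustedProxies ) ) {
                break;   # current address is not a proxy we trust
            }
            $ip = $hop;  # step one hop closer to the real client
        }
        return $ip;
    }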
The proxy list I'm using comes from:
http://ben.cheetham.me.uk/resources/net/ntl-proxy-list
That was the most up-to-date list I could find; it's possible some
proxies are missing. If anyone has a better source for this information,
please speak up.
Big thanks to NTL for being a good Internet citizen and sending out this
information, as opposed to our arch-nemesis ISP, AOL.
-- Tim Starling