Gentlemen, let's say one is afraid one day the forces of evil will
confiscate one's small wiki, so one wants to encourage all loyal users to
keep a backup of the whole wiki (current revisions fine, no need for
full history).
OK, we want this to be as simple as possible for our loyal users, just
one click needed. (So forget Special:Export!)
And, we want this to be as simple as possible for our loyal
administrator, me. I.e., use existing facilities: no cron jobs running
dumpBackup.php (or even mysqldump, which would give away too much
information) and then offering a link to what they produce.
The format desired is for later making a new wiki via Special:Import,
so indeed the Special:Export or dumpBackup.php --current outputs are
the desired format.
I just can't figure out the right
http://www.mediawiki.org/wiki/API URL recipe...
api.php?action=query&generator=allpages&format=xmlfm&...?
Could it be that the API lacks the "bulk export of XML formatted data"
capability of Special:Export?
If one click is not enough, then at least one click per namespace. I
would just have the users back up Main: and Category:, for example.
Embedding the API URL would be no problem, I would just use
[{{SERVER}}/api.php?... Backup this whole site to your disk]
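For what it's worth, the query API does expose Special:Export-style bulk
XML via its export flag, so a per-namespace recipe along these lines may
work (parameter names as I recall them from the API docs; gaplimit=max
may be capped lower for non-privileged users):

```python
from urllib.parse import urlencode

def export_url(server: str, namespace: int = 0) -> str:
    """Build a one-click export URL for all pages in one namespace.

    Sketch only: assumes the wiki's api.php supports action=query with
    the `export` flag, as documented for the MediaWiki API.
    """
    params = {
        "action": "query",
        "generator": "allpages",
        "gapnamespace": namespace,  # 0 = Main, 14 = Category
        "gaplimit": "max",          # may be capped for ordinary users
        "export": "",               # ask for Special:Export-style XML
        "format": "xml",
    }
    return server + "/api.php?" + urlencode(params)

print(export_url("http://example.org/w", namespace=14))
```

The resulting URL could then be dropped into the `[{{SERVER}}/api.php?... ]`
link as above, one link per namespace.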
I'm very happy to announce that the Wikimedia Foundation is now opening
hiring for the Wikipedia Usability Initiative!
Realized by a grant from the Stanton Foundation, the goal of this
initiative is to measurably increase the usability of Wikipedia for new
contributors by improving the underlying software on the basis of user
behavioral studies, thereby reducing barriers to public participation.
We have three positions open, all local in San Francisco. See the linked
pages for details and how to submit your CV:
http://wikimediafoundation.org/wiki/Job_openings/Interaction_Designer_(proj…
http://wikimediafoundation.org/wiki/Job_openings/Sr._Software_Developer_(pr…
http://wikimediafoundation.org/wiki/Job_openings/Software_Developer_(projec…
The new team will be led by project manager Naoko Komura, who was very
helpful in organizing localization and translations for our recent
fundraiser, and will coordinate closely with me and the rest of
Wikimedia's core developers. Also joining the project will be Wikimedia
staff developer Trevor Parscal.
As always, all of Wikimedia's software development is open-source, and
we expect to be able to roll improvements into the live Wikipedia
environment and general MediaWiki releases over the course of the project.
-- brion vibber (brion @ wikimedia.org)
CTO, Wikimedia Foundation
San Francisco
Can we look into enabling $wgAllowCopyUploads on Wikimedia projects?
This would let Wikimedia work a lot better with external archives,
especially around large video files that are cumbersome for users to
download and then re-upload over our POST upload interface on home
Internet connections. In particular, archive.org's huge repository of
public domain and freely licensed footage now supports temporal URLs
for video:
http://metavid.org/blog/2008/12/08/archiveorg-ogg-support/
Not to mention adding some clips from Metavid's public domain
legislative collection to relevant articles ;)
As I understand it, we need to set up a proxy for the internal Apache
nodes to access the outside web? Who can look at enabling this?
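For reference, on a standalone wiki this is roughly a two-line
LocalSettings.php change (a sketch; the proxy host below is made up, and
$wgHTTPProxy is the setting I believe applies if the Apache nodes need
an outbound proxy):

```php
// Sketch of the LocalSettings.php changes (hypothetical proxy host):
$wgAllowCopyUploads = true;  // enable upload-by-URL in Special:Upload
$wgHTTPProxy = 'http://proxy.internal.example:8080';  // outbound proxy for the Apache nodes
```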
peace,
--michael
--------------------------------------------------
From: "Greg L" <greg_l_at_wikipedia(a)comcast.net>
Sent: Thursday, January 08, 2009 2:43 PM
To: "Voice of All" <jschulz_4587(a)msn.com>
Subject: StringFunctions
> JSchulz:
>
> Have you read the link here:
> http://en.wikipedia.org/wiki/Wikipedia_talk:Manual_of_Style_(dates_and_numb…
>
> …describing a scientific notation-formatting template? It could use a
> character-counting parser function that StringFunctions could handle if it
> worked well (which it doesn’t).
>
> I ran this by Jimbo and he thought having a character-counting parser
> function for Wikipedia made sense. He said he couldn’t pressure Wikipedia’s
> paid developers to make it and I should run it by Erik to see how he
> would like to have it handled. Erik referred me to wikitech.
>
> Do you have a volunteer developer in mind who might be interested in
> making StringFunctions bullet-proof?
>
> Greg
Hi,
I’m new to this venue so please have patience with me. Jimbo suggested
I contact Erik and Erik said I should post here.
Wikipedia authors of magic words and templates could really use a
character-counting parser function. All the background information can
be found here:
http://en.wikipedia.org/w/index.php?title=User_talk:Jimbo_Wales&oldid=26081…
- Developer_support_for_parser_function
In a nutshell though, there is currently a template on en.Wikipedia
called {{val}} that delimits numbers (placing what appear to be thin
spaces every three characters in scientific notation). It currently
must use math-based techniques to parse the value, and this results in
rounding errors 5–10% of the time.
A character-counting parser function would accept interrogations such
as “Are there more than four characters remaining in the string when
counting right from the decimal point?” and “If so, feed me three more
characters.” Such a parser function would be very handy for many other
purposes. With a good, bullet-proof parser function, our small army of
template authors could produce some nice new tools.
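To make the request concrete, here is the behavior in plain Python (a
sketch only; the function names are hypothetical, and the real feature
would be parser functions operating on template arguments in wikitext):

```python
# Sketch of the string operations template authors want, in plain Python.
# Function names are hypothetical illustrations, not a proposed API.

THINSP = "\u2009"  # thin space, the delimiter {{val}} appears to use

def digits_after_point(number: str) -> int:
    """How many characters remain to the right of the decimal point?"""
    _, _, frac = number.partition(".")
    return len(frac)

def group_fraction(number: str, group: int = 3) -> str:
    """Insert thin spaces every `group` digits, counting right from the point."""
    whole, point, frac = number.partition(".")
    chunks = [frac[i:i + group] for i in range(0, len(frac), group)]
    return whole + point + THINSP.join(chunks)

print(group_fraction("3.14159265"))  # "3.141 592 65" with thin spaces
```

Counting characters this way sidesteps the floating-point rounding that
the current math-based {{val}} parsing runs into.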
I can be reached at Greg_L_at_Wikipedia(a)comcast.net
Greg
You would not want to hard-code the high-quality or low-quality
version into the article / wiki syntax. The idea is that you just put
something like [[Stream:my_movie]] or [[File:my_movie.ogg]] (in your
example), and then, based on client bandwidth, client-supported
codecs, local language, client player preference, and the streams
available on the server, the JavaScript player chooses the appropriate
set of video, audio, and text streams to play back.
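The selection step might look something like this (an illustrative
sketch in Python rather than the player's actual JavaScript; every
field name here is made up):

```python
# Illustrative sketch of client-side stream selection; not the player's
# real code, and all field names are invented for the example.

def choose_stream(streams, bandwidth_kbps, supported_codecs, language):
    """Pick the best stream the client can actually play."""
    playable = [
        s for s in streams
        if s["codec"] in supported_codecs
        and s["bitrate_kbps"] <= bandwidth_kbps
    ]
    if not playable:
        return None
    # Prefer the client's language, then the highest bitrate that fits.
    playable.sort(key=lambda s: (s["lang"] == language, s["bitrate_kbps"]))
    return playable[-1]

streams = [
    {"codec": "theora", "bitrate_kbps": 400, "lang": "en"},
    {"codec": "theora", "bitrate_kbps": 1200, "lang": "en"},
]
print(choose_stream(streams, 800, {"theora"}, "en"))  # the 400 kbps stream
```

The point is that the choice happens at play time from the metadata on
the Stream: page, not in the article wikitext.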
It's not quite the same problem as the existing File namespace
transformation system. Files that share the same temporal meaning are
not _only_ programmatically transcoded from a single my_video.ogg
source; this is different from the thumbnail or flattened-SVG
derivatives. Aside from the issue of associating dubbed audio tracks,
users will never upload their original HD footage because of bandwidth
constraints, and since for maximum quality we want to create the
low-bandwidth derivatives from the original footage, the derivatives
will not really be a "temporary file" that can easily be regenerated.
We can certainly overload the File namespace as you suggest to enable
such associations, but the minimal hierarchy of a Stream namespace to
associate specific files with identical temporal meaning seems
cleaner/worthwhile to me. It will mean that a single File: page will be
associated with many actual "files" that may have different licenses etc.
The Stream namespace also contains interfaces for associated layers of
timed text; putting all those associations into the File namespace may
complicate things, but we can go that way if that's the consensus.
peace,
--michael
Platonides wrote:
> Michael Dale wrote:
>
>> If we want to support multiple quality settings for a single "stream"
>> this will require a bit more infrastructure. Specifically I propose we
>> add another namespace for temporal media called Stream: and have it
>> directly map to ROE xml something like: http://tinyurl.com/72x57r more
>> info on ROE http://wiki.xiph.org/index.php/ROE
>>
>> File:my_movie_low_quality.ogg and File:my_movie_high_quality.ogg would
>> soft redirect to Stream:my_movie and all the meta info would be stored
>> there. The Stream namespace also allows us to group other media tracks
>> that share a temporal meaning such as multiple language audio dubbing
>> and multilingual transcripts/ subtitles. The javascript player can then
>> dynamically select audio language and or subtitles based on the user
>> language.
>>
>
> No need. Users would use [[File:my_movie.ogg|high]] or
> [[File:my_movie.ogg|low]]
> The actual ROE file would be at
> upload/thumb/f/fa/my_movie.ogg/my_movie-ROE.ogg (or
> Special:MvExportStream as metavidwiki does) as other types do. Same for
> the other temporary files.
>
>
> _______________________________________________
> Commons-l mailing list
> Commons-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/commons-l
>