Hi,
On Sunday 26 September 2004 09:00, Gerard Meijssen wrote:
> Jo wrote:
> > Hi Gerard,
> >
> > I added sound to this page as well. On Linux it plays automatically,
> > but way too fast. (I recorded it on Windows and then converted it,
> > if I remember correctly.)
> >
> > http://en.wiktionary.org/wiki/What%27s_your_name%3F
> >
> > Can you hear it correctly (on Windows)?
No problem listening to this on Linux with KPlayer, nor to the .ogg file
http://nl.wiktionary.org/wiki/Afbeelding:nl-Nederlands.ogg. I think any sound
software on Linux can be used; they all support Ogg as a standard.
> > Jo
IMO, it's just a matter of providing the right plug-in, along with
information on how to install it if needed. I don't want MP3 files in
Wikimedia projects.
Yann
--
http://www.non-violence.org/ | Collaborative site on non-violence
http://www.forget-me.net/ | Alternatives on the Net
http://fr.wikipedia.org/ | Free encyclopedia
http://www.forget-me.net/pro/ | Linux training and services
I've checked in some tweaks to support the PHPTAL 1.0.0 development
snapshot running on PHP 5.0. It's not 100% working right, but pages
display, which is nice.
Requires that you've:
* Downloaded the PHPTAL 1.0.0dev2 snapshot and installed it with pear
* Applied the two patches to it which I've posted to the phptal-users list
* Turned on $wgUsePHPTal manually in LocalSettings.php
* Made sure the include_path includes your PEAR directory.
-- brion vibber (brion @ pobox.com)
If you're interested in the Wikidata project but you can't directly
contribute in terms of code, you can help with some research:
http://kendra.org.uk/
http://dev1.kendra.org.uk/
These people are doing something very similar, i.e. building free-form
databases using a wiki-like model (see the second link).
We need to know:
1) What exactly are Kendra's capabilities?
2) Is there potential for cooperation between Kendra and Wikimedia?
If you want to research this, please respond to this message so that
there's no needless duplication of effort. You could use
http://meta.wikimedia.org/wiki/Kendra_evaluation
to chronicle your findings.
Regards,
Erik
Completely ignoring the progress on the corresponding meta page, I went
ahead and created a database feature for the wiki.
It is in CVS HEAD. You'll have to run the two CREATE queries (in
SpecialData.php, as a comment) and set $wgUseData = true; in your
LocalSettings.php.
A new "Data" namespace contains the display/edit form for entering the
data sets. NOTE: This is mono-language only at the moment, but I don't
see a problem in either putting many language versions into it, or use
data namespaces from several databases.
Example [[Data:Movie]]:
{| cellpadding=5
!Field!!width='50%'|Value!!Notes
|-
|Title||((!title/line))||
|-
|Year||((year/number))||
|-
|Tagline||((tagline/line))||
|-
|Plot summary||((plot/multiline))||
|-
|Actors||((actors/multiline))||
|-
|Runtime||((runtime/number))||min
|-
|Country||((country/line))||
|-
|Color||((color/dropdown/Technicolor/B&W))||
|}
This displays similarly to the mock-up I found on meta. Keys are defined
like ((this)); parameters for the edit screen are separated by a "/".
Note the multiple options for the ((color)) key, and the ((!title)) key,
where the "!" marks the primary key. Multiple versions (history) are
kept for entries with the same primary key. Currently, only one primary
key is allowed, but it would be no real problem to change that.
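For illustration, a rough Python sketch of how such field definitions
could be parsed. The actual code lives in SpecialData.php (and is PHP);
the regex and the returned structure below are my own guesses, not the
real implementation:

import re

# Hypothetical parser for the ((key/param/...)) syntax described above.
FIELD_RE = re.compile(r'\(\((.*?)\)\)')

def parse_field(definition):
    parts = definition.split('/')
    key = parts[0]
    field_type = parts[1] if len(parts) > 1 else 'line'
    return {
        'key': key.lstrip('!'),
        'type': field_type,
        'options': parts[2:],            # e.g. the dropdown choices
        'primary': key.startswith('!'),  # "!" marks the primary key
    }

row = "|Color||((color/dropdown/Technicolor/B&W))||"
for match in FIELD_RE.finditer(row):
    print(parse_field(match.group(1)))
# -> {'key': 'color', 'type': 'dropdown',
#     'options': ['Technicolor', 'B&W'], 'primary': False}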
The actual data entry/display is done via Special:Data. At the moment,
it can already add data using the above form, preview it, and store it
with multiple revisions (wiki-style). I have started minimal work on
listing entries, but will probably continue later today.
Now begin stomping me for just hacking this without informing anyone
first ;-)
Magnus
I got a rough estimate of when the millionth article was added:
if the 109 cur/old dumps from Sep 20 told the whole story, it would
have been on Sep 14 at about 21:55 hrs (server time).
Of course, articles were deleted between Sep 14 and Sep 20, so the result
would have been different with dumps from an earlier date; in fact, an
earlier dump would have brought the time forward.
On the other hand, some articles may not have matched the criteria of 'no
redirect' and 'at least one internal link' on Sep 14, while they did on
Sep 20.
I collected all articles in the 'cur' database that fulfilled the criteria
at the time of the dump, then found the time each was added, either in
'cur' or in 'old'.
Duplicate articles were ignored (they sometimes happen after clicking
'save' twice within seconds). Articles that exist only in 'old' were not
counted either (a few hundred; they may partly be explained by different
dump times for cur and old, with deletions coming in between, but are
probably mostly just aborted transactions).
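In rough Python, the selection amounts to the sketch below. The helper
names (languages_in_weekly_stats, cur_pages, first_revision_time) are
made up for illustration; the real processing runs over the cur/old
dump files:

creation_times = []
seen = set()
for lang in languages_in_weekly_stats:        # the 109 dumped languages
    for page in cur_pages(lang):
        # criteria: not a redirect, at least one internal link
        if page.is_redirect or not page.has_internal_link:
            continue
        if (lang, page.title) in seen:        # skip duplicate saves
            continue
        seen.add((lang, page.title))
        # creation time = oldest revision, found in 'old' if one exists
        creation_times.append(first_revision_time(lang, page))

creation_times.sort()
millionth_time = creation_times[1000000 - 1]  # -> Sep 14, ~21:55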
As said, I processed the 109 languages that are currently in the weekly
stats job, so I might have missed a few entries in very recent startup
languages that are still looking forward to their first milestone of 10
articles :)
Given all the above, I think it is not useful to point to one specific
article as the millionth, as there is too much noise in the data.
Erik Zachte
Ray:
> Of course it's impossible.
....
> It's all about publicity. You look at whoever wrote the first new
> article after 21:55 on Sept. 14; send him a T-shirt and mousepad with
> the logo, and make a big fuss about it.
Well, this article, not being the true millionth article (see the previous
discussion), was at least the millionth article still in the database when
the dumps were made on Sep 20:
Added Sep 14, 21:55:16
http://he.wikipedia.org/wiki/%D7%93%D7%92%D7%9C_%D7%A7%D7%96%D7%97%D7%A1%D7%98%D7%9F
I hope it is not controversial in any way, it looks like a peaceful image.
The text is minimal, but most articles started that way.
Erik Zachte
As [[User:formulax]] mentioned in
http://mail.wikipedia.org/pipermail/wikipedia-l/2004-September/017409.html
the user [[User:Yaohua2000]] today uploaded a text file containing the
passwords of some users, in order to threaten us.
We deleted the file immediately, but the situation now seems very
dangerous for us: someone can steal our users' passwords.
The flood vandalism on zh: several months ago was also carried out by
[[User:Yaohua2000]]. This user seems to be very clever, but also very
malicious.
That user discovered the bug and reported it in #mediawiki. The best
solution would be to serve downloads from a separate domain, so that
project cookies would not be affected.
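To illustrate why (with made-up hostnames and a deliberately simplified
tail-matching rule, not the full cookie spec): a cookie scoped to the
projects' domain is sent to every subdomain, but an uploaded file served
from a separate domain can never get at the projects' session cookies.

def cookie_sent_to(cookie_domain, request_host):
    # Simplified RFC-style tail matching for domain cookies.
    if cookie_domain.startswith('.'):
        return request_host.endswith(cookie_domain) \
            or request_host == cookie_domain[1:]
    return request_host == cookie_domain

# A session cookie for .wikipedia.org reaches any subdomain...
assert cookie_sent_to('.wikipedia.org', 'download.wikipedia.org')
# ...but a separate domain never receives it (hostname is hypothetical):
assert not cookie_sent_to('.wikipedia.org', 'dumps.example.org')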
Domas
G'day everyone,
I've done several hours' work: I stripped lots of code from memcached and
replaced it with BerkeleyDB library hooks.
The package can be found at http://dammit.lt/dbcached.tgz, though the guys
on #mediawiki have already told me it is 'notmemcached'. It has no
./configure yet and the Makefile has to be tweaked by hand; as it is open
source, someone might fix that ;-) Anyway, it implements set/get methods
with the memcached interface, but has a persistent on-disk store and cache
management, and can have transactions and such (just two or three
additional lines of code at initialisation). That would give an ACID store
at the cost of memcached. It would also allow analysis of the internal
cached data structures, out-of-software data maintenance (the store can be
accessed directly through the BerkeleyDB library), and other things.
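Since it speaks the memcached protocol, any existing client should work
unchanged. For example, with the Python memcached client (host and port
below are assumptions, not dbcached defaults):

import memcache

# dbcached speaks the memcached protocol, so a stock client can talk to
# it; the address is an assumption, not a documented default.
mc = memcache.Client(['127.0.0.1:11211'])

mc.set('parsercache:Main_Page', 'rendered HTML here')
print(mc.get('parsercache:Main_Page'))  # survives a daemon restart,
                                        # since the store is on disk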
Therefore, we've got something between a pure in-memory store and a pure
on-disk store.
I've done simple benchmarks, showing that for cached operations the speed
is equivalent or better. I didn't check with large arrays of data yet, but
that's already application-specific.
I'm offering to deploy this stuff on the Wikimedia servers, with possible
other future uses (a distributed search store, which I discussed with some
p2p gurus on freenode; session caches; a 100%-effective parser cache; some
other object stores...).
If the project is interested, I'd like to put a module in the Wikipedia or
some other CVS and clean/debug/test/improve the code.
Cheers,
Domas
P.S. Some benchmarks:
Processes in action:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21989 midom 15 0 90128 18m 2224 S 6.0 0.8 0:03.82 dbcached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23296 midom 15 0 16720 14m 1320 R 5.9 0.6 0:02.11 memcached
READ ACCESS:
METHOD:
import memcache
# address is an assumption; memtest.py/dbtest.py differ only in the port
mc = memcache.Client(['127.0.0.1:11211'])
i = 0
while i < 10000:
    i += 1
    key = str(i)   # memcached keys are strings
    # All keys exist and return half a kilobyte of data
    mc.get(key)
$ time python memtest.py
real 0m9.091s
user 0m0.692s
sys 0m1.799s
$ time python dbtest.py
real 0m9.043s
user 0m0.796s
sys 0m1.776s
WRITE ACCESS:
data = 'x' * 512   # half a kilobyte of payload, as in the read test
i = 0              # mc is the same client as in the read test
while i < 10000:
    i += 1
    key = str(i)
    mc.set(key, data)
$ time python dbtest.py
real 0m7.969s
user 0m0.713s
sys 0m0.962s
$ time python memtest.py
real 0m7.723s
user 0m0.735s
sys 0m0.844s