Are there any recent statistics about SUL? How many accounts exist on
Wikimedia projects, how many accounts are SUL, how many non-conflicting
non-SUL accounts remain, and how many conflicts remain? How many Wikimedia
projects participate in SUL, and how many, and which ones, are not part of it?
Thanks
Marcus Buck
User:Slomox
On 9/8/2010 10:18 AM, Aryeh Gregor wrote:
Well, this is probably my last post on this subject for now. I think
I've made my points. Those who don't get them yet probably will
continue not to get them, and those who get them but disagree probably
will continue to disagree. It looks like nothing big is going to
change right now, but I hope that when Danese gets up to this, we'll
see real improvements and not just attempts to paper over the problem
without properly understanding it.
I'll just make a few further brief points to reiterate some things I
said that seem to still be misunderstood:
On Sun, Sep 5, 2010 at 10:27 PM, Tim Starling<tstarling(a)wikimedia.org>
wrote:
I don't think you really know that. It's hard to see how much work
goes on behind closed doors when you only have a cursory involvement
with the project.
It's pretty easy to figure out that there aren't daily (or weekly or
monthly) face-to-face meetings among developers who live scattered
across the world.
None of the open source projects I've been involved with fit the model
you describe. For instance, Squid makes heavy use of face-to-face
meetings, despite their geographically distributed development team.
Just to be clear: face-to-face meetings are great, in moderation. I'm
totally in favor of them. But having lots of conferences is not the
same as working in an office together.
I think that's a false dichotomy.
It is. There's a spectrum of middle ground in between, but the
endpoints are perfectly tenable as well. I think that, given
Wikimedia's mission as well as practical concerns, moving MediaWiki
development significantly further toward openness would be a good
thing.
I can say that despite being a nobody at Mozilla and having gotten
only one (rather trivial) patch accepted, I feel like I'm taken more
seriously by most of their paid developers than by most of ours.
I'm sorry to hear that, and I'd like to know (off list) which paid
developers are making you feel that way.
It would be unfair to name anyone, in public or in private. If I've
had negative experiences with some paid developers, that should really
count in their favor, because it means I have had *some* experience
interacting with them, period. If we exclude paid developers who were
preexisting community members:
* I can think of two who I see with any regularity in #mediawiki.
* I can think of maybe three who I've had more than one conversation
with on IRC ever.
* I don't think I've ever seen a wikitech-l post from the majority
of them.
I can't think why most of them should even know who I am, except now
maybe some disgruntled volunteer who's making trouble for them. Why
would I *expect* them to respect me?
On Tue, Sep 7, 2010 at 8:29 PM, Ryan Kaldari<rkaldari(a)wikimedia.org>
wrote:
First of all, all this talk of secret listservs and IRC channels is
malarkey. Yes, there are private listservs and IRC channels. All of them
are private for very specific and well-established reasons. Most of them
are only used in very specific circumstances (for example if there was a
security breach that needed to be discussed privately) and tend to be
very low traffic. They are not the places where important decisions are
made.
1) Either paid developers are coordinating someplace where volunteers
don't see it, or they're not coordinating at all. The latter is
implausible, so it's the former. It makes no difference if it's
face-to-face meetings, teleconferences, IRC, or mailing lists, or even
a technically public place that volunteers don't know about -- it's
hidden.
2) The secret IRC channel is not low-traffic. The 1000th line before
now in #wikimedia-tech (excluding parts/joins/etc., also excluding /me
for simplicity) was about five days ago:
$ grep -v '[^ ]* [^ ]* \*' FreeNode-#wikimedia-tech.log | tail -n 1000 | head -n 1
100903 16:08:55<jps> and if you are only doing those in groups of 10, you need to multiply by at least 3
Doing the same on my log of the secret channel gives 100903 00:03:40,
meaning it has roughly the same traffic level as #wikimedia-tech over
that period. Anyone who hangs out there can tell you that almost
nothing there is secret. I can't speak for private-l, because I'm not
on it.
Secondly, the idea that developers here in the office don't interact
with the community is absurd. The developers here interact with the
community constantly.
If the goal is to attract volunteers and make them feel part of the
community, it doesn't matter whether the paid people think they're
doing a good enough job. It matters whether the volunteers think it.
I'm pretty sure it's clear by now that practically none of us do. As
I said, anyone interested in fixing the problem would do well to start
by surveying volunteers rather than looking at the issue from their
own perspective, and Danese told me she does plan to do that -- so
I'll wait.
Hi all,
Ok here is my idea for today:
The Linux community thrives because every volunteer developer has access
to the full kernel and operating system, and can innovate totally on
their own, and the best work makes it into the kernel, which is
redeployed to all users for future innovation. This is basically the
wild west of open source software. I think the Wikimedia Foundation has
done amazing work over the years, and as I learn about XML etc. I can
see all the work that was put into it, but now that it is so complex
I think it is hard for the internal Wikimedia developers to interact
with the community, as there are so many volunteer feature requests and
patches that can't be handled by the Wikimedia Foundation, as was
described in this discussion.
I think there should be a volunteer-managed cloud computing project that
is made specifically for development of MediaWiki and also for
development of database dumps and image dumps, or any other good ideas
volunteers have. It would not compete with the Wikimedia Foundation,
but instead would be a partnership designed to be beneficial to
everyone, to allow development to take place more easily. This cloud
computing project could be sponsored by the Wikimedia Foundation or by
volunteers.
One thing that could be tested on the cloud computing project is an
XML schema change to include diffs to handle article revisions, so that
the full-history dumps would grow at a much slower rate. I am sure
there are 1000000 ideas that volunteers would have. How much would it
cost to set up a cloud computing project like this?
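To illustrate the diff idea, here is a minimal sketch using Python's standard difflib: the first revision is stored in full and each later revision as a unified diff against its predecessor. The function name and sample text are purely illustrative, not any actual dump schema.

```python
import difflib

def make_delta(old_text, new_text):
    """Represent a revision as a unified diff against its predecessor."""
    return list(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""))

rev1 = "Alexander Karelin is a wrestler.\nHe was born in 1967.\n"
rev2 = "Aleksandr Karelin is a Russian wrestler.\nHe was born in 1967.\n"

delta = make_delta(rev1, rev2)
# Only the changed line (plus a short context header) is stored, so a
# full-history dump grows with the size of the edits rather than with
# the size of every complete revision.
print("\n".join(delta))
```

The same delta can later be replayed to reconstruct the newer revision, which is the trade-off: smaller dumps in exchange for reconstruction work at read time.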
Thanks for reading,
cheers
Jamie
Hi,
I have no logs of any secret Wikimedia channels/lists. I think your reply is addressed to the wrong person; Aryeh was mentioning something about Wikimedia secret channels earlier. I am trying to fix my mail settings, and I apologize for the confusion with my thread formatting. Please bear with me; this is the most trouble I have ever had with mail settings (I knew I was dumb, but this is ridiculous). I am just going to subscribe to individual emails from now on (after this digest reply), as the newsreader wasn't working for me properly.
cheers,
Jamie
----- Original Message -----
From: Ryan Kaldari <rkaldari(a)wikimedia.org>
Date: Mon, 13 Sep 2010 10:44:54 -0700
To: wikitech-l(a)lists.wikimedia.org
Subject: Re: [Wikitech-l] Community vs. centralized development
On 9/11/10 2:48 PM, Jamie Morken wrote:
>
Doing the same on my log of the secret channel gives 100903 00:03:40,
meaning it has roughly the same traffic level as #wikimedia-tech over
that period. Anyone who hangs out there can tell you that almost
nothing there is secret. I can't speak for private-l, because I'm not
on it.
>
Which channel are you talking about? Regarding private-l, my
understanding is that that list was originally set up to deal with
real-life wikistalking issues, which obviously requires privacy to
discuss. Please correct me if that is incorrect.
Ryan Kaldari
Hi List
I have a local copy of mediawiki (v1.15) and I'm trying to force
output of content in "zh-tw" format for testing. I figured this could
be done by setting:
$wgLanguageCode = 'zh-tw';
inside of LocalSettings.php. But this doesn't seem to work. Could
somebody shed some light on how to do this?
Thanks!
-Sean
Looks like this bug hasn't been mentioned on this list.
Nemo
-------- Original Message --------
Subject: [Foundation-l] Issue of transparency and bug waiting for a year
Date: Sun, 12 Sep 2010 02:02:28 +0200
From: Daniel ~ Leinad
To: Wikimedia Foundation Mailing List <foundation-l(a)lists.wikimedia.org>
Dear Wikimedia Foundation,
Please look at bug
https://bugzilla.wikimedia.org/show_bug.cgi?id=20476 and help fix it.
This bug has been waiting a year to be resolved, and a year has passed
since the stewards became permanent global oversighters. I would like to
recall that, according to the Oversight policy[1], stewards can have
oversight access on all wikis by granting themselves *temporary* local
oversight access. I would also like to point out that we cannot
change the current configuration, because it would hinder the work of
stewards (details in the description of the bug).
You are doing a great job with usability, strategy, outreach, etc.
(thank you!), but you could hire someone for such a boring job, and
help ensure the transparency of actions taken by the stewards and
other trusted users.
The Wikimedia community can check *all* CheckUser actions taken by
stewards in the user rights log[2], and the same should be true of
Oversight actions. Of course, I trust all the stewards, but I would
like to feel comfortable, and I would like to have clear evidence that
the whole community can trust me and the other stewards.
I hope the Wikimedia Foundation is committed to taking care of
transparency in the community and we will not have to wait another
year to fix this bug.
Regards,
Leinad
[1] - http://meta.wikimedia.org/wiki/Oversight
[2] - http://meta.wikimedia.org/wiki/Special:Log/rights
Hi,
I did some "testing" on Domas' pagecounts log files:
original file: pagecounts-20100910-040000.gz, downloaded from http://dammit.lt/wikistats/
The original file "pagecounts-20100910-040000.gz" was parsed to remove all lines except those
beginning with "en File". This shows which files were downloaded in that hour, mostly images, but further
parsing is needed to remove non-image files (e.g. *.ogg audio, etc.).
example parsed line from pagecounts-20100910-040000.gz:
en File:Alexander_Karelin.jpg 1 9238
The 1 indicates the file was downloaded once this hour, and the 9238 is the number of bytes transferred, which
depends on what image scaling was used.
The file is located at "http://en.wikipedia.org/wiki/File:Alexander_Karelin.jpg" and linked from the page:
http://en.wikipedia.org/wiki/Aleksandr_Karelin
We may also want to parse out the lines that begin with "commons.m File" and "commons.m Image" from
the pagecounts file, as they also contain image links.
After we parse the pagecounts files down to image links only, we can merge them together; the more
we merge, the better our image view data will be for sorting the image list generated by wikix by view
frequency.
Wikix has the complete list of images for the wiki we are creating an image dump for, so any extra
images from these pagecounts files that aren't in wikix's image list won't be added to the image dump.
Images that are in wikix's list but not in the pagecounts files will still be added to the image dump,
but can be put into a tar file showing they are infrequently accessed.
I did the parsing manually with a text editor, but for the next step of merging the pagecounts files we will
need to write some scripts.
I think in the end we will not use wikix, as it doesn't create a simple image list from the wiki's XML file.
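The parsing and merging steps could be sketched in Python roughly as follows. Only the four-column line format ("project title views bytes") comes from the example above; the function names and toy data are mine.

```python
from collections import defaultdict

def parse_pagecounts(lines):
    """Yield (title, views) from lines like 'en File:Foo.jpg 1 9238'
    (columns: project, title, hourly view count, bytes transferred)."""
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue
        project, title, views, _nbytes = parts
        if project == "en" and title.startswith("File:"):
            yield title, int(views)

def merge_counts(hourly_files):
    """Sum per-image view counts across many hourly pagecounts files."""
    totals = defaultdict(int)
    for lines in hourly_files:
        for title, views in parse_pagecounts(lines):
            totals[title] += views
    return totals

# Two toy "hours"; only the "en File:" lines are kept and merged.
hour1 = ["en File:Alexander_Karelin.jpg 1 9238",
         "en Main_Page 500 1000000"]
hour2 = ["en File:Alexander_Karelin.jpg 3 27714"]
totals = merge_counts([hour1, hour2])
print(totals["File:Alexander_Karelin.jpg"])  # 4
```

The merged totals could then be used to sort the image list by view frequency, as described above; the "commons.m" lines would just need a second prefix check in the parser.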
cheers,
Jamie
If you install:
http://www.mediawiki.org/wiki/Extension:VariablesExtension#Installation
Then edit the main page to contain the following (between the '---'):
---
{{#vardefine:pi|3.14159265418}}
{{#expr:{{#var:pi}}+1}}
---
The main page, when rendered, should now show the number 4.14159265418.
What I would like is something very similar called "CellsExtension"
which provides only the keyword "#cell" as in:
---
{{#expr:{{#cell:pi}}+1}}
---
However, it gets the value of "pi" from:
http://somedomain.org/mediawiki/index.php?title=Pi
Ideally, whenever a MediaWiki rendered page is cached, dependency
pointers are created from all pages from which cells fetched values
during rendering of the page (implying the evaluation of #expr's). That
way, when the MediaWiki source for one of the cached pages is edited,
not only is its cached rendering deleted, but so are all cached
renderings that depend on it directly or indirectly. This is so that
the next time those pages are accessed, they are rendered -- and
cached -- again, freshly evaluating the formulas in the #expr's
(which, of course, will contain #cell references such as {{#cell:pi}}).
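The dependency-tracking idea can be sketched as a toy in-memory cache (this is not MediaWiki's actual parser cache; all names here are illustrative):

```python
class RenderCache:
    """Toy page cache that records which pages a rendering read values
    from, and invalidates dependents transitively on edit."""

    def __init__(self):
        self.cache = {}        # page -> rendered output
        self.dependents = {}   # page -> pages whose rendering read it

    def store(self, page, output, reads):
        self.cache[page] = output
        for source in reads:   # one back-pointer per #cell fetch
            self.dependents.setdefault(source, set()).add(page)

    def invalidate(self, page):
        # Drop the edited page's rendering and, recursively, every
        # cached rendering that depends on it directly or indirectly.
        self.cache.pop(page, None)
        for dep in self.dependents.pop(page, set()):
            self.invalidate(dep)

cache = RenderCache()
cache.store("Pi", "3.14159265418", reads=[])
cache.store("Circle", "4.14159265418", reads=["Pi"])  # used {{#cell:pi}}
cache.invalidate("Pi")   # editing Pi also evicts Circle
print("Circle" in cache.cache)  # False
```

On the next access, the evicted pages would be re-rendered and re-cached, freshly evaluating their #expr formulas.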
Hi there,
we are running a MediaWiki installation (1.14.1).
Now we have been asked to offer a mobile access version like en.m.wikipedia.org.
I could not find any documentation/information about the techniques behind this solution: what should be installed or configured?
Any hints?
--
I look forward to your reply!
Uwe (Baumbach)
U.Baumbach(a)web.de
I have been working on the ResourceLoader branch, where I've ended up
writing a CSSMin class which performs CSS minification, URI-remapping
and data-URI in-lining. It got me thinking that this class would be
pretty useful to non-MediaWiki projects too, but sadly we don't have a
history of sharing in this way...
* Software we've ported to PHP ourselves, like our native-PHP CDB
implementation or CSSJanus, is buried in our code-base and makes
use of a couple of trivial wf* global functions, making it
somewhat inaccessible to third-party users. Which sucks, because
third-party users are important! They use the code in their own
systems, make improvements, and potentially pass them back to us.
However, if we don't make these things more general-purpose, the
code will more likely get taken from our repository, tweaked, and
never passed back; if we don't make it more easily accessible, the
code will never be found and we won't be taking advantage of the
entire PHP development community. Sadness...
* Software we've borrowed from other projects, like JSMin, is also
buried within our MediaWiki-proprietary code, and while these
libraries can operate independently of MediaWiki, we need to make
it clear that they should be kept in sync with their original
sources, both upstream and down.
* Software we've created is often potentially useful to other
projects, but unfortunately tied to and buried within MediaWiki.
In some of these cases, the ties to MediaWiki are trivial and
could be either optional or removed entirely, and the component
could be factored out to a more general-purpose library, available
for re-use.
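As a rough illustration of the kind of transformation such a library performs, here is a naive minifier sketch; it is not the actual CSSMin class, which additionally remaps URLs and inlines data URIs.

```python
import re

def minify_css(css):
    """Naive CSS minifier: strip comments, collapse whitespace, and
    drop spaces around punctuation. Illustrative only."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # tighten punctuation
    return css.strip()

print(minify_css("a ,\nb {\n  color : red ;  /* note */\n}"))
# a,b{color:red;}
```

A small, dependency-free function like this is exactly the sort of thing that would be easy to publish from a stand-alone libraries folder.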
I don't have a very mature proposal for solving this completely, but as
a first step, it seems like we should have a libraries folder into which
we can move things that can function in a stand-alone manner. Initial
candidates appear to be libraries that already function in a stand-alone
way, such as JSMin.php, CSSJanus.php, and CSSMin.php (in the
resourceloader branch right now, but coming to trunk soon).
Additional software could be moved into this space after some
un-tethering, such as Cdb/Cdb_PHP, DjVuImage, etc.
Overall, I think it would be great if we could take a look at this and
other ways to better share our work with non-MediaWiki projects, and
give back to the open-source community.
I welcome your thoughts and input.
- Trevor