Hi,
Little bit of a different thread today--but something's been kind of biting
at me over the last couple of weeks and I thought it'd be best to get it
out there :)
What support would there be for changing the MediaWiki logo and
being consistent with it?
I'm not suggesting a drastic change, like substituting puppies for the
flower. I'm looking at a more subtle change, as in moving from our
current logo to something like [0]. Originally, I didn't like the SVG version
but over time it's managed to grow on me quite a bit. That said, I still
think the text could use some tweaking; something closer to the current color
would be nice. There are a couple of pretty big reasons I think we should
switch to this (or something like it):
1) It scales much more nicely. The current version looks absolutely awful at
higher resolutions, and at the low end it becomes rather featureless. A
version natively designed as an SVG (but keeping the original design
ideas) takes care of that.
2) It fits much better with the other WMF logos (other than the puzzle
globe, which will never match :)
3) We've already started selling stickers based on the SVG version[1],
so it might be good to update it on MediaWiki.org to match.
So...thoughts? Should we do this more formal-like in an RfC or
something? Other colors you'd like to paint the bikeshed?
-Chad
[0] http://commons.wikimedia.org/wiki/File:Mediawiki_logo_reworked_2.svg
[1] http://shop.wikimedia.org/products/wikimedia-project-stickers-pack-of-12
In the past couple of weeks I've been talking with Sam Reed (WMF's
current MediaWiki release manager) and Rob Lanphier (WMF's Platform
Engineering Director) about the future of MediaWiki tarballs.
I began this discussion after Rob expressed regret that the WMF isn't
able to give tarball distribution the attention it deserves. Since
the WMF is focused on maintaining Wikipedia and its sister projects,
tarball distribution often loses out among competing priorities.
The Foundation has made MediaWiki available for everyone and that's a
great thing. But Wikimedia's funding comes from donations as a result
of requests on Wikipedia, not from distribution of MediaWiki, so they
are rightly focused on their production cluster.
Other users of the MediaWiki software have different needs. For
instance, Citizendium and Wikia have both pegged their MediaWiki
installations at 1.16.5 for stability and made their own modifications
-- essentially forking the code. Forking is not ideal, but it is
understandable because there is no cooperation around individual
MediaWiki releases over the long term. With a third party to manage
MediaWiki releases and maintain long term support for selected releases,
cooperation between non-WMF users would be smoother.
To start this effort, I welcome interested collaborators from the
community of MediaWiki users outside of the WMF. With your help, we
will start making and maintaining MediaWiki releases based on the core
MediaWiki code without forking development.
I've been discussing this with some MediaWiki sites as well as setting
up a separate mailing list for packagers (such as Debian and Red Hat
distributors) and discussing it there. So far the response has been
positive.
So now I'm asking you guys. Any interest?
--
http://hexmode.com/
Find peace within yourself and there will be peace on heaven and
earth. -- Abba Isaac
If you're reading this and you want help with your written English,
you're welcome to participate. More information below.
Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation
---------- Forwarded message ----------
From: Erika Hanson <writeresh(a)gmail.com>
Date: Sun, Jun 10, 2012 at 8:42 PM
Subject: Re: lessons learned from the most recent English lessons
To: Sumana Harihareswara <sumanah(a)wikimedia.org>
Cc: Erika Hanson <writeresh(a)gmail.com>
Hello, Everyone!
I am offering informal English lessons this coming Sunday, the 17th,
starting at 9am US/Mountain time. If you would like to
participate, please leave a paragraph of text at the link below by
this coming Thursday.
See you then!
Erika
>
> Link: http://notes.wikimediadc.org/p/english-lessons
>
On 06/11/2012 04:42 AM, Thorsten Glaser wrote:
> On Thu, 7 Jun 2012, Mark A. Hershberger wrote:
>
>> I especially like the idea of an /etc/mediawiki.d config directory.
> We have something like that in mediawiki-extensions-base,
> with a set of scripts (mwenext/mwdisext) similar to those
> shipped with Apache 2, so the sysadmin can manually enable
> (and disable again) extensions.
Excellent.
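For anyone who hasn't seen the Apache-style conf.d pattern, the net effect on
the MediaWiki side is roughly the following LocalSettings.php fragment (just a
sketch; the directory name and the one-snippet-per-extension layout are my
assumptions, not necessarily what the Debian package actually does):

<?php
// Sketch only: pull in every extension snippet that the sysadmin has
// enabled (e.g. symlinked by mwenext) in a conf.d-style directory.
$confDir = '/etc/mediawiki.d';
if ( is_dir( $confDir ) ) {
	foreach ( glob( "$confDir/*.php" ) as $snippet ) {
		// Each snippet would typically require_once the extension's
		// setup file and set its configuration globals.
		require_once $snippet;
	}
}

Enabling an extension then just means dropping (or symlinking) a snippet into
the directory, and disabling it means removing it again.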
>> So yes, I'm happy we are improving collaboration packagers <-> upstream.
> Agreed.
In order to help this collaboration -- and in recognition of my own
knowledge gaps when it comes to MW code -- I'm adding wikitech-l to the
CC list. (Original message is here:
http://lists.alioth.debian.org/pipermail/pkg-mediawiki-devel/2012-June/0020…)
> By the way, here™ we have more local patches; please tell me what you
> think about them (currently against 1.15):
>
> === modified file 'debian/changelog'
> --- debian/changelog 2012-05-30 14:51:27 +0000
> +++ debian/changelog 2012-05-30 14:52:41 +0000
> @@ -1,3 +1,17 @@
> +mediawiki (1:1.15.5-10tarent1) unstable; urgency=low
> +
> + [ Thorsten Glaser ]
> + * debian/patches/tarent.patch: new
> + - includes/specials/SpecialAllpages.php:
> + raise $maxPerPage, $maxLineCount
> + - includes/DefaultSettings.php: add webdavs:// and file: support
> + * debian/control: prefer PostgreSQL over MySQL in Depends/Recommends
Could you help me understand the rationale for increasing
$maxPerPage and $maxLineCount by two orders of magnitude? Is there any
benefit to showing 30,000 pages on an index sub-page?
> +
> + [ Roland Mas ]
> + * debian/patches/tarent.patch: add workaround for search bug
Has this been passed upstream? It looks like the version check should
be removed.
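On the webdavs:// and file: item: if that part of the patch just extends the
list of recognized URL schemes, the same thing can usually be done from
LocalSettings.php without touching DefaultSettings.php, along these lines
(untested sketch, assuming that's all the patch does):

<?php
// Sketch: extend the URL schemes MediaWiki accepts in external links,
// from LocalSettings.php instead of patching core. Whether the patch
// uses 'file:' or 'file://' is a guess on my part.
$wgUrlProtocols[] = 'webdavs://';
$wgUrlProtocols[] = 'file://';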
Thanks for asking for the input.
--
http://hexmode.com/
Find peace within yourself and there will be peace on heaven and
earth. -- Abba Isaac
On Sunday, I posted the following to the Analytics mailing list,
but didn't see any response there, so I'm reposting here.
At the Berlin hackathon, I improved the script I wrote in December
for compiling statistics on external links. My goal is to learn how many
links Wikipedia has to a particular website, and to monitor this over time.
I figure this might be interesting for GLAM collaborations.
This is found in the external links table, but since I want to filter out
links from talk and project pages, I need to join it with the page table,
where I can find the namespace. I've tried the join on the German Toolserver,
and it works fine for the minor wikis, but it tends to time out (beyond
30 minutes) for the ten largest Wikipedias. This is not because I fail to
use indexes, but because I want to run a substring operation on millions
of rows. Even an optimized query takes some time.
As a faster alternative, I have downloaded the database dumps, and processed
them with regular expressions. Since the page ID is a small integer, counting
from 1 up to a few million, and all I want to know for each page ID is
whether or not it belongs to a content namespace, I can make do with a bit vector
of a few hundred kilobytes. When this is loaded, and I read the dump of the
external links table, I can see if the page ID is of interest, truncate the
external link down to the domain name, and use a hash structure to count the
number of links to each domain. It runs fast and has a small RAM footprint.
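A simplified sketch of the approach (shown in PHP here just for readability;
the real script is Perl and parses the SQL dumps directly, so the tab-separated
input format below is only an assumption to keep the example short):

<?php
// Illustration of the bit vector + hash counting idea. Assumes two
// pre-flattened, well-formed inputs:
//   pages.tsv  -- page_id <TAB> namespace
//   links.tsv  -- page_id <TAB> external URL
$contentNamespaces = array( 0 => true, 6 => true, 100 => true );

// Pass 1: one bit per page ID, set when the page is in a content namespace.
$bits = '';
$fh = fopen( 'pages.tsv', 'r' );
while ( ( $line = fgets( $fh ) ) !== false ) {
	list( $id, $ns ) = explode( "\t", trim( $line ) );
	if ( isset( $contentNamespaces[(int)$ns] ) ) {
		$byte = (int)( $id / 8 );
		if ( strlen( $bits ) <= $byte ) {
			$bits = str_pad( $bits, $byte + 1, "\0" );
		}
		$bits[$byte] = chr( ord( $bits[$byte] ) | ( 1 << ( $id % 8 ) ) );
	}
}
fclose( $fh );

// Pass 2: keep only links from content pages, truncate each URL to its
// domain, and count links per domain in a hash.
$counts = array();
$fh = fopen( 'links.tsv', 'r' );
while ( ( $line = fgets( $fh ) ) !== false ) {
	list( $id, $url ) = explode( "\t", trim( $line ), 2 );
	$byte = (int)( $id / 8 );
	if ( $byte >= strlen( $bits ) || !( ord( $bits[$byte] ) & ( 1 << ( $id % 8 ) ) ) ) {
		continue; // not a content page
	}
	$domain = parse_url( $url, PHP_URL_HOST );
	if ( $domain ) {
		if ( !isset( $counts[$domain] ) ) {
			$counts[$domain] = 0;
		}
		$counts[$domain]++;
	}
}
fclose( $fh );

arsort( $counts );
foreach ( $counts as $domain => $n ) {
	echo "$n\t$domain\n";
}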
In December 2011 I downloaded all the database dumps I could find, and
uploaded the resulting statistics to the Internet Archive, see e.g.
http://archive.org/details/Wikipedia_external_links_statistics_201101
One problem, though, is that I don't get links to Wikisource and Wikiquote this
way, because they are not in the external links table. Instead they are
interwiki links, found in the iwlinks table. The improvement I made in Berlin
is that I now also read the interwiki prefix table and the iwlinks table.
It works fine.
One issue here is the definition of content namespaces. Back in December,
I decided to count links found in namespaces 0 (main), 6 (File:),
Portal, Author and Index. Since then, the concept of "content namespaces"
has been introduced, as part of refining the way MediaWiki counts articles
in some projects (Wiktionary, Wikisource), where the normal definition
(all wiki pages in the main namespace that contain at least one link)
doesn't make sense. When Wikisource, using the ProofreadPage extension,
adds a lot of scanned books in the Page: namespace, this should count as
content, despite these pages not being in the main namespace, and whether
or not the pages contain any link (which they most often do not).
One problem is that I can't see which namespaces are "content" namespaces
in any of the database dumps. I can only see this from the API,
http://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=namespa…
The API only provides the current value, which can change over time. I can't
get the value that was in effect when the database dump was generated.
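For reference, pulling the current list from the API looks roughly like this
(a sketch; again, it only reflects today's configuration, not whatever was in
effect when a given dump was generated):

<?php
// Sketch: list the namespaces the wiki currently flags as "content".
$url = 'http://en.wikipedia.org/w/api.php?action=query&meta=siteinfo'
	. '&siprop=namespaces&format=json';
$ctx = stream_context_create( array( 'http' => array(
	'user_agent' => 'external-links-stats-sketch/0.1',
) ) );
$data = json_decode( file_get_contents( $url, false, $ctx ), true );
foreach ( $data['query']['namespaces'] as $ns ) {
	// Content namespaces carry a "content" flag in the JSON output.
	if ( array_key_exists( 'content', $ns ) ) {
		echo $ns['id'] . "\t" . ( $ns['*'] === '' ? '(main)' : $ns['*'] ) . "\n";
	}
}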
Another problem is that I want to count links that I find in the File:
(ns=6) and Portal: (mostly ns=100) namespaces, but these aren't marked as
content namespaces by the API. Shouldn't they be?
Is anybody else doing similar things? Do you have opinions on what should
count as content? Should I submit my script (300 lines of Perl) somewhere?
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Project Runeberg - free Nordic literature - http://runeberg.org/
Currently there is no proper way in Lua to get such basic things as
the current page name. To fix this, I propose to provide a MediaWiki
API accessible from the Lua scripts:
<https://www.mediawiki.org/wiki/Extension:Scribunto/API_specification>
Any feedback appreciated.
— Victor.
Hi all,
Ryan Lane just showed me that in Gerrit there is a separate right for creating repositories. I suggest we give this right to all WMF engineers. A repo is free and fun and will prevent unnecessary delays.
Best,
Diederik
Hello everyone,
It’s with great pleasure that I’m announcing that Adam Wight has joined the Wikimedia Foundation as a Fundraising Engineer.
Before joining us, Adam was customizing open-source web services for non-profits at Giant Rabbit. This makes him the first Fundraising engineer to be familiar with CiviCRM **before** joining the team — in fact, he has contributed event registration workflow and other minor changes back to the project. :-) He also did work on the Atako Project (the first open-source Google Gadget directory), “Halfway Library” to share and review books, and “Prokaryote”, an evolution/behavior-patterns simulator used in university and high school classrooms. If you ever snuck into the Unix lab to get their workstations running SETI@home, you probably used his code (he wrote the X-windows implementation). He has recently contributed an "Offline" extension for MediaWiki, and he is helping with a distributed wiki project, "OneCommons".
On the side, he’s involved with a number of education and agricultural projects, including being the programmer at the Multinational Exchange for Sustainable Agriculture and cofounding and working at The Local food co-op at UC Berkeley. He is also obsessed with blacksmithing (no, this is not a new coding process — I mean that he’s a blacksmith and has been a carpenter and housepainter).
His first official day was May 31st (when he was at the Berlin Hackathon), but his first day at the San Francisco office will be June 13th. He will be working with the FR-Tech team, no doubt fixing our many bugs in CiviCRM.
Please join me in welcoming Adam to the Wikimedia Foundation. :-)
Take care,
Terry
terry chay 최태리
Director of Features Engineering
Wikimedia Foundation
“Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment.”
p: +1 (415) 839-6885 x6832
m: +1 (408) 480-8902
e: tchay(a)wikimedia.org
i: http://terrychay.com/
w: http://meta.wikimedia.org/wiki/User:Tychay
aim: terrychay