Hi,
On Tue, Mar 1, 2016 at 3:36 PM, David Strine <dstrine(a)wikimedia.org> wrote:
> We will be holding this brownbag in 25 minutes. The Bluejeans link has
> changed:
>
> https://bluejeans.com/396234560
I'm not familiar with Bluejeans and may have missed a transition
because I wasn't paying enough attention. Is this some kind of
experiment? Have all meetings transitioned to this service?
Anyway, my immediate question at the moment is: how do you join without
sharing your microphone and camera?
Am I correct in thinking that this is an entirely proprietary stack
that is neither gratis nor libre and has no on-premise (non-cloud)
hosting option? Are we paying for this?
-Jeremy
As of 950cf6016c, the mediawiki/core repo was updated to use DB_REPLICA
instead of DB_SLAVE, with the old constant left as an alias. This is part
of a series of commits that cleaned up the mixed use of "replica" and
"slave" by standardizing on the former. Extensions have not been
mass-converted. Please use the new constant in any new code.
The word "replica" is a bit more indicative of a broader range of DB
setups*, is used by a range of large companies**, and has more neutral
connotations.
Drupal and Django made similar updates (even replacing the word "master"):
* https://www.drupal.org/node/2275877
* https://github.com/django/django/pull/2692/files &
https://github.com/django/django/commit/beec05686ccc3bee8461f9a5a02c607a023…
I don't plan on doing anything to DB_MASTER, since it seems fine by itself,
like "master copy", "master tape", or "master key". This is analogous to a
master RDBMS database. Even multi-master RDBMS systems tend to have
stronger consistency than classic RDBMS replica servers, and present
themselves as one logical "master" or "authoritative" copy. Even in its
personified form, a "master" database can readily be thought of as
analogous to a "controller", "governor", "ruler", lead "officer", or such.***
* clusters using two-phase commit, Galera using certification-based
replication, multi-master circular replication, etc.
**
https://en.wikipedia.org/wiki/Master/slave_(technology)#Appropriateness_of_…
***
http://www.merriam-webster.com/dictionary/master?utm_campaign=sd&utm_medium…
--
-Aaron
O'Reilly just published some of their popular books for free, either as
part of the open access movement or as some kind of marketing (or both). I
find them useful for Wikimedia developers. The books are available in
several e-book formats, so you can read them on your Kindle, etc.:
* Performance, Operations, Release engineering:
http://www.oreilly.com/webops-perf/free/
* Data, AI, Analytics: http://www.oreilly.com/data/free/
* Programming, architecture, Open source culture:
http://www.oreilly.com/programming/free/
* Security: http://www.oreilly.com/security/free/
* Web platform, design: http://www.oreilly.com/web-platform/free/
This is a rather unusual type of email, so I wasn't sure I was doing the
right thing; I just sent it to wikitech-l. Please spread the word if you
think it's okay, or tell me if you think not. Thanks.
Best
The Parsing Team at the Wikimedia Foundation, which develops the Parsoid
service, is deprecating support for node 0.1x. Parsoid is the service
that powers VisualEditor, Content Translation, and Flow. If you don't
run a MediaWiki install that uses VisualEditor, then this announcement
does not affect you.
Node 0.10 reached end of life on October 31, 2016 [1], and node
0.12 is scheduled to reach end of life on December 31, 2016 [1].
Yesterday, we released a 0.6.1 Debian package [2] and a 0.6.1 npm
version of Parsoid [3]. This will be the last release with
node 0.1x support. We'll continue to provide any necessary critical bug
fixes and security fixes for the 0.6.1 release until March 31, 2017, and
will completely drop support for all node versions before
v4.x starting April 2017.
If you are running a Parsoid service on your wiki and are still using
node 0.1x, please upgrade your node version by April 2017. The Wikimedia
cluster runs node v4.6 right now and will soon be upgraded to node v6.x
[4]. Parsoid has been tested with node 0.1x, node v4.x, and node v6.x, and
works with all of these versions. However, we are dropping support for node
0.1x immediately in the master branch of Parsoid. Going forward, the
Parsoid codebase will adopt ES6 features that are available in node v4.x
and higher but not supported in node 0.1x; this will constitute a
breaking change.
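As an illustration (not taken from the Parsoid codebase), here is the kind of ES6 syntax that node v4.x and later support but that fails to parse on node 0.1x:

```javascript
'use strict';
// Each construct below is a SyntaxError on node 0.10/0.12 (without
// flags), but works on node v4.x and later.

// Arrow functions and const/let block scoping
const squares = [1, 2, 3].map((n) => n * n);

// Template literals
const msg = `squares: ${squares.join(', ')}`;

// Classes
class Token {
    constructor(name) {
        this.name = name;
    }
    toString() {
        return `<${this.name}>`;
    }
}

console.log(msg);                       // squares: 1, 4, 9
console.log(new Token('p').toString()); // <p>
```

Running this file on node 0.10 aborts at parse time, before any line executes, which is why such changes are breaking rather than merely behavioral.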
Subramanya Sastry (Subbu),
Technical Lead and Manager,
Parsing Team,
Wikimedia Foundation.
[1] Node.js Long Term Support schedule @ https://github.com/nodejs/LTS
[2] https://www.mediawiki.org/wiki/Parsoid/Releases
[3] https://www.npmjs.com/package/parsoid
[4] https://phabricator.wikimedia.org/T149331
Hi all!
This is a Final Call for Comments on the RFC on Content Model Storage [1][2]. If
no new and serious objections are raised within a week, the Architecture
Committee will approve this RFC and drive its implementation.
The RFC on Content Model Storage was originally approved in 2015, but was then
postponed in favor of another RFC, which proposes to create a separate content
meta-data table [3] as part of the move towards multi-content revisions (MCR) [4].
However, MCR in turn got stuck on database performance concerns. So we now
propose to go ahead with implementing the original RFC. The idea is to assign a
number to every content model (and content format), and then use these numbers
to refer to the models and formats in the database, instead of repeating the
same string millions of times (which is my fault btw, sorry about that).
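As a rough sketch of that interning idea (hypothetical names; this is not MediaWiki's actual schema or API), each model name is assigned a small integer once, and every row then stores only the integer:

```javascript
'use strict';
// Hypothetical sketch: intern content model names as small integers
// so rows repeat a short id instead of a long string.
class ContentModelRegistry {
    constructor() {
        this.idsByName = new Map();
        this.namesById = [];
    }
    // Return the existing id for a model name, assigning a new one if needed.
    acquireId(name) {
        if (!this.idsByName.has(name)) {
            this.idsByName.set(name, this.namesById.length);
            this.namesById.push(name);
        }
        return this.idsByName.get(name);
    }
    nameOf(id) {
        return this.namesById[id];
    }
}

const registry = new ContentModelRegistry();
// Millions of rows would repeat 'wikitext'; each now costs one small int.
const rows = ['wikitext', 'wikitext', 'json', 'wikitext'].map(
    (model) => ({ model_id: registry.acquireId(model) })
);

console.log(rows.map((r) => registry.nameOf(r.model_id)));
// [ 'wikitext', 'wikitext', 'json', 'wikitext' ]
```

The lookup table stays tiny (one entry per distinct model), while the per-row cost drops from a repeated string to a small integer.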
Since the original RFC was already approved, and the situation does not seem to
have changed since then, we see no need for another round of discussions. If
nobody raises any new and serious objections within a week, this should be good
to go.
Cheers,
Daniel
[1] https://phabricator.wikimedia.org/T105652
[2] https://www.mediawiki.org/wiki/Requests_for_comment/Content_model_storage
[3] https://phabricator.wikimedia.org/T142980
[4]
https://www.mediawiki.org/wiki/Multi-Content_Revisions/Content_Meta-Data#Da…
--
Daniel Kinzler
Senior Software Developer
Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.
Hi,
On the second day of the Wikimedia Developer Summit (January 10) there will
be a Q&A session with Victoria Coleman (Wikimedia Foundation CTO) and Wes
Moran (VP of Product). It is a plenary session and it will be
video-streamed.
The questions for this session are being crowdsourced at
http://www.allourideas.org/wikidev17-product-technology-questions. Anyone
can propose questions and vote, anonymously, as many times as they want. At
the moment, we have 25 questions and 451 votes.
An important technical detail: questions posted later also have a good
chance of making it to the top of the list, as long as new voters select
them. The ranking is built from comparisons between questions, not from an
accumulation of votes. For instance, the current top question is in fact
one of the most recently submitted.
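As a simplified illustration of comparison-based ranking (this is not allourideas.org's actual algorithm), scoring each question by the fraction of its pairwise comparisons won shows how a late entry with a few wins can immediately outrank an older one with many accumulated votes:

```javascript
'use strict';
// Toy comparison-based ranking: each vote is a pairwise comparison,
// and an item's score is the fraction of its comparisons it has won.
function scores(comparisons) {
    const tally = new Map(); // item -> { wins, total }
    for (const [winner, loser] of comparisons) {
        for (const item of [winner, loser]) {
            if (!tally.has(item)) {
                tally.set(item, { wins: 0, total: 0 });
            }
            tally.get(item).total += 1;
        }
        tally.get(winner).wins += 1;
    }
    const result = {};
    for (const [item, t] of tally) {
        result[item] = t.wins / t.total;
    }
    return result;
}

// 'old' has accumulated four comparisons but lost two of them;
// 'new' has only two comparisons yet wins both, so it ranks higher.
const s = scores([
    ['old', 'other'], ['old', 'other'], ['other', 'old'],
    ['new', 'old'], ['new', 'other'],
]);
console.log(s.new > s.old); // true
```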
Why post or vote for a good question? One obvious reason is to encourage
the Foundation's top Technology and Product managers to bring good answers
to a public session with minutes taken and video recording. :) Beyond
that, if the ranking of questions makes sense and is backed by
participation numbers, it has a serious chance of influencing plans and
discussions beyond the Summit.
The current ranking does make sense, but maybe you could help cover more
areas and other perspectives?
1. How do we deal with the lack of maintainers for all Wikimedia
deployed code?
2. Do we have a plan to bring our developer documentation to the level
of a top Internet website, a major free software project?
3. For WMF dev teams, what is the right balance between pushing own work
versus seeking and supporting volunteer contributors?
4. During the next year or so, what balance do you think we should
strike between new projects and technical debt?
5. When are we going to work on a modern talk pages system for good?
6. Whose responsibility is it to assure that all MediaWiki core components
and the extensions deployed in Wikimedia have active maintainers?
7. How important is it to have a well maintained and well promoted catalog
of tools, apps, gadgets, bots, templates, extensions...?
8. Will MediaWiki ever become easier to install and manage? (e.g. a plugin
manager à la WordPress). How much do we care about enterprise users?
9. What should be the role of the Architecture Committee in WMF planning
(priorities, goals, resources...) and are we there yet?
10. In addition to Community Tech, should the other WMF Product teams
prioritize their work taking into account the Community Wishlist results?
The full list:
http://www.allourideas.org/wikidev17-product-technology-questions/results
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Hi all,
I'm doing some renovations on recitation-bot and running into trouble when
the time comes for pywikibot to upload article data to Wikisource and
Commons. The thread doing so hangs without any informative error. Per
Max's advice, I made sure that the unix user running the web service that
uses pywikibot is logged into each wiki, but I still have the problem. I'm
going to try to gather more information about what's going on, but I would
also appreciate pointers about what might be wrong. In particular, the web
service now runs under Kubernetes rather than Sun Grid Engine, so I
suspect the login state might not be making it into the container. Can
anyone advise on where the login state is maintained and whether it will
be transferred into the Kubernetes container?
Thanks,
Anthony