Vagrant 1.6 changed the order of steps Vagrant performs on initialization:
it now evaluates the project's Vagrantfile after loading plugins and
parsing command-line arguments. This means that the various subcommands
provided for role management no longer work, since the relevant plugins are
loaded from the top of the Vagrantfile, which is now too late a stage to be
loading plugins.
Loading plugins from the Vagrantfile was always a bit of a hack, but it was a
good hack that allowed us to bridge over a complicated plugin packaging
process and provide a tailored Vagrant experience right from 'vagrant up'.
I'd like to fix this without adding steps to the installation process, but
I'm not sure how. I spent a few hours bashing my head against this problem
earlier today and didn't get anywhere. I would really welcome a creative
solution.
The relevant bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=65066,
originally reported by Robert Vogel.
Ori
Bernd will be working remotely from Fort Collins, CO, where he has
lived ever since emigrating from Germany many years ago.
He joins the Wikimedia Foundation after developing software at HP in
Germany and the US for many years. He's worked on both front-end and
back-end components, developing applications for enterprise management
software as well as consumer software (HP MediaSmart Server, WebOS-related
work, and a couple of Android apps).
Bernd is very passionate about user experience and Android. He is
excited to contribute to open source projects. When not developing
Android apps, he also enjoys learning about some of the latest web
technologies, currently favoring Meteor.js. AFK, he enjoys playing
volleyball, ultimate frisbee, and soccer.
Bernd will join the Apps team working closely with Yuvi and Dmitry on
the rebooted native Android Wikipedia app.
Please welcome Bernd!
--tomasz
There've been some issues reported lately with image scaling, where
resource usage on very large images has been huge (problematic for batch
uploads from a high-resolution source). Even scaling time for typical
several-megapixel JPEG photos can be slower than desired when loading up
into something like the MMV extension.
I've previously proposed limiting the generatable thumb sizes and
pre-generating those fixed sizes at upload time, but this hasn't been a
popular idea because of the lack of flexibility and potentially poor
client-side scaling or inefficient network use sending larger-than-needed
fixed image sizes.
Here's an idea that blends the performance benefits of pre-scaling with the
flexibility of our current model...
A classic technique in 3D graphics is
mip-mapping<https://en.wikipedia.org/wiki/Mip-mapping>,
where an image is pre-scaled to multiple resolutions, usually each 1/2 the
width and height of the next level up.
When drawing a textured polygon on screen, the system picks the most
closely-sized level of the mipmap to draw, reducing the resources needed
and avoiding some classes of aliasing/moiré patterns when scaling down. If
you want to get fancy you can also use trilinear
filtering<https://en.wikipedia.org/wiki/Trilinear_filtering>,
where the next-size-up and next-size-down mip-map levels are combined --
this further reduces artifacting.
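To make the halving concrete, here's a minimal sketch in Python with
Pillow -- the function name and the minimum level size are assumptions of
mine, not anything that exists in MediaWiki today:

    # Sketch only: build a mipmap chain by repeatedly halving an image.
    # Each level is half the width and height of the previous one; stop
    # once a level is small enough that direct scaling is cheap.
    from PIL import Image

    MIN_DIMENSION = 64  # assumed cutoff for the smallest stored level

    def build_mipmap_chain(path):
        """Return a list of progressively halved copies of the image."""
        levels = []
        img = Image.open(path)
        w, h = img.size
        while w >= MIN_DIMENSION * 2 and h >= MIN_DIMENSION * 2:
            w, h = w // 2, h // 2
            # Lanczos resampling keeps photographic downscales sharp.
            img = img.resize((w, h), Image.LANCZOS)
            levels.append(img)
        return levels  # ordered largest to smallest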
I'm wondering if we can use this technique to help with scaling of very
large images:
* at upload time, perform a series of scales to produce the mipmap levels
* _don't consider the upload complete_ until those are done! A web uploader
or API-using bot should probably wait until they're done before uploading the
next file, for instance...
* once upload is complete, keep on making user-facing thumbnails as
before... but make them from the smaller mipmap levels instead of the
full-scale original (sketched below)
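To illustrate that last step, here's a sketch of the serving side under
the same assumptions (hypothetical names, not an existing API): given a
requested width, scale from the smallest pre-generated level that is
still at least as wide as the target, falling back to the original only
when no level is big enough:

    # Sketch only: render a thumbnail from the nearest mipmap level.
    # `levels` is ordered largest to smallest, as produced above.
    from PIL import Image

    def thumb_from_mipmap(original, levels, target_width):
        """Scale down from the smallest level >= target_width wide."""
        source = original  # used only if every level is too small
        for level in levels:
            if level.size[0] >= target_width:
                source = level  # keep walking down to the smallest fit
            else:
                break
        target_height = round(source.size[1] * target_width / source.size[0])
        return source.resize((target_width, target_height), Image.LANCZOS)

Since each level is half the size of the one above it, the chosen level is
always less than 2x the target width, which bounds the per-thumbnail cost
no matter how large the original is.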
This would avoid changing our external model -- where server-side scaling
can be used to produce arbitrary-size images that are well-optimized for
their target size -- while reducing resource usage for thumbs of huge
source images. We can also still do things like applying a sharpening
effect to photos, which people sorely miss when it's absent.
If there's interest in investigating this scenario I can write up an RfC
with some more details.
(Properly handling multi-page files like PDFs, DjVu, or paged TIFFs could
complicate this by making the initial rendering extraction pretty slow,
though, so that needs consideration.)
-- brion
Hi everyone,
I'm pleased to announce Mukunda Modell, a new member of our Release
and QA group[1] in Platform Engineering. He'll be working on the
multitude of things that need to be done to get code from the first
developer submission out into production in a reliable and timely way
that doesn't compromise the quality of
the site. He has the title "Release Engineer", which is something you
shouldn't read too literally if you're familiar with these things.
More on this in a bit.
Mukunda lives and works in Springfield, Missouri (not far at all from
Zack Exley, i.e. far away from any direct flight to SF). For the past
few months he worked as an independent consultant specializing in
deployment issues. Prior to that, he worked for deviantArt (remotely)
for several years, and came highly recommended by colleagues who worked
with him there (Gilles specifically prodded Mukunda to apply). Mukunda
started off as a PHP developer there, and then gradually moved into
more operations- and release-focused activities. Before that, he worked
on network-attached storage (Niveus Media) and on the back-end for web
games (D.Lux Games).
I'm going to borrow a quote from Ori's interview feedback on Mukunda
as to one big reason we hired him. Ori wrote “what impressed me the
most was the palpable gratification he seems to derive from improving
developers' workflows. It motivates him to scrutinize tools and
configurations and to look for ways in which they fit poorly with
developer requirements. Which is exactly what we need for this role.”
Indeed. One big thing that Mukunda did at deviantArt was to champion
the choice of Phabricator for their organization, and he was
responsible for the subsequent migration to and maintenance of their
installation. His timing coming into this organization is
impeccable.[2]
We chose the title "Release Engineer" because it's a reasonably
standard term in the industry for the skills and responsibilities that
we have. In Wikimedia world, we call what Mukunda is doing
"deployment", and really, it's going to be everything from improving
the Beta Cluster, to improving our Vagrant setup, to improving scap,
to setting up Phabricator. If you look at work that Bryan Davis and
Antoine Musso have been doing recently, you won't be too far away from
the work that Mukunda will be tackling.
Please join me in welcoming Mukunda to the Wikimedia Foundation!
Rob
[1] https://www.mediawiki.org/wiki/Wikimedia_Release_and_QA_Team
[2] https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator
We had three Phabricator related sessions in Zürich, two introducing
task/bug management and code review, and one focusing on the Wikimedia
Phabricator Day 1 project.
Check the links at the (re)new(ed) landing page for all things Phabricator:
https://www.mediawiki.org/wiki/Phabricator
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Following change I189ba71de[0], the hierarchical list in
Special:Allpages becomes a simple alphabetic pager if the total number
of pages exceeds a safety threshold. The threshold is designed to
protect wikis on which generating the hierarchical list would be
prohibitively expensive (bug 56840[1]).
I189ba71de resolved the immediate operational issue, but there is a
further question of whether we want to keep the hierarchical list at
all, especially given that it cannot be enabled (in its current
implementation, at least) on larger installations.
From my perspective, the ideal outcome of this discussion would be
that we agree that the hierarchical list is a poor fit for the
MediaWiki of today, and we resolve to remove it from core.
According to stats.grok.se, enwiki's Special:Allpages receives
approximately 158 hits a day.[2]
[0]: https://gerrit.wikimedia.org/r/#/c/94690/
[1]: https://bugzilla.wikimedia.org/show_bug.cgi?id=56840
[2]: http://stats.grok.se/en/latest90/Special:Allpages
---
Ori Livneh
ori(a)wikimedia.org
Hello everyone,
I’m pleased to announce Rob Moen is moving from the VisualEditor team to the Growth team.
In Growth, Rob will fill the team's third full-time engineer position, joining Matthew Flaschen, Sam Smith, and Andrew Russell Green (who's collaborating with Growth on the Campaigns extension). With Rob's help, the team will be able to move faster on projects like its experiments in acquiring anonymous editors and building features for new article creators.
Rob has done amazing work on VisualEditor, starting when the team specifically requested him from Editor Engagement two years ago to work on user interface features such as toolbars and inspectors, and finishing up with mobile integration and major UploadWizard media integration changes.
He’ll bring that deep knowledge of VisualEditor, OOJS, and OOUI to projects focused on experimentation and growth in the editor community. Besides being a rare combination of front-end and full-stack engineer with extensive MediaWiki experience, he’s also the only engineer in Growth in the same timezone as Steven Walling and the design team members.
The VisualEditor team is now looking for candidates for two open engineering positions, one of which will fill Rob's spot on the team. Check out the job description[0], especially if you can refer someone.
Take care,
terry
[0]: http://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qSa9VfwQ&cs=9UL9Vfwt&pag…
terry chay 최태리
Director of Features Engineering
Wikimedia Foundation
“Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment.”
p: +1 (415) 839-6885 x6832
m: +1 (408) 480-8902
e: tchay(a)wikimedia.org
i: http://terrychay.com/
w: http://meta.wikimedia.org/wiki/User:Tychay
aim: terrychay
I think I've articulated our key performance principles, and am looking
for good and bad examples. Our performance guidelines will be a set of
values/principles, each one with a good example and a bad example
drawn from our own experience. Here's the list:
https://www.mediawiki.org/wiki/Performance_guidelines#General_performance_p…
Please feel free to paste links on the talk page, saying whether each is a
good or bad example - I will add prose and explanation. I'll also be
gathering examples on Friday at the Zurich hackathon. I want to polish
this and get it approved during the hackathon.
(I'm still cleaning up the detailed explanation of each principle.)
RfC we can use to discuss larger questions:
https://www.mediawiki.org/wiki/Requests_for_comment/Performance_standards_f…
--
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation