Hello,
I will be *upgrading Gerrit* from the 3.7 series to the 3.8 series. I
have scheduled the upgrade for *Monday March 25th at 9am UTC*. It is
immediately after the UTC morning backport & config window.
The upgrade requires the Gerrit service to be stopped for the duration
of the upgrade. Given we do not need to reindex all the changes, the
downtime should be just a few minutes.
Gerrit 3.8 brings:
* Rebase on behalf of the uploader, so that the rebaser does not take
over the change (the original uploader is preserved)
* Rebase a chain of changes: when working with a series of changes, the
whole series can be rebased atomically, which saves a lot of manual
rebasing actions
* Browser Notifications: get a notification when a change requires
your attention
* And more UI changes
<https://www.gerritcodereview.com/3.8.html#gerrit-ui-changes>
The release notes: https://www.gerritcodereview.com/3.8.html
The upgrade task: https://phabricator.wikimedia.org/T354886
Deployment calendar entry
<https://wikitech.wikimedia.org/wiki/Deployments#deploycal-item-20240325T0900>
Antoine "hashar" Musso
Wikimedia Release Engineering
Hi,
*TL;DR*
We have started our journey of deprecating the ObjectCache
<https://doc.wikimedia.org/mediawiki-core/master/php/classObjectCache.html>
class and moving to ObjectCacheFactory
<https://doc.wikimedia.org/mediawiki-core/master/php/classObjectCacheFactory…>
: https://gerrit.wikimedia.org/r/c/mediawiki/core/+/955771. This means we
have stopped using the *$instances* member in ObjectCache. If you are an
extension author or maintainer, please look at the new interface in
ObjectCacheFactory and migrate callers (if you find any).
*Longer version*
Recently, there has been some ongoing work on ObjectCache, and we realized
the code had effectively grown a factory pattern, with various code paths to
configure, set up, and obtain BagOStuff cache instances. With this patch:
https://gerrit.wikimedia.org/r/c/mediawiki/core/+/955771, we created a
proper MediaWiki factory service and deprecated various methods on the
ObjectCache class, including the static member `ObjectCache::$instances`,
which used to hold references to the various cache instances
(BagOStuff). Sounds familiar? Yes, we're getting rid of scattered usage of
global state one class at a time :).
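To make the shape of the change concrete, here is a minimal sketch of the pattern in Python rather than MediaWiki's actual PHP API (the class and method names below only mirror the ones mentioned above and are otherwise hypothetical): a class-level instance registry is replaced by a factory service that owns its instances.

```python
# Sketch only: Python stand-ins for the PHP classes discussed above.
class BagOStuff:
    """Stand-in for a cache backend instance."""
    def __init__(self, params):
        self.params = params

# Before: global state as a static class member (a la ObjectCache::$instances).
class ObjectCache:
    instances = {}  # shared, mutable, reachable from anywhere

    @classmethod
    def get_instance(cls, key, params):
        if key not in cls.instances:
            cls.instances[key] = BagOStuff(params)
        return cls.instances[key]

# After: a factory service owns the instances; callers receive the factory
# via the service container / dependency injection instead of touching a
# global.
class ObjectCacheFactory:
    def __init__(self, config):
        self._config = config      # e.g. a $wgObjectCaches-style map
        self._instances = {}       # private, owned by this service

    def get_instance(self, key):
        if key not in self._instances:
            self._instances[key] = BagOStuff(self._config[key])
        return self._instances[key]
```

Repeated calls for the same key still return the same instance, but the registry now lives behind a service that a test can replace wholesale instead of having to reset a static member.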
We have taken care of the places [1][2][3] where this public member was
referenced, so we are certain there are no consumers of this field
<https://codesearch.wmcloud.org/deployed/?q=ObjectCache%3A%3A%5C%24instances>
in MW core or in the extensions we deploy in production today. We
encourage extension authors and maintainers to migrate to the new way of
constructing and/or obtaining cache instances via ObjectCacheFactory, which
goes through our global services container (with DI capabilities).
You can also have a look at the Phabricator task [4] which explains the
problem that this refactoring solves/improves and the impact it has on
MediaWiki. The patch will ride the train next week (starting March 25th,
2024) and if there are any issues found along the way, please file a task
and add #MediaWiki-lib-BagOStuff.
This work is only step 1 toward unifying, centralizing, and removing
global state in the logic around the ObjectCache class, and making it
consistent with how we use the global service locator in MediaWiki today.
External/related links, see:
* https://www.mediawiki.org/wiki/Manual:$wgObjectCaches
* https://www.mediawiki.org/wiki/Object_cache
Thank you!
P.S.: I personally want to thank Daniel Kinzler and Timo Tijhof for all the
code review and guidance that helped this work materialize; it is about to
hit production. <3
[1] https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1011159
[2]
https://gerrit.wikimedia.org/r/c/mediawiki/extensions/ConfirmEdit/+/1009498
[3]
https://gerrit.wikimedia.org/r/c/mediawiki/extensions/DonationInterface/+/1…
[4] https://phabricator.wikimedia.org/T358346
--
Derick,
On behalf of MediaWiki Platform Team
Hello everyone,
Please join us in celebrating a very successful Datacenter Switchover. This
switch to our data center in Virginia was run by Effie Mouzeli. Despite a
minor hiccup with Effie's network connection (a similar thing happened to
Clément a year ago; this is starting to become a pattern), it was
completed without a hitch.
For context, the Site Reliability Team (SRE) runs a planned data center
switchover periodically, moving all wikis from our primary data center in
(for this instance, Texas) to the secondary data center (for this instance,
Virginia). This is an important periodic test of our tools and procedures,
to ensure the wikis will continue to be available even in the event of
major technical issues. It also gives all our SRE and ops teams a chance to
do maintenance and upgrades on systems that normally run 24 hours a day.
The switchover process requires a brief read-only period for all
Foundation-hosted wikis, which started at 14:00 UTC on Wednesday March
20th, and lasted 3 minutes and 8 seconds. All our public and private wikis
continued to be available for reading as usual. Users saw a notification of
the upcoming maintenance, and anyone still editing was asked to try again
in a few minutes.
As with the previous Switchover, I've been trying to discern the effect of
the Switchover in the many graphs we use to monitor the infrastructure at
https://grafana.wikimedia.org. In many of them, it's impossible to tell that
the event happened at all. We consider this very nice and attribute it to
various improvements made over the years by many teams, in and outside SRE.
The graph where the event is most discernible is the edit rate.
This switchover is our first where we are predominantly on MediaWiki on
Kubernetes, setting a very nice milestone for the project.
As per our newer process, we no longer have a Switchback. We will be
staying in Virginia as our primary data center for the next 6 months,
switching back to Texas on Wednesday, September 25th.
As always, my deepest thanks to all people that have helped with this, in
one way or another, ranging from the person running point, to all SREs and
developers/deployers participating or having contributed, to people in
Movement Communications for helping with the messaging.
To report any issues, you can reach us in #wikimedia-sre on IRC, or file a
Phabricator ticket with the datacenter-switchover tag (pre-filled form
here); we'll be monitoring closely for reports of trouble during and after
the switchover. (If you're new to Phab, there's more information at
Phabricator/Help.) The switchover, its preparation, and follow-up actions
are tracked in Phabricator task T357547.
--
Alexandros Kosiaris
Principal Site Reliability Engineer
Wikimedia Foundation
Hello everyone,
Wikimedia is gearing up to apply as a mentoring organization for Google
Summer of Code 2024 <
https://www.mediawiki.org/wiki/Google_Summer_of_Code/2024>[1] and Outreachy
Round 28 <https://www.mediawiki.org/wiki/Outreachy/Round_28> [2].
Currently, we're crafting a list of exciting project ideas for the
application. If you have any suggestions for projects, whether coding or
non-coding (design, documentation, translation, outreach, research), please
share them by February 5th via this Phabricator task: <
https://phabricator.wikimedia.org/T354734> [3]. Note that for non-coding
projects eligible for Outreachy, slots are limited and will be allocated to
mentors on a first-come, first-served basis.
Timeline
In your role as a mentor, your involvement spans the application period for
both programs, taking place from March to April. During this time, you'll
guide candidates in making small contributions to your project and address
any project-related queries they may have. As the application period
concludes, you'll further intensify your collaboration with accepted
candidates throughout the coding period, which extends from May to August.
Your support and guidance are crucial to their success in the program.
Guidelines for Crafting Project Proposals:
- Follow this task description template when you propose a project in
Phabricator: <https://phabricator.wikimedia.org/tag/outreach-programs-projects>
[4]. You can also use this workboard to pick an idea if you don't have one
already. Add the #Google-Summer-of-Code (2024) or #Outreachy (Round 28) tag.
- A project should require an experienced developer ~15 days and a newcomer
~3 months to complete.
- Each project should have at least two mentors, including one with a
technical background.
- Ideally, the project has no tight deadlines, a moderate learning curve,
and fewer dependencies on Wikimedia's core infrastructure. Projects
addressing the needs of a language community are most welcome.
*Learn more about the roles and responsibilities of mentors for both
programs:*
- Outreachy: <https://www.mediawiki.org/wiki/Outreachy/Mentors> [5]
- Google Summer of Code: <https://www.mediawiki.org/wiki/Google_Summer_of_Code/Mentors> [6]
Thank you,
Links:
[1] https://www.mediawiki.org/wiki/Google_Summer_of_Code/2024
[2] https://www.mediawiki.org/wiki/Outreachy/Round_28
[3] https://phabricator.wikimedia.org/T354734
[4] https://phabricator.wikimedia.org/tag/outreach-programs-projects
[5] https://www.mediawiki.org/wiki/Outreachy/Mentors
[6] https://www.mediawiki.org/wiki/Google_Summer_of_Code/Mentors
--
*Onyinyechi Onifade *
Technical Community Program Manager
Wikimedia Foundation <https://wikimediafoundation.org/>
Please take a look at https://phabricator.wikimedia.org/T360357. I know 19
days is very short notice to request this, but in this country (Argentina)
it is very difficult to schedule this further in advance. Thanks in advance!!
As of 2024-03-14T11:02 UTC the Toolforge Grid Engine service has been
shutdown.[0][1]
This shutdown is the culmination of a final migration process from
Grid Engine to Kubernetes that started in late 2022.[2] Arturo
wrote a blog post in 2022 that gives a detailed explanation of why we
chose to take on the final shutdown project at that time.[3] The roots
of this change go back much further however to at least August of 2015
when Yuvi Panda posted to the labs-l list about looking for more
modern alternatives to the Grid Engine platform.[4]
Some tools have been lost and a few technical volunteers have been
upset as many of us have striven to meet a vision of a more secure,
performant, and maintainable platform for running the many critical
tools hosted by the Toolforge project. I am deeply sorry to each of
you who have been frustrated by this change, but today I stand to
celebrate the collective work and accomplishment of the many humans
who have helped imagine, design, implement, test, document, maintain,
and use the Kubernetes deployment and support systems in Toolforge.
Thank you to the past and present members of the Wikimedia Cloud
Services team. Thank you to the past and present technical volunteers
acting as Toolforge admins. Thank you to the many, many Toolforge tool
maintainers who use the platform, ask for new capabilities, and help
each other make ever better software for the Wikimedia movement. Thank
you to the folks who will keep moving the Toolforge project and
other technical spaces in the Wikimedia movement forward for many,
many years to come.
[0]: https://sal.toolforge.org/log/DrOgPI4BGiVuUzOd9I1b
[1]: https://wikitech.wikimedia.org/wiki/Obsolete:Toolforge/Grid
[2]: https://wikitech.wikimedia.org/wiki/News/Toolforge_Grid_Engine_deprecation#…
[3]: https://techblog.wikimedia.org/2022/03/14/toolforge-and-grid-engine/
[4]: https://lists.wikimedia.org/pipermail/labs-l/2015-August/003955.html
Bryan, on behalf of the Toolforge administrators
--
Bryan Davis Wikimedia Foundation
Principal Software Engineer Boise, ID USA
[[m:User:BDavis_(WMF)]] irc: bd808
Hello,
We are now only three weeks away from the Wikimedia Wishathon! Exciting
news - User:Lucas Werkmeister has signed up to host a piano concert during
a social hour 🎉
Join us and contribute to the development of community wishes between March
15th and 17th! Participate in discussion sessions and work on user scripts,
gadgets, extensions, tools and more!
The full event schedule is available here: <
https://meta.wikimedia.org/wiki/Event:WishathonMarch2024>.
Explore the event wiki for project ideas and keep an eye out for
non-technical tasks (documentation and design-related) that will soon be
added to the Wishathon workboard: <
https://phabricator.wikimedia.org/project/view/5906/>. Project breakouts
will also be added to the schedule, where you can participate in wish
development or explore innovative solutions as a user, developer, or
designer.
We are seeking volunteers to assist with a wide range of activities such as
monitoring discussion channels during hacking hours, answering technical
queries, and helping with session note-taking. Check out the Help desk
schedule and add yourself to a slot where you are available and interested
in providing assistance: <
https://meta.wikimedia.org/wiki/Event:WishathonMarch2024/Help_desk>.
If you have any questions about the Wishathon, reach out via Telegram: <
https://t.me/wmhack>.
Cheers,
Srishti
On behalf of the Wishathon organizing committee
*Srishti Sethi*
Senior Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>
I'm trying to use the new workflow for uploading Docker images to the
registry. Following the link under wikitech:Docker-registry#Downloading
images
<https://wikitech.wikimedia.org/wiki/Docker-registry#Downloading_images> I
ended up on mw:GitLab/Workflows/Deploying services to production
<https://www.mediawiki.org/wiki/GitLab/Workflows/Deploying_services_to_produ…>
as the recommended way to do it.
As far as I can tell the service repos should live under
repos/mediawiki/services/ in Gitlab and you need to have access to the
group to import repos there. I clicked on "Request access" in the menu for
that group, but I don't think anything has happened since then. Is there
anything else I need to do to be granted access?
For context, the service I want to add a Docker image for is part of
Speechoid, a service bundle(?) for Wikispeech
<https://www.mediawiki.org/wiki/Extension:Wikispeech>. Currently we have a
few other services that have their code on Gerrit
<https://gerrit.wikimedia.org/r/admin/repos/q/filter:wikispeech>.
*Sebastian Berlin*
Utvecklare/*Developer*
Wikimedia Sverige (WMSE)
E-post/*E-Mail*: sebastian.berlin(a)wikimedia.se
Telefon/*Phone*: (+46) 0707 - 92 03 84
On my Debian 11 VPS, cron.service is running and there is an /etc/crontab.
The same is true on my Debian 12 machines at home.
On my Debian 12 VPS, cron.service is not running and there is no
/etc/crontab. However, /etc/cron.daily etc. exist and have scripts. In the
past, crontab also controlled the daily etc. jobs. Does the cron package
need to be installed, or is there another mechanism?
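One way to narrow this down is to check which of the candidate mechanisms are actually present on the box. A minimal Python sketch (the paths checked are standard Debian locations; whether systemd timers or a package such as systemd-cron is what runs /etc/cron.daily on a given image is an assumption to verify, not something this script proves):

```python
import os
import shutil

def periodic_job_drivers():
    """Report which periodic-job mechanisms appear to be present."""
    return {
        "/etc/crontab": os.path.exists("/etc/crontab"),
        "/etc/cron.daily": os.path.isdir("/etc/cron.daily"),
        "cron binary": shutil.which("cron") is not None,
        "systemctl (timers)": shutil.which("systemctl") is not None,
    }

if __name__ == "__main__":
    for name, present in sorted(periodic_job_drivers().items()):
        print(f"{name}: {'present' if present else 'absent'}")
```

If `systemctl` is present, `systemctl list-timers` will show whether timer units are running the daily jobs; if the classic daemon is wanted, installing the `cron` package should bring back cron.service and /etc/crontab, though the exact default set depends on the image.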