Is https://wikitech.wikimedia.org/wiki/Operating_system_upgrade_policy
accurate?
It shows support ending as follows:
Buster: September 2023
Bullseye: September 2025
I have the impression that VPS support for Buster is ending in May or June
of this year.
Also, if I look at an instance's OS in Horizon I see
debian-12.0-bookworm (deprecated 2024-04-10)
I'm not clear why this would be deprecated already.
Thanks for clarifying.
TL;DR: If you start to notice new or noisy puppet failures on your VMs,
please notify me directly or open a phab ticket and assign it to me
(Andrew).
==
What's happening:
Over the last few weeks I've been upgrading cloud-vps puppet servers to
newer builds that support the latest version of the puppet config
language, version 7. That's done in almost all cases; for a few
project-local puppetmasters that I've been nervous about messing with
directly, I've opened phabricator tickets and assigned them to project
admins. For clarity, I've been using 'puppetserver' terminology for the
new servers, whereas the older servers were generally called
'puppetmasters.' [0]
Now that most servers are upgraded, it's time for me to flip the setting
that causes them to actually use the version 7 parser and compiler. In
almost all cases this will be backwards-compatible with the existing
catalogs but we may turn up a few edge cases that require repair.
What you need to do:
If you have one of those phab tickets about puppetservers open for your
project, please respond on the ticket so I know you're there and know
what your plan is.
All other users, please reach out to me if you start seeing new or
surprising puppet failures and I'll help sort out the transition.
-Andrew
[0] https://wikitech.wikimedia.org/wiki/Help:Project_puppetserver
_______________________________________________
Cloud-announce mailing list -- cloud-announce(a)lists.wikimedia.org
List information: https://lists.wikimedia.org/postorius/lists/cloud-announce.lists.wikimedia.…
Hi all!
This is to let you know that Toolforge continuous jobs now support
health checks!
To use them, pass `--health-check-script ./script.sh` when creating
your job. You can also provide the script as a string, like this:
`--health-check-script "cat /etc/os-release"`. Toolforge will
periodically attempt to execute your health-check script inside your
running job, and will restart the job if the script completes with an
exit code of 1.
Note: if you use a script file for the health check, do not forget to
make the file executable (`chmod u+x script.sh`). If Toolforge can't
execute your health-check script, your job will never start.
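As an illustration, here is a minimal heartbeat-style health check (a sketch: the script name, the heartbeat path, and the staleness threshold are all examples, not Toolforge conventions). The idea is that the job touches the heartbeat file while it is working, and the check exits 1 when the file is missing or stale:

```shell
# Write a sample health-check script (hypothetical paths and threshold).
cat > healthcheck.sh <<'EOF'
#!/bin/bash
HEARTBEAT="${HEARTBEAT:-/tmp/heartbeat}"  # file the job touches while alive
MAX_AGE=300                               # seconds before it counts as stale
[ -f "$HEARTBEAT" ] || { echo "unhealthy: no heartbeat"; exit 1; }
age=$(( $(date +%s) - $(stat -c %Y "$HEARTBEAT") ))
[ "$age" -le "$MAX_AGE" ] || { echo "unhealthy: ${age}s stale"; exit 1; }
echo "healthy"
EOF
chmod u+x healthcheck.sh  # the file must be executable

# Quick local check: a fresh heartbeat reports healthy.
touch /tmp/heartbeat
./healthcheck.sh
```

You would then pass `--health-check-script ./healthcheck.sh` when creating the job.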
Also a reminder that you can find this and smaller user-facing updates about
the Toolforge platform features here:
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Changelog
Original task: https://phabricator.wikimedia.org/T335592
--
Ndibe Raymond Olisaemeka
Software Engineer - Technical Engagement
Wikimedia Foundation <https://wikimediafoundation.org/>
Hi,
Toolforge's Harbor instance (image registry) will be down briefly for a
version upgrade from 2.9.0 to 2.10.1 tomorrow, Thursday 4 April, at
9:00 UTC.
https://phabricator.wikimedia.org/T354507
This should not affect any tools that are not using the new build service,
nor any tools that are already running.
https://wikitech.wikimedia.org/wiki/Help:Toolforge/Build_Service
If you are using the build service, you will not be able to run any new
builds, or to start a job or a webservice from an image built with the
build service, while Harbor is down. The outage is expected to last a
few minutes.
We will send an update before starting maintenance work, and once
everything is back up and running.
Cheers,
--
Slavina Stefanova (she/her)
Software Engineer | Developer Experience
Wikimedia Foundation
Hello!
In order to conserve resources and prevent bot-net hijacking, cloud-vps
users have a few maintenance responsibilities. This spring two of these
duties have come due: an easy one and a hard one. TL;DR: visit
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge, claim
your projects, and replace any hosts still running Debian Buster.
-- #1: Claim your projects --
This one is easy. Please visit the following wiki page and make a small
edit in your project(s) section, indicating whether you are or aren't
still using your project:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge
This serves a few purposes: it allows us to identify and shut down
abandoned or no-longer-useful projects, it gives us updated information
about who cares about a given project (often useful for future contact),
and it increases visibility into projects that are used but
unmaintained.
Regarding that last item: if you know that you depend on a project but
are not an admin or member of that project, please make a note of that
on the above page as well!
-- #2: Replace Debian Buster --
This one may require some work. Long-term support for the Debian Buster
OS release is quickly running out (it ends June 30), so VMs running
Buster need to be replaced with hosts running a newer Debian version.
You may or may not be responsible for Buster instances; you can see a
breakdown of remaining Buster hosts on either of these pages:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge (you
should be visiting that page anyway, because of item 1)
https://os-deprecation.toolforge.org/
More details about this process can be found here:
https://wikitech.wikimedia.org/wiki/News/Buster_deprecation
Typically in-place upgrades of VMs don't work all that well, so my
advice is to start fresh with a new server running Bookworm and to
migrate workloads to the new host. I've found Cinder volumes to be a
big help in this process; once all of your persistent data and config is
in a detachable volume, it's fairly straightforward to move, and it will
make future upgrades that much easier.
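The data move itself can be as simple as an archive copy onto the attached volume. The sketch below uses stand-in temporary directories, since the real source path and the volume's mount point depend entirely on your project:

```shell
# Stand-ins for the real locations (assumptions, not fixed paths):
# on a VM, SRC might be something like /srv/app and VOL the volume mount.
SRC=$(mktemp -d)   # old VM's persistent data directory
VOL=$(mktemp -d)   # mount point of the attached Cinder volume

echo "precious state" > "$SRC/app.db"

# Archive copy preserves modes and timestamps.
cp -a "$SRC/." "$VOL/"
ls "$VOL"
```

After detaching the volume in Horizon and attaching it to the new Bookworm VM, mount it at the same path and the workload can typically pick up where it left off.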
WMCS staff will be standing by to assist with any quota changes you
might need for this move; you can open a quota request ticket at
https://phabricator.wikimedia.org/project/view/2880/ -- and, as always,
we'll do our best to support you on IRC and on the cloud mailing list.
Thank you for your support and attention!
-Andrew + the WMCS team
Quarry will move to k8s on Monday 2024-04-01. Part of this will
involve exporting and importing the database, as well as syncing the
NFS. As a result, any queries run during the cutover window may be
lost. As always, don't rely on Quarry to save queries: keep any
important queries local to your system and copy them into Quarry.
Thank you
--
*Vivian Rook (They/Them)*
Site Reliability Engineer
Wikimedia Foundation <https://wikimediafoundation.org/>
As of 2024-03-14T11:02 UTC the Toolforge Grid Engine service has been
shut down.[0][1]
This shutdown is the culmination of a final migration process from
Grid Engine to Kubernetes that started in late 2022.[2] Arturo
wrote a blog post in 2022 that gives a detailed explanation of why we
chose to take on the final shutdown project at that time.[3] The roots
of this change go back much further, however, to at least August 2015,
when Yuvi Panda posted to the labs-l list about looking for more
modern alternatives to the Grid Engine platform.[4]
Some tools have been lost and a few technical volunteers have been
upset as many of us have striven to meet a vision of a more secure,
performant, and maintainable platform for running the many critical
tools hosted by the Toolforge project. I am deeply sorry to each of
you who have been frustrated by this change, but today I stand to
celebrate the collective work and accomplishment of the many humans
who have helped imagine, design, implement, test, document, maintain,
and use the Kubernetes deployment and support systems in Toolforge.
Thank you to the past and present members of the Wikimedia Cloud
Services team. Thank you to the past and present technical volunteers
acting as Toolforge admins. Thank you to the many, many Toolforge tool
maintainers who use the platform, ask for new capabilities, and help
each other make ever better software for the Wikimedia movement. Thank
you to the folks who will keep moving the Toolforge project and
other technical spaces in the Wikimedia movement forward for many,
many years to come.
[0]: https://sal.toolforge.org/log/DrOgPI4BGiVuUzOd9I1b
[1]: https://wikitech.wikimedia.org/wiki/Obsolete:Toolforge/Grid
[2]: https://wikitech.wikimedia.org/wiki/News/Toolforge_Grid_Engine_deprecation#…
[3]: https://techblog.wikimedia.org/2022/03/14/toolforge-and-grid-engine/
[4]: https://lists.wikimedia.org/pipermail/labs-l/2015-August/003955.html
Bryan, on behalf of the Toolforge administrators
--
Bryan Davis Wikimedia Foundation
Principal Software Engineer Boise, ID USA
[[m:User:BDavis_(WMF)]] irc: bd808
Hello,
We are now only three weeks away from the Wikimedia Wishathon! Exciting
news - User:Lucas Werkmeister has signed up to host a piano concert during
a social hour 🎉
Join us and contribute to the development of community wishes between March
15th and 17th! Participate in discussion sessions and work on user scripts,
gadgets, extensions, tools and more!
The full event schedule is available here:
<https://meta.wikimedia.org/wiki/Event:WishathonMarch2024>.
Explore the event wiki for project ideas and keep an eye out for
non-technical tasks (documentation and design-related) that will soon be
added to the Wishathon workboard:
<https://phabricator.wikimedia.org/project/view/5906/>. Project breakouts
will also be added to the schedule, where you can participate in wish
development or explore innovative solutions as a user, developer, or
designer.
We are seeking volunteers to assist with a wide range of activities such as
monitoring discussion channels during hacking hours, answering technical
queries, and helping with session note-taking. Check out the Help desk
schedule and add yourself to a slot where you are available and interested
in providing assistance:
<https://meta.wikimedia.org/wiki/Event:WishathonMarch2024/Help_desk>.
If you have any questions about the Wishathon, reach out via Telegram:
<https://t.me/wmhack>.
Cheers,
Srishti
On behalf of the Wishathon organizing committee
*Srishti Sethi*
Senior Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>
Hello all,
We are on the last stretch of the grid engine deprecation process[0] and
this means that the grid will be shutting down on Thursday, the 14th of
March.
You can find a reminder of the full timeline here.[1]
There are about 30 tools still running on the grid. If yours is one of
the few left to migrate, kindly ensure it is migrated before the 14th,
or reach out[2] to the team if you are facing any challenges or need
some assistance.
We have also reached out on phabricator and via email to the remaining
maintainers that still have their tools running on the grid to see if we
can help ease the migration or see if there are any blocking issues.
If you have a tool that is still on the grid and you cannot meet the
above deadline, kindly reach out via the tool migration phabricator
ticket or our support channels[2]. Note that this is a hard deadline and
no extensions will be granted, but we might be able to help you with the
transition.
We really appreciate all the effort and feedback given on the new
platform; this will help us improve our service and reduce the long-term
maintenance burden for tool maintainers and Toolforge admins alike.
[0]:
https://wikitech.wikimedia.org/wiki/News/Toolforge_Grid_Engine_deprecation
[1]:
https://wikitech.wikimedia.org/wiki/News/Toolforge_Grid_Engine_deprecation#…
[2]:
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/About_Toolforge#Commun…
--
Seyram Komla Sapaty
Developer Advocate
Wikimedia Cloud Services
Hi all!
Good news: we have enabled health checks for all the webservices running
on Toolforge.
There's no action required on your part; the next time you restart or
stop/start your webservice, it will have a TCP health check by default
(which just makes sure something is listening).
The most interesting feature, though, is being able to pass a URL to use
as an HTTP health check.
To do so, pass `--health-check-url /path/to/health` to your `toolforge
webservice start` command, and Toolforge will automatically restart your
webservice if it stops responding on that path (you can change the path
to whatever you want, e.g. `/`).
Note that this URL will be queried quite often, so try to avoid pointing
it at a page that uses many resources.
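Under the hood this amounts to periodically probing the URL and restarting the webservice when the probe fails. A rough local illustration of such a probe (the port 8099, the use of python3's built-in server as a stand-in webservice, and the probe helper are all assumptions for the demo, not Toolforge internals):

```shell
# Stand-in webservice: serve the current directory on localhost.
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!

# Probe a URL and print the HTTP status (0 on connection failure).
probe() {
    python3 -c 'import sys, urllib.request
try:
    print(urllib.request.urlopen(sys.argv[1], timeout=5).status)
except Exception:
    print(0)' "$1"
}

# Retry briefly while the server starts up; a 200 counts as healthy.
for _ in 1 2 3 4 5; do
    code=$(probe "http://127.0.0.1:8099/")
    [ "$code" = "200" ] && break
    sleep 1
done
echo "health probe got HTTP $code"
kill "$srv"
```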
Also a reminder that you can find this and smaller user-facing updates about
the Toolforge platform features here:
https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Changelog
Original task: https://phabricator.wikimedia.org/T341919
Cheers!
--
David Caro
SRE - Cloud Services
Wikimedia Foundation <https://wikimediafoundation.org/>
PGP Signature: 7180 83A2 AC8B 314F B4CE 1171 4071 C7E1 D262 69C3
"Imagine a world in which every single human being can freely share in the
sum of all knowledge. That's our commitment."