Hi,
Please tell me: what is the reasoning behind making it impossible for a
normal admin to delete pages with more than 5,000 revisions?
Thank you
Martin aka Doc Taxon ...
Hello,
After many years of work, I'm happy to announce a milestone in addressing
one of our major areas of tech debt in database infrastructure: we have
eliminated all schema drifts between MediaWiki core and production.
It all started six years ago, when users on English Wikipedia reported that
checking the history of some pages was quite slow *at random*. More in-depth
analysis showed that the revision table on English Wikipedia was missing an
important index on some of the replicas. An audit of the schema of the
revision table then revealed much bigger drifts in that table on that
wiki. You can read more in its ticket: T132416
<https://phabricator.wikimedia.org/T132416>
A lack of schema parity between expectation and reality is quite dangerous.
Trying to force an index in code, assuming it exists in production under
the same name, causes a fatal error every time it is attempted. Trying to
write to a field that doesn't exist is similar. Such changes easily pass
tests and work well in our test setups (such as the beta cluster), only to
cause an outage in production.
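To make the failure mode concrete, here is a minimal sketch using SQLite's `INDEXED BY` clause, an analogue of MySQL's `FORCE INDEX` hint; the table and index names are illustrative, not the real production schema:

```python
import sqlite3

# Build two tiny replicas of the same table: one with the expected
# index, one where the index was never created (the "drifted" host).
def make_replica(with_index: bool) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE revision (rev_id INTEGER, rev_page INTEGER)")
    if with_index:
        conn.execute("CREATE INDEX rev_page_id ON revision (rev_page, rev_id)")
    return conn

# The code assumes the index exists under this exact name.
query = "SELECT rev_id FROM revision INDEXED BY rev_page_id WHERE rev_page = 1"

healthy = make_replica(with_index=True)
healthy.execute(query)  # fine: the index exists

drifted = make_replica(with_index=False)
try:
    drifted.execute(query)
except sqlite3.OperationalError as e:
    print(e)  # no such index: rev_page_id
```

The query works on the healthy replica and fails hard on the drifted one, which is exactly why such changes pass in test setups and then break in production.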
Since just one table on one wiki had this many drifts, looking at all wikis
and all tables became vitally important. We have around 1,000 wikis and
~200 hosts (each one hosting on average ~100 wikis), each wiki has around
130 tables (half of them tables from MediaWiki core), and each table can
have multiple drifts.
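A quick back-of-the-envelope calculation with those rough numbers shows the scale of the audit surface:

```python
# Rough scale of the audit, using the approximate numbers above.
hosts = 200
wikis_per_host = 100      # each host carries ~100 wikis
tables_per_wiki = 130     # about half are MediaWiki core tables

table_instances = hosts * wikis_per_host * tables_per_wiki
print(f"~{table_instances:,} table instances to audit")  # ~2,600,000
```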
We slowly started looking for and addressing schema drifts five years ago,
and later automated the discovery by utilizing the abstract schema (before
that, the tool had to parse SQL), which revealed an overwhelming number of
drifts. You can look at the history of the work in T104459
<https://phabricator.wikimedia.org/T104459>.
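As a rough illustration of the idea (not the actual WMF tooling, and not the real abstract-schema format), drift detection boils down to diffing an expected schema against what each replica actually reports:

```python
# Minimal drift-detection sketch. All names and the schema layout
# here are illustrative, not the real abstract-schema format.
expected = {
    "revision": {
        "columns": {"rev_id", "rev_page", "rev_timestamp"},
        "indexes": {"PRIMARY", "rev_page_id", "rev_timestamp"},
    },
}

def find_drifts(expected, actual):
    """Return human-readable drift reports for one replica."""
    drifts = []
    for table, spec in expected.items():
        live = actual.get(table)
        if live is None:
            drifts.append(f"missing table: {table}")
            continue
        for kind, singular in (("columns", "column"), ("indexes", "index")):
            # Present in the abstract schema but absent on the replica:
            for name in sorted(spec[kind] - live.get(kind, set())):
                drifts.append(f"{table}: missing {singular} {name}")
            # Present on the replica but unknown to the abstract schema:
            for name in sorted(live.get(kind, set()) - spec[kind]):
                drifts.append(f"{table}: unexpected {singular} {name}")
    return drifts

# One replica lost an index and grew a stray column:
replica = {
    "revision": {
        "columns": {"rev_id", "rev_page", "rev_timestamp", "rev_old_flag"},
        "indexes": {"PRIMARY", "rev_timestamp"},
    },
}
for drift in find_drifts(expected, replica):
    print(drift)
```

Running this against every replica of every wiki is what turns a one-off audit into the automated discovery described above.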
Around fifty tickets addressing the drifts have been completed; they are
collected in T312538 <https://phabricator.wikimedia.org/T312538>. I suggest
checking some of them to see the scale of the work done. Each of these
tickets took days to months of work to finish. A large number of the drifts
also existed in primary databases, requiring a primary switchover and
read-only time for one or more wikis. Each drift was different; in some
cases the code needed to change rather than production, so each one needed
a thorough investigation.
Why do such drifts happen? The most common reason was a schema change that
happened in code but was never requested to be applied in production. For
example, a schema change in code in 2007 led to any wiki created before
that date having a different schema than wikis created after it. In 2015 we
introduced processes
<https://wikitech.wikimedia.org/wiki/Schema_changes#Workflow_of_a_schema_cha…>
and tooling to make sure this doesn't happen anymore, but we still needed
to address the pre-existing drifts. The second most common reason was a
host that didn't get a schema change for various reasons (for example,
being out of rotation while the schema change was being applied, a
shortcoming of the manual process). By automating
<https://wikitech.wikimedia.org/wiki/Auto_schema> most of the schema change
operational work, we reduced the chance of such drifts happening as well.
After finishing core, we now need to look at WMF-deployed extensions,
starting with FlaggedRevs <https://phabricator.wikimedia.org/T313253>,
which, despite being deployed to only 50 wikis and having only eight
tables, has ~7,000 drifts. Thankfully, most other extensions are in a
healthier state.
I would like to personally thank Manuel Arostegui and Jaime Crespo for
their monumental dedication to fixing these issues over the past years.
Also a big thank you to several of our amazing developers, Umherirrender,
James Forrester, and Sam Reed, who helped with reporting, going through the
history of MediaWiki to figure out why these drifts happened, and building
the reporting tools.
Best
--
*Amir Sarabadani (he/him)*
Staff Database Architect
Wikimedia Foundation <https://wikimediafoundation.org/>
Hi everyone,
The Movement Strategy Forum <https://forum.movement-strategy.org/> (MS
Forum) is a multilingual collaborative space for all conversations about
Movement Strategy implementation. We are inviting all Movement participants
to collaborate on the MS Forum. The goal of the forum is to build community
collaboration using an inclusive multilingual platform.
The Movement Strategy
<https://meta.wikimedia.org/wiki/Special:MyLanguage/Movement_Strategy> is a
collaborative effort to imagine and build the future of the Wikimedia
Movement. Anyone can contribute to the Movement Strategy, from a comment to
a full-time project.
Join this forum with your Wikimedia account, engage in conversations, and
ask questions in your language.
The Movement Strategy and Governance (MSG) team launched the proposal for
this MS Forum in May. After a two-month review period, we have just
published the Community Review Report
<https://forum.movement-strategy.org/t/ms-forum-community-review-report/1436>.
It includes a summary of the discussions, metrics, and information about
the next steps.
We look forward to seeing you at the MS Forum!
Best regards,
--
Software developer | Python mentor @OpenCLassrooms | E-commerce specialist
| Founder and Tech lead at CAURIS DEV: https://www.cauris-dev.com/
Hello!
A friendly reminder that the feedback period ends on the 21st of August.
Please spare 5-10 minutes to leave feedback[0] on the Toolhub taxonomy[1].
Toolhub[2] is a catalog of 1500+ tools used by a wide range of Wikimedia
contributors: editors, developers, patrollers, researchers, admins and more.
We want to make finding and categorizing these tools as easy as possible.
The taxonomy is at the heart of how tool search works, and your feedback
would help improve it.
Whether you are a current user of Toolhub or hearing about it for the first
time doesn't matter – your input is valuable and much appreciated either
way!
=== How To Provide Feedback ===
Use the discussion page[3] of the feedback page to provide your responses
to the questions.
You will find more details on the feedback page.
=== Implementation ===
At the end of the feedback round, the team will evaluate and work on the
necessary improvements.
This is expected to be completed by the end of September 2022.
[0]: https://meta.wikimedia.org/wiki/Toolhub/Data_model/Feedback
[1]: https://meta.wikimedia.org/wiki/Toolhub/Data_model#Taxonomy_v2
[2]: https://toolhub.wikimedia.org/
[3]: https://meta.wikimedia.org/wiki/Talk:Toolhub/Data_model/Feedback
Thanks
--
Seyram Komla Sapaty
Developer Advocate
Wikimedia Cloud Services
My dear friends, these days people are caught up in the rush of life. Some
people live for others as well as for themselves, and some live only and
only for themselves. I don't know whether these people's hearts are made of
stone. Today I am going to tell you a story in which a mountain of sorrow
and suffering came crashing down. There was a time when their life was very
happy; these people were quite wealthy.
The first pillar of Islam is the declaration and testimony of the kalma of
tawhid. It has two parts. One is that only God is worthy of worship. One
who believes in God worships only Him, fears only Him, obeys only His
commands, and lives as His servant alone...
Hello everyone,
The sixth workshop on the topic of "How to maintain bots" is coming up - it
will take place on Friday, July 29th at 16:00 UTC. You can find more
details on the workshop and a link to join here: <
https://meta.wikimedia.org/wiki/Small_wiki_toolkits/Workshops#How_to_mainta…>
[1].
This session will focus on best practices for maintaining bots and tools in
the Wikimedia ecosystem. It will cover a few practices that can help
developers run a bot or a tool with help from others, such as picking a
license, adding co-maintainers to the project, publishing source code,
writing docs, and much more.
To participate in this workshop, you will need basic familiarity with bot
or tool development. You can add your discussion ideas to the etherpad doc
linked from the workshops page.
We look forward to your participation!
Best,
Srishti
On behalf of the SWT Workshops Organization team
[1]
https://meta.wikimedia.org/wiki/Small_wiki_toolkits/Workshops#How_to_mainta…
*Srishti Sethi*
Senior Developer Advocate
Wikimedia Foundation <https://wikimediafoundation.org/>
Good evening all,
Please be aware that the deployment-prep beta cluster is currently offline
due to technical issues following the Cloud Services incident earlier today.
You can follow
https://phabricator.wikimedia.org/T315350 for updates on the issues.
A massive thank you to Brian King and TheresNoTime for working on the
issues so far.
Any assistance is appreciated, on the task or in #wikimedia-releng on
Libera. There is no ETA for service restoration at the moment, as the cause
of the issue is unclear.
Thanks,
RhinosF1/Samuel
--
Thanks,
Samuel