Hi Dimi, 

regarding TERREG you wrote that hopefully next time the MEPs and staffers won't miss a deadline and will run a procedure check. Do you think they really missed it? I'm wondering whether the topic appeared so delicate to some that not filing a motion was a deliberate tactic rather than an oversight, a way to avoid engaging with it. Do you have thoughts on that?

Best
Micha
---
Michael Jahn
Leiter Programme
Director of Programs

Wikimedia Deutschland e. V. | Tempelhofer Ufer 23-24 | 10963 Berlin
Tel. (030) 219 158 26-0
https://wikimedia.de





On Thu, 29 Apr 2021 at 16:19, Dimitar Parvanov Dimitrov <dimitar.parvanov.dimitrov@gmail.com> wrote:
Wow! What a month! The Terrorist Content Regulation passed without a final vote, an Artificial Intelligence law was proposed unexpectedly quickly, and over 600 amendment proposals to the Data Governance Act were tabled. And, and... we started a blog! A lot to unpack, so we will spare you the update on the Digital Services Act this time around, as no big shifts occurred there anyway.

Anna & Dimi


This and previous reports on Meta-Wiki: https://meta.wikimedia.org/wiki/EU_policy/Monitor

======

TERREG

In an unexpected turn of events, the Terrorist Content Regulation has been adopted without a final vote. This was possible due to a procedural peculiarity [01] under which legislation “inherited” from the previous legislative term defaults to adoption without a vote. A vote only takes place if a political group or 71 MEPs table a motion to reject the text or to reopen it for amendments. But since nobody filed one within the given deadline, the adoption was simply announced at the plenary session, to the surprise of many MEPs.

--

This way, the dangers of content filtering, over-policing of content by state and private actors, and cross-border prerogatives for governments will become law twelve months from now without a final stamp of approval from the elected representatives of European citizens. As much as we didn’t expect a miracle rejection of a hard-fought proposal [02], in a democracy it is important to see, through a vote, where your representatives stand.

We can only hope that next time the MEPs and staffers who fought hard to make this text better won’t miss a deadline, and will run a procedure check as part of their preparations for an important vote.

======

AI Regulation

The European Commission proposed the world’s first AI law. Curiously, the EU and US didn’t seem out of sync on this - the Federal Trade Commission published its own set of guidance [03] with partially overlapping requirements. But back to Europe: the proposal wants to ban some uses of AI (real-time facial recognition in public places & social scoring) and to impose obligations on “high-risk” uses (think credit scoring, self-driving cars, social benefits). It requires high-quality data sets, testing for discriminatory outcomes and a certain amount of transparency. The devil is, as always, in the details.

---

Bans: The proposed regulation outlines a list of banned artificial intelligence applications that includes government-conducted social scoring, real-time biometric recognition systems (e.g. facial recognition) and practices that “manipulate persons through subliminal techniques beyond their consciousness” or “exploit vulnerable groups such as children or people with disabilities”. [04] As you might expect, these bans come with numerous exceptions. Real-time facial recognition, for instance, shall be allowed when looking for missing children or in the case of an imminent terrorist threat. Expect long debates and wrestling over the concrete wording.

---

High-risk Uses: A further category of regulated AI applications is “high-risk uses”. Of course, the details of the definition will be key here. Expect some fluffy wording combined with a list of concrete examples in an annex [05], which is supposed to be updated by the European Commission over the years. The proposed annex lists uses that will always count as “high risk”, including transport (think self-driving cars), education, employment, credit scoring or benefits applications, asylum and migration control, and border control management. This list will be a major lobbying battle.

When applying AI to high-risk uses, the operator, producer or distributor is required to have a quality management system, undergo a conformity assessment (through a national authority or via self-assessment), keep documentation and logs, notify a national authority, ensure human oversight, take corrective action when risks are identified and apply the CE marking. [06]

A lot to unpack here and, of course, the devil is in the details. Expect us to look very closely at the education-related AI uses and at what exactly will be covered.

---

Transparency Obligations: There are even fluffier transparency obligations for “certain AI systems”. In a very simplified translation from legalese, the rule basically says that if an AI system interacts with natural persons, the person must know that they are dealing with AI/ML and what it does (e.g. whether it recognises emotions).

---

First reactions and legislative process: We think the proposal is filled with good intentions that could end up as very sensible general rules for AI development and deployment, or could terminate in bureaucratic hell for everyone. Not sure we mentioned this before, but it looks like the devil will be in the details. The European Consumer Organisation (BEUC) criticised that consumers aren’t given a straightforward way to enforce their rights or to access redress and remedies. [07] EDRi and the European Data Protection Supervisor call for adding predictive policing and all forms of biometric surveillance in public places to the unacceptable-uses category. [08] Tech industry trade lobbies such as CCIA and DOT Europe were quick to warn against unnecessary red tape, but also seemed to see some sense in the approach. [09] We are now waiting for the European Parliament committees to fight over and agree which one will be responsible - a three-way race between the Internal Market, Legal Affairs and Civil Liberties committees.

======

Data Governance Act

---

We now have over 600 amendments tabled on the DGA. A lot to unpack, but we will basically support three types of changes:

1. Amendments that will ensure that general interest projects (such as freely licensed knowledge resources) aren’t obliged to register with a national authority (a requirement planned for some cross-industry data-sharing clearinghouses). Currently the wording is unclear. 

2. Amendments that will restrict the use of the sui generis database right.

3. Amendments that will ensure that the DGA doesn’t interfere with the GDPR. 

The MEPs’ meetings to discuss their amendments and look for compromises are scheduled for April and May, but will likely continue after the summer. All amendments: [10][11]

======

wikimedia.brussels

---

Now that stand-alone blogs aren’t cool and hip anymore, we have finally gotten around to starting one :/ The idea behind it is to have a place to write more regularly on legislative files and to establish it as a source for EU policymakers. Here are some reads that are already online:

======

END

======

[01]https://www.europarl.europa.eu/doceo/document/RULES-9-2021-01-18-RULE-069_EN.html 

[02]https://data.consilium.europa.eu/doc/document/ST-14308-2020-REV-1/en/pdf 

[03]https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

[04]https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence

[05]https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75789

[06]https://en.wikipedia.org/wiki/CE_marking

[07]https://www.beuc.eu/publications/eu-proposal-artificial-intelligence-law-weak-consumer-protection/html

[08]https://twitter.com/edri/status/1386968653996888069

[09]https://techcrunch.com/2021/04/21/europe-lays-out-plan-for-risk-based-ai-rules-to-boost-trust-and-uptake/

[10]https://www.europarl.europa.eu/doceo/document/ITRE-AM-692584_EN.pdf

[11]https://www.europarl.europa.eu/doceo/document/ITRE-AM-691468_EN.pdf


_______________________________________________
Publicpolicy mailing list
Publicpolicy@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/publicpolicy