03 May 2024
AI as a New Election Campaign Tool
AI systems are increasingly being used to generate political content in Germany as well. Political parties are harnessing the capabilities of artificial intelligence to generate and spread false content about their political opponents. A closer examination shows that the new European digital regulation is not tailored to political matters. Continue reading >>
14 March 2024
Shortcomings of the AI Act
After the much-awaited vote by the European Parliament on 13 March 2024, it is time to begin evaluating the state of fundamental rights in light of the AI Act. This blog post examines three areas of potential inconsistency and risk: the differentiation between provider and deployer, the use of biometrics in real time and post factum, and the standards for biometric recognition in the area of immigration. Continue reading >>
23 January 2024
Orwellian Indifference and European Democracy
It was proudly announced on 9 December 2023 that “the AI deal is done”, or so the Council’s press release at the time might be paraphrased. By now, however, concern is warranted with regard to the further shaping of the compromise that was reached. After the institutions, in a drawn-out final meeting of the trilogue negotiations between Parliament, Council, and Commission, arrived at a “provisional agreement” that could bring the long and winding road of the (supposedly) “world’s first” AI regulation to an end, the concrete draft text of the AI Act expected for the end of January appears to depart in several respects from what was agreed there. On the substance, weighty legal objections can be raised against the specific regulatory approach; weighing even more heavily on the current developments of the AI Act, however, is the significant and critical deficit of democratic legitimation of this important regulatory decision. Continue reading >>
13 December 2023
What’s Missing from the EU AI Act
The AI Act negotiators may still have been recovering from the political deal struck during the night of December 8 to 9 when, two days later, Mistral AI, the French startup, open-sourced its potent new large language model, Mixtral 8x7B. Though much smaller in size, it rivals and even surpasses GPT-3.5 on many benchmarks thanks to a cunning architecture combining eight different expert models. While a notable technical feat, this new release epitomizes the most pressing challenges in AI policy today and starkly highlights the gaps left unaddressed by the AI Act: mandatory basic AI safety standards; the conundrum of open-source models; the environmental impact of AI; and the need to accompany the AI Act with far more substantial public investment in AI. Continue reading >>
15 November 2023
Biden, Bletchley, and the emerging international law of AI
Everyone is talking about AI at the moment. Biden issues an Executive Order while the EU hammers out its AI Act, and world and tech leaders meet in the UK to discuss AI. The significance of Biden’s Executive Order can therefore only be understood by taking a step back and considering the growing global AI regulatory landscape. In this blog post, I argue that an international law of AI is slowly starting to emerge, pushing countries to adopt their own position on this technology in the international regulatory arena before others do so for them. Biden’s Executive Order should hence be read with exactly this purpose in mind. Continue reading >>
10 November 2023
Europe and the Global Race to Regulate AI
The EU wants to set the global rule book for AI. This blog explains the complex “risk hierarchy” that pervades the proposed AI Act, currently in the final stages of trilogue negotiation, and contrasts it with the US focus on “national security risks”. We point out shortcomings of the EU approach, which requires comprehensive (ex ante) risk assessments at the level of technology development. Using economic analysis, we distinguish exogenous and endogenous sources of potential AI harm arising from input data. We are sceptical that legislators can anticipate the future of a general-purpose technology such as AI. From the perspective of encouraging ongoing innovation, we propose that (ex post) liability rules can provide the right incentives to improve data quality and AI safety. Continue reading >>
08 September 2023
An Interdisciplinary Toolbox for Researching the AI-Act
The proposed AI Act (AIA) will fundamentally transform the production, distribution, and use of AI systems across the EU. Legal research has an important role to play in both clarifying and evaluating the AIA. To this end, legal researchers may employ a legal-doctrinal method and focus on the AIA’s provisions and recitals to describe or evaluate its obligations. However, legal-doctrinal research is not a panacea that can fully operationalize or evaluate the AIA on its own. Rather, with the support of interdisciplinary research, we can better understand the AIA’s vague provisions, test its real-life application, and create practical design requirements for the developers of AI systems. This blog post gives a short glimpse into the methodological toolbox for researching the AI Act. Continue reading >>
18 August 2023
One Act to Rule Them All
Soon, Brussels' newest big thing, the Artificial Intelligence Act, will enter the Trilogues. In order to better understand what is at stake, who the main actors are and what motivates them, and how to make up one’s mind about all the conflicting claims, we need to dive into the legal, economic, and political aspects of the AI Act. The aim of this piece is to contextualize the major milestones in the negotiations, showcase some of its critical features and flaws, and present the challenges it may pose in the near future to people affected by “smart” models and systems. Continue reading >>
07 April 2023
Squaring the Circle
The Italian Data Protection Authority banned ChatGPT for violating EU data protection law. As training and operating large language models like ChatGPT require massive amounts of (personal) data, AI’s future in Europe, to an extent, hinges upon the GDPR. Continue reading >>
27 February 2023
Action Recommended
The DSA will have a say in what measures social media platforms will have to implement with regard to the recommendation engines they deploy to curate people’s feeds and timelines. It is a departure from the previous narrow view of content moderation, and pays closer attention to risks stemming from the amplification of harmful content and the underlying design choices. But it is too early to celebrate. Continue reading >>