The Problems of the Automation Bias in the Public Sector: A Legal Perspective

Date
2023
Author(s)
Ruschemeier, Hannah
Publisher
Weizenbaum Institute
Abstract

Automation bias describes the phenomenon, established in behavioural psychology, whereby people place excessive trust in the decision suggestions of machines. The law currently draws a dichotomy: it covers only fully automated decisions, not those in which human decision-makers are involved at any stage of the process. However, the widespread use of such systems, for example to inform decisions in education or benefits administration, creates a leverage effect and increases the number of people affected. The risk of automation bias rises particularly in environments where people routinely have to make a large number of similar decisions. Automated systems that suggest job placements illustrate the particular challenges of decision support systems in the public sector. So far, these risks have not been sufficiently addressed in legislation, as the analysis of the GDPR and the draft Artificial Intelligence Act shows. I argue for the need for regulation and present initial approaches.

Keywords
Artificial Intelligence \ Bias \ ADM Decisions \ GDPR \ Discrimination \ AI Act \ PES
Citation
Ruschemeier, H. (2023). The Problems of the Automation Bias in the Public Sector: A Legal Perspective. Weizenbaum Conference Proceedings 2023. AI, Big Data, Social Media, and People on the Move, 59–69. https://doi.org/10.34669/wi.cp/5.6