The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works
dc.contributor.author | Berendt, Bettina
dc.contributor.editor | Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz
dc.contributor.editor | Rostalski, Frauke
dc.date.accessioned | 2024-05-02T15:03:16Z
dc.date.available | 2024-05-02T15:03:16Z
dc.date.issued | 2022
dc.description.abstract | Artificial Intelligence (AI) can entail large benefits as well as risks. To reach the goals of protecting individuals and society and of establishing conditions under which citizens find AI “trustworthy” and developers and vendors can produce and sell AI, the ways in which AI works have to be understood better, and rules have to be established and enforced to mitigate the risks. This task can only be undertaken in collaboration. Computer scientists are called upon to align data, algorithms, procedures and larger designs with values, ‘ethics’ and laws. Social scientists are called upon to describe and analyse the plethora of interdependent effects and causes in socio-technical systems involving AI. Philosophers are expected to explain values and ethics. And legal experts and scholars as well as politicians are expected to create the social rules and institutions that support beneficial uses of AI and avoid harmful ones. This article starts from a computers-and-society perspective and focuses on the action space of lawmaking. It suggests an approach to AI regulation that starts from a critique of the European Union’s (EU) proposal for a Regulation commonly known as the AI Act Proposal, published by the EU Commission on 21 April 2021.
dc.identifier.citation | Berendt, B. (2022). The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works. In Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz & F. Rostalski (Eds.), Künstliche Intelligenz—Wie gelingt eine vertrauenswürdige Verwendung in Deutschland und Europa? (1st ed., pp. 31–52). Mohr Siebeck. https://doi.org/10.1628/978-3-16-161299-2
dc.identifier.doi | https://doi.org/10.1628/978-3-16-161299-2
dc.identifier.isbn | 978-3-16-161298-5
dc.identifier.uri | https://www.weizenbaum-library.de/handle/id/624
dc.language.iso | deu
dc.publisher | Mohr Siebeck
dc.rights | open access
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title | The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works
dc.type | BookPart
dc.type.status | publishedVersion
dcmi.type | Text
dcterms.bibliographicCitation.url | https://doi.org/10.1628/978-3-16-161299-2
local.researchgroup | Daten, algorithmische Systeme und Ethik
local.researchtopic | Digitale Technologien in der Gesellschaft
Files
Original bundle
- Name: Berendt_2022_The-AI-Act-Proposal.pdf
- Size: 159.06 KB
- Format: Adobe Portable Document Format