The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works

dc.contributor.author: Berendt, Bettina
dc.contributor.editor: Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz
dc.contributor.editor: Rostalski, Frauke
dc.date.accessioned: 2024-05-02T15:03:16Z
dc.date.available: 2024-05-02T15:03:16Z
dc.date.issued: 2022
dc.description.abstract: Artificial Intelligence (AI) can entail large benefits as well as risks. To achieve the goals of protecting individuals and society and of establishing conditions under which citizens find AI “trustworthy” and developers and vendors can produce and sell AI, the ways in which AI works have to be understood better, and rules have to be established and enforced to mitigate the risks. This task can only be undertaken in collaboration. Computer scientists are called upon to align data, algorithms, procedures and larger designs with values, ‘ethics’ and laws. Social scientists are called upon to describe and analyse the plethora of interdependent effects and causes in socio-technical systems involving AI. Philosophers are expected to explain values and ethics. And legal experts and scholars as well as politicians are expected to create the social rules and institutions that support beneficial uses of AI and avoid harmful ones. This article starts from a computers-and-society perspective and focuses on the action space of lawmaking. It suggests an approach to AI regulation that starts from a critique of the European Union’s (EU) proposal for a Regulation commonly known as the AI Act Proposal, published by the EU Commission on 21 April 2021.
dc.identifier.citation: Berendt, B. (2022). The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works. In Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz & F. Rostalski (Eds.), Künstliche Intelligenz—Wie gelingt eine vertrauenswürdige Verwendung in Deutschland und Europa? (1st ed., pp. 31–52). Mohr Siebeck. https://doi.org/10.1628/978-3-16-161299-2
dc.identifier.doi: https://doi.org/10.1628/978-3-16-161299-2
dc.identifier.isbn: 978-3-16-161298-5
dc.identifier.uri: https://www.weizenbaum-library.de/handle/id/624
dc.language.iso: deu
dc.publisher: Mohr Siebeck
dc.rights: open access
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: The AI Act Proposal: Towards the next transparency fallacy? Why AI regulation should be based on principles based on how algorithmic discrimination works
dc.type: BookPart
dc.type.status: publishedVersion
dcmi.type: Text
dcterms.bibliographicCitation.url: https://doi.org/10.1628/978-3-16-161299-2
local.researchgroup: Daten, algorithmische Systeme und Ethik
local.researchtopic: Digitale Technologien in der Gesellschaft
File: Berendt_2022_The-AI-Act-Proposal.pdf (159.06 KB, Adobe Portable Document Format)