Articles
Scientific articles
Listing articles by author "Berendt, Bettina" (showing 1 - 3 of 3)
- Item: Articulation Work and Tinkering for Fairness in Machine Learning (2024)
  Authors: Fahimi, Miriam; Russo, Mayra; Scott, Kristen M.; Vidal, Maria-Esther; Berendt, Bettina; Kinder-Kurlanda, Katharina
  Abstract: The field of fair AI aims to counter biased algorithms through computational modelling. However, it faces increasing criticism for perpetuating the use of overly technical and reductionist methods. As a result, novel approaches are appearing in the field that address more socially oriented and interdisciplinary (SOI) perspectives on fair AI. In this paper, we take this dynamic as the starting point for studying the tension between computer science (CS) and SOI research. Drawing on STS and CSCW theory, we position fair AI research as a matter of 'organizational alignment': what makes research 'doable' is the successful alignment of three levels of work organization (the social world, the laboratory, and the experiment). Based on qualitative interviews with CS researchers, we analyze the tasks, resources, and actors required for doable research in the case of fair AI. We find that CS researchers engage with SOI research to some extent, but organizational conditions, articulation work, and ambiguities of the social world constrain the doability of SOI research for them. Based on our findings, we identify and discuss problems for aligning CS and SOI research as fair AI continues to evolve.
- Item: Bias in data-driven artificial intelligence systems—An introductory survey (2020)
  Authors: Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal; Iosifidis, Vasileios; Nejdl, Wolfgang; Vidal, Maria-Esther; Ruggieri, Salvatore; Turini, Franco; Papadopoulos, Symeon; Krasanakis, Emmanouil; Kompatsiaris, Ioannis; Kinder-Kurlanda, Katharina; Wagner, Claudia; Karimi, Fariba; Fernandez, Miriam; Alani, Harith; Berendt, Bettina; Kruegel, Tina; Heinze, Christian; Broelemann, Klaus; Kasneci, Gjergji; Tiropanis, Thanassis; Staab, Steffen
  Abstract: Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have a far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and at any time, raising concerns about potential human rights issues. It is therefore necessary to move beyond traditional AI algorithms optimized for predictive performance and to embed ethical and legal principles in their design, training, and deployment, so as to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, since a large part of AI is nowadays powered by (big) data and powerful machine learning algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth. (A minimal code sketch illustrating one common way of quantifying such bias follows this list.)
- Item: (De)constructing ethics for autonomous cars: A case study of Ethics Pen-Testing towards “AI for the Common Good” (2020)
  Authors: Berendt, Bettina
  Abstract: Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, the concept of Ethics Pen-Testing (EPT) identifies challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The current paper reports on a first evaluation of EPT. EPT is applicable to various artefacts that have ethical impact, including designs for or implementations of specific AI technology, and requirements engineering methods for eliciting which ethical settings to build into AI. The current study focused on the latter type of artefact. In four independent sessions, participants with close but varying involvements in “AI and ethics” were asked to deconstruct a method that has been proposed for eliciting ethical values and choices in autonomous car technology: an online experiment modelled on the Trolley Problem. The results suggest that EPT is well-suited to this task: the remarks made by participants lent themselves well to being structured by the four lead questions of EPT, in particular the questions of what the problem is and which stakeholders define it. As part of the problem definition, the need for thorough technical domain knowledge in discussions of AI and ethics became apparent. Participants questioned the framing and the presuppositions inherent in the experiment and in the discourse on autonomous cars that underlies it, and they transitioned from discussing a specific AI artefact to discussing its role in wider socio-technical systems. The results also illustrate to what extent, and how, the requirements engineering method forces us to have a discussion not only about which values to “build into” AI systems (the substantive building blocks of the Common Good), but also about how we want to have this discussion at all. It thus forces us to become explicit about how we conceive of democracy and the constitutional state (the procedural building blocks of the Common Good).
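The notion of bias used in the survey above, prejudiced decisions tied to demographic features, is often quantified in practice as a group fairness metric. Below is a minimal sketch of one such metric, the demographic parity difference: the gap in positive-decision rates between two demographic groups. The function name and the toy data are hypothetical, chosen for illustration only; the snippet is not taken from any of the papers listed here.

```python
# Minimal illustrative sketch (an assumption, not from the survey):
# "demographic parity difference", i.e. the gap in positive-decision
# rates between two demographic groups.
from typing import Sequence


def demographic_parity_difference(decisions: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Absolute gap in positive-decision rates between exactly two groups.

    decisions: 1 = positive decision (e.g. loan granted), 0 = negative.
    groups: demographic group label for each individual.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch assumes exactly two groups"
    rates = []
    for label in labels:
        members = [i for i, g in enumerate(groups) if g == label]
        rates.append(sum(decisions[i] for i in members) / len(members))
    return abs(rates[0] - rates[1])


# Toy data (fabricated for illustration only):
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # |0.75 - 0.25| = 0.5
```

A value near 0 indicates similar positive-decision rates across the two groups; larger values flag a disparity that, under this particular fairness notion, would warrant further investigation.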