Open Access Publications
Listing of Open Access publications by research area "Verantwortung – Vertrauen – Governance"
Now showing 1 - 20 of 41
- Item: Algorithmic Governance (2019). Katzenbach, Christian; Ulbricht, Lena.
  Algorithmic governance as a key concept in controversies around the emerging digital society highlights the idea that digital technologies produce social ordering in a specific way. Starting with the origins of the concept, this paper portrays different perspectives and objects of inquiry where algorithmic governance has gained prominence, ranging from the public sector to labour management and the ordering of digital communication. Recurrent controversies across all sectors, such as datafication and surveillance, bias, agency and transparency, indicate that the concept of algorithmic governance makes it possible to bring objects of inquiry and research fields that had not been related before into a joint conversation. Short case studies on predictive policing and automated content moderation show that algorithmic governance is multiple, contingent and contested. It takes different forms in different contexts and jurisdictions, and it is shaped by interests, power, and resistance.
- Item: Algorithmic regulation. A maturing concept for investigating regulation of and through algorithms (2022). Ulbricht, Lena; Yeung, Karen.
  This paper offers a critical synthesis of the articles in this Special Issue with a view to assessing the concept of “algorithmic regulation” as a mode of social coordination and control articulated by Yeung in 2017. We highlight significant changes in public debate about the role of algorithms in society occurring in the last five years. We also highlight prominent themes that emerge from the contributions, illuminating what is distinctive about the concept of algorithmic regulation, reflecting upon some of its strengths, limitations, and its relationship with the broader research field. In closing, we argue that the core concept is valuable and maturing. It has evolved into an analytical bridge that fosters cross-disciplinary development and analysis in ways that enrich its early “skeletal” form, thereby enabling careful and context-sensitive analysis of algorithmic regulation in concrete settings while facilitating critical reflection concerning the legitimacy of existing and proposed regulatory regimes.
- Item: Anonymität im Internet: Interdisziplinäre Rückschlüsse auf Freiheit und Verantwortung bei der Ausgestaltung von Kommunikationsräumen (Academia, 2021). Gräfe, Hans-Christian; Hamm, Andrea; Berger, Franz X.; Deremetz, Anne; Hennig, Martin; Michell, Alix.
  "Anonymous use is inherent to the internet." This is a well-known and widespread claim among legal scholars, and one that must be called into question. The reality of the internet paints a rather different picture. Companies that generate immense revenues behind the scenes with personalized advertising not only register the websites we visit but also capture, as metadata, every one of our mouse movements, every keystroke and every change in scroll position. On the basis of their individual behavioral patterns, internet users can not only be identified; inferences can also be drawn about their habits and political convictions, their health and financial situation, their personality, and much more.
- Item: Bias in data-driven artificial intelligence systems—An introductory survey (2020). Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal; Iosifidis, Vasileios; Nejdl, Wolfgang; Vidal, Maria-Esther; Ruggieri, Salvatore; Turini, Franco; Papadopoulos, Symeon; Krasanakis, Emmanouil; Kompatsiaris, Ioannis; Kinder-Kurlanda, Katharina; Wagner, Claudia; Karimi, Fariba; Fernandez, Miriam; Alani, Harith; Berendt, Bettina; Kruegel, Tina; Heinze, Christian; Broelemann, Klaus; Kasneci, Gjergji; Tiropanis, Thanassis; Staab, Steffen.
  Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. If not otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth.
- Item: Big Data und Governance im digitalen Zeitalter (transcript, 2019). Ulbricht, Lena; Hofmann, Jeanette; Kersting, Norbert; Ritzi, Claudia; Schünemann, Wolf.
- Item: Das Verfahren geht weit über „die App“ hinaus – Datenschutzfragen von Corona-Tracing-Apps. Einführung in Datenschutz-Folgenabschätzungen als Mittel, gesellschaftliche Implikationen zu diskutieren (2020). Bock, Kirsten; Kühne, Christian Ricardo; Mühlhoff, Rainer; Ost, Měto R.; Pohle, Jörg; Rehak, Rainer.
  Since the spread of the SARS-CoV-2 virus in Europe in early 2020, work has been under way on technical solutions to contain the pandemic. Among the various system designs, those that advertise themselves as privacy-friendly and GDPR-compliant stand out. The GDPR itself obliges the operators of large-scale data processing systems such as tracing apps to carry out a data protection impact assessment (DPIA) because of the high risk to rights and freedoms (Art. 35 GDPR). A DPIA is a structured risk analysis that identifies and evaluates, in advance, the possible consequences of a data processing operation for fundamental rights. In our DPIA we show that even the current, decentralized implementation of the Corona-Warn-App harbors numerous serious weaknesses and risks. On the legal side, we examined voluntary consent as the basis of legitimation and formulate the reasoned demand that the use of a tracing app must be regulated by statute. Furthermore, measures for realizing data subjects' rights have not been sufficiently considered. Not least, the claim that a piece of data is anonymous rests on strong assumptions. Anonymization must be understood as a continuous process that aims at severing the link to the person and that rests on the interplay of legal, organizational and technical measures. The Corona-Warn-App as it currently stands lacks such an explicit severing process. Our DPIA also points out the essential deficits of the official DPIA of the Corona-Warn-App.
- Item: Data Governance and Sovereignty in Urban Data Spaces Based on Standardized ICT Reference Architectures (2019). Cuno, Silke; Bruns, Lina; Tcholtchev, Nikolay; Lämmel, Philipp; Schieferdecker, Ina.
  European cities and communities (and beyond) require a structured overview and a set of tools to achieve a sustainable transformation towards smarter cities/municipalities, thereby leveraging the enormous potential of the emerging data-driven economy. This paper presents the results of a recent study that was conducted with a number of German municipalities/cities. Based on the obtained and briefly presented recommendations emerging from the study, the authors propose the concept of an Urban Data Space (UDS), which facilitates an ecosystem for data exchange and added-value creation, thereby utilizing the various types of data within a smart city/municipality. Looking at an Urban Data Space from within a German context and considering the current situation and developments in German municipalities, this paper proposes a reasonable classification of urban data that allows various data types to be related to legal aspects and supports solid considerations regarding technical implementation designs and decisions. Furthermore, the Urban Data Space is described and analyzed in detail, relevant stakeholders are identified, and corresponding technical artifacts are introduced. The authors propose to set up Urban Data Spaces based on emerging standards from the area of ICT reference architectures for Smart Cities, such as DIN SPEC 91357 “Open Urban Platform” and EIP SCC. In the course of this, the paper walks the reader through the construction of a UDS based on the above-mentioned architectures and outlines the goals, recommendations and potentials that an Urban Data Space can reveal to a municipality/city. Finally, we aim to derive the proposed concepts in such a way that they can become part of the required set of tools for the sustainable transformation of German and European cities towards smarter urban environments, based on utilizing the hidden potential of digitalization and efficient, interoperable data exchange.
- Item: (De)constructing ethics for autonomous cars: A case study of Ethics Pen-Testing towards “AI for the Common Good” (2020). Berendt, Bettina.
  Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, the concept of Ethics Pen-Testing (EPT) identifies challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The current paper reports on a first evaluation of EPT. EPT is applicable to various artefacts that have ethical impact, including designs for or implementations of specific AI technology, and requirements engineering methods for eliciting which ethical settings to build into AI. The current study focused on the latter type of artefact. In four independent sessions, participants with close but varying involvements in “AI and ethics” were asked to deconstruct a method that has been proposed for eliciting ethical values and choices in autonomous car technology, an online experiment modelled on the Trolley Problem. The results suggest that EPT is well-suited to this task: the remarks made by participants lent themselves well to being structured by the four lead questions of EPT, in particular regarding the questions of what the problem is and which stakeholders define it. As part of the problem definition, the need for thorough technical domain knowledge in discussions of AI and ethics became apparent. Thus, participants questioned the framing and the presuppositions inherent in the experiment and in the discourse on autonomous cars that underlies the experiment. They transitioned from discussing a specific AI artefact to discussing its role in wider socio-technical systems. Results also illustrate to what extent and how the requirements engineering method forces us not only to have a discussion about which values to “build into” AI systems (the substantive building blocks of the Common Good), but also about how we want to have this discussion at all. Thus, it forces us to become explicit about how we conceive of democracy and the constitutional state (the procedural building blocks of the Common Good).
- Item: Digital democracy (2021). Berg, Sebastian; Hofmann, Jeanette.
  For contemporary societies, digital democracy provides a key concept that denotes, in our understanding, the relationship between collective self-government and mediating digital infrastructures. New forms of digital engagement that go hand in hand with organisational reforms are re-intermediating established democratic settings in open-ended ways that defy linear narratives of demise or renewal. As a first approach, we trace the history of digital democracy against the background of its specific media constellations, describing continuities and discontinuities in the interplay of technological change and aspirations for democratisation. Thereafter, we critically review theoretical premises concerning the role of technology and how they vary in the way the concept of digital democracy is deployed. In four domains, we show the contingent political conditions under which the relationship between forms of democratic self-determination and its mediating digital infrastructures evolves. One lesson to learn from these four domains is that democratic self-governance is a profoundly mediated project whose institutions and practices are constantly in flux.
- Item: Do open data impact citizens’ behavior? Assessing face mask panic buying behaviors during the Covid-19 pandemic (2022). Shibuya, Yuya; Lai, Chun-Ming; Hamm, Andrea; Takagi, Soichiro; Sekimoto, Yoshihide.
  Data are essential for digital solutions and for supporting citizens’ everyday behavior. Open data initiatives have expanded worldwide in the last decades, yet investigations of the actual usage of open data and evaluations of their impacts remain insufficient. Thus, in this paper, we examine an exemplary use case of open data during the early stage of the Covid-19 pandemic and assess its impacts on citizens. Based on quasi-experimental methods, the study found that publishing local stores’ real-time face mask stock levels as open data may have influenced people’s purchase behaviors. Results indicate reduced panic buying behavior as a consequence of the openly accessible information in the form of an online mask map. Furthermore, the results also suggest that such open-data-based countermeasures did not impact every citizen equally but rather varied with socioeconomic conditions, in particular education level.
- Item: Documenting Computer Vision Datasets: An Invitation to Reflexive Data Practices (2021). Miceli, Milagros; Yang, Tianling; Naudts, Laurens; Schüßler, Martin; Serbanescu, Diana; Hanna, Alex.
  In industrial computer vision, discretionary decisions surrounding the production of image training data remain widely undocumented. Recent research taking issue with such opacity has proposed standardized processes for dataset documentation. In this paper, we expand this space of inquiry through fieldwork at two data processing companies and thirty interviews with data workers and computer vision practitioners. We identify four key issues that hinder the documentation of image datasets and the effective retrieval of production contexts. Finally, we propose reflexivity, understood as a collective consideration of social and intellectual factors that lead to praxis, as a necessary precondition for documentation. Reflexive documentation can help to expose the contexts, relations, routines, and power structures that shape data.
- Item: Einführung in das Technikrecht (2021). Zech, Herbert.
  The book aims to provide an introduction to the fascinating legal field of technology law. It defines technology law as law that steers technology, outlines the objectives of technology law by reference to the opportunities and risks of technologies, and sketches the legal instruments by which these objectives are pursued. The dynamism of technology and the associated uncertainty about the opportunities and risks of novel technologies are identified as particular challenges. Ordered by these objectives, the various legal fields covered are presented in overview, before finally posing the question of whether technology law can be understood as a legal field in its own right.
- Item: Extending the framework of algorithmic regulation. The Uber case (2022). Eyert, Florian; Irgmaier, Florian; Ulbricht, Lena.
  In this article, we take forward recent initiatives to assess regulation based on contemporary computer technologies such as big data and artificial intelligence. In order to characterize current phenomena of regulation in the digital age, we build on Karen Yeung’s concept of “algorithmic regulation,” extending it by building bridges to the fields of quantification, classification, and evaluation research, as well as to science and technology studies. This allows us to develop a more fine-grained conceptual framework that analyzes the three components of algorithmic regulation as representation, direction, and intervention and proposes subdimensions for each. Based on a case study of the algorithmic regulation of Uber drivers, we show the usefulness of the framework for assessing regulation in the digital age and as a starting point for critique and alternative models of algorithmic regulation.
- Item: From Data to Discourse. How Communicating Civic Data Can Provide a Participatory Structure for Sustainable Cities and Communities (Mid Sweden University, 2021). Shibuya, Yuya; Hamm, Andrea; Raetzsch, Christoph.
  This study explores how Civil Society Organizations (CSOs) have leveraged civic data to facilitate a democratic participatory structure for sustainability transitions, based on the case of bicycle counters in three US cities (Seattle, San Francisco, Portland) over a ten-year period. We identified that CSOs have played crucial roles in public discourse by (1) sustaining long-term public issues through shaping affective as well as analytical discourses and (2) fostering citizens’ sense of ownership of and contributions toward sensor devices and the data they generate, by contextualizing them through local civic life as well as connecting issues to actors in other cities.
- Item: Liability for AI. Public policy considerations (2021). Zech, Herbert.
  Liability for AI is the subject of a lively debate. Whether new liability rules should be introduced or not, and how these rules should be designed, hinges on the function of liability rules. Mainly, they create incentives for risk control, varying with their requirements – especially negligence versus strict liability. In order to do so, they have to take into account who is actually able to exercise control. In scenarios where a clear allocation of risk control is no longer possible, social insurance might step in. This article discusses public policy considerations concerning liability for artificial intelligence (AI). It first outlines the major risks associated with current developments in information technology (IT) (1.). Second, the implications for liability law are discussed. Liability rules are conceptualized as an instrument for risk control (2.). Negligence liability and strict liability serve different purposes, making strict liability the rule of choice for novel risks (3.). The key question is, however, who should be held liable (4.). Liability should follow risk control. In future scenarios where individual risk attribution is no longer feasible, social insurance might be an alternative (5.). Finally, the innovation function of liability rules is stressed, affirming that appropriate liability rules serve as a stimulus for innovation, not as an impediment (6.).
- Item: Mapping HCI research methods for studying social media interaction. A systematic literature review (2022). Shibuya, Yuya; Hamm, Andrea; Cerratto Pargman, Teresa.
  In the last decades, researchers in human-computer interaction (HCI) have made numerous efforts to investigate people’s social media interaction. To understand how researchers in HCI have studied social media interaction, we examined 149 peer-reviewed articles published between 2008 and 2020 in major HCI conference proceedings and journals. We systematically reviewed the methodologies HCI researchers applied, the research topics these methods covered, and the types of data collected. Through the analysis, we make three contributions: We (1) pinpoint the topic trends by identifying three phases in the study of social media interaction in HCI: the early phase (2008–2012) focused on user behavior, the growing phase (2013–2016) focused on privacy and health, and the latest phase (2017–2020) focused on design. (2) We map methodological trends in the study of social media interaction in HCI and illustrate these trends in relation to the types of data collected in the selected works, and (3) we identify underexplored study areas.
- Item: Mediated democracy – Linking digital technology to political agency (2019). Hofmann, Jeanette.
  Although the relationship between digitalisation and democracy is the subject of growing public attention, the nature of this relationship is rarely addressed in a systematic manner. The common understanding is that digital media are the driver of the political change we are facing today. This paper argues against such a causal approach and proposes a co-evolutionary perspective instead. Inspired by Benedict Anderson's "Imagined Communities" and recent research on mediatisation, it introduces the concept of mediated democracy. This concept reflects the simple idea that representative democracy requires technical mediation, and that the rise of modern democracy and of communication media are therefore closely intertwined. Hence, mediated democracy denotes a research perspective, not a type of democracy. It explores the changing interplay of democratic organisation and communication media as a contingent constellation, which could have evolved differently. Specific forms of communication media emerge in tandem with larger societal formations and mutually enable each other. Following this argument, the current constellation reflects a transformation of representative democracy and the spread of digital media. The latter is interpreted as a "training ground" for experimenting with new forms of democratic agency.
- Item: Mediated trust: A theoretical framework to address the trustworthiness of technological trust mediators (2021). Bodó, Balázs.
  This article considers the impact of digital technologies on the interpersonal and institutional logics of trust production. It introduces the new theoretical concept of technology-mediated trust to analyze the role of complex techno-social assemblages in trust production and distrust management. The first part of the article argues that globalization and digitalization have unleashed a crisis of trust, as traditional institutional and interpersonal logics are not attuned to deal with the risks introduced by the prevalence of digital technologies. In the second part, the article describes how digital intermediation has transformed the traditional logics of interpersonal and institutional trust formation and created new trust-mediating services. Finally, the article asks: why should we trust these technological trust mediators? The conclusion is that, at best, it is impossible to establish the trustworthiness of trust mediators, and that, at worst, we have no reason to trust them.
- Item: Personal information inference from voice recordings: User awareness and privacy concerns (2022). Kröger, Jacob Leon; Gellrich, Leon; Pape, Sebastian; Brause, Saba Rebecca; Ullrich, Stefan.
  Through voice characteristics and manner of expression, even seemingly benign voice recordings can reveal sensitive attributes about a recorded speaker (e.g., geographical origin, health status, personality). We conducted a nationally representative survey in the UK (n = 683, 18–69 years) to investigate people’s awareness about the inferential power of voice and speech analysis. Our results show that – while awareness levels vary between different categories of inferred information – there is generally low awareness across all participant demographics, even among participants with professional experience in computer science, data mining, and IT security. For instance, only 18.7% of participants are at least somewhat aware that physical and mental health information can be inferred from voice recordings. Many participants have rarely (28.4%) or never (42.5%) even thought about the possibility of personal information being inferred from speech data. After a short educational video on the topic, participants express only moderate privacy concern. However, based on an analysis of open text responses, unconcerned reactions seem to be largely explained by knowledge gaps about possible data misuses. Watching the educational video lowered participants’ intention to use voice-enabled devices. In discussing the regulatory implications of our findings, we challenge the notion of “informed consent” to data processing. We also argue that inferences about individuals need to be legally recognized as personal data and protected accordingly.
- Item: Politik in der digitalen Gesellschaft. Zentrale Problemfelder und Forschungsperspektiven (transcript, 2020). Hofmann, Jeanette; Ritzi, Claudia; Kersting, Norbert; Schünemann, Wolf.
  The significance of digitalization for politics and society is a highly topical field that is increasingly being researched and taught in political science. The contributions to this volume assemble programmatic positions that present and discuss central aspects and perspectives of social science research on digitalization. These include, among others, research fields from participation and party research, the governance of digitalization, methodological reflections on computational social science, and the analysis of democracy and the public sphere under the conditions of digitalization.