Articles
Scholarly articles
Listing of articles in the research area "Verantwortung – Vertrauen – Governance"
Now showing 1 - 20 of 25
- Algorithmic Governance (2019). Katzenbach, Christian; Ulbricht, Lena. Algorithmic governance, as a key concept in controversies around the emerging digital society, highlights the idea that digital technologies produce social ordering in a specific way. Starting with the origins of the concept, this paper portrays the different perspectives and objects of inquiry in which algorithmic governance has gained prominence, ranging from the public sector to labour management and the ordering of digital communication. Recurrent controversies across all sectors, such as datafication and surveillance, bias, agency and transparency, indicate that the concept of algorithmic governance makes it possible to bring objects of inquiry and research fields that had not previously been related into a joint conversation. Short case studies on predictive policing and automated content moderation show that algorithmic governance is multiple, contingent and contested: it takes different forms in different contexts and jurisdictions, and it is shaped by interests, power, and resistance.
- Algorithmic regulation. A maturing concept for investigating regulation of and through algorithms (2022). Ulbricht, Lena; Yeung, Karen. This paper offers a critical synthesis of the articles in this Special Issue with a view to assessing the concept of “algorithmic regulation” as a mode of social coordination and control articulated by Yeung in 2017. We highlight significant changes in public debate about the role of algorithms in society occurring in the last five years. We also highlight prominent themes that emerge from the contributions, illuminating what is distinctive about the concept of algorithmic regulation, reflecting upon some of its strengths, limitations, and its relationship with the broader research field. In closing, we argue that the core concept is valuable and maturing. It has evolved into an analytical bridge that fosters cross-disciplinary development and analysis in ways that enrich its early “skeletal” form, thereby enabling careful and context-sensitive analysis of algorithmic regulation in concrete settings while facilitating critical reflection concerning the legitimacy of existing and proposed regulatory regimes.
- Bias in data-driven artificial intelligence systems—An introductory survey (2020). Ntoutsi, Eirini; Fafalios, Pavlos; Gadiraju, Ujwal; Iosifidis, Vasileios; Nejdl, Wolfgang; Vidal, Maria-Esther; Ruggieri, Salvatore; Turini, Franco; Papadopoulos, Symeon; Krasanakis, Emmanouil; Kompatsiaris, Ioannis; Kinder-Kurlanda, Katharina; Wagner, Claudia; Karimi, Fariba; Fernandez, Miriam; Alani, Harith; Berendt, Bettina; Kruegel, Tina; Heinze, Christian; Broelemann, Klaus; Kasneci, Gjergji; Tiropanis, Thanassis; Staab, Steffen. Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impact on individuals and society. Their decisions might affect everyone, everywhere, and at any time, raising concerns about potential human rights issues. It is therefore necessary to move beyond traditional AI algorithms optimized for predictive performance and to embed ethical and legal principles in their design, training, and deployment, so as to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches that are well grounded in a legal framework. In this survey, we focus on data-driven AI, as a large part of AI is nowadays powered by (big) data and powerful machine learning algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth. (An illustrative sketch of two common group-fairness measures follows after this list.)
- Das Verfahren geht weit über „die App“ hinaus – Datenschutzfragen von Corona-Tracing-Apps. Einführung in Datenschutz-Folgenabschätzungen als Mittel, gesellschaftliche Implikationen zu diskutieren [The process goes far beyond “the app”: data protection questions around corona tracing apps. An introduction to data protection impact assessments as a means of discussing societal implications] (2020). Bock, Kirsten; Kühne, Christian Ricardo; Mühlhoff, Rainer; Ost, Měto R.; Pohle, Jörg; Rehak, Rainer. Since the SARS-CoV-2 virus began spreading across Europe in early 2020, work has been under way on technical solutions to contain the pandemic. Among the various system designs, those that advertise themselves as privacy-friendly and GDPR-compliant stand out. The GDPR itself obliges the operators of large-scale data processing systems such as tracing apps to carry out a data protection impact assessment (DPIA) because of the high risk to rights and freedoms (Art. 35 GDPR). A DPIA is a structured risk analysis that identifies and evaluates, in advance, the possible consequences of a data processing operation for fundamental rights. Our DPIA shows that even the current, decentralised implementation of the Corona-Warn-App harbours numerous serious weaknesses and risks. On the legal side, we examined voluntary consent as the basis of legitimation and formulate the well-founded demand that the use of a tracing app must be regulated by law. Furthermore, measures for realising data subjects' rights have not been sufficiently considered. Finally, the claim that a piece of data is anonymous rests on strong assumptions: anonymisation must be understood as a continuous process that aims to sever the link to the data subject and relies on the interplay of legal, organisational and technical measures. The Corona-Warn-App as it currently stands lacks such an explicit severing process. Our DPIA also exposes the essential deficits of the official DPIA of the Corona-Warn-App.
- Data Governance and Sovereignty in Urban Data Spaces Based on Standardized ICT Reference Architectures (2019). Cuno, Silke; Bruns, Lina; Tcholtchev, Nikolay; Lämmel, Philipp; Schieferdecker, Ina. European cities and communities (and beyond) require a structured overview and a set of tools to achieve a sustainable transformation towards smarter cities and municipalities, thereby leveraging the enormous potential of the emerging data-driven economy. This paper presents the results of a recent study conducted with a number of German municipalities and cities. Based on the recommendations emerging from the study, which are briefly presented, the authors propose the concept of an Urban Data Space (UDS), which facilitates an ecosystem for data exchange and added-value creation that utilizes the various types of data within a smart city or municipality. Looking at an Urban Data Space from within a German context and considering the current situation and developments in German municipalities, the paper proposes a classification of urban data that allows various data types to be related to legal aspects and that supports well-founded considerations regarding technical implementation designs and decisions. Furthermore, the Urban Data Space is described and analyzed in detail, relevant stakeholders are identified, and corresponding technical artifacts are introduced. The authors propose to set up Urban Data Spaces based on emerging standards from the area of ICT reference architectures for smart cities, such as DIN SPEC 91357 “Open Urban Platform” and EIP SCC. In the course of this, the paper walks the reader through the construction of a UDS based on the above-mentioned architectures and outlines the goals, recommendations and potentials that an Urban Data Space can reveal to a municipality or city. Finally, the proposed concepts are framed so that they can become part of the required toolset for the sustainable transformation of German and European cities towards smarter urban environments, based on the hidden potential of digitalization and efficient, interoperable data exchange.
- (De)constructing ethics for autonomous cars: A case study of Ethics Pen-Testing towards “AI for the Common Good” (2020). Berendt, Bettina. Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, the concept of Ethics Pen-Testing (EPT) identifies challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The current paper reports on a first evaluation of EPT. EPT is applicable to various artefacts that have ethical impact, including designs for or implementations of specific AI technology, and requirements engineering methods for eliciting which ethical settings to build into AI. The current study focused on the latter type of artefact. In four independent sessions, participants with close but varying involvements in “AI and ethics” were asked to deconstruct a method that has been proposed for eliciting ethical values and choices in autonomous car technology, an online experiment modelled on the Trolley Problem. The results suggest that EPT is well suited to this task: the remarks made by participants lent themselves well to being structured by the four lead questions of EPT, in particular regarding the questions of what the problem is and which stakeholders define it. As part of the problem definition, the need for thorough technical domain knowledge in discussions of AI and ethics became apparent. Participants thus questioned the framing and the presuppositions inherent in the experiment and in the discourse on autonomous cars that underlies it, and they transitioned from discussing a specific AI artefact to discussing its role in wider socio-technical systems. The results also illustrate to what extent and how the requirements engineering method forces us to have a discussion not only about which values to “build into” AI systems (the substantive building blocks of the Common Good), but also about how we want to have this discussion at all. It thus forces us to become explicit about how we conceive of democracy and the constitutional state (the procedural building blocks of the Common Good).
- Digital democracy (2021). Berg, Sebastian; Hofmann, Jeanette. For contemporary societies, digital democracy provides a key concept that denotes, in our understanding, the relationship between collective self-government and mediating digital infrastructures. New forms of digital engagement that go hand in hand with organisational reforms are re-intermediating established democratic settings in open-ended ways that defy linear narratives of demise or renewal. As a first approach, we trace the history of digital democracy against the background of its specific media constellations, describing continuities and discontinuities in the interplay of technological change and aspirations for democratisation. Thereafter, we critically review theoretical premises concerning the role of technology and how they vary in the ways the concept of digital democracy is deployed. In four domains, we show the contingent political conditions under which the relationship between forms of democratic self-determination and their mediating digital infrastructures evolves. One lesson to learn from these four domains is that democratic self-governance is a profoundly mediated project whose institutions and practices are constantly in flux.
- Do open data impact citizens’ behavior? Assessing face mask panic buying behaviors during the Covid-19 pandemic (2022). Shibuya, Yuya; Lai, Chun-Ming; Hamm, Andrea; Takagi, Soichiro; Sekimoto, Yoshihide. Data are essential for digital solutions and for supporting citizens’ everyday behavior. Open data initiatives have expanded worldwide over the last decades, yet investigation of the actual usage of open data and evaluation of their impacts remain insufficient. In this paper, we therefore examine an exemplary use case of open data during the early stage of the Covid-19 pandemic and assess its impacts on citizens. Based on quasi-experimental methods, the study found that publishing local stores’ real-time face mask stock levels as open data may have influenced people’s purchasing behavior. The results indicate reduced panic buying as a consequence of the openly accessible information in the form of an online mask map. They also suggest that such open-data-based countermeasures did not impact every citizen equally; rather, the effects varied with socioeconomic conditions, in particular education level. (A generic illustration of a quasi-experimental estimate follows after this list.)
- Extending the framework of algorithmic regulation. The Uber case (2022). Eyert, Florian; Irgmaier, Florian; Ulbricht, Lena. In this article, we take forward recent initiatives to assess regulation based on contemporary computer technologies such as big data and artificial intelligence. In order to characterize current phenomena of regulation in the digital age, we build on Karen Yeung’s concept of “algorithmic regulation,” extending it by building bridges to the fields of quantification, classification, and evaluation research, as well as to science and technology studies. This allows us to develop a more fine-grained conceptual framework that analyzes the three components of algorithmic regulation as representation, direction, and intervention and proposes subdimensions for each. Based on a case study of the algorithmic regulation of Uber drivers, we show the usefulness of the framework for assessing regulation in the digital age and as a starting point for critique and alternative models of algorithmic regulation.
- Liability for AI. Public policy considerations (2021). Zech, Herbert. Liability for AI is the subject of a lively debate. Whether new liability rules should be introduced or not, and how these rules should be designed, hinges on the function of liability rules. Mainly, they create incentives for risk control, varying with their requirements, especially negligence versus strict liability. In order to do so, they have to take into account who is actually able to exercise control. In scenarios where a clear allocation of risk control is no longer possible, social insurance might step in. This article discusses public policy considerations concerning liability for artificial intelligence (AI). It first outlines the major risks associated with current developments in information technology (IT) (1.). Second, the implications for liability law are discussed. Liability rules are conceptualized as an instrument for risk control (2.). Negligence liability and strict liability serve different purposes, making strict liability the rule of choice for novel risks (3.). The key question, however, is who should be held liable (4.). Liability should follow risk control. In future scenarios where individual risk attribution is no longer feasible, social insurance might be an alternative (5.). Finally, the innovation function of liability rules is stressed, affirming that appropriate liability rules serve as a stimulus for innovation, not as an impediment (6.).
- Mapping HCI research methods for studying social media interaction. A systematic literature review (2022). Shibuya, Yuya; Hamm, Andrea; Cerratto Pargman, Teresa. Over the last decades, researchers in human-computer interaction (HCI) have made numerous efforts to investigate people’s social media interaction. To understand how HCI researchers have studied social media interaction, we examined 149 peer-reviewed articles published between 2008 and 2020 in major HCI conference proceedings and journals. We systematically reviewed the methodologies HCI researchers applied, the research topics these methods covered, and the types of data collected. Through the analysis, we make three contributions: We (1) pinpoint topic trends by identifying three phases in the study of social media interaction in HCI: the early phase (2008–2012) focused on user behavior, the growing phase (2013–2016) on privacy and health, and the latest phase (2017–2020) on design. (2) We map methodological trends in the study of social media interaction in HCI and illustrate these trends in relation to the types of data collected in the selected works, and (3) we identify underexplored study areas.
- Mediated democracy – Linking digital technology to political agency (2019). Hofmann, Jeanette. Although the relationship between digitalisation and democracy is the subject of growing public attention, the nature of this relationship is rarely addressed in a systematic manner. The common understanding is that digital media are the driver of the political change we are facing today. This paper argues against such a causal approach and proposes a co-evolutionary perspective instead. Inspired by Benedict Anderson's "Imagined Communities" and recent research on mediatisation, it introduces the concept of mediated democracy. This concept reflects the simple idea that representative democracy requires technical mediation, and that the rise of modern democracy and that of communication media are therefore closely intertwined. Hence, mediated democracy denotes a research perspective, not a type of democracy. It explores the changing interplay of democratic organisation and communication media as a contingent constellation that could have evolved differently. Specific forms of communication media emerge in tandem with larger societal formations and mutually enable each other. Following this argument, the current constellation reflects a transformation of representative democracy and the spread of digital media. The latter is interpreted as a "training ground" for experimenting with new forms of democratic agency.
- Mediated trust: A theoretical framework to address the trustworthiness of technological trust mediators (2021). Bodó, Balázs. This article considers the impact of digital technologies on the interpersonal and institutional logics of trust production. It introduces the new theoretical concept of technology-mediated trust to analyze the role of complex techno-social assemblages in trust production and distrust management. The first part of the article argues that globalization and digitalization have unleashed a crisis of trust, as traditional institutional and interpersonal logics are not attuned to dealing with the risks introduced by the prevalence of digital technologies. In the second part, the article describes how digital intermediation has transformed the traditional logics of interpersonal and institutional trust formation and created new trust-mediating services. Finally, the article asks: why should we trust these technological trust mediators? The conclusion is that, at best, it is impossible to establish the trustworthiness of trust mediators, and that, at worst, we have no reason to trust them.
- Personal information inference from voice recordings: User awareness and privacy concerns (2022). Kröger, Jacob Leon; Gellrich, Leon; Pape, Sebastian; Brause, Saba Rebecca; Ullrich, Stefan. Through voice characteristics and manner of expression, even seemingly benign voice recordings can reveal sensitive attributes about a recorded speaker (e.g., geographical origin, health status, personality). We conducted a nationally representative survey in the UK (n = 683, 18–69 years) to investigate people’s awareness about the inferential power of voice and speech analysis. Our results show that – while awareness levels vary between different categories of inferred information – there is generally low awareness across all participant demographics, even among participants with professional experience in computer science, data mining, and IT security. For instance, only 18.7% of participants are at least somewhat aware that physical and mental health information can be inferred from voice recordings. Many participants have rarely (28.4%) or never (42.5%) even thought about the possibility of personal information being inferred from speech data. After a short educational video on the topic, participants express only moderate privacy concern. However, based on an analysis of open text responses, unconcerned reactions seem to be largely explained by knowledge gaps about possible data misuses. Watching the educational video lowered participants’ intention to use voice-enabled devices. In discussing the regulatory implications of our findings, we challenge the notion of “informed consent” to data processing. We also argue that inferences about individuals need to be legally recognized as personal data and protected accordingly.
- Provenance Management over Linked Data Streams (2019). Liu, Qian; Wylot, Marcin; Le Phuoc, Danh; Hauswirth, Manfred. Provenance describes how results are produced, from data sources through curation, recovery and intermediate processing to the final results. Provenance has been applied to solve many problems, and in particular to understand how errors propagate in large-scale environments such as the Internet of Things and smart cities. In such environments, operations on data are often performed by multiple uncoordinated parties, each potentially introducing or propagating errors. These errors cause uncertainty in the overall data analytics process, which is further amplified when many data sources are combined and errors propagate across multiple parties. The ability to properly identify how such errors influence the results is crucial for assessing the quality of the results. This problem becomes even more challenging in the case of Linked Data Streams, where data is dynamic and often incomplete. In this paper, we introduce methods to compute provenance over Linked Data Streams. More specifically, we propose provenance management techniques to compute the provenance of continuous queries executed over complete Linked Data streams. Unlike traditional provenance management techniques, which are applied to static data, we focus strictly on the dynamicity and heterogeneity of Linked Data streams. Specifically, we describe: i) means to deliver a dynamic provenance trace of the results to the user, ii) a system capable of executing queries over dynamic Linked Data and computing the provenance of these queries, and iii) an empirical evaluation of our approach using real-world datasets. (A toy sketch of provenance tracking over a stream follows after this list.)
- Pushing the Scalability of RDF Engines on IoT Edge Devices (2020). Le-Tuan, Anh; Hayes, Conor; Hauswirth, Manfred; Le-Phuoc, Danh. Semantic interoperability for the Internet of Things (IoT) is enabled by standards and technologies from the Semantic Web. As recent research suggests a move towards decentralised IoT architectures, we have investigated the scalability and robustness of RDF (Resource Description Framework) engines that can be embedded throughout the architecture, in particular at edge nodes. RDF processing at the edge facilitates the deployment of semantic integration gateways closer to low-level devices. Our focus is on how to enable scalable and robust RDF engines that can operate on lightweight devices. In this paper, we first carried out an empirical study of the scalability and behaviour of solutions for RDF data management on standard computing hardware that have been ported to run on lightweight devices at the network edge. The findings of our study show that these RDF store solutions have several shortcomings on commodity ARM (Advanced RISC Machine) boards that are representative of IoT edge node hardware. This inspired us to introduce RDF4Led, a lightweight RDF engine comprising RDF storage and a SPARQL processor for lightweight edge devices. RDF4Led follows the RISC (Reduced Instruction Set Computer) design philosophy. The design consists of a flash-aware storage structure, an indexing scheme, an alternative buffer management technique, and a low-memory-footprint join algorithm, which together demonstrate improved scalability and robustness over competing solutions. With a significantly smaller memory footprint, we show that RDF4Led can handle 2 to 5 times more data than popular RDF engines such as Jena TDB (Tuple Database) and RDF4J while consuming the same amount of memory. In particular, RDF4Led requires only 10%–30% of the memory of its competitors to operate on datasets of up to 50 million triples. On memory-constrained ARM boards, it can perform faster updates and scales better than Jena TDB and Virtuoso. Furthermore, we demonstrate considerably faster query operations than Jena TDB and RDF4J. (A generic illustration of a low-memory merge join over sorted index scans follows after this list.)
- Review of: Willke, Helmut. Komplexe Freiheit. Konfigurationsprobleme eines Menschenrechts in der globalisierten Moderne, 308 pp., transcript, Bielefeld 2019 (2020). Eyert, Florian; Irgmaier, Florian. Book review.
- Scraping the demos. Digitalization, web scraping and the democratic project (2020). Ulbricht, Lena. Scientific, political and bureaucratic elites use epistemic practices like “big data analysis” and “web scraping” to create representations of the citizenry and to legitimize policymaking. I develop the concept of “demos scraping” for these practices of gaining information about citizens (the “demos”) through automated analysis of digital trace data that are re-purposed for political means. This article critically engages with the discourse advocating demos scraping and provides a conceptual analysis of its democratic implications. It examines the promise of demos scraping advocates to reduce the gap between political elites and citizens, and highlights how demos scraping is presented as a superior way of accessing the “will of the people” and of increasing democratic legitimacy. This leads me to critically discuss the implications of demos scraping for political representation and participation. In its current form, demos scraping is technocratic and de-politicizing, and the larger political and economic context in which it takes place makes it unlikely that it will reduce the gap between elites and citizens. From the analytic perspective of a post-democratic turn, demos scraping is an attempt by late modern and digitalized societies to address the democratic paradox of increasing citizen expectations coupled with a deep legitimation crisis.
- The Language Labyrinth: Constructive Critique on the Terminology Used in the AI Discourse (2021). Rehak, Rainer. In the interdisciplinary field of artificial intelligence (AI), the problem of clear terminology is especially momentous. This paper claims that AI debates are still characterised by a lack of critical distance to metaphors like ‘training’, ‘learning’ or ‘deciding’. As a consequence, reflections regarding responsibility or potential use cases are greatly distorted. Yet, if relevant decision-makers are convinced that AI can develop an ‘understanding’ of issues or properly ‘interpret’ them, its routine use for sensitive tasks such as deciding about social benefits or judging court cases looms. The chapter supports this claim by analysing central notions of the AI debate and aims to contribute by proposing more fitting terminology, thereby enabling more fruitful debates. It is a conceptual work at the intersection of critical computer science and philosophy of language.
- The sum of its parts. Analysis of federated byzantine agreement systems (2022). Florian, Martin; Henningsen, Sebastian; Ndolo, Charmaine; Scheuermann, Björn. Federated Byzantine Agreement Systems (FBASs) are a fascinating new paradigm in the context of consensus protocols. Originally proposed for powering the Stellar payment network, FBASs can instantiate Byzantine quorum systems without requiring out-of-band agreement on a common set of validators; every node is free to decide for itself with whom it requires agreement. Sybil-resistant and yet energy-efficient consensus protocols can therefore be built upon FBASs, and the “decentrality” possible with the FBAS paradigm might be sufficient to reduce the use of environmentally unsustainable proof-of-work protocols. In this paper, we first demonstrate how the robustness of individual FBASs can be determined by precisely computing their safety and liveness buffers, thereby enabling a comparison with threshold-based quorum systems. Using simulations and example node configuration strategies, we then empirically investigate the hypothesis that, while FBASs can be bootstrapped in a bottom-up fashion from individual preferences, node operators should additionally apply strategic considerations in order to arrive at FBASs that are robust and amenable to monitoring. Finally, we investigate the reported “open-membership” property of FBASs. We observe that an often small group of nodes is exclusively relevant for determining liveness buffers, and we prove that membership in this top tier is conditional on approval by current top-tier nodes if maintaining safety is a core requirement. (A toy quorum check illustrating the FBAS notion of a quorum follows after this list.)
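
Illustrative code sketches

The bias survey by Ntoutsi et al. (2020) above frames bias as prejudiced decisions along demographic lines. The following Python sketch makes that concrete by computing two widely used group-fairness gaps, demographic parity and equal opportunity, on invented classifier outputs; the groups, labels and numbers are purely hypothetical and are not drawn from the survey.

```python
# Minimal sketch (not from the survey itself): two common group-fairness
# measures that make "bias with respect to demographic features" concrete.
# All records below are made up for illustration.

from collections import defaultdict

def group_rates(records):
    """Compute selection rate and true-positive rate per protected group.

    records: iterable of (group, y_true, y_pred) with binary labels.
    """
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "pos": 0, "tp": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["pred_pos"] += y_pred
        s["pos"] += y_true
        s["tp"] += y_true and y_pred
    return {
        g: {
            "selection_rate": s["pred_pos"] / s["n"],
            "tpr": s["tp"] / s["pos"] if s["pos"] else float("nan"),
        }
        for g, s in stats.items()
    }

# Hypothetical classifier outputs for two demographic groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = group_rates(records)
dp_gap = rates["group_a"]["selection_rate"] - rates["group_b"]["selection_rate"]
eo_gap = rates["group_a"]["tpr"] - rates["group_b"]["tpr"]
print(rates)
print(f"demographic parity gap: {dp_gap:+.2f}, equal opportunity gap: {eo_gap:+.2f}")
```

Non-zero gaps indicate that the (hypothetical) classifier treats the two groups differently; which gap matters, and how large a gap is acceptable, is exactly the kind of normative and legal question the survey discusses.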
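The open-data study by Shibuya et al. (2022) above relies on quasi-experimental methods to attribute changes in purchasing behavior to the online mask map. As a generic illustration of that style of reasoning, and not the authors' actual specification or data, the sketch below computes a simple two-period difference-in-differences estimate on made-up numbers.

```python
# Illustrative difference-in-differences (DiD) estimate, a common
# quasi-experimental design of the kind referred to in the open-data /
# panic-buying study above. The two-period setup and all numbers are
# invented for illustration; they are not the authors' data or model.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Return the DiD estimate of the treatment effect on the outcome."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average daily mask purchases per store, before and after
# real-time stock levels were published as open data.
effect = did_estimate(
    treated_pre=120.0,   # stores covered by the open-data mask map, before
    treated_post=90.0,   # same stores, after publication
    control_pre=118.0,   # comparable stores without coverage, before
    control_post=112.0,  # same control stores, after
)
print(f"estimated change attributable to the open-data map: {effect:+.1f} purchases/day")
```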
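The provenance paper by Liu et al. (2019) above computes the provenance of continuous queries over Linked Data streams. The toy sketch below illustrates only the underlying idea, attaching source identifiers to streamed triples and propagating them through a continuous join; it does not reproduce the system or techniques described in the paper, and all triples and source tokens are invented.

```python
# Sketch of the general idea of provenance for continuous queries over
# RDF-like streams: every derived result carries the identifiers of the
# source triples it was computed from. Toy illustration only.

from collections import namedtuple

Triple = namedtuple("Triple", "s p o source")  # source = provenance token

def continuous_join(stream, p1, p2):
    """Join triples (?x p1 ?y) with (?y p2 ?z) as they arrive.

    Yields (x, z, provenance) where provenance is the set of source
    tokens of the triples that produced the result.
    """
    left, right = {}, {}  # indexes keyed on the join variable ?y
    for t in stream:
        if t.p == p1:
            left.setdefault(t.o, []).append(t)
            for r in right.get(t.o, []):
                yield t.s, r.o, {t.source, r.source}
        elif t.p == p2:
            right.setdefault(t.s, []).append(t)
            for l in left.get(t.s, []):
                yield l.s, t.o, {l.source, t.source}

stream = [
    Triple("sensor1", "locatedIn", "roomA", "src:gateway1"),
    Triple("roomA", "partOf", "building7", "src:cityGIS"),
    Triple("sensor2", "locatedIn", "roomA", "src:gateway2"),
]
for x, z, prov in continuous_join(stream, "locatedIn", "partOf"):
    print(x, "isIn", z, "derived from", prov)
```

Because each result carries its provenance set, an error traced to one source (say, a faulty gateway) can be propagated to exactly the results that depend on it.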
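RDF4Led (Le-Tuan et al., 2020) above includes a low-memory-footprint join algorithm over its index structures. The sketch below shows a generic textbook merge join over two sorted index scans, buffering only one join key's worth of tuples at a time; it is meant to illustrate why such joins suit memory-constrained edge devices and is not RDF4Led's actual implementation or storage layout. The example scans and keys are invented.

```python
# Illustration of why a merge join over sorted triple indexes needs very
# little memory: both inputs are consumed as sorted streams and only the
# tuples for the current join key are buffered. Generic textbook merge
# join, not RDF4Led's actual join or storage structure.

from itertools import groupby

def merge_join(left, right, key=lambda t: t[0]):
    """Join two iterables of tuples that are sorted on the join key."""
    left_groups = groupby(iter(left), key)
    right_groups = groupby(iter(right), key)
    try:
        lk, lg = next(left_groups)
        rk, rg = next(right_groups)
        while True:
            if lk == rk:
                lg, rg = list(lg), list(rg)  # only one key's worth buffered
                for a in lg:
                    for b in rg:
                        yield a, b
                lk, lg = next(left_groups)
                rk, rg = next(right_groups)
            elif lk < rk:
                lk, lg = next(left_groups)
            else:
                rk, rg = next(right_groups)
    except StopIteration:
        return

# Two index scans, both sorted on the shared subject key.
pos_scan = [("s1", "type", "Sensor"), ("s2", "type", "Sensor")]
spo_scan = [("s1", "reading", 21.5), ("s1", "reading", 22.0), ("s3", "reading", 19.0)]
for a, b in merge_join(pos_scan, spo_scan):
    print(a, "<->", b)
```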
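The FBAS analysis by Florian et al. (2022) above rests on the notion of quorums induced by per-node quorum slices: a set of nodes is a quorum if every member has at least one of its slices fully contained in the set. The sketch below checks that definition for a hypothetical four-node configuration; the node names and slices are invented and do not describe the Stellar network or the paper's analysis tooling.

```python
# Toy illustration of the core FBAS notion: a set Q is a quorum if every
# member of Q has at least one quorum slice entirely inside Q. The node
# names and slice configuration are hypothetical.

def is_quorum(candidate, slices):
    """Check whether `candidate` (a set of nodes) is a quorum.

    slices: dict mapping each node to a list of its quorum slices,
            each slice being a set of nodes (including the node itself).
    """
    if not candidate:
        return False
    return all(
        any(s <= candidate for s in slices[node])
        for node in candidate
    )

# Hypothetical 4-node FBAS: each node trusts itself plus two others.
slices = {
    "n1": [{"n1", "n2", "n3"}],
    "n2": [{"n2", "n3", "n4"}],
    "n3": [{"n3", "n1", "n4"}],
    "n4": [{"n4", "n1", "n2"}],
}
print(is_quorum({"n1", "n2", "n3", "n4"}, slices))  # True: every node's slice is contained
print(is_quorum({"n1", "n2", "n3"}, slices))        # False: n2's only slice requires n4
```

Enumerating such quorums (and their pairwise intersections) is the starting point for the safety and liveness buffer computations the paper describes.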