Digital Technologies in Society
This research focus investigates the relationship between digitalization, participation, and inequality; experiments with designing digital technologies that open up opportunities for participation; and intervenes against emerging inequalities. To this end, it brings together perspectives from information systems research, design research, and computer science.
Recent Publications
Showing 1–5 of 44
- AI Narrative Breakdown. A Critical Assessment of Power and Promise (ACM, 2025). Rehak, Rainer.
  This article sets out to explore the still-evolving discourse surrounding artificial intelligence (AI) in the wake of the release of ChatGPT. It scrutinizes the pervasive narratives that are shaping societal engagement with AI, spotlighting key themes such as agency and decision-making, autonomy, truthfulness, knowledge processing, prediction, general purpose, neutrality and objectivity, apolitical optimization, the sustainability game-changer, democratization, mass unemployment, and the dualistic portrayal of AI as either a harbinger of societal utopia or dystopia. These narratives are analysed critically based on insights from critical computer science, critical data and algorithm studies, STS, data protection theory, as well as the philosophy of mind and semiotics. To properly analyse the narratives presented, the article first delves into a historical and technical contextualisation of the AI discourse itself. It then introduces the notion of "Zeitgeist AI" to critique the imprecise and misleading application of the term "AI" across various societal sectors. By discussing common narratives with nuance, the article contextualises and challenges the often assumed socio-political implications of AI, uncovering in detail and with examples the inherently political, power-infused, and value-laden decisions within all AI applications. Concluding with a call for a more grounded engagement with AI, the article carves out acute problems ignored by the narratives discussed and proposes new narratives that recognize AI as a human-directed tool necessarily subject to societal governance.
- Sustainability powered by digitalization? (Re-)politicizing the debate (2025). Steig, Florian; Koenig, Pascal D.; Marquardt, Jens; Oels, Angela; Radtke, Jörg; Rehak, Rainer; Weiland, Sabine.
  As ecological crises escalate, various stakeholders frame digitalization as a key solution for sustainability transformations. Beyond incremental optimization, however, this promise has not yet materialized. We argue that digital solutions for sustainability objectives are shaped by, and reinforce, power structures that effectively undermine sustainability outcomes. Academic discourse and governance are often dominated by a technology-centric framing rather than technologically informed, power-centric approaches. In this article, we develop an interdisciplinary framework to analyze three interconnected dimensions of power at the sustainability-digitalization nexus and reveal how they obstruct sustainability. We locate power at the levels of environmental knowledge, governance, and technological materiality. First, digital technologies create representations of the environment that reinforce, reconfigure, or clash with preexisting ones, striving for ever more and better digital real-time data for technological control. Second, the spread of digital technologies is facilitated by emerging actor coalitions that promote digitalization while employing a reductionist understanding of sustainability; this narrows the policy space to optimization and incremental solutionism, which reproduces the status quo. Finally, the designs and material infrastructures of current digital technologies create path dependencies and lock-in effects, while the underlying colonial resource and wealth flows remain hidden. We advocate for a (re-)politicization of digitalization across these dimensions to leverage its potential for sustainability transformations. We conclude that digitalization cannot spare us from political conflicts and deliberation about desirable sustainability futures. The debate should re-center fundamental questions about what kind of sustainable futures we want, where technology has a role to play, and where it does not.
- A systematic review of echo chamber research: comparative analysis of conceptualizations, operationalizations, and varying outcomes (2025). Hartmann, David; Wang, Sonja Mei; Pohlmann, Lena; Berendt, Bettina.
  This systematic review synthesizes research on echo chambers and filter bubbles to explore the reasons behind dissent regarding their existence, antecedents, and effects. It provides a taxonomy of conceptualizations and operationalizations, analyzing how measurement approaches and contextual factors influence outcomes. The review of 129 studies identifies variations in measurement approaches, as well as regional, political, cultural, and platform-specific biases, as key factors contributing to the lack of consensus. Studies based on homophily and computational social science methods often support the echo chamber hypothesis, while research on content exposure and broader media environments, such as surveys, tends to challenge it (a sketch of the homophily-based operationalization follows after this list). Group behavior, cultural influences, instant messaging platforms, and short-video platforms remain underexplored. The strong geographic focus on the United States further highlights the need for studies in multi-party systems and regions beyond the Global North. Future research should prioritize cross-platform studies, continuous algorithmic audits, and investigations into the causal links between polarization, fragmentation, and echo chambers to advance the field. The review also offers recommendations for using the EU's Digital Services Act to enhance research in this area, particularly studies outside the US and in multi-party systems. By addressing these gaps, this review contributes to a more comprehensive understanding of echo chambers, their measurement, and their societal impacts.
- On the (im)possibility of sustainable artificial intelligence (Alexander von Humboldt Institute for Internet and Society, 2024-09-30). Rehak, Rainer.
  The decline of ecological systems threatens global livelihoods and thereby increases injustice and conflict. In order to shape sustainable societies, new digital tools such as artificial intelligence (AI) are currently considered a "game-changer" by many within and outside of academia. To discuss the theoretical concept as well as the concrete implications of 'sustainable AI', this article draws on insights from critical data and algorithm studies, STS, transformative sustainability science, critical computer science, and, more remotely, public interest theory. I argue that while there are indeed many sustainability-related use cases for AI, they are far from being "game-changers" and are likely to have more drawbacks than benefits overall. To substantiate this claim, I differentiate three 'AI materialities' of the AI supply chain: first, the literal materiality (e.g. water, cobalt, lithium, energy consumption); second, the informational materiality (e.g. the large amounts of data and centralised control required); and third, the social materiality (e.g. exploitative global data-worker networks and communities heavily affected by waste and pollution). The effects are especially devastating in the Global South, while the benefits mainly actualize in the Global North. Analysing the claimed benefits further, the project of sustainable AI mainly follows a technology-centred efficiency paradigm, although most of the literature documents heavy digital rebound effects in the past and anticipates them in the future. A second strong claim regarding sustainable AI circles around so-called apolitical optimisation (e.g. of city traffic); however, the optimisation criteria (e.g. cars, bikes, emissions, commute time, health) are purely political and have to be collectively negotiated before AI optimisation is applied. Hence, sustainable AI, in principle, cannot break the glass ceiling of transformation and might even distract from necessary societal change. Although AI is currently primarily used for misinformation, surveillance, and desire creation, I close the article by introducing two constructive concepts for sustainable and responsible AI use, if there is no societal will to refrain from using AI entirely. First, we need to stop applying AI to analyse sustainability-related data if the scientific insights already available allow for sufficient action; I call using AI for the sake of creating non-action-related findings 'unformation gathering', and it must be stopped. Second, we need to apply the 'small is beautiful' principle: refrain from using very large AI models and instead turn to tiny models or simply to advanced statistics. This approach covers virtually all actual AI use cases, is orders of magnitude less resource-hungry, and does not promote the centralisation of power as large models do. This article intends to further the critical academic AI discourse at the nexus of useful AI use cases, techno-utopian salvation narratives, the exploitative and extractivist character of AI, and concepts of digital degrowth. It aims to contribute to an informed academic and collective negotiation of how to (not) integrate AI into the sustainability project while avoiding reproducing the status quo by serving hegemonic interests.
- Lost in moderation: How commercial content moderation APIs over- and under-moderate group-targeted hate speech and linguistic variations (Association for Computing Machinery, 2025). Hartmann, David; Oueslati, Amin; Staufer, Dimitri; Pohlmann, Lena; Munzert, Simon; Heuer, Hendrik.
  Commercial content moderation APIs are marketed as scalable solutions to combat online hate speech. However, relying on these APIs risks both silencing legitimate speech (over-moderation) and failing to protect online platforms from harmful speech (under-moderation). To assess such risks, this paper introduces a framework for auditing black-box NLP systems. Using the framework, we systematically evaluate five widely used commercial content moderation APIs. Analyzing five million queries based on four datasets, we find that APIs frequently rely on group identity terms, such as "black", to predict hate speech. While OpenAI's and Amazon's services perform slightly better, all providers under-moderate implicit hate speech, which uses codified messages, especially against LGBTQIA+ individuals. Simultaneously, they over-moderate counter-speech, reclaimed slurs, and content related to Black, LGBTQIA+, Jewish, and Muslim people. We recommend that API providers offer better guidance on API implementation and threshold setting, and more transparency on their APIs' limitations. (A minimal sketch of this kind of perturbation audit follows below.) Warning: This paper contains offensive and hateful terms and concepts. We have chosen to reproduce these terms for reasons of transparency.
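
The echo-chamber review above turns in part on how homophily is operationalized. As a purely illustrative aid, here is a minimal sketch computing attribute assortativity, one common network-level homophily measure, on a toy follow graph; the graph, the "leaning" labels, and the choice of networkx are assumptions for illustration and are not drawn from the paper.

```python
# Minimal homophily sketch: attribute assortativity on a toy follow network.
# All data here is made up for illustration.
import networkx as nx

# Hypothetical directed follow network with a binary political-leaning label.
G = nx.DiGraph()
G.add_edges_from([("a", "b"), ("b", "a"), ("a", "c"),
                  ("d", "e"), ("e", "d"), ("c", "d")])
nx.set_node_attributes(
    G, {"a": "left", "b": "left", "c": "left", "d": "right", "e": "right"},
    "leaning",
)

# Attribute assortativity: close to +1 means users overwhelmingly follow
# their own camp (strong homophily, often read as an echo chamber),
# around 0 means random mixing, negative values mean heterophily.
r = nx.attribute_assortativity_coefficient(G, "leaning")
print(f"assortativity by leaning: {r:.2f}")
```

On this toy graph most edges stay within a camp, so the coefficient is strongly positive; on real platform data the same measure is one of the operationalizations whose choice, as the review argues, partly determines whether an echo chamber is found at all.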
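
For the "Lost in moderation" entry, the following is a hedged sketch of the perturbation-style, black-box auditing the abstract describes: score minimal pairs that differ only in a group identity term and record which benign sentences get flagged. The `moderate` callable, the template sentence, the term list, and the 0.5 threshold are all illustrative assumptions; the paper's actual framework, datasets, and the APIs it evaluates are not reproduced here.

```python
# Perturbation-audit sketch for a black-box moderation scorer.
# `moderate` stands in for any commercial API returning a score in [0, 1].
from typing import Callable, Dict

TEMPLATE = "I am proud to be {term}."  # a benign, identity-affirming sentence
TERMS = ["black", "white", "jewish", "muslim", "queer"]

def audit(moderate: Callable[[str], float],
          threshold: float = 0.5) -> Dict[str, bool]:
    """Return, per identity term, whether the benign template gets flagged."""
    flags = {}
    for term in TERMS:
        score = moderate(TEMPLATE.format(term=term))
        flags[term] = score >= threshold  # flagged despite identical context
    return flags

# Usage with a dummy scorer that keys on identity terms, mimicking the
# over-moderation pattern the paper reports for some real providers:
print(audit(lambda text: 0.9 if ("black" in text or "queer" in text) else 0.1))
# -> {'black': True, 'white': False, 'jewish': False, 'muslim': False, 'queer': True}
```

Because every query uses the same template, any difference in flags is attributable to the identity term alone; scaling this idea across many templates and datasets is what turns it into a systematic audit.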