On the (im)possibility of sustainable artificial intelligence
Abstract
The decline of ecological systems threatens global livelihoods and thereby increases injustice and conflict. New digital tools such as artificial intelligence (AI) are currently considered a “game-changer” for shaping sustainable societies by many within and outside of academia. To discuss the theoretical concept as well as the concrete implications of 'sustainable AI', this article draws on insights from critical data and algorithm studies, STS, transformative sustainability science, critical computer science, and, more remotely, public interest theory. I argue that while there are indeed many sustainability-related use cases for AI, they are far from being “game-changers” and are likely to have more overall drawbacks than benefits.
To substantiate this claim, I differentiate three 'AI materialities' of the AI supply chain: first, the literal materiality (e.g. water, cobalt, lithium, energy consumption); second, the informational materiality (e.g. the large amounts of data and the centralised control required); and third, the social materiality (e.g. exploitative global data-worker networks, communities heavily affected by waste and pollution). The effects are especially devastating in the Global South, while the benefits mainly materialise in the Global North. Analysing the claimed benefits further, the project of sustainable AI mainly follows a technology-centred efficiency paradigm, although most of the literature concludes that heavy digital rebound effects have occurred in the past and are likely to continue in the future.
A second strong claim regarding sustainable AI centres on supposedly apolitical optimisation (e.g. of city traffic). However, the optimisation criteria (e.g. cars, bikes, emissions, commute time, health) are deeply political and have to be collectively negotiated before AI optimisation is applied. Hence, sustainable AI, in principle, cannot break the glass ceiling of transformation and might even distract from necessary societal change. Although AI is currently primarily used for misinformation, surveillance, and desire creation, I close the article by introducing two constructive concepts for sustainable and responsible AI use, should there be no societal will to refrain from using AI entirely.
First, we need to stop applying AI to analyse sustainability-related data if the scientific insights already available allow for sufficient action. I call using AI for the sake of creating non-action-related findings 'unformation gathering', a practice that must be stopped. Second, we need to apply the 'small is beautiful' principle, which means refraining from very large AI models and turning instead to tiny models or simply to advanced statistics. This approach covers virtually all actual AI use cases, is orders of magnitude less resource-hungry, and does not promote the centralisation of power the way large models do. This article intends to further the critical academic AI discourse at the nexus of useful AI use cases, techno-utopian salvation narratives, the exploitative and extractivist character of AI, and concepts of digital degrowth. It aims to contribute to an informed academic and collective negotiation on how to (not) integrate AI into the sustainability project while avoiding reproducing the status quo by serving hegemonic interests.