Can't LLMs Do That? Supporting Third-Party Audits Under the DSA: Exploring Large Language Models for Systemic Risk Evaluation of the Digital Services Act in an Interdisciplinary Setting
Abstract
This paper investigates whether and how Large Language Models (LLMs) can support systemic risk audits under the European Union’s Digital Services Act (DSA). It examines how automated tools can enhance the work of DSA auditors and other ecosystem actors by enabling scalable, explainable, and legally grounded content analysis. An interdisciplinary expert workshop with twelve participants from legal, technical, and social science backgrounds explored prompting strategies for LLM-assisted auditing. Thematic analysis of the workshop sessions identified key challenges and design considerations, including prompt engineering, model interpretability, legal alignment, and user empowerment. The findings highlight the potential of LLMs to improve annotation workflows and expand the scale of audits, while underscoring the continued importance of human oversight, iterative testing, and cross-disciplinary collaboration. This study offers practical insights for integrating AI tools into auditing processes and contributes to emerging methodologies for operationalizing systemic risk evaluations under the DSA.
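
To make the kind of LLM-assisted annotation workflow mentioned in the abstract concrete, the sketch below shows one way a single annotation step could be structured: a prompt template asks the model to assess a piece of platform content against a named risk category and return a label, rationale, and confidence score for later human review. This is an illustrative sketch only, not the prompting protocol developed in the workshop; the risk categories, prompt wording, and the call_llm placeholder are assumptions introduced here for illustration.

```python
"""Illustrative sketch of an LLM-assisted annotation step for DSA-style
systemic risk review. Categories, prompt wording, and call_llm() are
assumptions for illustration, not the paper's actual protocol."""

import json
from typing import Callable

# Hypothetical rubric: category names are illustrative, not a legal taxonomy.
RISK_CATEGORIES = [
    "illegal hate speech",
    "disinformation affecting civic discourse",
    "content harmful to minors",
]

PROMPT_TEMPLATE = """You are assisting a third-party audit under the EU Digital Services Act.
Assess the following piece of platform content against the risk category: "{category}".

Content:
\"\"\"{content}\"\"\"

Respond with JSON containing:
  "label": "flagged" or "not_flagged",
  "rationale": a short explanation citing the relevant wording,
  "confidence": a number between 0 and 1.
"""


def annotate(content: str, category: str, call_llm: Callable[[str], str]) -> dict:
    """Build the prompt, query the model, and parse the structured answer.

    call_llm is a placeholder for whichever model client the auditor uses;
    it takes a prompt string and returns the model's raw text response.
    """
    prompt = PROMPT_TEMPLATE.format(category=category, content=content)
    raw = call_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Keep unparseable answers for human review instead of discarding them.
        return {"label": "needs_human_review", "rationale": raw, "confidence": 0.0}


if __name__ == "__main__":
    # Stubbed model call so the sketch runs without any API credentials.
    def fake_llm(prompt: str) -> str:
        return json.dumps(
            {"label": "not_flagged", "rationale": "No indicators found.", "confidence": 0.55}
        )

    print(annotate("Example post text.", RISK_CATEGORIES[0], fake_llm))
```

Routing unparseable or low-confidence model output to a human reviewer, rather than discarding it, is one simple way such a workflow could reflect the human oversight the paper emphasizes.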