Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine

Date
2024
Authors
Makhortykh, Mykola
Sydorova, Maryna
Baghumyan, A.
Vziatysheva, Victoria
Kuznetsova, Elizaveta
Abstract

Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine. We find major differences between the chatbots in the accuracy of their outputs and in how they integrate statements debunking the Russian disinformation claims related to the prompts' topics. Moreover, we show that chatbot outputs are subject to substantial variation, which can expose users to false information at random.

Keywords
Perplexity \ Bing Chat \ Google Bard \ disinformation \ chatbot \ Russian-Ukrainian war
Citation
Makhortykh, M., Sydorova, M., Baghumyan, A., Vziatysheva, V., & Kuznetsova, E. (2024). Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine. Harvard Kennedy School (HKS) Misinformation Review, 5(4). https://doi.org/10.37016/mr-2020-154