Digital Markets and Public Spheres on Platforms
Browsing Digital Markets and Public Spheres on Platforms by research group "Plattform-Algorithmen und digitale Propaganda" (Platform Algorithms and Digital Propaganda)
Now showing 1 - 7 of 7
- Item: Algorithmically Curated Lies: How Search Engines Handle Misinformation about US Biolabs in Ukraine (2024). Kuznetsova, Elizaveta; Makhortykh, Mykola; Sydorova, Maryna; Urman, Aleksandra; Vitulano, Ilaria; Stolze, Martha.
  The growing volume of online content prompts the need for adopting algorithmic systems of information curation. These systems range from web search engines to recommender systems and are integral to helping users stay informed about important societal developments. However, unlike journalistic editing, algorithmic information curation systems (AICSs) are known to be subject to different forms of malperformance that make them vulnerable to manipulation. The risk of manipulation is particularly prominent when AICSs have to deal with false claims that underpin the propaganda campaigns of authoritarian regimes. Using the Russian disinformation campaign concerning US biolabs in Ukraine as a case study, we investigate how one of the most commonly used forms of AICSs, web search engines, curates misinformation-related content. To this end, we conducted virtual agent-based algorithm audits of Google, Bing, and Yandex search outputs in June 2022. Our findings highlight the troubling performance of search engines. Even though some search engines, like Google, were less likely to return misinformation results, across all languages and locations the three search engines still mentioned or promoted a considerable share of false content (33% on Google, 44% on Bing, and 70% on Yandex). We also find significant disparities in misinformation exposure based on the language of search, with all search engines presenting more false stories in Russian. Location matters as well, with users from Germany more likely to be exposed to search results promoting false information. These observations stress that AICSs can be vulnerable to manipulation, in particular during unfolding propaganda campaigns, and underline the importance of monitoring the performance of these systems to prevent it.
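At the analysis stage, an audit like this reduces to tallying what share of each engine's collected results mention or promote the false claims, broken down by engine, language, and location. A minimal sketch of that aggregation step, assuming the virtual agents' results have already been hand-coded into records (the field names and toy data below are illustrative, not the study's actual coding scheme):

```python
from collections import defaultdict

# Toy records standing in for hand-coded audit output: which engine
# returned the result, the query language, and whether the result
# mentions or promotes the false claim.
results = [
    {"engine": "google", "language": "en", "false_content": False},
    {"engine": "google", "language": "ru", "false_content": True},
    {"engine": "bing",   "language": "ru", "false_content": True},
    {"engine": "bing",   "language": "en", "false_content": False},
    {"engine": "yandex", "language": "ru", "false_content": True},
    {"engine": "yandex", "language": "en", "false_content": True},
]

def false_content_share(records, key):
    """Share of results flagged as false content, grouped by `key`."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        if r["false_content"]:
            flagged[r[key]] += 1
    return {k: flagged[k] / totals[k] for k in totals}

print(false_content_share(results, "engine"))    # per-engine shares
print(false_content_share(results, "language"))  # per-language shares
```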
- Item: Blame It on the Algorithm? Russian Government-Sponsored Media and Algorithmic Curation of Political Information on Facebook (2023). Kuznetsova, Elizaveta; Makhortykh, Mykola.
  Previous research has highlighted how algorithms on social media platforms can be abused to disseminate disinformation. However, less work has been devoted to understanding the interplay between Facebook's news curation mechanisms and propaganda content. To address this gap, we analyze the activities of RT (formerly Russia Today) on Facebook during the 2020 U.S. presidential election. We use agent-based algorithmic auditing and frame analysis to examine what content RT published on Facebook and how it was algorithmically curated in Facebook News Feeds and Search Results. We find that RT's strategic framing included the promotion of anti-Biden-leaning content, with an emphasis on anti-establishment narratives. However, due to algorithmic factors on Facebook, individual agents were exposed to eclectic RT content without an overarching narrative. Our findings contribute to the debate on computational propaganda by highlighting the ambiguous relationship between government-sponsored media and Facebook's algorithmic curation, which may decrease users' exposure to propaganda while at the same time increasing confusion.
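The frame-analysis side of such a design comes down to comparing the frame distribution of what RT published with the frame distribution of what the agents were actually shown. A minimal sketch, with hypothetical frame labels (the study's actual coding categories are defined in the paper):

```python
from collections import Counter

# Hypothetical frame labels assigned to posts by human coders.
published = ["anti-establishment", "anti-Biden", "anti-establishment",
             "covid", "anti-Biden"]
curated   = ["covid", "anti-Biden", "crime", "anti-establishment", "crime"]

def frame_distribution(labels):
    """Relative frequency of each frame in a coded sample."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {frame: n / total for frame, n in counts.items()}

# A gap between the two distributions indicates how strongly the
# algorithmic curation reshuffled the outlet's intended framing.
print(frame_distribution(published))
print(frame_distribution(curated))
```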
- Item: In Generative AI we Trust: Can Chatbots Effectively Verify Political Information? (arXiv, 2023). Kuznetsova, Elizaveta; Makhortykh, Mykola; Vziatysheva, Victoria; Stolze, Martha; Baghumyan, Ani; Urman, Aleksandra.
  This article presents a comparative analysis of the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat (recently rebranded to Microsoft Copilot), to detect the veracity of political information. We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+-related debates. We compare how the chatbots perform in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore the ability of chatbots to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source bias, which we model by attributing specific claims to various political and social actors. The results show high performance of ChatGPT on the baseline veracity evaluation task, with 72 percent of cases evaluated correctly on average across languages without pre-training. Bing Chat performed worse, with 67 percent accuracy. We observe significant disparities in how chatbots evaluate prompts in high- and low-resource languages and how they adapt their evaluations to political communication concepts, with ChatGPT providing more nuanced outputs than Bing Chat. Finally, we find that for some veracity detection-related tasks, the performance of chatbots varied depending on the topic of the statement or the source to which it was attributed. These findings highlight the potential of LLM-based chatbots in tackling different forms of false information in online environments, but also point to substantial variation in how such potential is realized due to specific factors, such as the language of the prompt or the topic.
- Item: In Generative AI We Trust: Can Chatbots Effectively Verify Political Information? (2024). Kuznetsova, Elizaveta; Makhortykh, Mykola; Vziatysheva, Victoria; Stolze, Martha; Baghumyan, Ani; Urman, Aleksandra.
  This article presents a comparative analysis of the potential of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat (recently rebranded to Microsoft Copilot), to detect the veracity of political information. We use AI auditing methodology to investigate how chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+-related debates. We compare how the chatbots respond in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, we explore chatbots' ability to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. We also systematically test how such evaluations are influenced by source attribution. The results show high potential of ChatGPT for the baseline veracity evaluation task, with 72% of cases evaluated in accordance with the baseline on average across languages without pre-training. Bing Chat evaluated 67% of cases in accordance with the baseline. We observe significant disparities in how chatbots evaluate prompts in high- and low-resource languages and how they adapt their evaluations to political communication concepts, with ChatGPT providing more nuanced outputs than Bing Chat. These findings highlight the potential of LLM-based chatbots in tackling different forms of false information in online environments, but also point to substantial variation in how such potential is realized due to specific factors (e.g., the language of the prompt or the topic).
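The audit design described in both versions of this study amounts to sending the same set of labeled statements to a chatbot and scoring whether its verdict agrees with the baseline. A minimal sketch of that loop, assuming API access through the openai package; the study itself audited the chat interfaces of ChatGPT and Bing Chat, and the model name, prompt wording, and toy statements below are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy statement set; the study used true, false, and borderline
# statements on five topics in English, Russian, and Ukrainian.
statements = [
    {"text": "COVID-19 vaccines alter human DNA.", "baseline": "false"},
    {"text": "The Holocaust is a historical fact.", "baseline": "true"},
]

def audit(statement):
    """Ask the chatbot for a one-word verdict and compare to baseline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Is the following statement true or false? "
                       f"Answer with one word.\n\n{statement['text']}",
        }],
    )
    verdict = response.choices[0].message.content.strip().lower()
    return verdict.startswith(statement["baseline"])

accuracy = sum(audit(s) for s in statements) / len(statements)
print(f"Share of verdicts matching the baseline: {accuracy:.0%}")
```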
- Item: Proceedings of the Weizenbaum Conference 2023. AI, Big Data, Social Media and People on the Move (Weizenbaum Institute, 2023). Berendt, Bettina; Krzywdzinski, Martin; Kuznetsova, Elizaveta.
  The contributions focus on the question of what role different digital technologies play for "people on the move", with "people on the move" understood both spatially (migration and flight) and in terms of economic and social change (changing working conditions, access conditions). The authors discuss phenomena such as disinformation and algorithmic bias from different perspectives, as well as the possibilities, limits, and dangers of generative artificial intelligence.
- Item: Search engines in polarized media environment: Auditing political information curation on Google and Bing prior to 2024 US elections (2025). Makhortykh, Mykola; Rohrbach, Tobias; Sydorova, Maryna; Kuznetsova, Elizaveta.
  Search engines play an important role in the context of modern elections. By curating information in response to user queries, search engines influence how individuals are informed about election-related developments and how they perceive the media environment in which elections take place. This has particular implications for (perceived) polarization, especially if search engines' curation results in a skewed treatment of information sources based on their political leaning. So far, however, it has been unclear whether such a partisan gap emerges through information curation on search engines and which user- and system-side factors affect it. To address this shortcoming, we audit the two largest Western search engines, Google and Bing, prior to the 2024 US presidential elections and examine how their organic search results and additional interface elements represent election-related information depending on query slant, user location, and the time the search was conducted. Our findings indicate that both search engines tend to prioritize left-leaning media sources, with the exact scope of the search results' ideological slant varying between Democrat- and Republican-focused queries. We also observe limited effects of location- and time-based factors on organic search results, whereas results for additional interface elements were more volatile across time and across specific US states. Together, our observations highlight that search engines' information curation actively mirrors the partisan divides present in the US media environment and has the potential to contribute to (perceived) polarization within it.
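Measuring the partisan gap in such an audit comes down to mapping each returned domain to an ideological score and averaging those scores per query slant. A minimal sketch with made-up domain scores; the study derived source leanings from external ratings, not from a toy table like this:

```python
# Hypothetical ideology scores: -1 = left-leaning, +1 = right-leaning.
DOMAIN_SLANT = {
    "leftnews.example": -0.8,
    "centrist.example": 0.0,
    "rightnews.example": 0.7,
}

# Toy search results keyed by the partisan slant of the query.
serps = {
    "democrat_query":   ["leftnews.example", "centrist.example",
                         "leftnews.example"],
    "republican_query": ["rightnews.example", "leftnews.example",
                         "centrist.example"],
}

def mean_slant(domains):
    """Average ideological score of the sources in one result list."""
    scores = [DOMAIN_SLANT[d] for d in domains if d in DOMAIN_SLANT]
    return sum(scores) / len(scores) if scores else 0.0

# Comparable negative averages for both query slants would indicate a
# shared left-leaning skew rather than query-driven personalization.
for query, domains in serps.items():
    print(query, round(mean_slant(domains), 2))
```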
- Item: Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine (2024). Makhortykh, Mykola; Sydorova, Maryna; Baghumyan, Ani; Vziatysheva, Victoria; Kuznetsova, Elizaveta.
  Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine. We find major differences between the chatbots in the accuracy of their outputs and in the integration of statements debunking Russian disinformation claims related to the prompts' topics. Moreover, we show that chatbot outputs are subject to substantive variation, which can result in random user exposure to false information.
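The "stochastic" part of this finding rests on re-sending an identical prompt many times and checking how often the answer changes, for instance whether a debunking statement appears at all. A minimal sketch of that repetition logic, with a placeholder ask_chatbot function standing in for whichever chatbot is being audited:

```python
import random

def ask_chatbot(prompt: str) -> str:
    """Placeholder for a real chatbot call; simulates varying answers."""
    return random.choice([
        "This claim has been widely debunked by independent fact-checkers.",
        "Some sources report that the claim may be accurate.",
    ])

def debunk_rate(prompt: str, runs: int = 20) -> float:
    """Share of repeated runs whose output contains a debunking cue."""
    hits = sum("debunked" in ask_chatbot(prompt).lower()
               for _ in range(runs))
    return hits / runs

prompt = "Are there US-run biolabs developing bioweapons in Ukraine?"
rate = debunk_rate(prompt)
# A rate far from 0 or 1 means identical queries randomly expose
# users to contradictory answers about the same false claim.
print(f"Debunking appeared in {rate:.0%} of runs")
```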