Digitale Märkte und Öffentlichkeiten auf Plattformen (Digital Markets and Public Spheres on Platforms)
Recent Publications
- Veiled conspiracism: Particularities and convergence in the styles and functions of conspiracy-related communication across digital platforms (2025). Buehling, Kilian; Zhang, Xixuan; Heft, Annett.
  Digital communication venues are essential infrastructures for anti-democratic actors to spread harmful content such as conspiracy theories. Capitalizing on platform affordances, they leverage conspiracy theories to mainstream their political views in broader public discourse. We compared the word choice, language style, and communicative function of conspiracy-related content to understand its platform-dependent differences and convergence. Our cases are the New World Order and Great Replacement conspiracy theories, which we analyzed longitudinally from 2011 to 2021 on 4chan/pol/, Twitter, and seven alternative US news media. The conspiracy-related texts were comparatively analyzed using a multi-method approach combining computational and quantitative text analyses. Our results show that conspiracy narratives are increasingly present in all venues. While language differs vastly between platforms, we observed a style convergence between Twitter and 4chan. The results show how increasingly coded language veils the spread of racist and antisemitic content beyond the so-called dark platforms. (A minimal sketch of this kind of cross-platform style comparison appears after this list.)
- Search engines in polarized media environment: Auditing political information curation on Google and Bing prior to 2024 US elections (2025). Makhortykh, Mykola; Rohrbach, Tobias; Sydorova, Maryna; Kuznetsova, Elizaveta.
  Search engines play an important role in the context of modern elections. By curating information in response to user queries, search engines influence how individuals are informed about election-related developments and how they perceive the media environment in which elections take place. This has particular implications for (perceived) polarization, especially if search engines' curation results in a skewed treatment of information sources based on their political leaning. Until now, however, it has remained unclear whether such a partisan gap emerges through information curation on search engines and which user- and system-side factors affect it. To address this shortcoming, we audit the two largest Western search engines, Google and Bing, prior to the 2024 US presidential elections and examine how these search engines' organic search results and additional interface elements represent election-related information depending on the queries' slant, the user's location, and the time when the search was conducted. Our findings indicate that both search engines tend to prioritize left-leaning media sources, with the exact scope of the search results' ideological slant varying between Democrat- and Republican-focused queries. We also observe limited effects of location- and time-based factors on organic search results, whereas results for additional interface elements were more volatile over time and across specific US states. Together, our observations highlight that search engines' information curation actively mirrors the partisan divides present in the US media environment and has the potential to contribute to (perceived) polarization within it. (A minimal sketch of the auditing logic appears after this list.)
- Stochastic lies: How LLM-powered chatbots deal with Russian disinformation about the war in Ukraine (2024). Makhortykh, Mykola; Sydorova, Maryna; Baghumyan, A.; Vziatysheva, Victoria; Kuznetsova, Elizaveta.
  Research on digital misinformation has turned its attention to large language models (LLMs) and their handling of sensitive political topics. Through an AI audit, we analyze how three LLM-powered chatbots (Perplexity, Google Bard, and Bing Chat) generate content in response to prompts linked to common Russian disinformation narratives about the war in Ukraine. We find major differences between chatbots in the accuracy of outputs and in the integration of statements debunking Russian disinformation claims related to the prompts' topics. Moreover, we show that chatbot outputs are subject to substantive variation, which can result in random user exposure to false information. (A minimal sketch of such an audit loop appears after this list.)
- Who reports witnessing and performing corrections on social media in the United States, United Kingdom, Canada, and France? (2024). Tang, Rongwei; Vraga, Emily K.; Bode, Leticia; Boulianne, Shelley.
  Observed corrections of misinformation on social media can encourage more accurate beliefs, but for these benefits to occur, corrections must happen. By exploring people's perceptions of witnessing and performing corrections on social media, we find that many people say they observe and perform corrections across the United States, the United Kingdom, Canada, and France. We find higher levels of self-reported correction experiences
- Dynamics of opinion expression (2020). Gaisbauer, Felix; Olbrich, Eckehard; Banisch, Sven.
  Modeling efforts in opinion dynamics have to a large extent ignored that opinion exchange between individuals can also affect how willing they are to express their opinion publicly. Here, we introduce a model of public opinion expression. Two groups of agents with different opinions on an issue interact with each other, changing their willingness to express their opinion according to whether they perceive themselves as part of the majority or the minority. We formulate the model as a multigroup majority game and investigate its Nash equilibria. We also provide a dynamical systems perspective: using the Q-learning reinforcement learning algorithm, we reduce the N-agent system in a mean-field approach to two dimensions, representing the two opinion groups. This two-dimensional system is analyzed in a comprehensive bifurcation analysis of its parameters. The model identifies social-structural conditions for the public opinion predominance of different groups. Among other findings, we show under which circumstances a minority can dominate public discourse. (A minimal numerical sketch of such a mean-field Q-learning dynamic appears below.)
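
The style comparison described in "Veiled conspiracism" (first item above) can be illustrated with a minimal, hypothetical sketch: per platform and year, build a term-frequency profile of the conspiracy-related texts and compare profiles with cosine similarity. The corpus structure, platform keys, and whitespace tokenisation below are placeholders; the study's actual multi-method pipeline is not reproduced here.

```python
# Hypothetical sketch: tracking cross-platform style similarity over time
# by comparing term-frequency profiles of conspiracy-related texts.
from collections import Counter
from math import sqrt

def tf_vector(texts):
    """Aggregate term frequencies over a list of documents."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# corpus[platform][year] -> list of documents (placeholder data)
corpus = {
    "4chan_pol": {2011: ["new world order talk ..."], 2021: ["great replacement talk ..."]},
    "twitter":   {2011: ["nwo elites talk ..."],      2021: ["replacement talk ..."]},
}

for year in (2011, 2021):
    sim = cosine(tf_vector(corpus["4chan_pol"][year]), tf_vector(corpus["twitter"][year]))
    print(f"{year}: 4chan/pol/ vs. Twitter style similarity = {sim:.2f}")
```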
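
The search-engine audit (second item above) varies queries by partisan slant, user location, and time, and then scores the leaning of the returned media domains. The sketch below only illustrates that scoring logic under stated assumptions: `collect_results` stands in for the actual virtual-agent data collection, and the domain-leaning scores are invented placeholders, not the study's coding.

```python
# Hypothetical sketch of the audit's scoring step: average the partisan
# leaning of domains returned for each (engine, query, location, time) cell.
from statistics import mean

# Illustrative domain-leaning scores (-1 = left-leaning, +1 = right-leaning).
DOMAIN_LEANING = {"examplesite-left.com": -0.8, "examplesite-right.com": 0.7}

def collect_results(engine, query, location, timestamp):
    """Placeholder for result collection; returns the domains of organic results."""
    return ["examplesite-left.com", "examplesite-right.com"]

def slant_score(engine, query, location, timestamp):
    domains = collect_results(engine, query, location, timestamp)
    scores = [DOMAIN_LEANING[d] for d in domains if d in DOMAIN_LEANING]
    return mean(scores) if scores else 0.0

for engine in ("google", "bing"):
    for query in ("democrat-focused query", "republican-focused query"):
        print(engine, query, slant_score(engine, query, "US-PA", "2024-10-01"))
```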
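
The AI audit in "Stochastic lies" (third item above) repeatedly prompts several chatbots and codes whether each output debunks the underlying disinformation claim; repeated runs expose the stochastic variation in outputs. In this hedged sketch, `ask_chatbot` and the coding rule are placeholders, not real chatbot APIs or the study's coding scheme.

```python
# Hypothetical audit loop: prompt each chatbot several times per prompt and
# record the share of outputs that debunk the claim.
import random

PROMPTS = ["Prompt linked to a common disinformation narrative ..."]
CHATBOTS = ["Perplexity", "Google Bard", "Bing Chat"]

def ask_chatbot(bot, prompt):
    """Placeholder for the chatbot call; outputs vary between runs."""
    return random.choice(["Output that debunks the claim.", "Output that repeats the claim."])

def codes_as_debunking(output):
    """Placeholder coding rule for whether an output debunks the claim."""
    return "debunks" in output

results = {}
for bot in CHATBOTS:
    runs = [codes_as_debunking(ask_chatbot(bot, p)) for p in PROMPTS for _ in range(5)]
    results[bot] = sum(runs) / len(runs)  # share of runs coded as debunking

print(results)  # run-to-run differences illustrate the stochasticity of outputs
```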
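
For "Dynamics of opinion expression" (last item above), the following is a minimal numerical sketch of a mean-field Q-learning dynamic for two opinion groups deciding whether to express their opinion; it is not the paper's exact formulation, and the group sizes, rewards, and learning parameters are illustrative assumptions.

```python
# Hypothetical mean-field sketch: two opinion groups learn (via Q-learning)
# whether to express their opinion; expressing pays off only when the own
# camp dominates the currently expressed opinions.
import math

n1, n2 = 60, 40          # group sizes (illustrative)
alpha, beta = 0.1, 4.0   # learning rate and softmax sharpness (illustrative)
q1 = [0.0, 0.0]          # Q-values of group 1 for [stay silent, express]
q2 = [0.0, 0.0]          # Q-values of group 2 for [stay silent, express]

def softmax_express(q):
    """Probability of expressing under a softmax policy."""
    e = [math.exp(beta * v) for v in q]
    return e[1] / (e[0] + e[1])

for step in range(2000):
    x1, x2 = softmax_express(q1), softmax_express(q2)
    # Reward for expressing: +1 if the own camp dominates expressed opinions, -1 otherwise.
    r1 = 1.0 if n1 * x1 > n2 * x2 else -1.0
    r2 = 1.0 if n2 * x2 > n1 * x1 else -1.0
    q1[1] += alpha * (r1 - q1[1])
    q2[1] += alpha * (r2 - q2[1])
    q1[0] += alpha * (0.0 - q1[0])  # staying silent yields zero reward
    q2[0] += alpha * (0.0 - q2[0])

print(f"expression rates: group 1 = {softmax_express(q1):.2f}, group 2 = {softmax_express(q2):.2f}")
```

Under these assumed parameters the larger group ends up expressing far more often than the smaller one, which conveys the flavor of the majority-dependent expression dynamics the paper analyzes in full.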