
Debiasing Social Media Use through Cognitive Interface Design

Project description

Societal discourse about social media has changed radically in recent years. In the early 2010s, the many benefits of social media like Twitter and Facebook were emphasized, for instance their role as hotbeds of democratization. In contrast, recent discourse about social media is dominated by negative effects. One example is the notion of “echo chambers” on social media, in which like-minded individuals communicate about a controversial issue but constantly perpetuate just one of several possible perspectives or opinions while ignoring, sanctioning, or derogating alternative viewpoints. Several theories suggest that such one-sided reception and production of information might give rise to very strong and extreme attitudes, thus creating hotbeds for radicalization and even hate speech.


This project starts by analyzing actual Twitter accounts that produce content about controversial issues (such as Homeland Security). We will test the hypothesis that the language of tweets becomes more extreme the more one-sidedly a Twitter account is connected with similar accounts (i.e., followers).
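As a rough illustration of how this hypothesis could be operationalized, the Python sketch below correlates a homophily score for each account (share of followed accounts holding the majority stance) with the mean extremity of its tweet language. All account data, field names, and scoring choices are invented for illustration; they are not the project’s actual measures or pipeline.

```python
# Hypothetical sketch: correlate an account's one-sidedness (homophily of its
# network) with the extremity of its tweet language. Data are illustrative.
from scipy.stats import spearmanr

# Each account: stances of the accounts it follows (-1 = contra, +1 = pro) and
# per-tweet extremity scores in [0, 1] (e.g., absolute sentiment polarity).
accounts = {
    "acct_a": {"followed_stances": [1, 1, 1, 1, -1],
               "tweet_extremity": [0.9, 0.8, 0.7]},
    "acct_b": {"followed_stances": [1, -1, 1, -1, 1],
               "tweet_extremity": [0.3, 0.4, 0.2]},
    "acct_c": {"followed_stances": [1, 1, 1, 1, 1],
               "tweet_extremity": [0.95, 0.85, 0.9]},
}

def homophily(stances):
    """Share of followed accounts holding the majority stance
    (0.5 = perfectly balanced network, 1.0 = completely one-sided)."""
    pro = sum(s > 0 for s in stances)
    return max(pro, len(stances) - pro) / len(stances)

homophily_scores = [homophily(a["followed_stances"]) for a in accounts.values()]
extremity_scores = [sum(a["tweet_extremity"]) / len(a["tweet_extremity"])
                    for a in accounts.values()]

# The echo chamber hypothesis predicts a positive rank correlation.
rho, p = spearmanr(homophily_scores, extremity_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```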


In a second step we want to bring echo chambers into the lab by simulating conditions that should lead to their emergence and by investigating the underlying psychological mechanisms. We will analyze to what extent rating tools in an interface (e.g., “thumbs up”) may contribute to a radicalization of tweeted content. Moreover, we will investigate the role of anonymity.
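The suspected mechanism can be made concrete with a toy simulation (not the planned lab paradigm): if like-minded raters reward committed posts with a “thumbs up” and authors shift toward whatever earned majority approval, expressed opinions drift toward the extreme. All agents, parameters, and rating rules below are invented for illustration.

```python
import random
from statistics import mean

random.seed(1)
N, ROUNDS, STEP = 40, 500, 0.05

# A like-minded group: everyone already leans "pro" (opinions in [0, 1]).
opinions = [random.uniform(0.5, 0.7) for _ in range(N)]
print(f"mean opinion before: {mean(opinions):.2f}")

for _ in range(ROUNDS):
    author = random.randrange(N)
    # The author posts a noisy expression of their current opinion.
    post = min(1.0, max(0.0, opinions[author] + random.gauss(0.0, 0.1)))
    # Like-minded raters reward commitment: thumbs up iff the post is at
    # least as pro-attitudinal as their own view.
    likes = sum(post >= opinions[j] for j in range(N) if j != author)
    # Majority approval reinforces the posted position: the author shifts
    # toward what was rewarded, ratcheting the group toward the extreme.
    if likes > (N - 1) / 2:
        opinions[author] += STEP * (post - opinions[author])

print(f"mean opinion after:  {mean(opinions):.2f}")
```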


In the third part of the project we will use social psychological theories to derive interface design mechanisms that should lead to a more balanced selection, processing, and production of information on social media (so-called “debiasing”). For instance, alternative visualizations of rating tools can make balanced content more salient. It is also possible to design interfaces in ways that counteract the negative effects of anonymity. Moreover, we want to develop interfaces that “nudge” users to think more about counterarguments, as this should also lead to less extreme and more balanced attitudes on an issue.
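As one hypothetical illustration of such an alternative visualization, the sketch below re-ranks posts by how evenly their endorsements come from both camps rather than by raw like counts, which would make balanced content more salient. The balance score and data fields are invented for illustration, not a design the project has committed to.

```python
import math

# Hypothetical re-ranking sketch: surface posts endorsed by both camps
# instead of posts with the most one-sided approval.
posts = [
    {"id": "p1", "likes_pro": 120, "likes_contra": 3},   # popular but one-sided
    {"id": "p2", "likes_pro": 40,  "likes_contra": 35},  # cross-camp appeal
    {"id": "p3", "likes_pro": 10,  "likes_contra": 9},   # balanced, low reach
]

def balance_score(post):
    """Share of likes from the minority camp (0 = one-sided, 0.5 = balanced),
    weighted by log volume so balanced-but-visible posts rank first."""
    total = post["likes_pro"] + post["likes_contra"]
    if total == 0:
        return 0.0
    minority_share = min(post["likes_pro"], post["likes_contra"]) / total
    return minority_share * math.log1p(total)

# Balanced posts (p2, p3) now outrank the one-sided but popular post (p1).
for post in sorted(posts, key=balance_score, reverse=True):
    print(post["id"], f"{balance_score(post):.2f}")
```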


Publications

Buder, J. (2017). Learning to think critically: Technologies for debiasing. In J. C. Yang et al. (Eds.), Extended Summary Proceedings of the 25th International Conference on Computers in Education (pp. 4–6). New Zealand: Asia-Pacific Society for Computers in Education.


Conference contributions

Buder, J. (2019, May). Empirical evidence for the echo chamber hypothesis. 69th Annual Conference of the International Communication Association (ICA). Washington, USA. [Talk]

Buder, J. (2018, November). From artificial intelligence to artificial sociality in learning and education. Artificial Intelligence – International Research and Applications: 1st Japanese-German-French DWIH Symposium. Tokyo, Japan. [Talk]

Buder, J. (2018, November). From cognitive to social interfaces. International workshop of the Leibniz-WissenschaftsCampus Tübingen "WCT meets HCI". Tübingen. [Talk]

Buder, J. (2018, November). Digital technologies: Potentials and challenges from the perspective of psychological research. Invited talk at the annual meeting of the Kirchlich-Theologische Arbeitsgemeinschaften in Württemberg. Rothenburg o.d.T. [Talk]

Rabl, L., Buder, J., Zurstiege, G., Feiks, M., & Badermann, M. (2020, May). De-biasing social media use. Virtual symposium "What's Cognitive About Cognitive Interfaces?" concluding the Leibniz-WissenschaftsCampus Tübingen. Tübingen. [Talk] www.cognitiveinterfaces.de/symposium.html?project=1

Zurstiege, G., & Badermann, M. (2019). Homophily and attitude strength in social media: An automated content analysis of Twitter accounts. 69th Annual Conference of the International Communication Association (ICA). Washington, USA. [Talk]