
Debiasing Social Media Use through Cognitive Interface Design

Project description

Societal discourse about social media has changed radically in recent years. In the early 2010s, the many benefits of social media such as Twitter and Facebook were emphasized, for instance their role as hotbeds for democratization. Recent discourse, in contrast, is dominated by negative effects. One example is the notion of “echo chambers” on social media, in which like-minded individuals communicate about a controversial issue but constantly perpetuate just one of several possible perspectives or opinions, while ignoring, sanctioning, or derogating alternative viewpoints. Several theories suggest that such one-sided reception and production of information can give rise to very strong and extreme attitudes, thus creating breeding grounds for radicalization and even hate speech.

This project starts by analyzing actual Twitter accounts that produce content about controversial issues (such as Homeland Security). We will test the hypothesis that the language of tweets becomes more extreme the more one-sidedly a Twitter account is connected with similar Twitter accounts (i.e., its followers).
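The hypothesized relationship can be illustrated with a minimal sketch: per account, correlate a measure of how one-sided its follower network is with a measure of how extreme its tweet language is. The data, the homogeneity and extremity scores, and the helper `pearson` below are invented stand-ins for illustration; the actual study would use collected Twitter data and validated linguistic measures.

```python
# Hypothetical sketch of the planned analysis: correlate how one-sidedly
# an account is connected (share of like-minded followers) with how
# extreme its tweet language is. All values here are invented examples.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-account data: "homogeneity" = fraction of followers sharing the
# account's stance; "extreme_share" = fraction of tweets some (assumed)
# classifier flags as extreme in language.
accounts = [
    {"homogeneity": 0.95, "extreme_share": 0.40},
    {"homogeneity": 0.80, "extreme_share": 0.25},
    {"homogeneity": 0.60, "extreme_share": 0.20},
    {"homogeneity": 0.40, "extreme_share": 0.10},
    {"homogeneity": 0.20, "extreme_share": 0.05},
]

r = pearson([a["homogeneity"] for a in accounts],
            [a["extreme_share"] for a in accounts])
print(f"correlation between one-sidedness and extremity: r = {r:.2f}")
```

A positive correlation in the real data would be consistent with the hypothesis; the sketch only shows the shape of the computation, not a result.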

In a second step, we want to bring echo chambers into the lab by simulating the conditions under which they should emerge and investigating the underlying psychological mechanisms. We will analyze to what extent rating tools in an interface (e.g., “thumbs up”) may contribute to a radicalization of tweeted content. Moreover, we will investigate the role of anonymity.

In the third part of the project, we will draw on social psychological theories to derive interface design mechanisms that should lead to a more balanced selection, processing, and production of information on social media (so-called “debiasing”). For instance, alternative visualizations of rating tools can make balanced content more salient. Interfaces can also be designed in ways that counteract the negative effects of anonymity. Moreover, we want to develop interfaces that “nudge” users to think more about counterarguments, as this should likewise lead to less extreme and more balanced attitudes on an issue.