Misinformation on social media: Investigating motivated reasoning through an identity-protection model
In recent years, the unprecedented dissemination of misleading or false information on social media platforms has become a public concern, posing fundamental threats to democratic political systems. This increased dissemination of misinformation has intensified research efforts to understand the psychological mechanisms that facilitate it. One answer comes from motivated reasoning, which suggests that information is not always processed evenly but in ways that maintain or protect existing attitudes, beliefs, or identities (Kunda, 1990). In the context of misleading or false information, motivated reasoning proposes that content confirming a person’s view (congruent) is quickly passed on without further questioning, whereas content contradicting a person’s view (incongruent) is more likely to be identified as false. This cumulative doctoral dissertation aims to further explore the relationship between misinformation on social media and motivated reasoning. To this end, two broader strategies are applied. First, the empirical effects of motivated reasoning on misinformation sharing are scrutinized. Study 1 tests whether motivated reasoning can explain the sharing of hyper-partisan news content on Twitter. Drawing on data collected directly from Twitter, this observational study confirms a sharing process driven by motivated reasoning. Similarly, Studies 2 and 3 tested whether motivated reasoning can explain users’ perception of and engagement with automated accounts, so-called social bots, on Twitter. Results of both studies indicated that users’ perceptions are, as predicted, biased: users perceive congruent accounts as more human-like and incongruent accounts as more bot-like. In addition, while users mostly ignore incongruent accounts, independent of whether they are bot- or human-run, congruent accounts that behave like social bots are less likely to receive engagement.
Consolidating the effects of motivated reasoning through empirical data in the first three studies, in a second step, the underlying ...
Wischnewski, Magdalena (author) / Krämer, Nicole
2022-03-08
Theses
Electronic Resource
English
Fakultät für Ingenieurwissenschaften » Informatik und Angewandte Kognitionswissenschaft » Angewandte Kognitions- und Medienwissenschaft » Sozialpsychologie: Medien und Kommunikation; motivated reasoning -- identity-protection cognition -- emotions -- misinformation -- social media -- social bots -- shareworthiness; ddc:150
Where do I go from here? Motivated reasoning in construction decisions
Taylor & Francis Verlag | 2018
Where do I go from here? Motivated reasoning in construction decisions
British Library Online Contents | 2018
Bayesian versus politically motivated reasoning in human perception of climate anomalies
DOAJ | 2017