This project aims both to document the spread of disinformation within Brazilian politics and to design and evaluate tools to combat the spread of disinformation worldwide. We pursue this work through two main foci: the first illustrates how disinformation is constructed, and the second explores how it spreads.
Construction of Political Knowledge and Mis/Disinformation
Date: 2022-PRESENT
Disinformation has dominated public discourse in recent years, with serious repercussions. In response, this project draws on inoculation theory to design and test Aide, an interactive AI-powered platform that uses language modeling to educate users about the common textual strategies employed in disinformation. Aide is implemented as a single-page application with a JavaScript-based interface connected to OpenAI's GPT-3 API, helping users identify the misleading tactics a piece of disinformation employs.
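A minimal sketch of how such an interface might query the GPT-3 API is shown below. The endpoint and model name reflect the GPT-3-era OpenAI API, but the prompt wording and function names are illustrative assumptions, not Aide's actual implementation:

```javascript
// Illustrative sketch only -- prompt wording and helper names are
// assumptions, not Aide's actual implementation.

// Build a prompt asking the model to name and explain the misleading
// rhetorical tactics (e.g. emotional appeals, false dichotomies) in a snippet.
function buildTacticPrompt(snippet) {
  return (
    "Identify the misleading rhetorical tactics used in the following " +
    "text, and briefly explain each one:\n\n" + snippet
  );
}

// Send the prompt to OpenAI's legacy completions endpoint (GPT-3 era).
async function analyzeSnippet(snippet, apiKey) {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt: buildTacticPrompt(snippet),
      max_tokens: 256,
    }),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}
```

In a single-page app, the returned explanation would simply be rendered next to the original snippet, so the user sees the text and the flagged tactics side by side.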
In preliminary testing, participants showed evidence of learning, suggesting Aide's potential effectiveness. For instance, as Aide characterized patterns in disinformation, participants not only expressed a desire to identify such patterns but also reported momentum toward doing so in their future media consumption.
The team will extend this research by collecting data and analyzing how learners approach news before, during, and after their interaction with Aide, identifying evidence of learning as users interact with the platform. We conjecture that this research can yield insight into novel ways of employing technology to teach about disinformation in K-12 contexts, ultimately reducing the spread of disinformation.
NetLogo Models and Disinformation Spread
Date: 2022-PRESENT
Research has shown that young adults in the United States have decreasing trust in news and in their own ability to distinguish “fake” news, underscoring the need for media literacy. NetLogo Models is an interactive tool that leverages computational modeling and visualization to examine how cognitive bias, fact-checking behavior, and audience connectivity affect the spread of disinformation. For preliminary testing of this model, we recruited four students, aged 22-26, from three different universities. Participants demonstrated a learning effect, becoming better at recognizing their own cognitive biases when consuming news.
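The NetLogo model itself is not reproduced here, but the kind of dynamic it examines can be sketched in plain JavaScript: agents on a random network adopt or debunk a rumor depending on bias and fact-checking parameters. All names and parameter values below are illustrative assumptions, not the model's calibrated settings:

```javascript
// Toy agent-based sketch of disinformation spread; parameters are
// illustrative, not the NetLogo model's actual values.
function simulate({ n = 200, avgLinks = 4, bias = 0.6,
                    factCheck = 0.2, steps = 50, seeds = 5 } = {}) {
  // Random network: roughly avgLinks undirected neighbors per agent.
  const neighbors = Array.from({ length: n }, () => new Set());
  for (let e = 0; e < (n * avgLinks) / 2; e++) {
    const a = Math.floor(Math.random() * n);
    const b = Math.floor(Math.random() * n);
    if (a !== b) { neighbors[a].add(b); neighbors[b].add(a); }
  }

  // Agent states: "susceptible", "believer", or "debunked".
  const state = Array(n).fill("susceptible");
  for (let i = 0; i < seeds; i++) state[i] = "believer";

  for (let t = 0; t < steps; t++) {
    const next = state.slice();
    for (let i = 0; i < n; i++) {
      if (state[i] !== "susceptible") continue;
      for (const j of neighbors[i]) {
        if (state[j] === "believer") {
          // Fact-checking can immunize the agent; otherwise cognitive
          // bias governs how likely the agent is to adopt the rumor.
          if (Math.random() < factCheck) next[i] = "debunked";
          else if (Math.random() < bias) next[i] = "believer";
          break; // one exposure per step
        }
      }
    }
    for (let i = 0; i < n; i++) state[i] = next[i];
  }

  return {
    believers: state.filter((s) => s === "believer").length,
    debunked: state.filter((s) => s === "debunked").length,
    susceptible: state.filter((s) => s === "susceptible").length,
  };
}
```

Raising `factCheck` or lowering `bias` shrinks the final believer count, which is the qualitative relationship the visualization is meant to let learners explore.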