Tailoring heuristics and timing AI interventions for supporting news veracity assessments
The detection of false and misleading news has become a top priority for researchers and practitioners. Despite the many efforts in this area, important questions remain unanswered about how interventions should be designed so that they effectively inform news consumers. In this work, we seek to fill part of this gap by exploring two important elements of intervention design: the timing of news veracity interventions and the format in which they are presented. Specifically, in two sequential studies using data collected from news consumers through Amazon Mechanical Turk (AMT), we examine whether their ability to correctly identify fake news differs under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI and, further, that under this condition tailored advice is more effective than generic advice. We link our findings to prior literature on confirmation bias, and we provide insights for news providers and AI tool designers to help mitigate the negative consequences of misinformation.