
Fighting the Spread of COVID-19 Misinformation in Kyrgyzstan, India, and the United States: How Replicable Are Accuracy Nudge Interventions?

Volume 3, Issue 3: Autumn 2022. DOI: 10.1037/tmb0000086

Published on Sep 21, 2022

Abstract

The spread of misinformation has generated confusion and uncertainty about how to behave with respect to protective actions during the COVID-19 pandemic, such as social distancing and getting vaccinated. Pennycook et al. (2020) garnered significant press attention when they found that asking people to think about the accuracy of a single headline (i.e., accuracy nudge) improved their discernment in sharing true versus false information related to COVID-19. The present Open Science Framework preregistered experiment sought to replicate the work of Pennycook et al. (2020) and test the generalizability of their findings to three different countries: Kyrgyzstan, India, and the United States. The present study also explores whether findings extend to information related to COVID-19 vaccine acceptance, a timely and important topic at the time of data collection. The accuracy nudge’s effect did not replicate in the Kyrgyzstan sample (n = 1,049). Results were mixed in India (n = 703) and the United States (n = 829); the nudge decreased willingness to share some misinformation but it did not significantly increase willingness to share true information. We discuss potential explanations for these findings and practical implications for those working to combat the spread of misinformation online.

Keywords: misinformation, disinformation, social media, nudge, COVID-19

Acknowledgments: The authors would like to thank Julianne Birungi, Mario Mosquera, Miguel Mateos Munoz, Galina Solodunova, Swathi Vepachedu, and Simon Warren for their contributions to this study.

This project was implemented in collaboration with UNICEF Headquarters, the Europe and Central Asia Regional Office, and the Kyrgyzstan Country Office. However, the opinions expressed in this article do not necessarily reflect the views of UNICEF.

Disclosures: The authors have no conflicts of interest to disclose.

Data Availability: The preregistered design is available at https://osf.io/kepx4/. The experimental materials and data will be made publicly available at time of publication.

Correspondence concerning this article should be addressed to Jenna McChesney, Department of Psychology, North Carolina State University, Box 7650, Raleigh, NC 27695, United States [email protected]


The spread of misinformation in the wake of the COVID-19 pandemic has significantly affected both the ability of governments to communicate with citizens and the behavioral choices individuals make with respect to public health recommendations. Understandings of the virus and attitudes toward the response of government officials vary greatly and are influenced, at least in part, by media consumption and a polarized media landscape. The novelty of COVID-19, disparate government responses, and overwhelmed health systems exacerbate the factors that contribute to belief in conspiracy theories, including feelings of powerlessness, a desire to cope with uncertainty or threats, and validation of perceived victimization (Schwarz et al., 2016).

Social media is a largely unregulated avenue through which information and misinformation often travel. Verifiably false information about supposed COVID-19 cures and the virus's origins can actively endanger individuals; the confusion surrounding hydroxychloroquine and the subsequent shortage, as well as the spike in anti-Asian attacks in the United States, are two examples of serious consequences of misinformation (Gallagher, 2020; Segarra, 2021). As the global availability of vaccines increases, misinformation has created unfounded vaccine hesitancy, which can delay or prevent countries from bringing the virus under control.

While much blame has been placed on bots and algorithms, Lazer et al. (2018) find that the root of the problem is human behavior and activity. Bots can intensify the cycle of misinformation, but it is humans who ultimately judge stories and decide whether to share them, and humans reliably prefer stories that confirm their beliefs and biases (Lazer et al., 2018). Social media also allows users to spread misinformation with little critical thinking. To minimize the negative impacts of misinformation, it is necessary to adopt interventions that interrupt automatic, fast, unconscious processes referred to as “System 1” thinking (Kahneman, 2011; Stanovich & West, 2000). There is a need to encourage users to slow down and engage in critical thinking before deciding to share information on social media.

Reducing the Spread of Misinformation Online

Research has identified several approaches for engaging potential sharers to limit the spread of misinformation. Arming social media users with awareness of misinformation techniques—through debunking and prebunking—has demonstrated effectiveness. Debunking items postsharing (e.g., by tagging items with a score from a reliable fact-checking service) can interrupt the spread of misinformation by helping people identify the accuracy of a post, but it can also backfire and strengthen viewers' belief in a story if that story matches their biases (Chan et al., 2017). Roozenbeek and colleagues as well as Lewandowsky and colleagues found that prebunking, or inoculating viewers so they can identify misinformation strategies, leads to greater resilience against believing misinformation (Lewandowsky et al., 2012; Roozenbeek et al., 2021).

Beyond debunking and prebunking, adding friction to the social media sharing process has shown promise. To understand how cognitive friction might be added, we can conceptualize online sharing behavior in terms of two components: the user’s judgment of the accuracy of a news item and the user’s decision to share that item. Sharing of misinformation tends to capitalize on System 1 thinking—uncritical, emotional, and fast as a click. Interventions most effective at preventing this sharing add friction to the process, forcing users to slow down and activate their more deliberate, critical System 2 thinking (Kahneman, 2011; LeFevre, 2020; Stanovich & West, 2000).

In a Twitter-based field study, Pennycook et al. (2021) investigated why people spread misinformation online. They tested three explanations: confusion over accuracy, preference for partisanship over accuracy, and inattention to accuracy (Pennycook et al., 2021). The study concluded that the quality of news shared online is most strongly associated with attention to accuracy rather than with accuracy judgment or partisanship. The inattention-based account of sharing misinformation suggests that people generally wish to avoid spreading misinformation and are reasonably good at discerning truth from fiction, but the social media context distracts them from considering accuracy by focusing their attention on other factors, such as social validation (Pennycook et al., 2021). Consequently, social media users are likely to share information without evaluating its accuracy. Thus, nudges that prompt individuals to think about accuracy could improve the quality of information shared on social media platforms.

Accuracy Nudge

During a global pandemic, the spread of inaccurate information can be a matter of life or death. In March of 2020, shortly after the World Health Organization officially declared COVID-19 a pandemic (American Journal of Managed Care, 2021), Pennycook et al. (2020) conducted a two-part study to investigate whether prompting people to think of accuracy could reduce the spread of misinformation related to COVID-19 in the United States. They tested the effects of placing an accuracy reminder (or nudge) at the beginning of a survey exercise; the nudge simply asked participants to evaluate the accuracy of a headline unrelated to COVID-19.

Pennycook et al. (2020) found that the accuracy nudge nearly tripled participants' level of discernment, conceptualized as the ratio of true information to false information (misinformation) that people share. Conceptualized this way, discernment can be improved by reducing the sharing of misinformation, by increasing the sharing of true information, or both. Pennycook and colleagues reported that the accuracy nudge increased discernment; however, the increase was driven by an increase in sharing true news headlines rather than a reduction in sharing false headlines. Therefore, Pennycook et al.'s conclusion that nudging people to think about accuracy could be a simple way to prompt System 2 thinking and reduce the spread of pandemic-related misinformation on social media is somewhat misleading.
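To make the two pathways concrete, the brief sketch below works through hypothetical numbers (not values from Pennycook et al., 2020), treating discernment as the ratio of true to false sharing described above; the same gain in the ratio can come either from sharing more true content or from sharing less false content.

```python
# Toy numbers only (not data from Pennycook et al., 2020): discernment treated
# as the ratio of true to false sharing intentions.
baseline = {"true": 2.0, "false": 2.0}    # discernment = 1.00
more_true = {"true": 3.0, "false": 2.0}   # improved by sharing more true news
less_false = {"true": 2.0, "false": 1.3}  # improved by sharing less false news

for label, scores in [("baseline", baseline), ("more true", more_true), ("less false", less_false)]:
    print(f"{label}: discernment = {scores['true'] / scores['false']:.2f}")
```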

Roozenbeek et al. (2021) pointed out the discrepancy between this conclusion and the published analytic approach when they attempted to replicate Pennycook et al.'s study, again testing an accuracy nudge on two U.S. samples and measuring participants' reported willingness to share true and false headlines related to COVID-19. In their replication study, Roozenbeek et al. used independent-samples t tests to further unpack "discernment" and directly test whether prompting people to think about accuracy increases the likelihood that they will be willing to share true versus false information about COVID-19 online.

Roozenbeek et al. (2021) highlight the importance of independently testing the degree to which the accuracy nudge affects the propensity to share false and true information. Testing the nudge in this manner, Roozenbeek et al. (2021) found that it slightly decreased willingness to share false information but had no significant effect on willingness to share true information. Their results suggest the opposite of Pennycook and colleagues' findings, leaving open the question of whether the nudge increases intentions to share true information or decreases intentions to share false information. The present study aims to replicate Pennycook et al. and test whether the findings of Pennycook et al. or Roozenbeek et al. are replicated in a third U.S. sample.

COVID-19 Vaccination Misinformation

The relationships Roozenbeek et al. (2021) found are weaker than those reported in Pennycook et al. (2020), suggesting that the accuracy nudge's effect on sharing intentions is very small. The authors speculate that their results could have been weaker than Pennycook et al.'s findings because their replication was conducted at a later stage in the COVID-19 pandemic, when people may have become more attuned to misinformation related to COVID-19. To address this concern, the present two-part study examines headline sharing intentions for both general COVID-19 information (Studies 1 and 2) and newer, vaccine-related information (Study 2). Because vaccines had only recently become available when Study 2 was conducted, participants may not yet have become as accustomed to vaccine misinformation as they had to general COVID-19 misinformation. Additionally, including COVID-19 vaccine information allows us to test whether previous findings from the domain of general COVID-19 misinformation extend to the spread of misinformation related to vaccines.

Need for Replication Studies and Diverse Samples

Replication studies are essential in behavioral science research to increase confidence in both the validity and the generalizability of study findings. As Henrich and colleagues, Arnett, and many others have pointed out, behavioral research has relied overwhelmingly on Western, educated, industrialized, rich, and democratic (WEIRD) samples, whose results may not generalize to lower- and middle-income settings (Arnett, 2008; Henrich et al., 2010). WEIRD populations make up about 12% of the world's population but account for 96% of behavioral research participants (Henrich et al., 2010).

Pennycook et al. (2020) tested their intervention with a strictly U.S.-based sample. Roozenbeek et al. (2021) conducted a replication of Pennycook et al.'s study, also using a U.S. sample. Yet, social media and misinformation related to COVID-19 extend far beyond the United States. It is vital to study interventions in non-WEIRD populations before claiming broad effectiveness. In this study, we test whether Pennycook et al.'s (2020) or Roozenbeek et al.'s (2021) accuracy nudge findings replicate across three diverse samples: individuals living in Kyrgyzstan, India, and the United States.

Replicating the accuracy nudge studies in the U.S. context tests the reliability of the original study results, while conducting the replication with samples in Kyrgyzstan and India tests the broader applicability of those results. Together, these results will help inform the future direction of research in this area and indicate whether interventions, programs, and policies based on these findings will effect the desired change.

We chose to replicate the target studies in Kyrgyzstan and India for a few reasons. As in many parts of the world, misinformation spread through informal channels has spurred distrust in official government messaging in Kyrgyzstan and India (Institute for War and Peace Reporting [IWPR], 2020; Kabar, 2019). The Kyrgyzstan portion of this study was conducted in partnership with UNICEF's Kyrgyzstan country office, which prioritized understanding the spread of COVID-19 misinformation and sought rigorous evidence to inform its programs and policies going forward. Kyrgyzstan is home to an array of private newspapers and online resources that specialize in disseminating false information (Baisalov, 2019). The number of "fake news" reports in Kyrgyzstan doubled in 2020, and neighboring countries reported finding over 1,300 instances of false stories spread on social media channels in the early months of the COVID-19 pandemic (IWPR, 2020; Kabar, 2019). Efforts to reduce the spread of misinformation in Kyrgyzstan often face resistance (Simpson, 2020). Moreover, distrust of authorities has led many citizens to place higher belief in information received through their personal channels (IWPR, 2020).

In India, stories centered on COVID-19 have also dominated the misinformation landscape (Menon, 2020). Much of this misinformation has targeted marginalized populations and has manifested in the stigmatization of specific ethnicities, religions, and occupations. While a light-touch technological intervention such as an accuracy reminder could help combat the sharing of false information, the effectiveness of this nudge in non-U.S. geographical and cultural contexts is unknown. The present study expands upon findings from Pennycook et al. and Roozenbeek et al. by exploring whether (and which) findings replicate outside of the United States and across different types of misinformation, leading to the following hypotheses:

Hypothesis 1: Prompting people to think about accuracy will decrease the likelihood that they will be willing to share false information about (a) COVID-19 and (b) COVID-19 vaccines on social media in Kyrgyzstan, India, and the United States.

Hypothesis 2: Prompting people to think about accuracy will increase the likelihood that they will be willing to share true information about (a) COVID-19 and (b) COVID-19 vaccines on social media in Kyrgyzstan, India, and the United States.

Method

This research took place between February 1 and June 25, 2021. It was conducted in two parts. Study 1 was developed with UNICEF to inform time-sensitive policy in Kyrgyzstan and was not preregistered prior to data collection. Study 2 included data collection from India and the United States. We preregistered the research questions, design, and data analytic approach of Study 2 on the Open Science Framework (OSF; https://osf.io/z4j2g/). OSF promotes open and transparent science by enabling researchers to publicly post different aspects and products of the research lifecycle (Anderson et al., 2019; Foster & Deardorff, 2017). OSF preregistration is considered best practice to increase transparency in research and decrease questionable research practices (Anderson et al., 2019; Nosek et al., 2015; Yamada, 2018).

Study 1

Kyrgyzstan Sample

To recruit individuals living in the Kyrgyz Republic (Kyrgyzstan), we partnered with Rebicon Research Company. Rebicon has been conducting surveys in Kyrgyzstan since 2014 and was contracted by UNICEF to conduct research in this country. Together, we recruited 1,538 individuals living in Kyrgyzstan using river sampling in which the survey link was advertised through WhatsApp, Facebook, Instagram, Odnoklassniki, and VKontakte (two Russia-based social networking services). Participants were required to provide consent and be at least 18 years old to participate.

Respondents who completed the survey in less than 6.5 min (N = 5) were excluded from analyses. In addition, those who did not complete at least 85% of the survey (N = 484) were excluded, resulting in a final sample of 1,049 participants. Following preregistration, we ran all analyses on the entire sample (N = 1,538) and found results equivalent to those obtained with the screened sample (N = 1,049). Throughout the article, we report results from the more restricted, higher quality sample of 1,049 participants. Of these 1,049 participants, the majority were women (77.7%). Participants were relatively young on average (M = 25.48 years old, SD = 8.83) and reported completing around 14 years of formal education (SD = 3.76). The majority of participants chose to take the survey in Russian (76.8%), while the rest completed the survey in Kyrgyz. Participation was voluntary and was not compensated.

Materials and Procedure

Data were collected in Kyrgyzstan from February 1, 2021, until March 23, 2021. During this period, there were 3,064 new COVID-19 cases in Kyrgyzstan (average 8.76 new cases per million population; the peak of the outbreak in Kyrgyzstan occurred in July 2020 when there were 1,763 new cases per million population). As of March 23, 2021, there had been 87,652 total COVID-19 cases and 1,492 deaths (Ritchie et al., 2020).

The online survey was developed using Qualtrics software. As in previous studies (Pennycook et al., 2020; Roozenbeek et al., 2021), participants in both the control and intervention groups were shown 15 true and 15 false headlines generally related to COVID-19, presented in random order. Headlines were presented below a related graphic or photo thumbnail, similar to the format in which news stories appear on social media sites such as Facebook. For each of the 30 headlines, participants were asked, "If you saw the above on social media, how likely is it that you would share it?" A 6-point Likert scale ranging from extremely unlikely to extremely likely was used to measure sharing intentions. After completing the headline-rating task, participants were debriefed. Debriefing consisted of reshowing only the true headlines and stating that all others presented previously were not true. Upon completion, participants were thanked.

We followed Pennycook et al.'s (2020) study protocol as closely as possible, referencing the Qualtrics surveys and data analysis scripts the authors made publicly available online. However, we adapted the survey in a few important and notable ways. First, headlines were adapted to ensure they would be timely and relevant in the Kyrgyzstan context. As data were collected almost a year after the study by Pennycook et al. (2020), many of the original headlines were outdated and needed to be replaced. Similarly, many of the original headlines were U.S.-centric and irrelevant in Kyrgyzstan and were replaced. When selecting new headlines, we followed the procedures outlined in Pennycook et al. (2020). For example, we used fact-checking websites (e.g., snopes.com) and reputable medical websites (e.g., Mayo Clinic, Centers for Disease Control and Prevention, World Health Organization) to verify whether the headlines were indeed true or false. We also worked with an eye toward including false headlines likely to remain false over time (e.g., "lemon tea found to be a cure for COVID"). All headlines used in our Kyrgyzstan sample are available on OSF (https://osf.io/z4j2g/).

Second, sources and story ledes were not displayed. Third, headlines were translated into the local languages by Rebicon. Participants had the choice of completing the survey in Kyrgyz or Russian. As part of the translation process, each headline was also back translated into English and checked by the authors for clarity. The full survey, translated into English, Russian, and Kyrgyz, can be found online (https://osf.io/z4j2g/). Ethics approval was obtained prior to data collection through Duke University's institutional review board (IRB), and we followed UNICEF procedures for ethical standards in research.

Design

Two independent variables (IVs) were examined, one between subjects and one within subjects. The between-subjects IV was the accuracy nudge: whether participants were asked about their sharing intentions unprompted (0 = control condition) or were first asked to pause and think about accuracy (1 = accuracy nudge). The within-subjects IV was headline veracity, with two levels: true versus false information. The dependent variable was sharing intentions, that is, how willing participants were to share the information online.

Measures

Headline-Rating Task

The ratings of participants’ likelihood to share the 30 headlines were averaged for true and false COVID-19 general headlines, respectively, such that each participant had a composite score for likelihood to share true and false information.
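The composite scoring amounts to simple row-wise averaging. The sketch below shows one way to compute these composites; the file and column names are hypothetical and are not taken from the authors' materials.

```python
# A minimal sketch of the composite scoring described above, assuming a
# hypothetical wide-format export with one column per headline, named
# "true_01"-"true_15" and "false_01"-"false_15" (1-6 Likert ratings).
import pandas as pd

ratings = pd.read_csv("kyrgyzstan_ratings.csv")  # hypothetical file name

true_cols = [c for c in ratings.columns if c.startswith("true_")]
false_cols = [c for c in ratings.columns if c.startswith("false_")]

# One composite per participant for each headline type.
ratings["share_true"] = ratings[true_cols].mean(axis=1)
ratings["share_false"] = ratings[false_cols].mean(axis=1)
```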

Background Characteristics

In addition to the headline-rating task, participants across both the control and treatment conditions answered additional questions about themselves. Before the headline-rating task, participants were asked about what types of information they share online and which social media accounts they use. After the headline-rating task, participants were also asked demographic questions, such as their gender and education level. Participants’ age was asked as part of the consent process on the first screen of the survey.

Study 2

India and U.S. Samples

Aligned with our preregistered report, we recruited 2,009 participants for the second study using CloudResearch (formerly TurkPrime), powered by Amazon's MTurk microtasking platform. Recent research has found that self-reported sharing intentions collected in online surveys through crowdsourcing websites such as MTurk tend to provide meaningful insight into the content that would be shared on actual social media websites, such as Facebook and Twitter (see Mosleh et al., 2020). This suggests MTurk is a convenient and cost-effective source through which to recruit a viable sample for predicting online behavior. Additionally, similar studies have also relied on convenience samples (Pennycook et al., 2020; Roozenbeek et al., 2021). All participants were required to provide consent and be at least 18 years old. We targeted participants from India and from the United States. MTurk has a sufficiently large user base in both countries to recruit an adequately powered sample for our analyses. In addition, both countries experienced a significant spread of COVID-19-related misinformation. Of the 2,009 participants recruited for Study 2, 1,031 reported living in India and 978 reported living in the United States.

We also used quality checks to screen participants and increase data quality (Meade & Craig, 2012). First, an instructed response (i.e., “please ignore the question and select https://www.foxnews.com/ and https://www.nbc.com/ as your two answers”) was used to screen participants for careless responding; 120 participants (108 from India, 12 from the United States) were excluded from analyses for not following these instructions. In Study 2, everyone who answered the instructed response item correctly also completed more than 85% of the survey.

Second, participants were screened for completing the survey unreasonably quickly. The 124 participants (55 from India, 69 from the United States) who spent less than 7 min and 51 s on the survey were excluded from analyses. We also asked each participant to indicate whether they had responded randomly when filling out the survey. The 233 participants (165 from India, 68 from the United States) who admitted to doing so were excluded from analyses, resulting in a final sample of 1,532 participants. Of these 1,532 participants, 703 reported living in India and 829 reported living in the United States. Our sample sizes are comparable to those of recently published replication studies (Roozenbeek et al., 2021). Demographics for each subsample are summarized below.
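For readers who wish to apply the same screening rules to similar data, the following is a minimal sketch under stated assumptions: the file and column names are hypothetical, and the cutoffs are those reported above.

```python
# A minimal sketch of the Study 2 screening logic described above, assuming a
# hypothetical raw export with columns "passed_instructed_item" (0/1),
# "duration_sec" (total survey time in seconds), and
# "admitted_random_responding" (0/1).
import pandas as pd

raw = pd.read_csv("study2_raw.csv")  # hypothetical file name

MIN_DURATION_SEC = 7 * 60 + 51  # the 7 min 51 s cutoff reported above

screened = raw[
    (raw["passed_instructed_item"] == 1)
    & (raw["duration_sec"] >= MIN_DURATION_SEC)
    & (raw["admitted_random_responding"] == 0)
]
print(f"Retained {len(screened)} of {len(raw)} participants")
```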

India Sample

Of the 986 individuals recruited from India, 703 passed each of the screening checks described above. Of these participants, 354 (51%) were assigned to the control condition and 348 (49%) were assigned to the accuracy nudge intervention. Across conditions, the majority of participants were men (68%) and on average reported having 16 years of formal education. The average respondent was 34 years old (SD = 8.28). Of those who reported the region of India in which they resided (N = 561), most were from the South (75%), followed by the North (9%), East (5%), West (5%), Central (3%), Northeast (2%), and other (1%).

Participants were asked about how often and what types of information they share online; 98% said they share information in general, 72% said they share health-related news online, and 71% said they share information related to COVID-19 online. In addition, 95% reported that they proactively checked the news regarding COVID-19. When asked which types of social media accounts they use, Facebook was the most popular with 95% reporting use, followed by WhatsApp (92%), Instagram (73%), Twitter (69%), and Snapchat (15%).

U.S. Sample

Of the 1,041 individuals recruited from the United States, 829 passed each of the screening checks described above. Of these participants, 404 (49%) were assigned to the control condition and 425 (51%) were assigned to the accuracy nudge intervention. Across conditions, a slight majority of participants were men (56%) and on average reported having 15 years of formal education. The average respondent was 40 years old (SD = 11.47). Participants were asked about how often and what types of information they share online; 96% said they share information in general, 57% said they share health-related news online, 52% said they share information related to COVID-19 online, and 92% reported that they proactively checked the news regarding COVID-19. When asked which types of social media accounts they use, Facebook was the most popular with 84% reporting use, followed by Instagram (69%), Twitter (68%), WhatsApp (21%), and Snapchat (21%).

Materials and Procedure

Data were collected from June 15, 2021, to June 25, 2021, through MTurk. During this period, there were 612,262 new COVID-19 cases in India and 139,381 new COVID-19 cases in the United States (an average of 40.33 and 38.28 new cases per million population in India and the United States, respectively). As of June 15, 2021, 3.44% of the population was fully vaccinated in India and 43.59% in the United States. As of June 25, 2021, there had been 30,183,143 total COVID-19 cases and 394,493 deaths in India, and 33,614,196 total COVID-19 cases and 603,594 deaths in the United States (Ritchie et al., 2020).

We collected data from India and the United States simultaneously using the same English-language survey, which is publicly available on OSF (https://osf.io/z4j2g/). Headlines used across samples were mostly consistent: 28 of the 30 general COVID-19 headlines used for the India and U.S. samples were the same as those used in the survey administered in Kyrgyzstan. The two headlines that differed were replaced because they had become out of date or obsolete by the time this survey launched. Following the same approach as Study 1, vaccine-related true headlines were sourced from reputable news outlets (BBC World News, Reuters, Al Jazeera) and false headlines were sourced from fact-checking organizations (https://www.snopes.com/, https://www.factcheck.org/).

After completion of the headline-rating task, participants were debriefed. Debriefing consisted of reshowing the true headlines and stating that all others presented previously were not true. In addition, a detailed explanation was given for why three of the most commonly circulating pieces of misinformation were false. Upon completion, participants received a code that was entered for compensation through MTurk. Participants were paid USD $3.00 to complete this study. Most individuals completed the survey in 9 min and the median time was 17 min. Therefore, on average, participants were paid the equivalent of $10.88/hr. Ethics approval was obtained prior to data collection through Duke University’s IRB.

Design

Three IVs were included, one between subjects and two within subjects. The between-subjects IV was the accuracy nudge: whether participants were asked about their sharing intentions unprompted (0 = control condition) or were first asked to pause and think about accuracy (1 = accuracy nudge). The first within-subjects IV was headline veracity, with two levels: true versus false information. The second within-subjects IV was information type, with two levels: general COVID-19 information and COVID-19 vaccine information. The dependent variable was sharing intentions, that is, how willing participants were to share the information online.

Measures

Headline-Rating Task

Following our preregistration, the survey for Study 2 included 30 (15 true, 15 false) general COVID-19 headlines and an additional 30 (15 true, 15 false) headlines related to the COVID-19 vaccine. Headlines were presented in a random order; however, one general headline was inadvertently left out of the randomization programming and was not presented to participants. Participants were asked to indicate their likelihood of sharing each headline on social media. As in Study 1, propensity to share was calculated by averaging ratings for each type of headline: false COVID-19 general, false COVID-19 vaccine, true COVID-19 general, and true COVID-19 vaccine.

Demographics

Demographic information collected in Study 1 with the Kyrgyz sample (e.g., gender, education level, age) was also collected in Study 2. In addition, participants in Study 2 were asked about their location (India or United States) and native language. Those who responded as living in India were asked which region of India they lived in. Those living in the United States were asked about their race and ethnicity.

Table 1 shows differences between surveys administered across the two studies and the three countries.

Table 1
Differences Between Surveys Across Samples

Study | Country | Language of survey | More than one language offered? | COVID-19 general headlines | COVID-19 vaccine headlines
1 | Kyrgyzstan | Kyrgyz or Russian | Y | Y | N
2 | India | English | N | Y | Y
2 | United States | English | N | Y | Y

Note. Y = yes; N = no.

Results

To test whether the accuracy nudge decreased willingness to share false information (Hypothesis 1) and increased willingness to share true information (Hypothesis 2) across countries, we replicated Roozenbeek et al.’s (2021) approach and conducted both traditional and Bayesian one-tailed t tests. Roozenbeek and colleagues did not preregister the application of a correction to control for family-wise error rate. Therefore, in addition to conducting traditional (uncorrected) t tests, they also conducted Bayesian t tests, which are more conservative and largely unaffected by multiple comparisons (see Gelman et al., 2012). To ensure rigorous replication, we used both analyses in our study. For the Kyrgyzstan sample, all analyses were performed once to test the effect of the accuracy nudge on sharing information related to COVID-19 generally. For the India and U.S. samples, analyses were performed twice, once to test the effect related to sharing COVID-19 information generally and a second time to test the effect related to sharing information about the COVID-19 vaccine. Descriptive statistics and findings for each country are also described in detail below.
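As an illustration of this analytic approach, the sketch below runs a one-tailed traditional t test, a Cohen's d, and a Bayesian t test on simulated data using SciPy and Pingouin. It is not the authors' analysis code; the variable names and data are hypothetical, and recent library versions are assumed.

```python
# A minimal sketch (not the authors' scripts) of the one-tailed traditional and
# Bayesian independent-samples t tests described above; requires recent SciPy
# (>= 1.6) and Pingouin.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": rng.integers(0, 2, size=400),      # 0 = control, 1 = nudge
    "share_false": rng.normal(2.4, 1.0, size=400),  # composite for false headlines
})
control = df.loc[df["condition"] == 0, "share_false"]
nudge = df.loc[df["condition"] == 1, "share_false"]

# Hypothesis 1 predicts lower sharing of false headlines in the nudge group.
t, p = stats.ttest_ind(nudge, control, alternative="less")

# Cohen's d using a pooled standard deviation.
n1, n2 = len(nudge), len(control)
pooled_sd = np.sqrt(((n1 - 1) * nudge.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (control.mean() - nudge.mean()) / pooled_sd

# Bayesian t test; BF10 values near 1 indicate weak evidence either way.
bf10 = float(pg.ttest(nudge, control, alternative="less")["BF10"].iloc[0])

print(f"t = {t:.2f}, one-tailed p = {p:.3f}, d = {d:.2f}, BF10 = {bf10:.2f}")
```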

Kyrgyzstan

Table 2 presents descriptive and correlational statistics for the Kyrgyzstan sample. Random assignment appeared to have worked, as demographic variables are not significantly related to condition.

Table 2
Descriptive Statistics for Kyrgyzstan

Study variable | N | M | SD | 1 | 2 | 3 | 4 | 5 | 6
1. Age | 1,049 | 25.48 | 8.83
2. Gender (0 = male, 1 = female) | 978 | — | — | 0.04
3. Education | 967 | 14.36 | 3.76 | 0.29** | 0.10**
4. Language (0 = Kyrgyz, 1 = Russian) | 1,049 | — | — | −0.07* | 0.15** | 0.13**
5. Condition (0 = control, 1 = nudge) | 1,049 | 0.49 | 0.50 | −0.01 | 0.01 | 0.01 | 0.00
6. Intention to share true COVID-19 headlines | 1,049 | 3.27 | 1.07 | 0.01 | −0.01 | −0.04 | −0.18** | −0.02
7. Intention to share false COVID-19 headlines | 1,049 | 2.42 | 1.03 | 0.05 | 0.02 | −0.11** | −0.22** | −0.04 | 0.72**

* Correlation is significant at the .05 level (two-tailed). ** Correlation is significant at the .01 level (two-tailed).

Sharing False Information (Hypothesis 1a)

Prompting individuals in Kyrgyzstan to think about accuracy did not significantly affect their intentions to share false information related to COVID-19. Those who were prompted to think about accuracy reported being slightly less willing to share false information (M = 2.37, SD = .97) than those who were not prompted to think about accuracy (M = 2.46, SD = 1.09); however, this difference was not significant, t(1,047) = 1.32, one-tailed p = .09, d = .09, 95% CI [−.04, .21]. Similarly, a Bayesian t test revealed a Bayes factor (BF10) of .30 (M = .09, 95% CI [.01, .20], error percentage = .01), which is weak evidence in support of the null hypothesis and against the focal hypothesis that those in the treatment group would share less false information than those in the control group (see van Doorn et al., 2021). In sum, Hypothesis 1a was not supported in Kyrgyzstan (see Figure 1, Table 3).

Figure 1

Average Sharing Intentions Across Condition and Information Type for Each Country

Table 3
Summary of Findings Compared to Results Reported by Pennycook et al. (2020) and Roozenbeek et al. (2021)

Hypotheses | Pennycook et al. (2020) | Roozenbeek et al. (2021) | Kyrgyzstan | India | United States | Overall
Hypothesis 1a, b: Prompting people to think about accuracy decreases the likelihood that they will be willing to share false information about …
a. COVID-19 | N | Y | N | Y | N | M
b. The COVID-19 vaccine | — | — | — | N | Y | M
Hypothesis 2a, b: Prompting people to think about accuracy increases the likelihood that they will be willing to share true information about …
a. COVID-19 | Y | N | N | N | N | N
b. The COVID-19 vaccine | — | — | — | N | N | N

Note. Y = yes: hypothesis supported; N = no: hypothesis unsupported; M = mixed results; — = not tested. The Pennycook et al. (2020) and Roozenbeek et al. (2021) columns summarize the prior U.S. studies; the Kyrgyzstan, India, and United States columns summarize the present study; the Overall column gives the conclusion drawn across all studies.

Sharing True Information (Hypothesis 2a)

Prompting individuals in Kyrgyzstan to think about accuracy also did not significantly increase their willingness to share true COVID-19-related information. Those who were prompted to think about accuracy did not significantly differ in their willingness to share true information (M = 3.25, SD = 1.03) from those who were not prompted to think about accuracy (M = 3.30, SD = 1.10), t(1,047) = 0.76, one-tailed p = .03, d = .05, 95% CI [−.08, .18]. Similarly, a Bayesian t test provided strong support against Hypothesis 2a (BF10 = .04, M = .03, 95% CI [−.11, −.001], error percentage = .001). In sum, we did not find support for Hypothesis 2a in Kyrgyzstan.

India

Table 4 presents descriptive statistics for participants living in India. Random assignment appeared to have worked as intended, as the accuracy nudge condition was not significantly related to any individual demographics. Notably, the accuracy nudge is significantly and negatively related to intentions to share general COVID-19 misinformation (r = −.08, see Table 4). We unpack this more below.

Table 4
Descriptive Statistics for India

Study variable | N | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Age | 703 | 34.32 | 8.28
2. Gender (0 = male, 1 = female) | 700 | — | — | −0.03
3. Education | 703 | 16.09 | 3.23 | 0.02 | −0.02
4. Condition (0 = control, 1 = nudge) | 703 | — | — | −0.02 | 0.01 | −0.03
5. Intention to share true COVID-19 headlines | 703 | 4.09 | 0.78 | −0.08* | 0.09* | −0.07 | −0.07 | 0.21**
6. Intention to share false COVID-19 headlines | 703 | 3.36 | 1.09 | −0.16** | 0.17** | −0.15** | −0.08* | 0.14**
7. Intention to share true COVID-19 vaccine headlines | 703 | 4.13 | 0.81 | −0.10** | 0.09* | −0.10** | −0.05 | 0.19** | 0.59**
8. Intention to share false COVID-19 vaccine headlines | 703 | 3.40 | 1.07 | −0.22** | 0.20** | −0.15** | −0.07 | 0.14** | 0.90** | 0.58**

* Correlation is significant at the .05 level (two-tailed). ** Correlation is significant at the .01 level (two-tailed).

Sharing False Information (Hypothesis 1a, b)

In India, prompting individuals to think about accuracy did reduce their willingness to share false information about COVID-19. Those who were prompted to think about accuracy were significantly less inclined to share false information about COVID-19 (M = 3.28, SD = 1.13) than those who were not prompted to think about accuracy (M = 3.45, SD = 1.03), t(701) = 2.03, one-tailed p = .02, d = .16, 95% CI [.01, .33]. A Bayesian t test also provided weak support for Hypothesis 1a (BF10 = 1.22, M = .15, 95% CI [.03, .30], error percentage = .003).

Prompting individuals in India to think about accuracy also reduced their willingness to share false information about the COVID-19 vaccine. Those prompted to think about accuracy were significantly less inclined to share false information about vaccines (M = 3.33, SD = 1.11) than those who were not prompted to think about accuracy (M = 3.47, SD = 1.03), t(701) = 1.78, one-tailed p = .04, d = .13, 95% CI [−.01, .30]. However, a Bayesian t test provided support against Hypothesis 1b (BF10 = .76, M = .14, 95% CI [.02, .28], error percentage = .002). Because Bayesian estimates supersede traditional t tests (see Kruschke, 2013), we conclude that Hypothesis 1b was not supported in our India sample.

In summary, we found support for Hypothesis 1a, but not Hypothesis 1b in India (see Table 3). The accuracy nudge reduced the likelihood that individuals would be willing to share misinformation about COVID-19 generally, but not the COVID-19 vaccine specifically (see Figure 1).

Sharing True Information (Hypothesis 2a, b)

Prompting individuals in India to think about accuracy did not significantly increase their willingness to share true information, regardless of content. Those who were not prompted to think about accuracy (M = 4.14, SD = .72) and those who were prompted (M = 4.03, SD = .84) did not significantly differ in their willingness to share true information about COVID-19, t(701) = 1.76, one-tailed p = .46, d = .14, 95% CI [−.01, .22]. Similarly, a Bayesian t test provided strong support against Hypothesis 2a (BF10 = .03, M = −.02, 95% CI [−.10, −.001], error percentage = 1.10 × 10−4). Therefore, we did not find support for Hypothesis 2a in India. Additionally, those prompted to think about accuracy (M = 4.08, SD = .87) did not significantly differ from those who were not prompted to think about accuracy (M = 4.17, SD = .75) in terms of their willingness to share true information about vaccines, t(701) = 1.44, one-tailed p = .43, d = .11, 95% CI [−.03, .21]. Similarly, a Bayesian t test provided strong support against Hypothesis 2b (BF10 = .04, M = −.03, 95% CI [−.11, −.001], error percentage = 9.28 × 10−5). In conclusion, and aligned with the findings in Kyrgyzstan and those of Roozenbeek et al. (2021; see Table 3), we did not find support for Hypothesis 2a, b in India.

United States

Table 5 presents descriptive statistics for the U.S. sample. Random assignment appeared to have worked: the accuracy nudge condition did not significantly correlate with any demographic variables. In contrast to the pattern in India, the nudge condition appears to be negatively related to intentions to share COVID-19 vaccine misinformation (r = −.07; see Table 5).

Table 5
Descriptive Statistics for the United States

Study variable | N | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Age | 829 | 39.53 | 11.48
2. Gender (0 = male, 1 = female) | 824 | — | — | 0.19**
3. Education | 829 | 15.26 | 2.43 | 0.03 | 0.00
4. Condition (0 = control, 1 = nudge) | 829 | — | — | 0.05 | 0.01 | −0.02
5. Intention to share true COVID-19 headlines | 829 | 3.05 | 1.23 | −0.08* | −0.09* | 0.02 | −0.01 | 0.16**
6. Intention to share false COVID-19 headlines | 829 | 1.97 | 1.16 | −0.11** | −0.09** | −0.06 | −0.07 | 0.11**
7. Intention to share true COVID-19 vaccine headlines | 829 | 2.94 | 1.26 | −0.07** | −0.10* | 0.05 | 0.00 | 0.18** | 0.53**
8. Intention to share false COVID-19 vaccine headlines | 829 | 2.07 | 1.22 | −0.10** | −0.09** | −0.08* | −0.07* | 0.13** | 0.95** | 0.51**

* Correlation is significant at the .05 level (two-tailed). ** Correlation is significant at the .01 level (two-tailed).

Sharing False Information (Hypothesis 1a, b)

Like Roozenbeek et al. (2021), we found that prompting individuals in the United States to think about accuracy reduced their willingness to share misinformation about COVID-19. Those who were prompted to think about accuracy were significantly less willing to share false information about COVID-19 (M = 1.90, SD = 1.14) than those who were not prompted to think about accuracy (M = 2.05, SD = 1.19), t(827) = 1.93, one-tailed p = .03, d = .13, 95% CI [−.002, .315]. However, a Bayesian t test revealed weak support against Hypothesis 1a (BF10 = .95, M = .13, 95% CI [.02, .27], error percentage = .002). Therefore, we conclude that Hypothesis 1a was not supported in our U.S. sample, as Bayesian estimates supersede traditional t tests (see Kruschke, 2013).

Prompting individuals to think about accuracy also reduced their willingness to share false information about the COVID-19 vaccine. Those prompted to think about accuracy were less willing to share false information about vaccines (M = 1.99, SD = 1.20) than those who were not prompted to think about accuracy (M = 2.16, SD = 1.23), t(827) = 2.06, one-tailed p = .02, d = .14, 95% CI [.009, .341]. Similarly, a Bayesian t test provided weak support for Hypothesis 1b (BF10 = 1.23, M = .14, 95% CI [.02, .28], error percentage = .004).

Contrary to patterns found in India, we found support for Hypothesis 1b, but not Hypothesis 1a in the United States (see Table 3). The accuracy nudge reduced the likelihood that individuals would be willing to share misinformation about the COVID-19 vaccine, but not COVID-19 generally (see Figure 1).

Sharing True Information (Hypothesis 2a, b)

We found no evidence in the United States that prompting individuals to think about accuracy increased their willingness to share true information, regardless of content. An independent-samples t test showed that those who were prompted to think about accuracy (M = 3.04, SD = 1.24) were not significantly more willing to share true information about COVID-19 than those who were not prompted to think about accuracy (M = 3.07, SD = 1.20), t(827) = 0.36, one-tailed p = .14, d = .02, 95% CI [−.137, .198]. Similarly, a Bayesian t test provided strong support against Hypothesis 2a (BF10 = .06, M = .04, 95% CI [−.14, −.002], error percentage = .003). Compared to those who were not prompted to think about accuracy (M = 2.95, SD = 1.24), those prompted to think about accuracy were also not more willing to share true information about vaccines (M = 2.94, SD = 1.28), t(827) = 0.04, one-tailed p = .97, d = .008, 95% CI [−.167, .175]. Similarly, a Bayesian t test provided strong support against Hypothesis 2b (BF10 = .08, M = −.05, 95% CI [−.15, −.002], error percentage = .03). Consistent with findings in Kyrgyzstan and India, we did not find support for Hypothesis 2a, b in the United States (see Table 3).

Discussion

The spread of misinformation on social media is problematic, particularly during a global pandemic when acting on the wrong information can have grave consequences. Shortly after COVID-19 was officially declared a pandemic, Pennycook et al. (2020) conducted a study to better understand how to slow the spread of COVID-19-related misinformation online. They found that nudging people to think about accuracy (i.e., an accuracy nudge) increased the likelihood people would share true information online. Almost a year later, Roozenbeek et al. (2021) published an independent replication study showing the accuracy nudge decreased the likelihood people would share false information online. The present study expands upon this literature by exploring whether (and which) findings from these two studies replicate inside and outside of the United States and across different types of misinformation.

Our hypotheses attempted to replicate Roozenbeek et al.’s (2021) findings. Roozenbeek et al. tested whether prompting people to think about accuracy reduced their willingness to share misinformation and/or increased their willingness to share true information related to COVID-19 generally. They found the accuracy nudge decreased intentions to share misinformation but did not significantly increase intentions to share true information. Our findings are in partial agreement with Roozenbeek and colleagues. The accuracy nudge did not decrease willingness to share false general COVID-19 information in Kyrgyzstan or the United States. In India, the accuracy nudge did decrease willingness to share general COVID-19 misinformation. However, in full agreement with Roozenbeek and colleagues, we found strong evidence suggesting that the accuracy nudge does not significantly increase the spread of true information in any of the three countries.

Roozenbeek et al. (2021) speculate that they may have found weaker results than Pennycook et al. (2020) because their replication was conducted at a later stage in the COVID-19 pandemic, and people may have become more attuned to COVID-19 misinformation over time. Therefore, we examined intentions to share both general COVID-19 information and newer vaccine-related information in India and the United States to see whether a different pattern of results emerged for older versus newer COVID-19-related information. As can be seen in Table 3, findings were mostly consistent across general COVID-19 and COVID-19 vaccine information for Hypotheses 1 and 2 with a couple of notable exceptions, described next.

The accuracy nudge's effectiveness in reducing the spread of misinformation (Hypothesis 1) appeared to depend on location and information type. In India, the nudge decreased willingness to share false general COVID-19 information but did not decrease willingness to share false vaccine-related information. Interestingly, the opposite pattern emerged in the United States, where the nudge decreased willingness to share false information related to the COVID-19 vaccine but not false information related to COVID-19 generally (see Figure 1).

There are a few possible explanations for this pattern of findings. The nudge's effectiveness in reducing the spread of newer (i.e., vaccine) misinformation in the United States is consistent with Roozenbeek et al.'s (2021) speculation that the nudge may become less effective as individuals grow more attuned to COVID-19 misinformation over time. However, this does not explain the opposite pattern of findings in India. A more likely explanation is that the accuracy nudge has a very small effect on reducing the spread of misinformation. Why do we believe this pattern reflects a small effect? Our reasoning lies in the mixed, and sometimes contradictory, findings.

In Roozenbeek et al. (2021), both traditional and Bayesian t tests indicated that the nudge reduced willingness to share misinformation; in the present study, however, we found mixed results. While the traditional t tests consistently supported Hypothesis 1a, b, the Bayesian t tests did not. When findings from the two tests differed, we followed best practices (Kruschke, 2013) and drew conclusions from the Bayesian tests only. Importantly, the evidence from the Bayesian t tests was very weak. Bayes factors are commonly interpreted along a scale ranging from about .03 to 30, with values less than 1 (.03–.99) indicating evidence for the null hypothesis and values greater than 1 (1.01–30) indicating support for the alternative hypothesis (van Doorn et al., 2021). A value of exactly 1 indicates equal support for the null and alternative hypotheses; therefore, the closer the Bayes factor is to 1, the weaker the evidence (BF10 = .33–3 is typically considered weak). All Bayes factors calculated for the present study indicated very weak evidence related to Hypothesis 1a, b (BF10 = .30–1.23). With significant findings from the traditional t tests and weak evidence from the Bayesian t tests, we conclude that the nudge may decrease the spread of misinformation but that its impact is likely very small. On balance, findings across accuracy nudge studies (i.e., Pennycook et al., 2020; Roozenbeek et al., 2021; and the present study) mostly agree that accuracy nudges are more likely to decrease intentions to share false information than to increase intentions to share true information.
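To make this interpretation rule explicit, the small helper below encodes the thresholds described above; it is an illustration only, not part of the study's analyses.

```python
# A sketch of the Bayes factor interpretation rule described above: BF10 below 1
# favors the null, above 1 favors the alternative, and values between roughly
# .33 and 3 are treated as weak evidence (van Doorn et al., 2021).
def interpret_bf10(bf10: float) -> str:
    if 0.33 <= bf10 <= 3:
        return "weak evidence"
    return "favors the alternative" if bf10 > 1 else "favors the null"

for bf in (0.03, 0.30, 0.95, 1.23, 30):
    print(f"BF10 = {bf}: {interpret_bf10(bf)}")
```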

Limitations and Future Directions

Although this study provides important contributions to the literature, several limitations are worth noting. As in Pennycook et al. (2020), participants were recruited either through social media advertising (Kyrgyzstan) or via MTurk (India, United States), and these samples may not be nationally representative of their respective populations. This is especially true for our Kyrgyzstan sample, which differed from our other two samples in two important ways. First, our Kyrgyzstan sample was relatively young and mostly women. Research shows that men are more likely to share information online than women (Goyanes & Lavin, 2018; Krasnova et al., 2017) and that older adults (65 years+) are more likely to share "fake news" than younger adults (Guess et al., 2019). Second, people recruited from India and the United States were compensated for their participation, whereas people from Kyrgyzstan participated voluntarily, which could contribute to sampling bias.

Because the present study is the first to test whether the effectiveness of the accuracy nudge generalizes outside the United States, many of the original headlines needed to be replaced to ensure they were culturally relevant. This also allowed us to respond to Pennycook et al.'s (2020) call for subsequent research to test the generalizability of their findings to other headlines and types of misinformation. While we followed the procedures outlined in Pennycook et al. (2020) to replace these headlines, we made some deviations that could have affected our findings. For example, the news sources used in previous studies were largely American and U.S.-centric (e.g., CNN, Fox News), which could have posed a confound in our Kyrgyzstan and India samples. In favor of consistency and understandability across samples, we removed source information and lede sentences for all headlines. The presence of these source attributions may have influenced people's sharing intentions in previous studies in ways that could not be captured in the present study.

To sample from Kyrgyzstan, a multilingual and lower middle-income country, the survey needed to be translated into local languages (i.e., Kyrgyz and Russian). While the reliability of the measures used in this study was within acceptable ranges, culture has been shown to influence many measures of psychological constructs (e.g., Laajaj et al., 2019; McLarnon & Romero, 2020; Riordan & Vandenberg, 1994). More research is needed to determine which psychological measures are most appropriate to administer in Kyrgyzstan and other Central Asian countries. Relatedly, some participants may have taken the survey in their second, third, or even fourth language. While Study 1 participants had the option to take the survey in Kyrgyz or Russian, all participants in Study 2 took the survey in English. For Study 2, participants were required to be fluent in English, and almost all participants said they would have taken the survey in English if given the option. However, future research should investigate whether accuracy nudges have a greater effect on discernment when presented in participants' first language.

Finally, while Pennycook et al. (2020) concluded that the accuracy nudge reduced the spread of misinformation, the nudge's effect on discernment in their study was driven by an increase in willingness to share true information rather than a reduction in misinformation sharing. Roozenbeek et al. (2021) were clearer in their testing methods (using t tests) and found that the accuracy nudge improved discernment by reducing the spread of misinformation. Building on Roozenbeek et al.'s approach, we used t tests (instead of regressions) to better understand whether and how the accuracy nudge improves discernment. Future researchers should clarify the direction of their hypothesized relationships and consider alternative analyses to test those relationships.

Implications for Research and Practice

The present study narrows the gap between research and practice in a few important ways. First, this study is an example of a scientist–practitioner collaboration between UNICEF and academic collaborators. UNICEF has strong governmental partnerships across more than 190 countries and territories, which allows research findings to directly shape policy and publicly funded interventions. For example, the United Nations launched a “pledge to pause” campaign (see https://www.takecarebeforeyoushare.org/) which consisted of videos, graphics, and colorful gifs encouraging people to consider the accuracy of information before sharing it online; findings from this research will be used to inform the scalability of such campaigns to lower middle-income countries.

Given the mixed results and likely small effect, other interventions (e.g., reputation nudges, priming individuals’ identity around being truthful) should be explored and tested in the field. More investment in field experiments is needed, which requires platforms that allow for experimentation coupled with research know-how to design experiments to test different kinds of nudges. This is often done in the private sector (e.g., A/B testing in marketing terms), but is not common practice in the public sector. Such experimentation could be of great value to governmental and public institutions working to reduce the spread of misinformation online. For example, future research could test the effectiveness of asking “are you sure?” after an individual clicks “share” on a social media post. The effectiveness of relating one’s sharing behavior to their identity—for example, sending people a daily or weekly feedback report on their sharing behavior—could also be tested for its effect on misinformation sharing.

Findings also have implications for vaccine hesitancy. Our findings suggest that the accuracy nudge may be effective in reducing the spread of vaccine-related misinformation and improving individuals’ propensity to discern true versus false vaccine-related information before sharing it online in the United States. Because this study used headlines related to the COVID-19 vaccines specifically, future research should test whether findings extend to other types of vaccines (e.g., polio) and domains (e.g., environmental issues/global warming). Findings from such research could be used to inform communication around the distribution of vaccines even after the COVID-19 pandemic is over.

