
AI Journalists and Reduction of Perceived Hostile Media Bias: Replication and Extension Considering News Organization Cues

Volume 3, Issue 3: Autumn 2022. DOI: 10.1037/tmb0000083

Published on Sep 21, 2022


As news organizations struggle with issues of public distrust, artificially intelligent (AI) journalists may offer a means to reduce perceptions of hostile media bias through activation of the machine heuristic—a common mental shortcut by which audiences perceive a machine as objective, systematic, and accurate. This report details the results of two experiments (n = 235 and 279, respectively, U.S. adults) replicating the authors’ previous work. In line with that previous work, the present studies found additional support for the argument that AI journalists trigger machine-heuristic evaluations that, in turn, reduce perceptions of hostile media bias. Extending that past work, the present studies also indicate that the bias-mitigation process (if AI, then machine-heuristic activation, therefore perceived bias reduction) was moderated by source/self-ideological incongruity—though differently across coverage of two issues (abortion legalization and COVID-19 vaccine mandates).

Keywords: artificial intelligence, hostile media bias, machine heuristic, defensive processing, value-relevant involvement

Disclosures: The authors have no known conflict of interest to disclose.

Open Science Disclosures: The data are available at The experimental materials are available at The preregistered design and analysis plan is accessible at

Correspondence concerning this article should be addressed to Joshua Cloudy, College of Media and Communication, Texas Tech University, P.O. Box 43082, Lubbock, TX 79409, United States. Email: [email protected].

Interactive content: See Figure 1 and Results sections.

News organizations are increasingly integrating artificial intelligence (AI) technologies into newsroom practices (Ali & Hassoun, 2019), deploying them to write stories on a range of topics from sports and finance to politics (Tatalovic, 2018). At present, “only a minority of pioneering organizations” utilize AI technologies for journalistic purposes and “these algorithms are still not fully autonomic, or present truly AI capabilities, in the sense of expressing a transformative creativity” (Montal & Reich, 2017, p. 843). However, while these technologies are primarily used to generate stories from statistics or reports (van Dalen, 2012), the creative capabilities of these technologies are progressing steadily (de Vries, 2020). They “increasingly engage in traditionally human creative capacities” (Singh et al., 2022, p. 47), and recent research has explored the potential of these technologies to affect news audiences’ perceptions of news (Liu & Wei, 2018; Tandoc et al., 2020; Waddell, 2018). To prepare for this broadening of AI’s news coverage capacities, it is prudent to consider now the potential impacts on news processes and effects with respect to complex social issues.

In our past work (Cloudy et al., 2021a), we investigated the potential for AI journalists to trigger mental shortcuts that could mitigate problematic perceptions of bias in news media—the hostile media effect. The hostile media effect is a perceptual effect in which opposing partisans consume an identical news story, yet both perceive the story as biased against their side (Vallone et al., 1985). This tendency for partisans to perceive a hostile media bias—that is, to perceive that the media is biased in a hostile manner against one’s side—occurs even in the face of objective facts (Reid, 2012). This perceived bias presents severe challenges for media (Feldman, 2017; Perloff, 2015) and for informed democracies more broadly (Tsfati & Cohen, 2005). In that past work, we found that AI journalists activate the machine heuristic (a common mental shortcut that assumes machine sources are objective and systematic; Sundar, 2008) which, in turn, reduces perceptions of hostile media bias; this reduction becomes more extreme as the strength of one’s issue attitude increases (Cloudy et al., 2021a).

Importantly, a limitation of that work is that the stimuli accounted for cues from the news reporter but not for the potential influence of news organization cues which could trigger other heuristics. Thus, in the present studies, we replicated those procedures across two issue topics to test for potential message effects and extend that work by testing the impact of ideological incongruence between the AI’s host news organization and the audience member’s own political ideology. Findings suggest that the indirect relationship between AI journalist and perceived hostile media bias does vary, which we argue is a result of differences in the topics’ value relevance. This investigation largely replicates the past findings (Cloudy et al., 2021a)—that AI journalists cue the machine heuristic, which reduces perceived hostile media bias—with audience/source ideological incongruity influencing the process differentially across the two topics.

Review of Literature

The mainstream media is facing a crisis of trust that presents potentially serious “threats to our democratic politics and social cohesion” (Lee & Hosam, 2020, p. 1,016). While distrust in the media may have recently reached “an unprecedented magnitude” (Dahlgren, 2018, p. 24), the perception among partisan audiences that the media are biased against their own positions is not new (Vallone et al., 1985). Partisans are individuals who hold unequivocal opinions about a particular issue (Schmitt et al., 2004)—that is, who take a definite stance, often pro or anti, on that issue. Even when viewing the exact same news content, partisans on both sides of an issue tend to perceive ostensibly neutral media content as hostile in its bias against their own side (Perloff, 2015), a phenomenon known broadly as the hostile media effect. The greater a partisan’s attitude extremity—that is, the more entrenched and deeply held their opinion on an issue is—the more likely they are to perceive hostile media bias (Gunther, 2017). The perception of hostile media bias has potentially serious consequences for journalism and democracy. Perceptions of hostile media bias can lead to feelings of indignation among partisans, serving as a bridge for public criticism of the media as unjust (Hwang et al., 2008). These perceptions of unfair treatment from the media decrease trust in democracy (Perloff, 2015) and increase partisans’ willingness to use violent forms of resistance (Tsfati & Cohen, 2005).

AI Journalism and the Machine Heuristic: A Summary of Cloudy et al. (2021a)

Despite ample research into hostile media bias, there is limited understanding of how to successfully mitigate this perceptual bias (Feldman, 2017). Past studies have focused on mitigation strategies aimed at increasing the media literacy of news consumers (e.g., Vraga & Tully, 2015). The emergence of AI journalists (algorithmic [demi-]agents that take information and turn it into news reports; Carlson, 2015) may offer a potential source-based solution to mitigating the perception of hostile media bias (Cloudy et al., 2021a). When presented with news information, news audiences often do not have the opportunity to fully process a story (Delgado et al., 2010), are not motivated to deeply engage the news content (Eveland, 2001), and/or do not have access to the underlying motivations of the journalists by which they may interpret the validity of the content (Voakes, 1997). So, instead, they often rely on heuristics (i.e., mental shortcuts often engaged automatically and intuitively) in making assessments of the journalist. With regard to people’s engagement with interactive technologies, broadly, evidence indicates that technological affordances of varied modalities, agency, interactivity, and navigability can cue a range of heuristics that influence credibility judgments (i.e., the Modality, Agency, Interactivity, and Navigability [MAIN] model; Sundar, 2008). AI journalists, specifically, offer particular agency-related cues—that is, the agentic source to which a news story is attributed is often a machine agent. Thus, AI journalists can promote hostile media bias-reductive heuristic judgments (Cloudy et al., 2021a) by tapping into audience perceptions of machines as inherently systematic, objective, and lacking the inherent biases of humans—a set of expectations known as the machine heuristic (Sundar, 2008).

The role that AI journalists may play in reducing perceived hostile media bias has been nascently explored with mixed results; these investigations have operationalized the machine heuristic as moderating the relationship between source and emotional involvement (Liu & Wei, 2018) or as moderating the relationship between source and perceived hostile media bias (Wang, 2021), or have failed to directly measure the machine heuristic (Waddell, 2019). More recently, we argued (Cloudy et al., 2021a) that heuristics influence judgment through a specific mediating (rather than moderating) pathway: “if source cue, then objectivity heuristic, therefore less-biased news” (cf. Bellur & Sundar, 2014, p. 3). Following, the machine heuristic should be considered a mediator between the AI journalist source cue and perceived hostile media bias: if AI, then machine heuristic evaluations, therefore reduced perceptions of hostile media bias. That study found support for this mediating function of the machine heuristic. Additionally, in investigating the moderating role of attitude extremity, we found no direct relationship between attitude extremity and perceived hostile media bias (as suggested in other literature; Hansen & Kim, 2011). Instead, we found that attitude extremity moderated the extent to which the machine heuristic reduced perceptions of hostile media bias, with higher attitude extremity accelerating reduction of perceived hostile media bias.

In approaching these studies’ replication goal, then, we propose the following hypotheses in line with that revised model: When presented with a news story ostensibly written by an AI journalist, partisans encounter source cues suggesting the machinic nature of the author. Cues that suggest the source is a machine should trigger machine-heuristic evaluations—shorthanded judgments that the source is objective, accurate, and systematic (Sundar, 2008). Thus (H1): An AI journalist will generate stronger machine-heuristic evaluations than will a human journalist.

Perceptions of the source as biased against one’s position can contribute to perceptions of hostile media bias even when the news content is focused on objective statistics (Reid, 2012). However, when presented with source cues that promote belief in the source as objective and free of bias, perceptions of hostile media bias may be reduced (Waddell, 2019). The machine heuristic is a mental shortcut indicating intuitively perceived objectivity, thus (H2): Stronger machine-heuristic evaluations will reduce perceived hostile media bias.

Partisans with strongly held attitudes are motivated to engage in defensive processing of information as a means of protecting their prior beliefs and/or their sense of self (Gunther, 2017). In anticipation of consuming potentially self-threatening information, partisans’ typical defensive response is to perceive higher levels of source bias to justify rejection of expected unfavorable information (Carpenter, 2019). However, when presented with a source that is perceived to be objective/accurate/systematic, partisans’ defensive response switches from preparing to justify the rejection of unfavorable information to preparing to assimilate favorable information (Cloudy et al., 2021a). That is, partisans’ processing of information is more malleable than nonpartisans’ because they are constantly seeking ways to maintain their preexisting beliefs (Gunther et al., 2012). When confronted with a human journalist, it is often difficult for audiences to judge the underlying motivations of the journalist (Tsfati & Cohen, 2005). To manage this uncertainty, partisans are motivated to preemptively process news coverage defensively, which increases perceptions of hostility as they are preparing to dismiss information that does not align with their prior beliefs (Tsfati & Cohen, 2013). However, when confronted with an AI journalist perceived as objective and accurate, partisans should shift to preparing to attend to information that is favorable. Thus (H3): Higher issue attitude extremity will intensify the extent to which machine-heuristic evaluations reduce perceptions of hostile media bias.

Disentangling Issue and Source Cues: Extending Cloudy et al. (2021a)

The grounding study for this investigation (Cloudy et al., 2021a) relied on stimuli that engaged only a single partisan topic (abortion) and only a single relatively neutral source organization (both journalist types were affiliated with USA Today). That initial study design, as we acknowledge, limits the ability to determine whether the observed dynamics may differ for other partisan topics and whether the news organization cues (independent of the individual journalist cues, as a distinct agency-cueing affordance) may influence the perceived hostile media bias-mitigation process. Thus, the present investigation extends that work by accounting for those potential dynamics.

It is possible that the initial findings may have been limited to the specific issue and its presentation (i.e., message effects; see Slater et al., 2015). In the source study (Cloudy et al., 2021a), we found evidence to suggest that machine heuristic activation by AI journalists should be topic-independent in that machines were seen as universally less vested in partisan issues than were humans across a range of issues (e.g., global warming, surveillance, gun control). However, it is possible that topic could still play a role in the machine heuristic’s ability to reduce perceived hostile media bias since a machine’s systematicity and accuracy may be seen as more or less relevant to some issues and attitude extremity may be more or less malleable in relation to some issues. Notably, past investigations into perceived hostile media bias have found both consistent (Gearhart et al., 2020) and inconsistent (Giner-Sorolla & Chaiken, 1994) perceptions of bias across different topics. Thus, we here replicate Cloudy et al. (2021a) in using the same long-standing partisan issue of abortion (see Carmines et al., 2010) and extend that work by testing the model with the more recently polarizing issue of COVID-19 vaccine mandates (Largent et al., 2020).

Additionally, the present studies seek to disentangle the impacts of different source cues by parceling out the potential impacts of the journalist’s ontological category cue (i.e., human vs. AI) from that of the journalist’s news organization (i.e., agencies that lean ideologically liberal, moderate, or conservative). These dynamics are critical to understand given the theorized dynamics of ego-protective processing (Tsfati & Cohen, 2013) that may be promoted by the ideological incongruence between an audience member and the media organization deploying an AI journalist. Some hypothesize that social networking services (SNS) encourage increased selective exposure to ideologically congruent news (e.g., Spohr, 2017). Indeed, partisan news consumers are selectively exposed to ideologically congruent news on SNS (Mukerjee & Yang, 2021) which may, in turn, increase the likelihood of ending up in filter bubbles and echo chambers (Cinelli et al., 2021). However, SNS often incidentally expose partisans to counter-attitudinal information (Weeks et al., 2017) because such platforms often serve as a hub for an individual’s ideologically heterogeneous network of family, friends, and acquaintances (Masip et al., 2018). Thus, SNS users are embedded in “egocentric publics” which are “constituted by individuals who treat their extended social networks as a forum for self-expression” (Wojcieszak & Rojas, 2011, pp. 493–494). These loosely connected networks likely lead to greater exposure to diverse and ideologically incongruent news (Rojas et al., 2016) despite overt attempts to select ideologically congruent news (Masip et al., 2020). In short—the contemporary news media landscape presents opportunities for audiences’ filter bubbles to be burst through incidental exposure, so it is critical to explore the role of source-audience ideological incongruity.

Partisans are often more attuned to source cues than nonpartisans (Gunther et al., 2012), especially when presented with content that is ostensibly neutral or balanced (Gunther et al., 2012). In particular, partisans are sensitive to cues that suggest a source is an ingroup or outgroup member (Reid, 2012). When presented with a news story from an ideologically incongruent media organization, partisans are likely to see that news as biased against their position; in turn, seeing that same news from an ideologically congruent media organization, they are likely to see it as biased in favor of their position (Yun et al., 2018). AI journalists, however, can reduce perceptions of news bias even when ostensibly deployed by perceptibly partisan news networks (Waddell, 2019). However, past investigations into this dynamic did not account for network/audience ideological (in)congruence.

Given that perceptions of the source as affiliated with an outgroup can increase perceptions of hostility or create those perceptions where they otherwise would not exist (Gunther et al., 2016), perceptions that an AI journalist was created by or is under the control of an outgroup (in this case, an ideologically incongruent news organization) may override expectations of source objectivity. It is known that when people assess their trust in a machine agent, they often consider the attributes of an absent-yet-conspicuous creator of that machine (Fogg & Tseng, 1999; cf. Sullins, 2006) such that the two are seen as inherently related, resulting in either halo effects (Nisbett & Wilson, 1977) or horn effects (Burton et al., 2015). That is, any positive evaluations of the creator (or in this case the deploying news organization) enhance those of the machine (here, the AI journalist), while any negative evaluations taint them. Thus, seeing a news organization as ideologically incongruent (i.e., an outgroup) likely marks the associated journalist as an outgroup member, resulting in a reduction in perceived objectivity, systematicity, and accuracy (i.e., reduced machine heuristic evaluations). Put differently, the outgroup source cues (i.e., the AI journalist being described as a bot for an ideologically incongruent news organization) will likely put the audience on high alert for potential bias (Gunther et al., 2016). This heightened awareness of potential source bias should override the heuristics derived from any machine cues because the AI journalist would be viewed in tandem with the incongruent, and therefore hostile, news organization. Following, we predict (H4): Stronger perceived ideological incongruence between the audience member and the media organization will suppress machine-heuristic evaluations.

The Present Studies

Based on the literature reviewed, we propose the following model (Figure 1) based on the logic that if the source is an AI journalist, then it will trigger machine-heuristic evaluations of source objectivity, therefore reducing perceptions of hostile media bias (H1–H2). This reduction of perceived hostile media bias will be more extreme as attitude extremity increases (H3). However, machine-heuristic evaluations will be dampened as ideological incongruence increases (H4). We test this model across two different samples, each of which encountered news on a different partisan topic.

Figure 1

Proposed Conceptual Model for AI Journalist-Cued Reduction of Hostile Media Bias Mediated by the Machine Heuristic
Note. AI = artificial intelligence.


This investigation was approached as a self-replication/extension—researchers reconducting the procedures of their own past study in ways that are operationally similar (see Schmidt, 2009) but that add new elements to the design such that the study may both examine similarities or differences with the original work (Kraus, 2015) and extend the prior results in theoretically relevant ways (Bonett, 2012). In this manner, we may be more confident that the results of the original study were not the result of Type I error (Hüffmeier et al., 2016), as well as consider the potential moderating effects of source incongruity—a potential that emerged in the original work (Cloudy et al., 2021a). In line with the original study logic and following guidance of Bellur and Sundar (2014), we manipulated the cue (human/machine), then measured engagement of the machine heuristic as a distinct variable, and then separately measured the subsequent biased processing.1

This study was approved by the institutional review board at Texas Tech University. All materials for both studies can be found at (Cloudy et al., 2021b), including the preregistered hypothesis and analysis plan, instrumentation, and experimental stimuli. Anonymized data sets, baseline data descriptives, and analysis output files for both studies are also available in that repository.


Method

Participants

Participants were recruited from Prolific panels and compensated U.S. $2.38, per Prolific’s standard hourly rate of $9.52 × an estimated 15 min. All participants were 18 years of age or older, U.S. residents, U.S. nationals, and fluent in English. Prolific’s sampling criteria were used to recruit conservatives and liberals. Sampling for the abortion issue (Study 1) occurred prior to sampling for the vaccine issue (Study 2), and those who participated in the former were ineligible to participate in the latter. For each issue, N = 300 participants were initially sampled (in line with the original study). To further ensure partisan samples, for the abortion issue, only those with a declared pro-life or pro-choice stance were recruited, and for the vaccine issue, only those who indicated they feel positively or negatively about COVID-19 vaccines were recruited. Sampling excluded those indicating a neutral position or who did not disclose a position (since perceptions of hostile media bias should not manifest for those with neutral stances).

The participants in the abortion-issue sample were aged 18–65 years (M = 28.57, SD = 10.10), 69.8% female and 30.2% male, 88.9% liberal and 11.1% conservative, and leaned heavily toward support for abortion (86.8% vs. 13.2% who were against). The participants in the vaccine-issue sample were aged 18–72 years (M = 27.34, SD = 9.24), 74.6% female and 25.4% male, 89.2% liberal and 10.8% conservative, and leaned heavily toward support for vaccines generally (96.4% vs. 3.6% against). The unbalanced sample is noted as a limitation of the present investigation.


Procedure

The following procedures were identical for both issue samples. Participants were taken to an online survey and randomly assigned to one of six conditions in a 2 (source: human or AI journalist) × 3 (news organization: ideologically liberal, moderate, or conservative) design. They were then shown a Facebook profile for the journalist displaying the journalist’s name and news organization affiliation and asked to complete the machine heuristic measure and attention checks. Then, participants were shown a news story ostensibly from the journalist, embedded into a Facebook post, after which they completed a measure of perceived hostile media bias and another attention check. Participants then answered questions about their issue stance and attitude extremity. Finally, they completed a manipulation check and answered a political ideology measure for seven news organizations and themselves. Importantly, this procedure follows the if AI, then machine heuristic evaluations, therefore reduced perceptions of hostile media bias time-order, allowing for the model’s first- and second-stage causal claims to be tested (i.e., they see only the source and report machine heuristic activation, and then see the story and assess bias).


Stimuli

We used the source study’s original stimuli (Cloudy et al., 2021a) for the journalist profiles and abortion content, then used them as a template to create stimuli for the vaccination-mandate topic and news organizations. To ensure that the source was clear to participants, they viewed a profile for the journalist that identified the name and organizational affiliation of the journalist (Figure 2). The human journalist’s name was presented as “Quinn Smith,” and the AI journalist was presented as “NewsBot.” Immediately following the journalist’s name was the news organization affiliation of the journalist. The fictitious news article ostensibly from the journalist was presented as part of a Facebook post from the journalist (Figure 3). The headline and image for both issue conditions were designed to indicate the issue in a neutral manner, in line with the source study’s design. The Facebook post was presented as if it had been posted recently and had achieved wide reach—a known condition for perceived hostile media bias to manifest (Gunther & Liebhart, 2006). The three news organizations (CNN, USA Today, and Fox) were selected as ideologically liberal, moderate, and conservative, respectively, as indicated by AllSides (2021). As in the original study (Cloudy et al., 2021a), participants did not read an actual news story.

Figure 2

Facebook Stimulus Profiles for Human and AI Journalist Conditions
Note. AI = artificial intelligence. Adapted with permission from “The Str(AI)ght Scoop: Artificial Intelligence Cues Reduce Perceptions of Hostile Media Bias,” by J. Cloudy, J. Banks, and N. D. Bowman, 2021, Digital Journalism, advance online publication. Copyright 2021 by Taylor & Francis Ltd.

Figure 3
Facebook Stimulus Posts for Human and AI Journalist Conditions for the Abortion and Vaccine-Mandate Topics


Measures

The machine heuristic, perceived hostile media bias, and attitude extremity measures are the same as in the source study; the assessment of the source’s ideological incongruence is a novel measure. Unless otherwise indicated, all measures were presented as 7-point semantic differentials.

Machine Heuristic

The machine heuristic was measured with seven items (Banks et al., 2021) that assessed participants’ impressions of the journalist as objective, ordered, efficient, accurate, unemotional, logical, and reliable. Notably, this measurement of the machine heuristic is based on an operationalization that somewhat conflates more general machine heuristic evaluation with source evaluation. This is a function of the strength and high accessibility of the machine heuristic (discussed in Cloudy et al., 2021a) that requires the heuristic be measured as it is anchored to a particular source. The measure was internally consistent for both the abortion data set, M = 4.17 (SD = 1.15, ω = .810) and the vaccine-mandate data set, M = 4.21 (SD = 1.16, ω = .826).

Hostile Media Bias

Perceived hostile media bias was measured using three items adapted from Gunther et al. (2016), which assessed the degree to which participants perceived the news content, evidence, and journalist together to be biased, measured from 1 = strongly biased against [abortion/COVID-19 vaccine mandates] to 7 = strongly biased in favor of [abortion/COVID-19 vaccine mandates]. For those reporting a pro-issue stance, their scores were recoded (reversing scores of 1–7 to 7–1) so that a score of 7 is representative of strongly perceived hostile bias against those positions. For those reporting an anti-issue stance, their scores were not recoded as a score of 7 already represents a strongly perceived hostile bias. For the abortion data set, perceived hostile media bias overall averaged M = 4.40 (SD = 1.28, ω = .913) and for the vaccine-mandate data set, M = 4.06 (SD = 1.17, ω = .892).
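The reverse-scoring rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the function name and values are ours.

```python
# Hypothetical sketch of the recoding rule: bias scores run 1-7, and
# pro-issue participants' scores are reversed so that 7 always means
# "strongly biased against my position."

def recode_hmb(score: float, stance: str) -> float:
    """Recode a 1-7 hostile-media-bias score relative to the
    participant's own stance ("pro" or "anti")."""
    # Reversing a 1-7 scale maps 1 -> 7, 2 -> 6, ..., 7 -> 1, i.e., 8 - x.
    return 8 - score if stance == "pro" else score

print(recode_hmb(2.0, "pro"))   # pro-issue 2 becomes 6 (perceived as hostile)
print(recode_hmb(6.0, "anti"))  # anti-issue scores are left as-is
```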

Attitude Extremity

Attitude extremity was measured using three items (Cho & Boster, 2005) capturing the extent to which participants viewed the issue as negative (positive), undesirable (desirable), and unfavorable (favorable). The 7-point semantic differential scale values were coded −3 to +3. In order to assess extremity of attitudes independent of attitude valence, the absolute value of the scores (0–3) was used in the analysis. For the abortion data set, attitude extremity overall averaged M = 2.52 (SD = .728, ω = .979) and for the vaccine-mandate data set, M = 2.42 (SD = .786, ω = .960).
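One plausible reading of this coding step is sketched below: average the three −3 to +3 items, then take the absolute value so that only extremity, not valence, remains. The function name and values are illustrative, not from the authors' materials.

```python
# Hypothetical sketch: three semantic-differential items coded -3..+3;
# extremity is the absolute value of their mean (0..3), discarding valence.
from statistics import mean

def attitude_extremity(items):
    """Return attitude extremity (0..3) from items coded -3..+3."""
    return abs(mean(items))

print(attitude_extremity([-3, -2, -3]))  # strongly anti -> high extremity
print(attitude_extremity([3, 2, 3]))     # strongly pro -> same extremity
```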

Source Incongruity

Every participant separately rated the perceived political ideologies of the three stimulus news organizations plus three additional distractor organizations (very liberal to very conservative); participants also rated their own political ideology on the same scale in a separate item. From those ratings, we calculated source incongruence by taking the absolute value of the difference between the participant’s ideology and the organization’s rated ideology, for only the organization viewed in their randomly assigned stimulus (|ideology_self − ideology_org|). For the abortion data set, source incongruity overall averaged M = 2.73 (SD = 1.88), and for the vaccine-mandate data set, M = 2.47 (SD = 1.83).
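The incongruity score reduces to an absolute difference on the shared ideology scale. A minimal sketch follows; the numeric endpoints (1 = very liberal to 7 = very conservative) are our assumption, as the text does not state the coding.

```python
def source_incongruity(self_ideology, org_ideology):
    """Absolute ideological distance |ideology_self - ideology_org| on a
    shared 7-point scale (assumed here as 1 = very liberal ..
    7 = very conservative; this numeric coding is an assumption)."""
    return abs(self_ideology - org_ideology)

print(source_incongruity(2, 6))  # liberal viewer, conservative outlet -> 4
print(source_incongruity(4, 4))  # perfectly congruent pairing -> 0
```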

Data Cleaning

For each study’s data set, from an initial N = 300, responses were culled if they did not pass attention checks (n = 9 abortion, n = 1 vaccines) and those responses were replaced. Additionally, those that did not pass the source-type manipulation check (n = 4 abortion, n = 7 vaccines—all in the human condition) were removed. Also, because perceived hostile media bias is understood to emerge only among those with partisan issue positions (Gunther, 2017), we removed all cases in which participants (despite our sampling only for those with specific positions) indicated that they had no opinion on the issue (n = 19 abortion, n = 6 vaccines) and those who indicated they were neutral in their political ideology (n = 7 abortion, n = 4 vaccines). This process resulted in N = 235 for the abortion-topic data set and N = 279 for the vaccine-topic data set.
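The exclusion sequence above amounts to a chain of filters. The sketch below is a hypothetical illustration; the record fields and values are invented stand-ins, not the actual data.

```python
# Each record stands in for one participant's responses (fields invented).
participants = [
    {"id": 1, "attention_pass": True, "manip_pass": True,
     "stance": "pro", "ideology": "liberal"},
    {"id": 2, "attention_pass": True, "manip_pass": False,
     "stance": "anti", "ideology": "conservative"},  # fails manipulation check
    {"id": 3, "attention_pass": True, "manip_pass": True,
     "stance": "none", "ideology": "liberal"},       # no issue opinion
    {"id": 4, "attention_pass": True, "manip_pass": True,
     "stance": "pro", "ideology": "neutral"},        # neutral ideology
]

# Keep only attentive, manipulation-passing partisans with a non-neutral
# political ideology, mirroring the culling steps described above.
retained = [
    p for p in participants
    if p["attention_pass"] and p["manip_pass"]
    and p["stance"] in ("pro", "anti") and p["ideology"] != "neutral"
]
print([p["id"] for p in retained])  # only participant 1 survives all filters
```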


Results

Proposed Model Test

As stated in the preregistered analysis plan (Cloudy et al., 2021b), we used Hayes’ PROCESS macro for SPSS, testing Model 21 for multiple moderated mediation with Stage 1 (W moderating the relationship between X and M) and Stage 2 (Z moderating the relationship between M and Y) moderated mediation on both data sets. Moderated mediation is a test of conditional indirect effects. That is, it is a test of a mediator’s ability to mediate only at certain levels of a moderator(s).
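For intuition, the conditional indirect effect in Model 21 is the product of a Stage 1 path moderated by W and a Stage 2 path moderated by Z. The sketch below uses Hayes' a/b path notation, but the coefficient values are entirely hypothetical; none come from the present studies.

```python
def conditional_indirect_effect(a1, a3, b1, b3, w, z):
    """Model 21 conditional indirect effect of X on Y through M:
    (a1 + a3*W) * (b1 + b3*Z), where W moderates the X -> M path and
    Z moderates the M -> Y path (Hayes, 2018)."""
    return (a1 + a3 * w) * (b1 + b3 * z)

# Hypothetical values: a negative Stage 2 slope (b3) makes the indirect
# effect more strongly negative as z (e.g., attitude extremity) increases.
print(conditional_indirect_effect(a1=0.6, a3=0.0, b1=-0.1, b3=-0.05, w=0, z=1))
print(conditional_indirect_effect(a1=0.6, a3=0.0, b1=-0.1, b3=-0.05, w=0, z=3))
```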

For variance in Model 21 to be interpreted, the confidence intervals around the index of moderated mediation should exclude zero. For both data sets, the index of moderated mediation did not exclude zero: abortion = −.050 (95% LLCI = −.125, ULCI = .003); vaccines = .001 (−.014, .016). As this criterion was not met, Model 21 is rejected for both data sets, and further interpretation of the model as specified is not warranted (see Hayes, 2018). Equivalence tests indicate that perceived hostile media bias, attitude extremity, and source incongruity did not differ by agent-type experimental condition in either data set (see online materials; Cloudy et al., 2021b).

Post Hoc Modifications

On reflection, at least one potential reason for the results above is that our proposed model overlooked an important relationship already supported in extant literature on hostile media bias: the influence of the source’s perceived ideological incongruity on perceived hostile media bias (rather than merely on machine heuristic activation, as modeled). News perceived to come from congruent sources is more likely to be viewed favorably whereas incongruent sources increase perceptions of hostile media bias (Yun et al., 2018), which is consistent with past work that has demonstrated an interaction between partisanship and source on perceptions of hostile media bias (Reid, 2012). Additionally, outgroup sources have been shown to increase perceptions of hostility even when news content was slanted in favor of the ingroup (Gunther et al., 2016). Indeed, in the present studies, the bivariate relationship between source incongruity and perceived hostile media bias was significant for both data sets: abortion: r = .419, p < .001; vaccines: r = .444, p < .001. However, there was no evidence of a statistically significant Stage 1 highest-order unconditional interaction for either the abortion, F(1, 231) = 3.63, p = .058, ΔR2 = .01, or vaccine, F(1, 275) = 0.13, p = .715, ΔR2 < .001, data set (H4 not supported). One way to account for the influence of source incongruity on perceived hostile media bias is to specify Stage 2 moderated mediation for the effect. Given this, we respecified the model to include source incongruity only as a Stage 2 moderator for both data sets (see Figure 4), as we found no evidence of a Stage 1 interaction in the original model proposed in our preregistration plan (see online materials; Cloudy et al., 2021b).

Figure 4

Post Hoc Model for AI Journalist-Cued Reduction of Hostile Media Bias Mediated by the Machine Heuristic
Note. AI = artificial intelligence.

Study 1: Abortion Data Set

The respecified model for the abortion data set was tested using Hayes’s (2018) PROCESS macro, Model 16. There was a main effect of agent type on machine heuristic evaluations, B = .60 (.314, .884). Additionally, the highest-order unconditional interaction for both moderators was statistically significant, F(2, 228) = 10.38, p < .001, ΔR2 = .07. The index of partial moderated mediation for source incongruity was significant, −.06 (−.127, −.014), as was the index of partial moderated mediation for attitude extremity, −.15 (−.301, −.015). Probing this interaction revealed significant conditional indirect effects when source incongruity was at the mean and attitude extremity was at the maximum, −.16 (−.300, −.044); when source incongruity was one standard deviation above the mean and attitude extremity was at the mean, −.20 (−.367, −.071); and when source incongruity was one standard deviation above the mean and attitude extremity was at the maximum, −.27 (−.453, −.121; see Figures 5 and 6).
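In a model like PROCESS Model 16, the mediator’s stage-2 slope is allowed to vary with both moderators, so the indirect effect at moderator values W and Z is a × (b1 + b2·W + b3·Z). The sketch below illustrates that arithmetic with made-up coefficients; the function name and all numbers are purely illustrative, not the fitted values reported above.

```python
# Conditional indirect effect in a two-moderator (stage-2) mediation model:
# indirect(W, Z) = a * (b1 + b2*W + b3*Z)
def conditional_indirect(a, b1, b2, b3, w, z):
    """Indirect effect of X on Y through M, evaluated at moderator values w, z."""
    return a * (b1 + b2 * w + b3 * z)

# a: agent type -> machine heuristic; b1..b3: stage-2 coefficients (hypothetical)
a, b1, b2, b3 = 0.60, -0.05, -0.10, -0.25

# Probe at illustrative moderator values (e.g., SD units of source
# incongruity w and attitude extremity z):
for w, z in [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]:
    print(f"w={w}, z={z}: indirect = {conditional_indirect(a, b1, b2, b3, w, z):.3f}")
```

With these hypothetical coefficients the indirect effect grows more negative as either moderator increases, mirroring the pattern reported for the abortion data set (stronger bias reduction at higher incongruity and extremity).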

Figure 5

Moderation Effect of Attitude Extremity on the Relationship Between Machine Heuristic and Hostile Media Bias for Abortion Data Set
Note. The top chart displays moderation effects at +1 and 0 SD and at the maximum of attitude extremity; the bottom chart displays moderation effects at all observed levels of the moderator, including the range of statistical significance (attitude extremity ≥ 2.19).

Figure 6

Moderation Effect of Source Incongruity on the Relationship Between Machine Heuristic and Hostile Media Bias for Abortion Data Set
Note. The top chart displays moderation effects at +1, 0, and −1 SD of source incongruity; the bottom chart displays moderation effects at all observed levels of the moderator, including the range of statistical significance (source incongruity ≥ −1.83 and 2.77).

Study 2: Vaccine-Mandate Data Set

The respecified model for the vaccine data set was tested using Hayes’s (2018) PROCESS macro, Model 16. There was a main effect of agent type on machine heuristic evaluations, B = .53 (.265, .799), and the highest-order unconditional interaction for both moderators was statistically significant, F(2, 272) = 5.08, p = .007, ΔR2 = .03. The index of partial moderated mediation for source incongruity was significant, −.05 (−.089, −.010), but the index of partial moderated mediation for attitude extremity was not, .02 (−.059, .098). Thus, it is appropriate to infer that source incongruity negatively moderates the indirect effect of machine heuristic evaluations on perceptions of hostile media bias independent of any moderation of the indirect effect by attitude extremity; however, no inference can be made that attitude extremity independently moderates the indirect effect. To probe the interaction of one moderator in a moderated mediation model with two moderators, a value must be chosen for the second moderator (Hayes, 2018). When source incongruity was at the mean, there was a significant effect at all levels of attitude extremity: one standard deviation below the mean, −.13 (−.234, −.030); at the mean, −.11 (−.193, −.036); and at the maximum, −.10 (−.192, −.015). The effect was also significant when source incongruity was one standard deviation above the mean, again at all levels of attitude extremity: one standard deviation below the mean, −.21 (−.357, −.080); at the mean, −.19 (−.317, −.080); and at the maximum, −.18 (−.311, −.065; see Figure 7).

Figure 7

Moderation Effect of Source Incongruity on the Relationship Between Machine Heuristic and Hostile Media Bias for Vaccine-Mandate Data Set
Note. The top chart displays moderation effects at +1, 0, and −1 SD of source incongruity; the bottom chart displays moderation effects at all observed levels of the moderator, including the range of statistical significance (source incongruity ≥ −1.87).

With respect to our initial hypotheses in relation to these post hoc models, in the abortion data set, an AI journalist generated higher machine heuristic evaluations compared to a human journalist (H1 supported). In turn, perceptions of hostile media bias were reduced (H2 supported), and this reduction intensified as attitude extremity and source incongruity increased (H3 supported). For the vaccine data set, an AI journalist generated higher machine heuristic evaluations compared to a human journalist (H1 supported), which in turn reduced perceptions of hostile media bias (H2 supported); this reduction was strongest at higher levels of source incongruity (a respecified relationship).


Discussion

The present studies largely replicated the findings from Cloudy et al. (2021a) for the issue of abortion (Study 1): An AI journalist did activate the machine heuristic, which, in turn, reduced perceptions of hostile media bias, especially among those with the strongest attitudes. For the issue of vaccine mandates (Study 2), an AI journalist also activated the machine heuristic, which in turn reduced perceptions of hostile media bias; however, there was no evidence that attitude extremity independently moderated this relationship. The present studies extended the original work by demonstrating that, for both abortion and vaccine mandates, the machine heuristic-induced reduction of perceived hostile media bias intensified as source incongruity increased. This investigation’s outcomes, then, generally continue to suggest that problematic perceived media bias among partisan audiences may be mitigated by deployment of AI journalists; however, further investigation is required into the exact dynamics of defensive processing, machine heuristic resistance, and ideological incongruity. These results are consistent with past work demonstrating AI journalists’ ability to reduce perceptions of bias (Waddell, 2019) and suggesting that this ability may be contingent on audiences possessing intuitive beliefs that machines are inherently objective and accurate (Liu & Wei, 2018; Wang, 2021). Importantly, some recent research suggests AI journalists may not reduce relative perceptions of hostile media bias when controlling for the machine heuristic (Jia & Liu, 2021). We would argue that this further implicates the machine heuristic as the mechanism explaining AI journalists’ ability to reduce perceptions of hostile media bias.

Machine Heuristic as a Hostile Media Bias Mitigator Under Conditions of Source Incongruity

We found that when source incongruity is low, or when people have low-strength attitudes, there is no impact of the machine heuristic on perceptions of hostile media bias. This is consistent with past evidence that strong attitudes are typically needed to generate perceptions of hostile media bias (Giner-Sorolla & Chaiken, 1994) and that news coming from an ingroup source is unlikely to be viewed as hostile (Gunther et al., 2016; Reid, 2012). Thus, the machine heuristic likely has no impact on perceptions of hostile media bias when source incongruity or attitude extremity is low because of a basement effect—an individual’s perception of hostility is already low and therefore has little room to vary. However, that past evidence would suggest as source incongruity increases, particularly among those with strong partisan attitudes, perceptions of hostile media bias should increase as well. Contrary to that logic, our analysis showed that as source incongruity and attitude extremity (for those in the abortion sample) increased, there was even greater reduction of perceived hostile media bias through the machine heuristic.

These findings are, however, consistent with our original argument (Cloudy et al., 2021a) that when partisans are confronted with a source they view as objective, they are motivated to attend to positive information as a form of ego defense. Typically, strong partisans are motivated to perceive media coverage as hostile in order to justify rejecting unfavorable information (Gunther, 2017). That is, when confronted with messages that may contradict their already established attitude, partisans engage in biased processing of the message to reduce potential cognitive dissonance (Carpenter, 2019); however, the presence of an AI journalist appears to shift their motivation from preparing to reject potentially unfavorable information in order to protect the self to preparing to assimilate favorable information to protect the self (see Gunther & Schmitt, 2004). This is a particularly notable result given that the presence of an outgroup source has been shown to “trigger a defensive toggle switch” such that “favorable slant went unnoticed and was judged no less hostile than the balanced” news coverage (Gunther et al., 2016, p. 15). This inability of favorable content to reduce perceptions of hostile media bias in the presence of an outgroup source suggests that source plays a particularly potent role in perceived hostile media bias. That an AI journalist seems to overcome, and even reverse, the expected effect of an outgroup source cue suggests the machine heuristic is a particularly powerful heuristic in news perceptions.

Considering Values and Identities as Machine Heuristic-Resistant

While these studies largely replicated the results from Cloudy et al. (2021a), we did not replicate the relationship between attitude extremity and perceptions of hostile media bias for the issue of vaccine mandates (Study 2). We suggest this topic-specific difference may be the result of abortion being a longstanding issue often tied to important belief systems, whereas the issue of COVID-19 vaccine mandates is relatively novel, such that people may not have developed deep attachments to their attitudes around this issue. For those who consider themselves pro-choice or pro-life, the issue of abortion is likely connected to deeply held values (see Johnson & Eagly, 1989) that are “central to and strongly connected with [their] social identities” (Killian & Wilcox, 2008, p. 569). Issues that are relevant to people’s value system, and, thus, their self-concept, are likely to be highly correlated with attitude extremity (Cho & Boster, 2005). COVID-19 vaccine mandates, on the other hand, may not be closely tied to important values or longstanding beliefs. Indeed, support for COVID-19 vaccine mandates is significantly lower than support for vaccine mandates more generally (Haeder, 2021).

While attitudes on both abortion (Carmines & Woods, 2002) and COVID-19-related issues (Collins et al., 2021) tend to split along the left–right political divide, we argue that the abortion divide reflects differences in issue-related values, whereas the COVID-19 vaccine-mandate divide does not. Conflicting attitudes on abortion are the result of conflicting value systems (Harris & Mills, 1985) and beliefs (Tamney et al., 1992). That abortion has become aligned with political identity reflects political sorting based on abortion attitudes (Adams, 1997). That is, abortion partisans are likely to move to the party that aligns with their attitudes on abortion because the issue of abortion is linked to their deeply held values (Killian & Wilcox, 2008). We argue that for COVID-19 vaccine mandates, sorting occurs in the opposite direction: people are aligning their beliefs on COVID-19-related issues with their political identity (see Conway et al., 2021).

COVID-19 quickly became politically polarized (Gadarian et al., 2021), which, in turn, likely increased the salience of political identity and the political divide surrounding COVID-19 beliefs. When this sort of divide is made salient in a policy conflict, political partisans are likely to shift their beliefs toward the perceived beliefs of their group (Bonomi et al., 2021). Indeed, partisans have embraced COVID-19-related beliefs and attitudes consistent with their political identity (Gadarian et al., 2021), and COVID-19-related policy issues have provided them with the opportunity to symbolically perform their political identity as a means of social differentiation (Kenworthy et al., 2021). This sort of performativity is consistent with impression-relevant involvement, which differs from value-relevant involvement in that it is not focused on preserving “one’s belief systems”; rather, it is focused on conforming to the “implicit and explicit expectations of others” (Cho & Boster, 2005, p. 240). That is, people’s attitudes are motivated not by their values or core beliefs about an issue, but by a desire to “maintain positive relationships with other people” who are important to them (Johnson & Eagly, 1989, p. 310). Therefore, these attitudes are not likely to be as strong, as they are not rooted firmly in one’s values but instead must remain flexible and responsive to potential changes in the socially desirable attitude of one’s reference group. As such, impression-relevant involvement is not related to attitude extremity (Cho & Boster, 2005). Future research should explore the roles of held values and identities in the extent to which the machine heuristic may (not) mitigate perceptions of hostile media bias.

Limitations and Future Research

These studies are subject to the standard limitations of online survey methodologies, most notably a reliance on self-report measures and the assumption that participants paid attention to the stimuli. As in the original study, attention and manipulation checks were included throughout to help ensure that participants attended to and understood the content. Additionally, the samples leaned largely female and liberal. Thus, caution must be used when interpreting these findings, and future research should explore these dynamics with more balanced samples, particularly along gender and ideological lines.

As replications, these studies feature many of the same limitations detailed in the original study (Cloudy et al., 2021a). To confidently establish the cause-and-effect relationships proposed in these studies, some limitations were maintained so as to minimize the impact of external factors that may operate differently, or be present, in the real world. First, to ensure that participants encountered the source cue first, the Facebook profile for the source was shown before the Facebook post (i.e., on separate pages of the survey), which may not reflect the only or dominant way people would encounter news posted on the platform in real life. Second, to minimize confounds, the source was presented in a gender-neutral way with few physical or demographic indicators. While this is likely consistent with how an AI journalist would be presented, it may not be consistent with how a human journalist would present themselves on social media. Third, the stimulus stories featured thumbs-up, heart, and surprise-indicative emojis, but no angry emoji reactions, which could have served as social proof of a lack of negative reaction to the story; further, those displayed emojis could themselves be subject to differential, biased interpretations (cf. Hayes et al., 2016). Finally, the title “AI journalist” was used to represent the broad category of automated technology used to produce written content. However, there are different kinds of AI technologies (e.g., neural networks, machine learning), and audiences may or may not be savvy to the nuances of these technologies, and so may or may not engage machine-heuristic judgments. Future research should attend to these limitations by exploring media contexts and formats, source-agent cues, message valence and issue framing, depth of processing of a complete news article, and extant mental models for AI in the course of consuming AI-presented news content.
Additionally, as self-replications, these studies carry the potential for experimenter bias (Kunert, 2016); future research should therefore investigate whether these results replicate when conducted by other scholars.

An additional limitation is that USA Today was chosen as the moderate news organization based on its past use as a neutral source (Cloudy et al., 2021a; Gunther et al., 2016) and its AllSides (2021) media rating of center. However, the AllSides rating for USA Today shifted from center to lean-left during the course of these studies, which is consistent with how participants in our sample rated it. Further, the current studies’ design did not assess the extent to which individuals’ technical knowledge of AI might influence outcomes. Thus, future research should consider both context/format differences inherent to forms of news consumption as well as personological factors related to machine heuristic activation. Additionally, we did not assess how the presence of an AI journalist affected relevant behavioral outcomes; future research should consider what effects an AI journalist has on behavioral outcomes important to journalism (e.g., sharing the article).


Conclusion

Perceptions of hostile media bias present a key challenge for media institutions and democracy (Tsfati & Cohen, 2005). Yet, there is still little understanding of how news producers can overcome this perceptual bias (Feldman, 2017). Our findings suggest that AI journalists can reduce perceptions of hostile media bias through the activation of machine-heuristic evaluations of source objectivity, and that source incongruity and attitude extremity influence these dynamics. This presents both potential benefits and ethical challenges for the media. To meet these challenges, the media should lean on innovation guided by “a dedication to the pursuit of truth and accuracy in reporting, and ethics” (Pavlik, 2013, p. 181).

Supplemental Materials

Received January 16, 2022
Revision received May 13, 2022
Accepted May 16, 2022