
Unaware and Unaccepting: Human Biases and the Advent of Artificial Intelligence

Volume 5, Issue 2. DOI: 10.1037/tmb0000128

Published on Apr 01, 2024

Abstract

Generating accurate assessments of future technological capabilities becomes increasingly difficult as the rate of technological progress accelerates. The current research reviews the rapid growth of artificial intelligence (AI) and examines the human biases that impede its assessment. In two online experiments (N = 161; N = 151), we find evidence that people are prone to underestimate AI capabilities due to the exponential growth bias (i.e., the tendency to underestimate exponential growth). Moreover, we find evidence that people reject the aversive implications of rapid technological progress even in cases in which they themselves predict the growth rate, due to the motivated reasoning bias (i.e., the desire to search for and interpret information in ways consistent with one’s desires).

Keywords: exponential growth bias, motivated reasoning, artificial intelligence

Funding: There are no funding sources to report.

Disclosures: The authors have no conflicts of interest to disclose.

Data Availability: All preregistered hypotheses, data, and materials can be found online in the links listed in the article. Preregistrations can be found at the following links: Study 1: https://osf.io/w5h4m/?view_only=577a2dcac556416e9a97eb3176cff55b, Study 2: https://osf.io/a9rmz/?view_only=73181c1aecb44344b268a9e0839c3028. All data and materials can be found at https://osf.io/q253m/?view_only=078f631fc32a499197d0e6349313599a (Meikle, 2023). The data in this article have not been used or published in any other study.

Correspondence concerning this article should be addressed to Nathan L. Meikle, School of Business, The University of Kansas, 1654 Naismith Drive, Room 3172, Lawrence, KS 66045, United States. Email: [email protected]




We argue that the two primary barriers to the understanding and prediction of technological change are motivated reasoning and the exponential growth bias. Current technology is used to create new technology, ad infinitum, and as such technological innovation progresses at an exponential, positive rate (Brynjolfsson & McAfee, 2014). However, people consistently underestimate how quickly exponential growth curves accelerate (i.e., exponential growth bias; Stango & Zinman, 2009; Wagenaar & Sagaria, 1975). Furthermore, exponential progress in technology may result in frightening potential outcomes (e.g., mass unemployment, reduced autonomy, existential risk; Bostrom, 2014; Ford, 2015; Frey & Osborne, 2017), which may further bias assessments given the human tendency to arrive at conclusions we prefer regardless of their accuracy (i.e., motivated reasoning; Dawson et al., 2002; Kunda, 1990; Rousseau & Tijoriwala, 1999).
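The gap between linear intuition and exponential reality can be made concrete with a small numeric sketch (our illustration, not from the article): a quantity that doubles each period leaves a fixed-increment process far behind within a decade.

```python
# Illustrative sketch: why linear intuition underestimates exponential growth.
# One quantity grows by a fixed amount each year; the other doubles each year.

linear, exponential = 1.0, 1.0
for year in range(10):
    linear += 1.0        # linear intuition: add 1 per year
    exponential *= 2.0   # exponential process: double per year

print(linear, exponential)  # 11.0 vs 1024.0 after 10 years
```

After only 10 years the doubling process is roughly 100 times larger than the linear one, which is the kind of divergence the exponential growth bias causes people to miss.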

In the current article, we examine the underestimation of technological development within the context of one of the most relevant, fast-moving, and potentially frightening technological domains: the field of artificial intelligence (AI), which, according to Stephen Hawking, may be the most consequential event in the history of our civilization, and also the last (Roberts, 2016). We demonstrate how both motivated reasoning and the exponential growth bias radically distort human understanding of the present and likely future development of AI. We then discuss the implications of our findings with respect to the readiness of humans for the inevitable rise of artificial intelligence.

Exponential Growth

The exponential growth bias, although usually examined in the context of financial returns with growth rates typical of that domain (e.g., 10% annual growth; Hubbard et al., 2016; McKenzie & Liersch, 2011), is also relevant to technological progress. A clear demonstration of exponential growth in technology can be found in the field of computing. In 1965, Gordon Moore—who later became the chairman of Intel—noticed that computing capability doubled every 2 years, while costs simultaneously decreased (Schaller, 1997; Waldrop, 2016). This resulted in an exponential growth in the price–performance of computation (i.e., performance increased exponentially as price decreased). For example, in 1968, $1 could purchase one transistor, whereas in 2002, $1 could purchase 10 million transistors; moreover, transistors in 2002 were nearly 1,000 times faster than transistors in 1968 (Kurzweil, 2005). In sum, “the critical building blocks of computing: microchip density, processing speed, storage capacity, energy efficiency, download speed, and so on, have been improving exponentially” (Brynjolfsson & McAfee, 2014, p. 49).
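The compounding behind Moore's observation can be sketched briefly (our illustration, with a fixed 2-year doubling period as an assumption). Density doubling alone accounts for a factor of roughly 131,000 over the 1968 to 2002 window; the 10-million-fold price figure cited above additionally reflects falling costs and rising speed.

```python
# Sketch (not from the article): total improvement from steady doubling,
# as in Moore's observation of a roughly 2-year doubling period.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total multiplicative improvement after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

factor = growth_factor(2002 - 1968)  # 34 years -> 17 doublings
print(f"{factor:,.0f}x")             # 131,072x from doubling alone
```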

One particularly impactful area of technology that is benefitting from the exponential progress in computing speed is artificial intelligence (computer systems that are able to perform tasks that normally require human intelligence; Russell & Norvig, 2009). Given that computing capability is the backbone of AI (Brynjolfsson & McAfee, 2014), AIs are expected to improve in tandem with the progress of computing in general (Dobbs et al., 2015). Recent progress in AI demonstrates how quickly AIs have progressed in the past several years. AIs are now diagnosing cancer (Rose, 2016), driving cars (Levandowski, 2016), trading stocks (Peltz, 2017), translating languages (Marshall, 2017), producing music (Goldhill, 2016), evaluating legal contracts (Rosenbaum, 2016), authoring news articles (Oremus, 2016), and outperforming humans in a variety of cognitive challenges (Ferrucci et al., 2013; Silver et al., 2016). According to some estimates, the rapid progress and widespread adoption of AI will happen faster, and at a broader scale, than did the innovations associated with the industrial revolution (Dobbs et al., 2015).

Motivated Reasoning

In numerous domains, individuals’ predictions of what will happen tend to reflect what they would like to happen (Buehler et al., 1997; Johnson & Sherman, 1990; Taylor & Brown, 1988). Although people are motivated to make accurate assessments (Petty & Cacioppo, 1986), people are also motivated strongly to arrive at conclusions they prefer (Kunda, 1987, 1990). For example, people think they are more likely than their peers to experience positive financial and health-related outcomes (Perloff, 1983; Weinstein, 1980, 1983, 1984). People also tend to believe that they are less likely than their peers to be victims of crime (Perloff, 1983). More recently, researchers have found that people are prone to expect favorable future outcomes (as they themselves define them) in terms of politics, scientific beliefs, entertainment value, and product preferences (Rogers et al., 2017).

The prospect of living in a world in which machines are more capable than humans is not comforting or appealing to most people (Ryan & Deci, 2000). Although people may willingly accept that AI is progressing rapidly and that it will continue to do so, research on motivated reasoning (Kunda, 1990) suggests that people will discount the aversive implications of rapid technological progress (e.g., mass unemployment, reduced autonomy, existential risks). Given that people’s motives can affect their beliefs about the information to which they are exposed (Kappes et al., 2018; Simmons et al., 2011), we expect people to be resistant to the idea of AIs causing, for example, massive unemployment. Furthermore, a recent study estimates that 47% of total U.S. employment is at “high risk” of being automated in the next 10–20 years (Frey & Osborne, 2017). If indeed AIs replace such a large percentage of workers, and those workers are unable to retrain fast enough to compete in the dynamically changing labor market, there could be widespread unemployment (Ford, 2015). When one considers the current capabilities of AI, and then considers even a modest rate of progress, it becomes progressively harder to imagine industries that will not be altered drastically by AI.

Overview of Experiments

We present two experiments examining the effects of motivated reasoning and exponential growth bias on human judgment, building to the specific case of the human judgment of the advancement of artificial intelligence and its effects on society. Together these studies provide a depth of understanding about why human acceptance of the emergence of AI is a difficult process: People are both unwilling (due to motivated reasoning) and unable (due to exponential growth bias) to appreciate the world-changing implications of artificial intelligence.

Study 1

Study 1 demonstrates how negatively participants feel about AIs potentially surpassing human intelligence.

Hypothesis 1: Participants do not feel positively about a future where AIs are more intelligent than humans.

Method

Sample

One hundred sixty-one participants in the United States completed the study via Connect, a branch of Amazon Mechanical Turk (36.0% female, 2.5% other; Mage = 37.8, 18.6% Black or African American, 70.2% White). Both Studies 1 and 2 were approved by the University of Kansas institutional review board.

Procedure

Participants read the following:

For the purposes of this study please note the following definitions: Intelligence: defined as the ability to accomplish complex goals. Artificial intelligence (AI): defined as a computer system that is able to perform tasks that normally require human intelligence. Please answer the following four questions with these definitions in mind.

Participants then answered the following four questions, presented in random order, on a scale of 1 (not at all positive) to 7 (extremely positive):

Imagine 20 years into the future. How positive do you feel about the future you just imagined?

Imagine 20 years into the future and AIs are more intelligent than humans. How positive do you feel about the future you just imagined?

Imagine 20 years into the future and AIs are equal in intelligence to humans. How positive do you feel about the future you just imagined?

Imagine 20 years into the future and humans are more intelligent than AIs. How positive do you feel about the future you just imagined?

For the purpose of discussing these four questions, we refer to the first question as “neutral,” the second as “AIs superior,” the third as “equal intelligence,” and the fourth as “humans superior.”

Results

See Table 1 for descriptive statistics. As predicted, a paired-samples t test revealed that participants felt significantly more positive about a future in which humans are more intelligent than AIs (M = 5.17, SD = 1.55) compared to a future in which AIs are more intelligent than humans (M = 3.22, SD = 1.94); t(160) = 9.56, p < .001. Furthermore, participants felt more positive about the neutral future state (M = 4.39, SD = 1.76) compared to the future state in which AIs are more intelligent than humans (M = 3.22, SD = 1.94); t(160) = 7.35, p < .001. Finally, participants felt more positive about the equal intelligence future state (M = 3.90, SD = 1.91) compared to the future state in which AIs are more intelligent than humans (M = 3.22, SD = 1.94); t(160) = 5.67, p < .001. Thus, Hypothesis 1 was supported.

Table 1
Study 1: Comparison of Future States

Future state                   M      SD
Neutral                        4.39   1.76
AIs superior intelligence      3.22   1.94
Equal intelligence             3.90   1.91
Humans superior intelligence   5.17   1.55

Discussion

Study 1 provides evidence that people do not feel positively about a future in which AIs surpass human intelligence. Therefore, if AIs are expected to surpass human intelligence, people facing that prospect should engage in motivated reasoning. We conducted Study 2 to examine participants’ perceptions when AIs are proposed to surpass human intelligence.

Study 2

Study 2 examines whether participants agree with assumptions about technological growth but then reject the implications of those assumptions when the implications are revealed to be aversive (specifically, when AIs are more intelligent than humans). A primary metric in the development of AI is the comparison of the capabilities of AIs to those of humans. Inherent in this comparison is the threat of AIs superseding our own abilities. Given humans’ desires to be competent and powerful (McClelland, 1965; Ryan & Deci, 2000), we predict that participants who choose among different estimates of (a) the present intelligence of AIs relative to humans and (b) the growth rate of AI intelligence will engage in motivated reasoning when faced with the implications of their own chosen estimates and reject the plausibility of those estimates’ actual extrapolations.

Hypothesis 2: Participants will reject the implications of rapid improvement in AI intelligence.

Method

Sample

One hundred fifty-one participants in the United States completed the study via Connect, a branch of Amazon Mechanical Turk (53.6% female, 0.7% other; Mage = 39.3; 73.5% White, 12.6% Black or African American).

Procedure

Participants read the following: “For the purposes of this question ‘Intelligence’ is defined as the ability to accomplish complex goals.” Participants then chose the statement they most agreed with out of four options: (a) humans are approximately 10 times more intelligent than computers, (b) humans are approximately 1,000 times more intelligent than computers, (c) computers are approximately 10 times more intelligent than humans, or (d) computers are approximately 1,000 times more intelligent than humans. Then participants read, “Computers are becoming more intelligent each year. In other words, they are improving in their ability to accomplish complex goals. At approximately what rate do computers become more intelligent each year?” Participants then selected the rate at which computers are becoming more intelligent each year (10%, 100%, or 200%). These rates were chosen to represent the range of rates at which computers might be expected to improve for the foreseeable future.

Participants then saw the two previous answers they had chosen and were asked, “How much do you agree with these two statements you selected?” Participants answered on a scale of 1 (not at all) to 7 (strongly agree). Thus, to summarize, participants: (a) indicated their belief about the technological state of computer intelligence relative to human intelligence and (b) indicated their belief about the growth rate of computer progress. Then they indicated how much they agreed with their first two selections (i.e., the Time 1 dependent variable [DV]).

Once participants made their Time 1 selection, they were shown the implications of those selections. For example, if a participant indicated that they believed that humans are approximately 1,000 times more intelligent than computers and that computers become 10% more intelligent each year, they would then read the following:

Based on your first two answers, humans will be approximately 8.5 times more intelligent than computers in 50 years (assuming that humans do not become meaningfully more intelligent during this time period and assuming computers become more intelligent at a constant growth rate). How much do you agree with the statement that humans will be approximately 8.5 times more intelligent than computers in 50 years?

Participants then made their selection on the same scale as before (i.e., the “Time 2” DV). Of note, we chose a 50-year time window to represent the amount of change an average person could reasonably expect to see in their lifetime.
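The extrapolation shown to participants is straightforward compound-growth arithmetic, reconstructed here as a sketch (our code, not the authors'): divide the starting human-to-computer ratio by the chosen growth rate compounded over 50 years.

```python
# Reconstruction (ours) of the extrapolation shown to participants:
# start from "humans 1,000x more intelligent" and compound the chosen
# annual growth rate of computer intelligence over 50 years, assuming
# humans do not become meaningfully more intelligent in that time.

def remaining_ratio(start_ratio: float, annual_rate: float, years: int = 50) -> float:
    """Human-to-computer intelligence ratio after compound growth."""
    return start_ratio / (1.0 + annual_rate) ** years

# The 10% case reproduces the article's "approximately 8.5 times" figure.
print(round(remaining_ratio(1000, 0.10), 1))  # -> 8.5
# At the faster offered rates (100%, 200%), computers end up vastly ahead.
print(remaining_ratio(1000, 1.00))            # ~8.9e-13 (computers ~1e12x ahead)
```

Even the slowest offered rate (10%) erodes a 1,000-fold human advantage to under 10-fold within 50 years, which is why the revealed implications could surprise participants.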

Results

A paired-samples t test revealed a significant difference between the Time 1 DV (M = 5.28; SD = 1.30) and Time 2 DV (M = 4.42, SD = 1.75); t(150) = 5.93, p < .001. As predicted, participants agreed more with their estimates than with the implications of those estimates. This result supports Hypothesis 2.

Discussion

Study 2 provides evidence that people engage in motivated reasoning when they evaluate the prospect of AIs becoming more intelligent than humans. Given the exponential growth bias, people are likely to be surprised at the implications of their first selections. Then when faced with the prospect of either accepting the implications of their previous choices or engaging in motivated reasoning, we find that people appear to engage in motivated reasoning. That is, when participants saw the implications of their choices (e.g., that AIs will far surpass humans in intelligence), they were suddenly less likely to agree with the implications, even though the implications were simply mathematical extensions of their own judgments.

General Discussion

Artificial intelligence is advancing at an exponential rate with what, to some, are terrifying implications. In the current research, we find evidence that people do not feel positively about a future in which AIs are more intelligent than humans, and furthermore, people appear to use motivated reasoning to reject the implications of the emergence of AI and the technology that drives it.

In Study 1, participants consistently reported that they do not feel positively about a future in which AIs are more intelligent than humans. Study 2 provides evidence that participants were biased by motivated reasoning when thinking about technological progress. People were likely to show confidence in their estimates about the current state of AIs relative to humans, but once they saw the implications of their own estimates, they rejected the logical extensions of those beliefs.

This is important because it shows that it is not only technological progress that is daunting, but also, specifically, that this progress seems to leave the capabilities of our own species radically diminished by comparison. People are prone to believe in a favorable future in terms of politics, scientific beliefs, and entertainment preferences (Rogers et al., 2017). As a corollary, the current research suggests that people are also prone to disbelieve an unfavorable future in terms of technological progress.

Limitations and Future Directions

Although this article provides evidence of the exponential growth bias and the motivated reasoning bias as it relates to technological advancement, the results should be interpreted with caution. For example, in Study 1, participants report that they do not feel positively about a future where AIs are more intelligent than humans. However, AIs are already more intelligent than humans in several domains (such as playing chess). Whereas participants may not feel positively about AIs surpassing human intelligence in all regards (e.g., artificial general intelligence), participants may not mind if AIs surpass human intelligence on specific tasks (e.g., artificial narrow intelligence). Future research could examine the degree to which participants feel negatively/positively about artificial general intelligence versus artificial narrow intelligence.

In Study 2, when participants saw the implications of their Time 1 selections (e.g., that computers would be vastly more intelligent than humans), participants may have balked at the implications not solely due to motivated reasoning, but due to their incredulity about exponential growth over long time periods. Participants may have been so surprised by the margin by which AIs would surpass humans in intelligence (due to the exponential growth bias) that they expressed disagreement at Time 2 by doubting an assumption of the scenario, namely that AIs could progress exponentially for 50 years. In an effort to address this possible alternative explanation, we told participants multiple times that, for the purposes of the experiment, they were to assume that computers become more intelligent at a constant growth rate. But we cannot be sure that participants adopted this assumption.

Second, although participants may not have believed that AIs will progress exponentially for the next 50 years, it is not out of the realm of possibility, given that (a) computing technology has progressed exponentially for decades (Brynjolfsson & McAfee, 2014) and (b) each time a new technology is invented, that new technology can be used to invent subsequent, more powerful, technologies. Whereas we recognize that there may be limits to how long a technology can progress exponentially, we do not believe that 50 years of further exponential growth is entirely unreasonable.

To be clear, our argument is not that AIs will, with certainty, progress exponentially for 50 years, but rather that if AIs do continue to progress exponentially, people will likely be surprised at the capabilities of AI. Furthermore, if AIs do in fact vastly surpass human intelligence, Study 1 indicates that people will not feel positively about that future. Each of the possible alternative explanations above is still consistent with our expectation that people will be prone to underestimate AI if it keeps progressing rapidly, irrespective of the specific growth function.

Third, it is possible that participants’ answers were biased by the general tenor around AI development. For example, would people be equally biased regarding AI if it was less controversial? What if AIs were expected to cure all sickness? Or develop abundant, clean energy? Would participants then be overly optimistic about the progress of AI, in which case the results here might be reversed? In other words, might motivated reasoning bias people more than the exponential growth bias? We believe these questions are interesting avenues for future research. For example, future research could further examine the effects of the motivated reasoning bias and the exponential growth bias by varying future states along positive and negative dimensions. Researchers could also vary the information participants have about the capabilities and impacts of AI. Doing so could allow researchers to disentangle the degree to which motivated reasoning biases people compared to the degree the exponential growth bias biases people.

Conclusion

In conclusion, it is important to acknowledge that advancements in AI may indeed lead to numerous negative outcomes. Consequently, people’s desire for a favorable future may impede them from accurately assessing current and/or future AI capabilities. For example, technologist Elon Musk tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes” (Khatchadourian, 2015, para. 5). Bill Gates shares Musk’s concern: “When people say [AI] is not a problem, then I really start to get to a point of disagreement. How can they not see what a huge challenge this is?” (Khatchadourian, 2015, para. 5). Experts are rightly concerned about the unanticipated and unintended consequences of developing AIs, especially if AIs vastly surpass human capabilities.

The rapid advancement of AI is expected to fundamentally alter society, especially as AIs surpass human ability in more and more domains. If technology continues to progress at an exponential rate, as it has done for decades (Brynjolfsson & McAfee, 2014), we are prone to vastly underestimate its progress. Moreover, due to the motivated reasoning bias, our expectations may be utterly out of sync with reality. An awareness of these biases and their implications is the first step in preparing society for a future that may be vastly different from our present.


Received January 13, 2021
Revision received January 10, 2024
Accepted January 12, 2024