
Public Perceptions of ChatGPT: Exploring How Nonexperts Evaluate Its Risks and Benefits

Volume 5, Issue 4, https://doi.org/10.1037/tmb0000140

Published on Oct 21, 2024

Abstract

Despite the media hype and contentious debate surrounding generative artificial intelligence technologies, there is a dearth of research on how these technologies are perceived by the general public. This study aimed to bridge this gap by investigating (a) how people perceive the risks and benefits of ChatGPT and (b) the antecedents of such perceptions. A U.S. national survey (N = 1,004) found that individuals with higher education levels, interest in politics, knowledge about ChatGPT, leftist ideology, a sense of personal relevance, and a skeptical view of science tend to perceive greater risks associated with ChatGPT. These results challenge the conventional “knowledge deficit” model, suggesting that negative perceptions of technology are not merely a result of insufficient knowledge; such perceptions can also stem from a critical mindset that approaches artificial intelligence technology with caution. In contrast, individuals who have previously used ChatGPT, regard it as personally relevant to their lives, and show a keen interest in new media technologies in general tend to recognize its benefits. These patterns suggest that risk perceptions involve more complex cognitive information processing, while benefit perceptions often arise from relatively intuitive decision-making processes. Our findings underscore the vital role of science communication and education in facilitating informed discussions about the risks and benefits of emerging technologies like ChatGPT.

Keywords: artificial intelligence, ChatGPT, risk perception, benefit perception

Disclosures: While the authors have used this data once for another publication, it focused on ChatGPT adoption, which is a different focus from the current article. The authors have no conflicts of interest to declare. No funding was received for this article.

Data Availability: The data is publicly available on the Open Science Framework at https://osf.io/jfsy6.

Open Access License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.

Correspondence concerning this article should be addressed to Sangwon Lee, School of Journalism, School of Media and Communication, Korea University, Media Hall, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Korea. Email: leesangwon@korea.ac.kr


Spurred by the release of ChatGPT in November 2022, generative artificial intelligence (AI) technology has generated a lot of buzz and attention within the tech industry and beyond. Generative AI, defined as “computational techniques that are capable of generating seemingly new, meaningful content such as text, images, or audio from training data” (Feuerriegel et al., 2024, p. 111), stands out from other technological advancements by creating more natural, human-like texts, images, and nuanced interactions through vast data sets and deep learning models. Ronge et al. (2024) systematically summarized four central aspects of generative AI: multimodality, interaction, flexibility, and productivity, distinguishing it from other related concepts. On the one hand, the versatility of ChatGPT has sparked excitement about its potential. By using ChatGPT to complete a variety of mundane tasks more quickly and efficiently, people can free up their time to focus on more challenging, higher value work (Marr, 2023). It can also assist people with disabilities or language barriers in communicating more effectively with others (da Silva, 2023). On the other hand, ChatGPT carries a tremendous risk of misuse. For example, OpenAI researchers have warned about generative AI being used maliciously, expressing concern that these tools could reduce the costs of disinformation campaigns and be exploited for monetary gain, political agendas, or creating chaos and confusion (Hsu & Thompson, 2023). Concerns have also been raised about unintended bias in the training data, which could lead to the model reinforcing stereotypes or discrimination (Goldstein et al., 2023).

As demonstrated in the recent open letter from current and former employees of OpenAI (Roose, 2024), the heated debate over the risks and benefits of generative AI technology has predominantly been characterized by highly technical discourse. However, the views of nonexperts on this emerging technology are less clear, except for the descriptive survey results from the Pew Research Center (2023), which found that the majority of American citizens are either wary of AI’s expanding presence in their daily lives or experience a combination of both excitement and concern. While a few recent studies have explored this question beyond Pew Research Center’s descriptive report (e.g., Tossell et al., 2024; Zhou & Sanfilippo, 2023), they predominantly relied on content analyses or were based on small convenience samples (e.g., college students). Therefore, how the general public evaluates the risks and benefits of generative AI technology and the psychological mechanisms underlying such perceptions remain understudied. Considering that public reaction is a crucial social dimension in the discourse surrounding generative AI technology, and that public opinions influence the trajectory of scientific development and technological adoption (Cheng & Jiang, 2020; Connor & Siegrist, 2016; Ho et al., 2019; S. Lee et al., 2020), it is important to understand how the general public forms judgments about the complexities of generative AI technology.

When examining nonexperts’ views on new technologies, early studies often relied on the knowledge deficit model, which emphasized a lack of knowledge as the primary factor influencing nonexperts’ views (Bauer & Schoon, 1993; Miller et al., 1997). However, this narrow focus on knowledge has been criticized for creating a limited understanding of how nonexperts develop their opinions and attitudes about novel technologies (Hansen et al., 2003; Scheufele & Lewenstein, 2005). Moreover, as Hansen et al. (2003) pointed out, nonexperts’ perceptions of risks and benefits need to be “understood and evaluated in their own terms” (p. 113). Hence, a more comprehensive approach is necessary, one that considers not only the knowledge individuals possess but also their value predispositions, interests, and sociodemographic contexts (Brossard & Nisbet, 2007; Ho et al., 2019; Scheufele & Lewenstein, 2005).

Against this backdrop, this study examines how the U.S. public perceives the risks and benefits associated with generative AI technology and explores potential antecedents of such perceptions, focusing on ChatGPT as it was the most recognized generative AI tool during the data collection period. In doing so, we drew on the cognitive miser model (Fiske & Taylor, 1991) in addition to the knowledge deficit model to better elucidate how nonexperts form judgments about generative AI technology. Understanding nonexperts’ perceptions of the risks and benefits of ChatGPT will provide valuable insights that might be overlooked in expert-level formal and numerical assessments. This understanding can also serve as a foundational guide for shaping technology development policies.

Risk and Benefit Perceptions

When a new technology emerges, people start assessing its associated risks and benefits to make decisions about its use or adoption (Gould et al., 1988). Risk perception is broadly understood as subjective judgments about the likelihood and severity of negative outcomes associated with a risk (Slovic, 2010). More specifically, technological risk perception denotes “the processing of physical signals and/or information about potential hazards and risks associated with a technology and the formation of a judgment about the seriousness, likelihood, and acceptability of this technology” (Renn & Benighaus, 2013, p. 295). On the other hand, benefit perception refers to “the perception of the positive consequences that are caused by a specific action” (Leung, 2013, p. 1450). In the context of technology adoption, benefit perception entails the perceived likelihood of various positive outcomes resulting from the use of that technology.

New technological innovations have often spurred strong reactions from the public. Research on voice-based digital assistants (e.g., Vimalkumar et al., 2021), chatbots (e.g., Cheng & Jiang, 2020), and AI algorithms (e.g., M. K. Lee, 2018) is especially relevant as each of these technologies shares similarities with generative AI. For instance, voice-based digital assistants and chatbots provide conversational interactions similar to those offered by generative AI, while algorithms embedded in social media platforms and other recommendation systems share the foundational technology of AI. It is important to note that despite the similarities, the range and quality of ChatGPT’s capabilities are significantly advanced, stemming from its ability to generate organized and creative texts and images (see Fowler, 2023). As such, ChatGPT presents itself as a disruptive technology (e.g., Silver, 2023), which can be best viewed through the lens of a multifaceted perception of both risks and benefits to society.

Although little has been studied about the factors that predict risk and benefit perceptions of generative AI technology, some studies have explored these questions in other technology contexts. One of the most commonly examined factors is knowledge of or familiarity with technologies (Satterfield et al., 2009). It is often argued that knowledge and familiarity boost benefit perceptions and reduce risk perceptions (Brossard & Shanahan, 2003; Miller & Kimmel, 2001). For example, knowledge about recommendation algorithms has been found to mitigate the perceived risk of such algorithms (Rohden & Zeferino, 2023). On the other hand, skeptics of the role of knowledge suggest that nonexperts develop risk and benefit perceptions of new technologies not based on their level of knowledge, but on their holistic views on technologies (Cobb & Macoubrie, 2004; Earle & Cvetkovich, 1995; Satterfield et al., 2009) or their personal interest in or perceived relevance of the topic (Z. Liu & Yang, 2023). Recent studies found that trust in technology is negatively associated with risk perceptions regarding chatbots (Silva et al., 2023) and voice-based digital assistants (Vimalkumar et al., 2021). Lastly, numerous studies have investigated how individual factors such as age, race, gender, education, and political attitudes affect risk and benefit perceptions of new technologies (Besley, 2010; Brossard & Nisbet, 2007; Ho et al., 2019; Satterfield et al., 2009). In this study, we look into how these potential antecedents of risk and benefit perceptions may apply to the context of generative AI.

Understanding nonexperts’ perceptions of the risks and benefits associated with generative AI technology is important because these collective or particularly salient perceptions determine individuals’ attitudes toward and acceptance of new technologies (Cheng & Jiang, 2020; Ho et al., 2019; Rathore & Mahesh, 2021). These decisions are grounded in a careful weighing of the risks and benefits. For example, significant risks might be considered acceptable if they bring substantial benefits and there are no effective ways to mitigate the risks. Conversely, even small risks may be deemed unacceptable if they yield minor benefits or the risk can be easily reduced (National Research Council, 2009; R. Wilson & Crouch, 2001). One example of this kind of calculation is the widespread adoption of telehealth services during a global health emergency. The appeal of receiving medical care without leaving one’s home, which allowed patients to avoid potential health risks, together with broader benefits such as bringing medical services to remote or underserved areas, contributed to a more favorable view of telehealth during the coronavirus pandemic (Appleton et al., 2021).

Knowledge Deficit Model Versus Cognitive Miser Model

Science communication scholars have traditionally relied on the knowledge deficit model (also known as the science literacy model) to explain how nonexperts form views on controversial technologies (Miller et al., 1997; Miller & Kimmel, 2001). According to this model, when people lack knowledge about science and technology and do not understand the mechanisms behind them, they are more likely to hold reservations and worries, while being less likely to recognize the benefits of technology (Sjöberg & Drottz-Sjöberg, 1991). For example, a meta-analysis of nearly 200 surveys conducted across 40 countries found that the more scientific knowledge people had, the more likely they were to hold favorable attitudes toward science (Allum et al., 2005). Hence, proponents of the knowledge deficit model have emphasized the importance of providing scientific information to the general public.

Yet, other studies have found that knowledge only explained a small amount of the variance in public views about technology (e.g., Brossard & Shanahan, 2003; Priest, 2001) or failed to find a significant relationship between knowledge and public perceptions of risks and benefits (e.g., Kamarulzaman et al., 2020; Park & Ohm, 2014). Some scholars even contended that knowledgeable individuals would not reject science but rather be critical of a mythical view of science and express their opinions accordingly (Cámara et al., 2018). In light of these mixed findings, we explore the role of knowledge in shaping nonexperts’ perceptions of the risks and benefits of ChatGPT with the following research question.

Research Question 1 (RQ1): How is knowledge about ChatGPT associated with the perceived benefits (a) and risks (b) of ChatGPT?

While knowledge may explain a certain portion of nonexperts’ judgments about ChatGPT, a growing body of literature argues that people tend to rely on various value predispositions to guide their judgments about novel technologies (Brossard & Nisbet, 2007; Ho et al., 2019; Scheufele & Lewenstein, 2005). The cognitive miser model provides psychological explanations for this perspective; as part of human nature, individuals are cognitive misers who put minimal cognitive effort into reaching a decision (Fiske & Taylor, 1991). This tendency is particularly relevant in the realm of emerging technologies, where most people have little or no direct experience or knowledge. That is, faced with limited time and resources, people look for mental shortcuts such as readily available personal values or schema in forming perceptions about complex technologies (Scheufele & Lewenstein, 2005). Several studies have found that, in addition to knowledge about the technology, various factors such as trust in social institutions, sociodemographic factors, and political ideology predict how individuals perceive the benefits and risks of a new media technology (Brossard & Nisbet, 2007; Ho et al., 2008, 2019). In this light, we examine how the following factors predict nonexperts’ perception of risks and benefits associated with ChatGPT.

Trust in Science

Defined as the extent to which individuals believe that scientific institutions and experts are competent, reliable, and honest (Nisbet et al., 2015), trust in science has been identified as a key factor influencing public attitudes toward new technologies and scientific advancements. Most people do not possess the expertise required to make a rational assessment of complex technologies. For example, according to a Pew Research Center survey, only 39% of U.S. adults answered at least nine out of 11 questions about basic scientific concepts correctly (Kennedy & Hefferon, 2019). As a result, people often use their trust in science as a shortcut to form opinions about new technologies and make decisions about them (Cobb & Macoubrie, 2004; Earle & Cvetkovich, 1995). Studies have found that trust in science provides people with cognitive shortcuts for assessing the risks and benefits of emerging technologies. For instance, individuals with high trust in science were more likely to predict that the benefits of nanotechnology would outweigh the risks, while those who distrust science predicted that the risks would outweigh the benefits (Cobb & Macoubrie, 2004). Other studies have also found that trust in science is positively associated with the perceived benefits of stem cell research (H. Liu & Priest, 2009) and negatively associated with the perceived risks of a nuclear waste repository (Flynn et al., 1992). These findings suggest that, in the absence of sufficient knowledge, people rely on their trust in science to alleviate the fear of the unknown and the complexity of the situation (Siegrist & Cvetkovich, 2000). When people trust scientific institutions and scientists, they tend to focus more on the benefits of the technology, believing that it has been developed by competent and reliable scientists (Freudenburg, 1992, 1993; Slovic, 1999). Conversely, when people lack such trust, they are more likely to focus on the risks, assuming that the scientists behind the technology are not reliable or honest. In the context of this study, public awareness of ChatGPT is still in its infancy, and the public’s knowledge about ChatGPT is quite limited. Hence, we expect that trust in science would be positively associated with the perceived benefits of ChatGPT and negatively associated with the perceived risks of ChatGPT.

Hypothesis 1 (H1): (a) Trust in science is positively associated with the perceived benefits of ChatGPT. (b) Trust in science is negatively associated with the perceived risks of ChatGPT.

Personal Relevance

Personal relevance refers to the degree to which people perceive an object (e.g., messages, products, and issues) as important or consequential in their own lives (Petty & Cacioppo, 1986). Previous studies found that personal relevance is an important factor in both risk and benefit perceptions (Z. Liu & Yang, 2023). When a technology holds personal relevance, individuals are more likely to engage cognitively and emotionally with it, thereby influencing their perceptions of the associated benefits and risks. Moreover, personal relevance serves as a motivating factor, stimulating individuals to actively process additional information about a technology, including its various benefits and potential drawbacks. First, when it comes to risk perception, the majority of the studies have found that a higher personal relevance is positively associated with a higher risk perception (e.g., Huurne & Gutteling, 2008; Z. Liu & Yang, 2023). The risk convergence model also suggests that personal relevance leads to greater risk perception, both directly and indirectly (So & Nabi, 2013). Applying these findings to the AI context (particularly the ChatGPT context), if one believes that AI will directly impact their lives (e.g., losing a job, dealing with plagiarism), they may hold a more negative view (risk perception) of the technology. Studies suggest that personal relevance also plays a role in benefit perception. For instance, Wang et al. (2020) found that one’s involvement in the energy policy issue was positively associated with the benefit perception of nuclear technology. Cui and Wu (2021) also found that personal relevance is positively associated not only with risk perception but also with benefit perception of AI. If people think AI can significantly improve their personal lives, they will likely recognize the possible benefits it can bring about.

All these studies suggest that personal relevance would be positively associated with both risk and benefit perceptions about the technology. Psychology literature suggests that personal relevance is essential in determining the way people process messages and form attitudes (Petty & Cacioppo, 1986). That is, when something is of great importance to an individual (high personal relevance), they will be driven to put more thought and attention into comprehending it, which will lead them to assess both the positive (benefits) and negative (risks) aspects of the technology. Thus, we propose the following hypothesis.

Hypothesis 2 (H2): Personal relevance is positively associated with the perceived benefits (a) and risks (b) of ChatGPT.

Interest in New Technologies

Interest can be defined as a relatively long-term orientation an individual has toward an object or an area of knowledge (Schiefele, 1991). It is often thought to involve a motivational component that promotes behaviors that satisfy those interests (Ryan & Deci, 2000). Interest in new technologies may impact risk/benefit assessments of new technological advances for several reasons. First, individuals with a greater interest in new technologies may be inclined to pay closer attention to news coverage of new technologies due to their motivational tendencies (Brossard & Shanahan, 2003). To the extent that media coverage is generally positive—which is usually the case for new scientific breakthroughs and innovations (e.g., Stephens, 2005)—interest in new technologies may lead to more positive perceptions about the technology’s potential (i.e., benefits). Second, individuals interested in new technologies may seek out others who share their interest to engage in discussions and conversations. These interactions can frequently serve to correct misperceptions or inaccuracies about risks (Ho et al., 2022). Taken together, it is likely that those with interest in new technologies will perceive greater benefits than risks.

Hypothesis 3 (H3): (a) Interest in new technologies is positively associated with the perceived benefits of ChatGPT. (b) Interest in new technologies is negatively associated with the perceived risks of ChatGPT.

Political Variables

Key political variables, such as political interest and ideology, constitute important components of an individual’s predispositions that influence their attitudes (Gauchat, 2012; S. Lee, Taylor, et al., 2024). In this study, it is anticipated that these factors will have an impact on the perception of risks and benefits associated with ChatGPT. The literature in political psychology indicates that political conservatives, compared to liberals, are known to be more sensitive to external threats (Jost et al., 2003), less tolerant of the unfamiliar (G. D. Wilson, 1973), and more cautious toward novel stimuli (Shook & Fazio, 2009). All these characteristics suggest that conservatives are likely to perceive greater risks and fewer benefits of cutting-edge innovations. However, existing research has been mixed, with some studies finding that conservatives are more concerned about new technological innovations, perceiving fewer benefits (e.g., Mack et al., 2021) and others finding heightened risk perceptions among liberals (e.g., Kim et al., 2014). It appears that the effect of political ideology may depend on the specific issue/technology examined (see Drummond & Fischhoff, 2017). As ChatGPT is a nascent technology that has yet to be politicized, it is unclear whether an interest in politics would have any effect on perceptions of the technology. Thus, the impacts of political interest and ideology are examined through two research questions.

Research Question 2 (RQ2): How are political variables (political interest and political ideology) associated with the perceived benefits (a) and risks (b) of ChatGPT?

Sociodemographics

Lastly, we examined the effects of sociodemographic variables using Ho et al.’s (2019) framework. Sociodemographic factors such as age, gender, race, level of education, and income can affect one’s life experiences and worldview, thus indirectly influencing one’s perceptions of new technologies. Multiple studies have found that socially dominant groups, such as White males, high-income individuals, and those with higher education, tend to perceive more benefits and lower risks associated with new technologies compared to socially marginalized groups (Besley, 2010; Flynn et al., 1994; Ho et al., 2019; Marshall, 2004; Satterfield et al., 2009), although some studies have noted that demographics do not play a significant role in explaining risk perception or benefit behavior (e.g., Kleijnen et al., 2004; Lindell & Perry, 2012; Meuter et al., 2005).

In terms of age, younger people tend to perceive fewer obstacles and hold more open attitudes toward potential benefits in their acceptance and use of emerging technologies (Staddon, 2020). For example, research has found that younger adults tend to accept autonomous vehicles more and perceive fewer risks than older adults (Hulse et al., 2018). However, a recent meta-analysis showed that the public’s risk perception toward emerging technologies is not affected by age (Li & Li, 2023). Overall, given the inconsistent patterns of the roles of demographics on the outcome variables, we propose a research question rather than formulating a clear hypothesis.

Research Question 3 (RQ3): How are demographic variables associated with the perceived benefits (a) and risks (b) of ChatGPT?

Method

Data

Data was collected from an online panel registered with Dynata. To compensate for the nonrepresentative nature of an online panel, the sample was drawn to match the demographic distribution of U.S. adults on characteristics such as age, gender, education, income, and race/ethnicity. The online survey was fielded from February 20 to 27, 2023. The particularity of the data collection period should be noted. The public release of ChatGPT in November 2022 brought about a range of concerns, especially in the early period when people were uncertain about its uses and implications (Heaven, 2023). Specifically, there were concerns about ChatGPT being used for criminal purposes (e.g., scams, deepfakes, and hacking; Browne, 2023), its potential impact on jobs and education (e.g., Heaven, 2023), and its use of various sources of data for training (e.g., Thorbecke, 2023). This period also coincided with accelerated developments in the AI sector, with companies like Google and Microsoft taking initiatives to position themselves as leaders in AI (Firth-Butterfield, 2023).

A total of 2,103 panel members were invited to participate in the survey, and they were instructed to provide their perceptions about AI. In the end, 1,004 respondents successfully completed the survey, yielding a response rate of 47.7% (American Association for Public Opinion Research Cooperation Rate 1). The median response time was 9 min and 40 s. Those who failed the attention check were screened out, and their data were not included in the final sample (i.e., not counted in the N). The sample closely mirrored the national population in age, gender, education, income, and race. The details of demographic variables are presented in Table 1. The data is publicly available on the Open Science Framework at https://osf.io/jfsy6. This study was approved by the Institutional Review Board at Nanyang Technological University on February 18, 2023 (NTU IRB-2023-100, titled “Perceptions of AI”).

Table 1

Demographics of the Sample

Demographic                           %
Age
  18–24                            18.7
  25–34                            21.4
  35–44                            19.7
  45–54                            20.5
  55–64                             5.1
  65+                              14.6
Sex
  Male                             49.1
  Female                           50.3
  Nonbinary                         0.6
Race
  White                            72.2
  Black                            11.5
  Hispanic                          6.8
  Asian                            17.9
  Other                             2.2
Education                        100.00
  Less than high school graduate    2.8
  High school graduate             21.6
  Some college                     30.4
  Associate’s degree               11.4
  Bachelor’s degree                29.2
  Graduate degree                  14.5
Income                           100.00
  Under $10,000                     7.1
  $10,000–$29,999                  15.3
  $30,000–$49,999                  18.7
  $50,000–$69,999                  17.6
  $70,000–$99,999                  18.0
  $100,000–$144,999                12.3
  Over $150,000                    11.0

Measures

Risk Perception

This study primarily focuses on the social dimensions of risk and benefit perceptions beyond individual satisfaction and convenience. Thus, we examine the economic, workplace, and educational impacts over the entertainment and social applications of the technology. As there were no validated scales that measure ChatGPT’s risks and benefits, we constructed our scale items based on a Pew survey on automation (Smith & Anderson, 2017) and a recent review article on ChatGPT’s impact (Evans et al., 2023). We measured risk perceptions across different domains, including the economy (two items: “ChatGPT will increase inequality between the rich and poor” and “People will have a hard time finding jobs”), the workplace (two items: “ChatGPT will distract employees from their tasks, leading to lower efficiency” and “ChatGPT will lead people to make unethical judgments”), and education (“ChatGPT will reduce critical thinking skills” and “ChatGPT will be used for cheating and plagiarism”). We also measured broad/general risk perception toward ChatGPT (i.e., “ChatGPT will threaten human society”; total of seven items measured on a 5-point scale; range: 1–5; Cronbach’s α = .77, M = 3.35, SD = .70).
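To make the index construction concrete, a minimal Python sketch of the scale scoring and reliability computation is shown below. The file and column names are hypothetical placeholders rather than the names used in the actual dataset, and the original analyses may well have been conducted in other software.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical column names for the seven 5-point risk items (risk1 ... risk7).
risk_cols = [f"risk{i}" for i in range(1, 8)]

df = pd.read_csv("survey.csv")                     # assumed file name
risk_items = df[risk_cols]

alpha = cronbach_alpha(risk_items)                 # reported as .77 in the article
df["risk_perception"] = risk_items.mean(axis=1)    # averaged index, range 1-5
print(f"alpha = {alpha:.2f}, M = {df['risk_perception'].mean():.2f}")
```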

Benefit Perception

Since respondents’ benefit perceptions toward the technology might vary across different areas (e.g., economy, education), we measured benefit perceptions across different domains, including the economy (two items: “The economy will be much more efficient” and “ChatGPT will create many new, better paying human jobs”), the workplace (two items: “People will focus less on mundane work and more on what really matters” and “ChatGPT will help decision-makers make more informed decisions”), and education (“ChatGPT will be effectively used to supplement traditional classroom instruction” and “ChatGPT will increase accessibility to education”). We also measured broad/general benefit perception toward ChatGPT (i.e., “ChatGPT will help solve the problems facing human society”; total of seven items measured on a 5-point scale; range: 1–5; Cronbach’s α = .90, M = 3.13, SD = .87). Domain-specific items have commonly been used in research on risks and benefits (e.g., Weber et al., 2002), and while it is less common to include a general, cross-domain item in the mix, a post hoc confirmatory factor analysis showed support for our single-factor model (χ2 = 51.33, p < .001; root-mean-square error of approximation = .064 [.047, .082], p-close = .080; comparative fit index = .98; Tucker–Lewis index = .95; standardized root-mean-square residual = .03). This lends some validation to our single-dimensional approach that includes both broad and specific items.
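As a rough illustration of how such a single-factor confirmatory factor analysis could be run in Python, the sketch below uses the semopy package with hypothetical item names; this is only one possible tool, and the authors’ actual analysis may have used different software.

```python
import pandas as pd
import semopy

df = pd.read_csv("survey.csv")  # assumed file name; benefit1 ... benefit7 are placeholder item names

# One latent factor loading on all seven benefit items (lavaan-style model syntax).
model_desc = """
Benefit =~ benefit1 + benefit2 + benefit3 + benefit4 + benefit5 + benefit6 + benefit7
"""

model = semopy.Model(model_desc)
model.fit(df)

# Fit statistics (chi-square, RMSEA, CFI, TLI, etc.) comparable to those reported above.
print(semopy.calc_stats(model).T)
```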

Previous Usage

As previous experience with using ChatGPT might influence users’ perceptions of the technology, we included a dichotomous variable for prior usage of ChatGPT as a predictor of the outcome variables (previous usage = 21.5%).

Science Trust

Based on previous research (S. Lee, Jones-Jang, et al., 2024; Nisbet et al., 2015), science trust was measured by asking respondents to what extent they agreed or disagreed with three statements: “I have very little confidence in the science community,” “The science community often does not tell the public the truth,” and “I am suspicious of the science community” (all items measured on a 5-point scale). These items were reverse-coded and averaged so that higher scores indicate greater trust (range: 1–5; Cronbach’s α = .85, M = 2.75, SD = 1.22).
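A minimal sketch of this reverse-coding step, assuming hypothetical item columns trust1–trust3 on the same 5-point scale:

```python
import pandas as pd

df = pd.read_csv("survey.csv")               # assumed file name
trust_cols = ["trust1", "trust2", "trust3"]  # hypothetical column names for the three statements

# On a 1-5 scale, reverse-coding maps 1->5, 2->4, ..., 5->1 (i.e., 6 - x),
# so higher averaged scores indicate greater trust in science.
df["science_trust"] = (6 - df[trust_cols]).mean(axis=1)
```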

ChatGPT Knowledge

Adopting the typical ways of measuring science knowledge (see the National Science Foundation science knowledge scale from Roose, 2024), we constructed a measure of ChatGPT knowledge. Because no preexisting measure of ChatGPT knowledge existed, given its nascent nature, we created knowledge items that tap into how much people know about ChatGPT. More specifically, we presented participants with eight statements, including both true (four statements; e.g., “ChatGPT can generate biased responses”) and false (four statements; e.g., “ChatGPT is designed by Google”) statements covering various aspects of ChatGPT (the full items are in the Appendix). Participants were then asked to rate whether they think each statement is true or false, with an additional “Don’t know” option. Correct responses were coded as 1, while incorrect and “Don’t know” responses were coded as 0. Correct scores were summed to create an index of objective knowledge, with higher scores indicating greater ChatGPT knowledge (range: 0–8; M = 2.65, SD = 1.50). Reflecting criticisms from some scholars regarding collapsing “don’t know” responses with inaccurate responses when measuring knowledge, we conducted an additional analysis excluding “don’t know” responses. The results remained consistent with the original findings, showing no changes in the significance of variables and only slight adjustments in a few coefficient sizes. Our reported results are based on the version in which inaccurate answers and “don’t knows” are collapsed, which aligns with the majority of studies in this area. The additional analysis is available on the Open Science Framework at https://osf.io/jfsy6.
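The scoring rule and the robustness check can be illustrated with the following sketch; the answer key and column names are hypothetical placeholders rather than the authors’ actual coding script.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey.csv")  # assumed file name

# Hypothetical answer key: item column -> correct response ("True"/"False").
answer_key = {
    "k_ai_software": "True",     # "ChatGPT is an artificial intelligence software."
    "k_biased": "True",          # "ChatGPT can generate biased responses."
    "k_first_chatbot": "False",  # "ChatGPT is the first chatbot."
    "k_by_google": "False",      # "ChatGPT is designed by Google."
    # ... the remaining four items follow the same pattern
}

# Main coding: correct = 1; incorrect and "Don't know" = 0; sum into the knowledge index.
correct = pd.DataFrame({col: (df[col] == ans).astype(int) for col, ans in answer_key.items()})
df["chatgpt_knowledge"] = correct.sum(axis=1)

# Robustness check: treat "Don't know" as missing instead of incorrect.
correct_dk = pd.DataFrame({
    col: np.where(df[col] == "Don't know", np.nan, (df[col] == ans).astype(float))
    for col, ans in answer_key.items()
})
df["chatgpt_knowledge_nodk"] = correct_dk.sum(axis=1, min_count=1)
```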

Personal Relevance

Based on previous research (e.g., Cui & Wu, 2021), personal relevance (involvement with ChatGPT) was measured by asking respondents to what extent they agreed or disagreed with the statement “ChatGPT would greatly affect my life” (1 = strongly disagree to 5 = strongly agree; range: 1–5; M = 3.12, SD = 1.11).

Interest in New Technologies

Respondents were asked to indicate how interested they are in new media technologies (range: 1–5; M = 3.42, SD = 1.14).

Political Variables

First, to measure political interest, respondents were asked to indicate how interested they are in politics on a 5-point scale ranging from 1 (not interested at all) to 5 (very interested; range: 1–5; M = 3.06, SD = 1.32). To measure political ideology, respondents were asked to place themselves on a 7-point political ideology scale ranging from 1 (strong liberal) to 7 (strong conservative; range: 1–7; M = 4.02, SD = 1.73).

Demographics

We also controlled for demographic variables, as they can also influence one’s risk and benefit perceptions regarding ChatGPT. These variables include age (M = 41.75, SD = 16.59), gender, education, race, and income. Detailed breakdowns for each variable are presented in Table 1. For the gender variable, there were insufficient numbers to permit statistical analysis for the nonbinary group; thus, we dichotomized the variable without including them, consistent with previous research. Lastly, we present the correlations of all variables in Table 2.

Table 2

Zero-Order Correlations

Variable                              1        2        3        4        5        6        7        8        9       10       11       12       13
1. Age
2. Gender                           .08*
3. Education                        .21***  −.09**
4. Income                           .12***  −.12***   .46***
5. Race (White)                     .25***   .05      .09**    .17***
6. Benefit perception              −.23*    −.06      .03      .03     −.07*
7. Risk perception                 −.10**   −.04      .13***   .06     −.04      .16***
8. Previous ChatGPT use            −.26***  −.23***   .06*     .06     −.04      .30***   .17***
9. ChatGPT knowledge               −.07*    −.20***   .15***   .07*    −.04      .09**    .24***   .18***
10. Science trust                   .02      .05     −.04      .03      .05     −.10**   −.43***  −.12***  −.18***
11. Interest in new technologies   −.25***  −.22***   .08*     .07*    −.14***   .43***   .11***   .27***   .20***   .01
12. Personal relevance             −.31***  −.05     −.01     −.00      .10**    .58***   .20***   .29***   .09**   −.16***   .39***
13. Political interest              .12***  −.16***   .21***   .16***   .04      .19***   .21***   .15***   .21***  −.12***   .37***   .20***
14. Political ideology              .16***  −.05     −.01     −.01      .07*    −.05      .02     −.03     −.01     −.26***  −.05     −.01     −.00

Note. Cell entries are two-tailed zero-order correlation coefficients. For dichotomous variables, Pearson’s point-biserial correlations were used.
*p < .05. **p < .01. ***p < .001.

Results

First, we computed variance inflation factors and the Durbin–Watson statistic to check the assumptions of ordinary least squares (OLS) regression. Variance inflation factors were all below 1.5, and the Durbin–Watson value of 2.05 falls between 1.5 and 2.5, indicating neither multicollinearity nor autocorrelation issues. Additionally, the assumption of homoscedasticity was satisfied, which altogether suggests that the assumptions of OLS regression are met.
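For readers who wish to reproduce these diagnostics, a sketch using Python’s statsmodels is shown below; the file and variable names are placeholders for the measures described above, not the authors’ actual analysis script.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Assumed file name and placeholder column names; listwise deletion for simplicity.
df = pd.read_csv("survey.csv").dropna()
predictors = ["age", "female", "education", "white", "income", "science_trust",
              "tech_interest", "personal_relevance", "political_interest",
              "ideology", "prior_use", "chatgpt_knowledge"]

X = sm.add_constant(df[predictors])
fit = sm.OLS(df["risk_perception"], X).fit()

# Variance inflation factor per predictor (the article reports all values below 1.5).
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))

# Durbin-Watson statistic on the residuals (values near 2 indicate no autocorrelation).
print("Durbin-Watson:", round(durbin_watson(fit.resid), 2))
```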

Then, to test all the hypotheses and research questions, we conducted a series of OLS regression analyses. RQ1 explored the relationship between ChatGPT knowledge and people’s benefit (RQ1a) and risk (RQ1b) perceptions of ChatGPT. As presented in Table 3, knowledge about ChatGPT was not significantly associated with benefit perception (B = −.01, p = .53), but it was positively associated with risk perception (B = .06, p < .001).
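A corresponding sketch of the two regression models (one per outcome, mirroring the columns of Table 3) might look as follows, again with placeholder variable names:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv").dropna()  # assumed file name; placeholder column names below
predictors = ["age", "female", "education", "white", "income", "science_trust",
              "tech_interest", "personal_relevance", "political_interest",
              "ideology", "prior_use", "chatgpt_knowledge"]
X = sm.add_constant(df[predictors])

# One OLS model per outcome, as in Table 3 (unstandardized B, standard error, p value).
for outcome in ["risk_perception", "benefit_perception"]:
    fit = sm.OLS(df[outcome], X).fit()
    print(f"--- {outcome} ---")
    print(fit.summary2().tables[1][["Coef.", "Std.Err.", "P>|t|"]])
    print("Adjusted R2:", round(fit.rsquared_adj, 3))
```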

H1 predicted that trust in science would be positively related to benefit perception (H1a) and negatively related to risk perception about ChatGPT (H1b). As presented in Table 3, trust in science was not significantly associated with benefit perception (B = −.02, p = .29), while it was negatively associated with risk perception (B = −.22, p < .001). In other words, those who trust science/the science community perceived less risk associated with the technology than their counterparts. Thus, H1 is partially supported.

H2 predicted that personal relevance would be positively associated with both benefit (H2a) and risk (H2b) perceptions of ChatGPT. As presented in Table 3, personal relevance was positively associated with both benefit (B = .35, p < .001) and risk (B = .05, p = .02) perceptions. That is, those who believe that ChatGPT will have a great impact on their lives tend to recognize both the risks and benefits associated with it compared to those who do not. Thus, H2 is supported.

H3 predicted that interest in new media technologies would be positively related to benefit perception while negatively related to risk perception about ChatGPT. As indicated in Table 3, the results indicate a strong positive association between interest in new media technologies and benefit perception (B = .18, p < .001), while there was no significant association with risk perception (B = −.02, p = .40). In essence, individuals with a greater interest in new media technologies tend to perceive greater benefits from the technology, yet they do not necessarily hold greater risk perceptions compared to their counterparts. Thus, H3 is partially supported.

Table 3

Predicting Risk and Benefit Perception About ChatGPT

Variable                               Risk perception    Benefit perception
Demographics
  Age                                  −.003 (.00)*       .00 (.00)
  Gender (female)                      .06 (.04)          .07 (.05)
  Education                            .04 (.02)**        −.00 (.02)
  Race (White)                         −.01 (.05)         .03 (.05)
  Income                               .00 (.01)          .003 (.01)
Value predispositions and other heuristics
  Trust in science                     −.22***            −.02 (.02)
  Interest in new technologies         −.02               .18 (.02)***
  Personal relevance                   .05*               .35 (.02)***
  Political interest                   .07***             .01 (.02)
  Ideology (high = conservative)       −.03*              −.02 (.010)
Previous usage and knowledge
  Previous ChatGPT usage               .08 (.05)          .26 (.16)***
  ChatGPT knowledge                    .06 (.01)***       −.01 (.02)
N                                      995                995
Adjusted R2 (%)                        25.2               39.1

Note. Entries are unstandardized regression coefficients, with standard error in parentheses.
†p < .10. *p < .05. **p < .01. ***p < .001.

Lastly, we explored whether and how political values (RQ2) and demographic factors (RQ3) influence one’s risk and/or benefit perceptions. First, none of the demographic or political factors were related to one’s benefit perception of ChatGPT. However, several demographic and political factors did influence risk perception of ChatGPT. Specifically, we found that younger (B = −.003, p = .01), more educated (B = .04, p = .009), and more liberal respondents (B = −.03, p = .02), as well as those with higher political interest (B = .07, p < .001), held higher risk perceptions of the technology compared to their counterparts. Thus, the expectation derived from prior research that socially dominant groups would be less likely to perceive risks associated with ChatGPT was not borne out; instead, we found a significant negative relationship between age and risk perception toward ChatGPT.

Discussion

ChatGPT has the potential to revolutionize the way we interact with our environment. From customer service solutions to personal assistants, the technology is being adopted by more and more companies, allowing us to communicate with machines in a more natural way. Its applications are expected to be used in areas like business, education, and health care as well. However, little is known about public perception of ChatGPT. Therefore, we offer initial evidence of how people understand this technology, particularly focusing on their risk and benefit perceptions and the factors influencing those perceptions. Investigating these questions has enabled us to gain an understanding of how people perceive this technology, providing insight into how we should coexist with it in the future.

First of all, we found that those who trust science less tend to perceive higher risks associated with ChatGPT compared to those who trust science more. This demonstrates that people’s attitudes or perceptions of new technology are linked to the degree to which they trust science in general. In the early stages of technology development, when much of its underlying mechanisms, evolution, and social impacts are unknown, those with less trust in science tend to have more negative views of the technology. Fundamentally, these negative concerns are understandable, as the benefits of science have not always been equally distributed across society, often bypassing historically disadvantaged communities. For example, numerous studies show that a wide range of AI applications perpetuate harmful stereotypes and racist beliefs (e.g., Akter et al., 2021; Obermeyer et al., 2019), and similar concerns have been raised about ChatGPT (e.g., Deshpande et al., 2024; Kidd & Birhane, 2023). Given the relationship between trust in science and people’s perceptions of emerging technology, reducing negative perceptions or concerns about technology requires fundamental efforts to increase trust in science.

Our findings also indicate that demographic and political factors play a role in people’s risk perception of ChatGPT. Specifically, those who are more interested in politics, more liberal, more educated, and more knowledgeable about ChatGPT perceive higher risks than their counterparts. These findings challenge the traditional scientific literacy model (i.e., knowledge deficit model), which assumes that negative perceptions of science/technology are largely due to a lack of knowledge (Miller, 1998). Indeed, those who tend to have high political interest, are more educated, are more knowledgeable, and have a critical mindset (termed “critical engagers” by Cámara et al., 2018) exhibit increased risk perceptions toward the technology. This heightened awareness has multiple psychological underpinnings.

First, more educated and knowledgeable people typically display enhanced cognitive complexity, enabling them to discern and process a broader range of information, including potential risks. Additionally, a critical mindset encourages the evaluation of various perspectives, extending beyond mere convenience and leading to heightened sensitivity and awareness of technological risks. Such individuals often employ more advanced information processing strategies, allowing them to assess potential dangers posed by technology. Consequently, this group is likely to reject “blind enthusiasm,” opting instead for a more scrutinized stance toward scientific developments (Cámara et al., 2018). This phenomenon presents a challenge to the “axiom of Public Understanding of Science (PUS),” which posits that “the more you know, the more you love it” (see Bauer, 2008).

Overall, our findings indicate that the knowledge deficit model alone is insufficient to understand how nonexperts form their judgments about generative AI technology, as suggested by some scholars (Hansen et al., 2003; Scheufele & Lewenstein, 2005). This highlights the importance of considering the cognitive miser model in understanding public attitudes toward generative AI technology. At the same time, our findings contribute to the growing scholarly emphasis on how individuals’ existing values and beliefs influence opinion formation about technologies (Brossard & Nisbet, 2007; Ho et al., 2019).

These findings also have practical implications for the scientific community and scientists. Rather than disregarding concerns about potential risks associated with ChatGPT, it is essential for the scientific community and tech companies to acknowledge these concerns and maintain transparency with the public. This proactive approach aims to cultivate confidence in the technology, recognizing that perceived risks are not rooted in ignorance or baseless fear. Instead, they often stem from forward-thinking and sophisticated considerations. While risk perception regarding ChatGPT varied significantly across the aforementioned factors, the perception of benefits associated with ChatGPT remained relatively consistent, except for the effects of one’s previous experience with ChatGPT, interest in new technologies, and personal relevance. That is, individuals who have already used ChatGPT, have a high interest in new technologies, and perceive it as highly relevant to their lives are likely to feel excited about its development, leading to a greater perception of its benefits. Lastly, unlike the antecedents of risk perception, those of benefit perception did not vary across any of the demographic (age, gender, education, and income) or political (political interest and political ideology) factors; we briefly speculate on this below.

These different patterns of antecedents predicting risk and benefit perceptions confirm that risk and benefit perceptions are not two poles of the same continuum but rather separate dimensions (Miller et al., 1997). In some instances, an antecedent affected only one of the two perceptions. For example, trust in science was negatively associated with risk perception but showed no significant association with benefit perception, and prior experience using ChatGPT increased benefit perception while having no effect on risk perception, highlighting that actual experience triggers different perceptual processes (e.g., Taylor & Todd, 1995). In other instances, an antecedent predicted both perceptions in the same direction. For example, personal relevance was a positive predictor of both risk and benefit perceptions, indicating that those who believe ChatGPT will have a great impact on their lives tend to hold both higher risk and higher benefit perceptions than their counterparts. Furthermore, benefit perceptions were predicted by just three factors (previous use, interest in new technologies, and personal relevance), while risk perceptions were predicted by a wider set of factors, including political variables. This finding aligns with previous studies indicating that benefit perception is largely “experientially driven” (e.g., whether individuals have used a technology before and whether it reduced their workload), involving relatively simple and intuitive processing, whereas risk perception results from more complex cognitive information processing (see Cox & Cox, 2001; Fischer & Frewer, 2009; Verbeke et al., 2005). Because this interpretation is based on previous studies and we did not directly test these mechanisms, further work is needed to more adequately explain the asymmetry between risk and benefit perceptions.

Several limitations of the present study point to directions for future research. First of all, there is considerable variation in people’s familiarity with this technology, as it is fairly new and still in development. This implies that as people learn more about the technology and become more familiar with it (a large proportion of our sample reported that they had not yet used it), their risk and benefit perceptions, as well as their attitudes and behavioral intentions toward it, may change. For instance, the risk and benefit perceptions of the technology were close to the scale midpoint. One reason for this may be that the technology is in its early stages and people may not yet have a clear idea about it, which could have affected the study’s results. While the rapidly evolving nature of the technology is certainly a limitation of our study, such a limitation is understandable and natural for any kind of “new” technology. Given little to no direct experience, individuals may form their risk and benefit perceptions by relying on information from the media and elite opinions. Future studies may replicate or extend this study after more people get to know and become more familiar with ChatGPT.

Second, further research is needed to uncover the underlying psychological mechanisms behind our findings. Due to the nascent stage of this line of research, our focus was primarily on exploring how a wide range of variables previously known to be correlated with technology acceptance relate to the perceptions of risks and benefits associated with the technology, rather than on directly testing the psychological mechanisms. However, this approach has limitations when it comes to explaining certain observed patterns in our study, such as why individuals with a higher level of political interest perceived more risks related to ChatGPT. In this way, our research is informative given the nascent stage of ChatGPT-related studies, but it is not theoretically groundbreaking. Future studies can address these limitations by employing structural equation modeling within a survey design or by conducting focus-group interviews to delve deeper into how people perceive this technology and the reasons behind their perceptions.

Third, while the knowledge deficit model primarily focuses on “scientific knowledge and information,” our emphasis lies on “technological knowledge,” strictly speaking. Nonetheless, this discrepancy is not critical, as technological knowledge in this context relates to understanding how ChatGPT—generative AI—operates, which is inherently intertwined with the functioning of science. Furthermore, previous studies have found that “technological knowledge” is also linked to perceptions of risk and benefit associated with the technology, as presented above.

Fourth, while the demographic distribution of the sample collected from Dynata may resemble that of the population, the sample may still be subject to self-selection bias, given the monetary incentives provided by the survey company and differences in interest in this topic.

Lastly, due to the cross-sectional nature of the data, this study is unable to establish causal relationships between the key variables. While this approach is typical for the nascent stage of research, and we are not particularly interested in causal relationships at this stage (given the early development of the technology), future research could employ a panel survey design or experimental methodology to more effectively assess causal relationships as people become more familiar with ChatGPT.

Conclusion

As ChatGPT is a relatively new technology, scholars and society have yet to reach a consensus on how much people should be concerned about it. Therefore, the purpose of this study is not to determine how to increase or decrease risk and benefit perceptions but rather to identify and test social and demographic factors that predict the risk and benefit perceptions of this particular technology. This will help policymakers understand how different people perceive the risks and benefits of ChatGPT differently, which can inform policy directions or priorities. It is important to note that the perceived risks and benefits of a new technology can often be more influential than the actual risks and benefits in terms of application and policy considerations. Our findings suggest that risk perceptions about the technology are not simply driven by fear, myth, or a lack of knowledge; in fact, educated and knowledgeable individuals exhibited higher levels of risk perception than their counterparts. If the scientific community and tech companies naively assume that the public will automatically endorse a technology once they use it and recognize its benefits, that assumption may prove mistaken. While tech companies may not prioritize gaining public trust, as exemplified by the case of OpenAI (e.g., OpenAI being accused of secret web-scraping; Creamer, 2023), ignoring public trust could produce negative perceptions of the technology or harm the brand image in the long term.

Taken together, it is essential for the scientific community and tech companies to take these concerns into account and be transparent with the public in order to build trust in the technology and, ultimately, use it to benefit our society.

Appendix
ChatGPT Knowledge Items

Table 4

Item (response options: True / False / Don’t know)

ChatGPT is an artificial intelligence software.
ChatGPT can provide a personalized response.
ChatGPT can generate biased responses.
ChatGPT is the first chatbot.
ChatGPT is designed by Google.
ChatGPT cannot give you real-time data.
ChatGPT can search the internet.
ChatGPT cannot take responsibility for the content.


Received February 28, 2024
Revision received July 12, 2024
Accepted July 28, 2024