
A Longitudinal Analysis of the Willingness to Use ChatGPT for Academic Cheating: Applying the Theory of Planned Behavior

Volume 5, Issue 2. DOI: 10.1037/tmb0000133

Published on May 20, 2024

Abstract

The rise of chatbot technology has raised concerns about the potential for these technologies to facilitate cheating behaviors. Therefore, it is crucial to investigate the determinants of individuals’ utilization of chatbot-generated texts in a fraudulent manner. Based on the theory of planned behavior, we investigated antecedents of the intention and behavior regarding the use of chatbot-generated texts for academic cheating. Participants (N = 610) provided data on their attitudes, subjective norms, perceived behavioral control, and intentions to use chatbot-generated texts for academic cheating. Three months later, 212 of these participants reported on whether they had actually used chatbot-generated texts for academic cheating during the last 3 months. Results showed that attitudes, subjective norms, and perceived behavioral control significantly predicted intentions. Intentions, in turn, predicted future usage. Importantly, this relationship remained significant when accounting for the influence of past usage of chatbot-generated texts for academic cheating. These findings underscore the relevance of the theory of planned behavior in understanding academic cheating intentions and behavior, providing insights for potential interventions to reduce academic cheating.

Keywords: ChatGPT, academic cheating, theory of planned behavior

Funding: There was no external funding.

Disclosures: The authors report there are no competing interests to declare.

Data Availability: The data have not been published previously. The data are publicly available at https://osf.io/hgd6r/?view_only=294f0c280dae43acad1595d3a71e4f3f

Open Access License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC-BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.

Correspondence concerning this article should be addressed to Tobias Greitemeyer, Department of Psychology, Universität Innsbruck, Universitätsstraße 5–7, 6020 Innsbruck, Austria. Email: [email protected]


Video Abstract


The issue of academic cheating remains prevalent in educational environments, and the emergence of sophisticated language models, like ChatGPT, has heightened concerns about their role in facilitating dishonest behaviors. The capability to produce high-quality text makes ChatGPT an enticing resource for students looking to effortlessly complete their assignments. In addition, its ability to generate text quickly and accurately increases the temptation for students to use it for unethical purposes, such as plagiarizing content or fabricating responses.

Indeed, the widespread use of ChatGPT and similar artificial intelligence (AI) technologies in education has raised concerns about potential harms to students, institutions, and society. Students relying on chatbot-generated texts may experience limited learning opportunities, decreased critical thinking skills, and academic penalties for dishonest practices. Institutions risk reputational damage and erosion of trust if academic cheating becomes widespread, compromising academic credentials and qualifications. Finally, society as a whole may be affected by the normalization of academic dishonesty facilitated by AI, as it could undermine broader values and ethical standards, leading to increased dishonest behavior in other domains as well. Hence, it becomes essential to investigate the determinants influencing both the intention and actual usage of chatbot-generated texts without appropriate attribution, constituting an act of academic cheating.

In the present research, we apply the theory of planned behavior (Ajzen, 1991) to understand the psychological factors that drive students’ decisions to cheat. The theory of planned behavior posits that an individual’s intentions to perform a specific behavior are influenced by their attitudes toward the behavior, their perception of social norms, and their perceived behavioral control. These intentions, in turn, predict whether the individual will actually engage in the behavior. We thus reasoned that a positive attitude toward using chatbot-generated texts for academic cheating, the perception that using chatbot-generated texts for academic cheating is socially accepted within one’s peer group or academic environment (i.e., social norms), and the belief that one possesses the necessary knowledge and skills to utilize chatbot-generated texts (i.e., perceived behavioral control) lead to a higher intention to use chatbot-generated texts for academic cheating. The intention to use chatbot-generated texts for academic cheating should, in turn, be the proximal determinant of whether an individual will actually engage in such behavior in the future.

To examine these ideas, participants were asked to report their attitudes, social norms, perceived behavioral control, and intentions to use chatbot-generated texts for academic cheating during an initial data collection. After a 3-month interval, they were again surveyed to determine whether they had actually used chatbot-generated texts for academic cheating. Of particular importance, participants were also queried about their past usage of chatbot-generated texts for academic cheating during the first data collection. We postulated that intentions would still be connected with later actual usage even when accounting for the impact of past usage. This pattern of findings would provide strong support for the central prediction of the theory of planned behavior that intentions determine actual behavior.

The Theory of Planned Behavior and Cheating Behavior

The theory of planned behavior aims to explain and predict human behavior (Ajzen, 1991; Armitage & Conner, 2001) and has been widely applied in various fields, including health promotion, consumer behavior, environmental psychology, and understanding social behaviors (Ajzen, 2020; Bosnjak et al., 2020). Applied to understanding and addressing academic cheating, the theory of planned behavior would predict that how students perceive academic dishonesty (attitudes), the belief that their peers or the academic environment condone or even encourage cheating (subjective norms), and their perception of their ability to cheat (perceived behavioral control) influence their intentions and in turn their subsequent actual cheating behavior.

A number of studies have tested the theory of planned behavior in the context of dishonest behavior. For example, Beck and Ajzen (1991) showed that attitudes, subjective norms, and perceptions of behavioral control were related to intentions, which in turn were predictive of actual cheating, lying, and shoplifting behavior. Using structural equation modeling, Stone et al. (2009) found that attitudes, subjective norms, and perceptions of behavioral control were significantly related to intentions to engage in academic misconduct, and intentions were significantly related to actual cheating behavior. These findings have been conceptually replicated in a study examining cheating behavior during online exams (Ababneh et al., 2022). Another study (Wang et al., 2022) found that the intention to engage in internet ethical behaviors is influenced by one’s attitude toward ethical behavior, subjective norms, and perceived behavioral control. Similar results were reported in a study that utilized undergraduate students from a Malaysian public university as participants (Yusliza et al., 2022). In a French context, Hendy and Montargot (2019) found that attitude, subjective norms, and perceived behavioral control explained 33% of variance in academic dishonesty. In cross-cultural research, comprising university students from seven countries (Poland, Ukraine, Romania, Turkey, Switzerland, United States, and New Zealand), attitudes and perceived behavioral control significantly predicted students’ intentions to participate in academic dishonesty, whereas subjective norms had a relatively minor impact (Chudzicka-Czupała et al., 2016).

The Temporal Connection Between Intentions and Behavior and the Role of Past Behavior

These studies suggest that the theory of planned behavior is a valid and effective framework for understanding and predicting dishonest behavior in various contexts. On the other hand, most of the prior work had limitations related to the assessment of intentions and actual behavior and the control for past behavior. One limitation is that some of the studies relied solely on intentions and did not assess actual dishonest behavior (e.g., Chudzicka-Czupała et al., 2016; Wang et al., 2022; Yusliza et al., 2022), while other studies assessed behavior without measuring intentions (e.g., Hendy & Montargot, 2019).

When actual behavior and intentions (and attitudes, subjective norms, and perceptions of behavioral control) were assessed, a cross-sectional research design was typically employed (e.g., Ababneh et al., 2022; Juan et al., 2022; Stone et al., 2009; Uzun & Kilis, 2020; Yu et al., 2021; Yusliza et al., 2022). That is, intentions and actual behavior were assessed at the same time. However, when measured concurrently, the direction of the relationship between intention and actual behavior remains ambiguous. Because the theory of planned behavior asserts that intentions predict actual behavior, behavior (and its reference period) should be assessed at a later time than intentions. A longitudinal design establishes a clear temporal order, allowing stronger conclusions about whether intentions precede and drive subsequent behavior and whether the influence of intentions on subsequent actions endures over time. A further advantage of separating the assessment of intentions and actual behavior is the reduction of common method bias, because participants are less likely (and able) to consciously connect their responses.

In these aforementioned cross-sectional studies, the absence of controlling for past behavior is a further limitation. Prior research has consistently revealed a strong association between past cheating behavior and subsequent instances of cheating (e.g., Harding et al., 2007; Passow et al., 2006; Whitley, 1998). Controlling for past behavior offers several advantages. When predicting intentions from attitudes, subjective norms, and perceptions of behavioral control, incorporating past behavior into the analysis can explore the potential influence of additional factors on intention formation that are not fully captured by attitudes, subjective norms, and perceived behavioral control alone. When predicting actual behavior from intentions, incorporating past behavior helps to isolate the unique predictive power of intentions in determining future behavior. As the present study utilizes a longitudinal design, it offers the further advantage of controlling for past behavior.

Notably, Beck and Ajzen (1991) did assess actual behavior several months later than intentions were measured, and they also controlled for past behavior. However, the small number of participants (N = 34) who provided reports of their actual behavior in the second phase limits the statistical power and generalizability of the findings. Therefore, we considered it essential to conduct a study with a large sample size to ensure sufficient statistical power. In addition, we applied the theory of planned behavior to a novel context—the use of advanced language models like ChatGPT for academic misconduct. Applying the theory of planned behavior to this novel context can extend the theory’s applicability beyond traditional scenarios by exploring how attitudes, subjective norms, and perceived behavioral control influence intentions and behavior in a technologically mediated academic environment.

The Present Research

Based on the theory of planned behavior, we hypothesized that attitudes, subjective norms, and perceptions of behavioral control would predict the intention to use chatbot-generated texts for academic cheating. In addition, we anticipated that these intentions would be the proximal predictor of the actual use of chatbot-generated texts for cheating purposes. Importantly, the study incorporates a longitudinal design, assessing behavior 3 months after measuring intentions, which provides insights into the temporal dynamics of the intention–behavior relationship. The longitudinal design also allows us to control for the impact of past usage of chatbot-generated texts for academic cheating, so that the unique contribution of intentions to the prediction of future behavior can be disentangled.

We preregistered (https://aspredicted.org/HMM_MXF) the following hypotheses:

Hypothesis 1: Attitudes, subjective norms, and perceptions of behavioral control predict the intention to use chatbot-generated texts for academic cheating.

Hypothesis 2: Intentions, in turn, are the direct antecedent of the actual use of chatbot-generated texts for academic cheating, even when controlling for the past use of chatbot-generated texts for academic cheating.

To test our hypotheses, we conducted correlation and regression analyses. In ancillary analyses, we report on the results of structural equation modeling that provide a test of the full theory of planned behavior model. According to the model, not only intentions but also perceived behavioral control may directly influence behavior (Ajzen, 1991). Hence, in the full model, attitudes, subjective norms, and perceptions of behavioral control were treated as predictor variables of intention, while intention and perceptions of behavioral control were predictor variables of behavior. Nevertheless, given the established precedence of intentions over perceived behavioral control in predicting behavior (e.g., Kaiser & Gutscher, 2003), our hypotheses focused solely on the direct relationship between intention and future behavior. The study received ethical approval from the internal review board for ethical questions by the scientific ethical committee of our university (Certificate of good standing, 90/2022).

Method

Participants

All students of an Austrian university, which has a total of almost 28,000 students (about 15,000 female and 13,000 male students), were invited via the university’s mailing list. At the beginning of the survey, they were informed about the purpose of the study, and voluntary consent was obtained. It was stressed that all responses would be treated confidentially and that no conclusions about individual participants could be drawn. The first data collection took place at the beginning of the summer semester, in March 2023. Six hundred twelve students completed the questionnaire (388 women, 218 men, six diverse; mean age: 23.8, SD = 5.92). A sensitivity analysis showed that with this sample size, the study had 80% statistical power to detect a correlation of r ≥ .11, corresponding to small (and larger) effects. Three cash prizes of 50 Euros were raffled among all participants. At the end of the summer semester, in June 2023, all individuals who had participated at Time 1 were invited to fill out the second questionnaire. In total, 212 individuals (144 women, 67 men, one diverse; mean age: 24.7, SD = 7.99) completed both questionnaires. A sensitivity analysis showed that with this sample size, the study had 80% statistical power to detect effects explaining ρ² = .05 of the variance in the criterion variable (actual usage of chatbot-generated texts for academic cheating) by the two predictor variables (intentions and past behavior) in a multiple regression. One additional cash prize of 50 Euros was raffled among all participants who completed both questionnaires.
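For readers who want to check these sensitivity figures, the calculation can be sketched in Python with scipy alone. The article does not state which power software was used, so the functions below are illustrative assumptions: the correlation case uses the Fisher z approximation and the regression case a noncentral F test.

```python
# Sensitivity sketch: smallest effect detectable with 80% power at alpha = .05.
# Uses scipy only; exact cutoffs depend on the conventions of the software
# actually used by the authors (not reported), so treat results as approximate.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

ALPHA = 0.05
TARGET_POWER = 0.80

def power_correlation(r, n, alpha=ALPHA):
    """Two-sided power for testing H0: rho = 0 via the Fisher z approximation."""
    z_r = np.arctanh(r)                     # Fisher-transformed effect size
    se = 1.0 / np.sqrt(n - 3)               # standard error of z
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - z_r / se) + stats.norm.cdf(-z_crit - z_r / se)

def power_regression_r2(r2, n, k, alpha=ALPHA):
    """Power of the overall F test for a multiple regression with k predictors."""
    f2 = r2 / (1 - r2)                      # Cohen's f^2
    ncp = f2 * n                            # one common noncentrality convention
    df1, df2 = k, n - k - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return stats.ncf.sf(f_crit, df1, df2, ncp)

# Smallest correlation detectable with N = 610 (Time 1 sample)
r_min = brentq(lambda r: power_correlation(r, 610) - TARGET_POWER, 0.01, 0.5)
# Smallest R^2 detectable with N = 212 and 2 predictors (Time 2 regression)
r2_min = brentq(lambda r2: power_regression_r2(r2, 212, 2) - TARGET_POWER, 0.005, 0.5)

print(f"N = 610: detectable r  >= {r_min:.3f}")   # the paper reports r >= .11
print(f"N = 212: detectable R2 >= {r2_min:.3f}")  # paper reports rho^2 = .05; value
                                                  # shifts with the ncp convention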

There were some gender differences. Women had lower scores on attitudes, perceived behavioral control, and intentions than men (no gender differences were found for subjective norms, past behavior, and actual usage of chatbot-generated texts for academic cheating). However, when controlling for gender in the multiple regression analysis, the pattern of the main findings did not change. Hence, this variable is not considered further.

Age of participants was significantly negatively correlated with attitudes, subjective norms, intentions, and actual usage of chatbot-generated texts for academic cheating (it was not significantly correlated with perceived behavioral control and past behavior). However, as controlling for participant age in the multiple regression analysis did not change the pattern of the findings, this variable is also not considered further.

Procedure and Measures

After participants submitted their demographic data, they were informed about the development of new AI tools, like ChatGPT, that are capable of generating artificial text. Using algorithms and drawing on a vast database, these tools create texts that anyone can use; for instance, students could have their seminar papers (or parts of them) written by the computer. In addition, participants were made aware that these artificially generated texts are original, not reliant on preexisting sources, and often indistinguishable from texts authored by humans. They were then shown a sample text on measures to combat inflation, generated with ChatGPT.

Next, participants’ past use of chatbot-generated texts was assessed. Concretely, they were asked: “How often have you used artificial intelligence like ChatGPT without indicating it (i.e., not specifying that you used artificial intelligence) in the last 12 months for your seminar papers?” The scale included the following options: never, once, twice, three times, four times, five times, and more than five times.

Subsequently, participants’ attitudes toward the use of artificial intelligence without indication were assessed. The measure was adapted from Beck and Ajzen (1991) and involved five evaluative semantic differential scales: Good–Bad, Pleasant–Unpleasant, Foolish–Wise, Useful–Useless, Unattractive–Attractive (McDonald’s ω = .86). The response scale for these and all following items ranged from 1 to 7.

Subjective norms related to the use of artificial intelligence without indication were measured using three items, also adapted from Beck and Ajzen (1991): “If I were to use artificial intelligence without indicating it in a seminar paper, most people who are important to me would not care–disapprove” (recoded), “None of the people who are important to me think it’s okay to use artificial intelligence in seminar papers without indicating it (agree–disagree),” and “Most people who are important to me would look down on me if I used artificial intelligence without indicating it in a seminar paper (likely–unlikely).” Scale reliability was McDonald’s ω = .78.

Perceived behavioral control was assessed with two items: “For me, using artificial intelligence in seminar papers without indication is easy–difficult” (recoded) and “If I wanted to, I could use artificial intelligence for a seminar paper without indicating it (true–false)” (recoded). Both items were significantly correlated, r(610) = .43, p < .001.

Intentions were assessed with the following five items: “I could imagine using artificial intelligence without indicating it in a seminar paper, even if I hadn’t planned to do so beforehand (likely–unlikely)” (recoded), “Even if I had a good reason, I couldn’t bring myself to use artificial intelligence in a seminar paper without indicating it (likely–unlikely),” “If given the opportunity, I would use artificial intelligence without indicating it in a seminar paper (likely–unlikely)” (recoded), “I would never use artificial intelligence without indicating it in a seminar paper (true–false),” and “Perhaps in the future, I might use artificial intelligence in a seminar paper without indicating it (true–false)” (recoded). Scale reliability was McDonald’s ω = .92.
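The article reports McDonald’s ω as the reliability index for the multi-item scales. As a point of reference, ω for a unidimensional scale follows from the standardized loadings and residual variances of a one-factor model; the short sketch below uses made-up loadings purely to illustrate the formula and does not reproduce the study’s estimates.

```python
# McDonald's omega for a unidimensional scale:
#   omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of residual variances]
# The loadings below are illustrative placeholders, not values from this study.
import numpy as np

def mcdonalds_omega(loadings, residual_variances):
    loadings = np.asarray(loadings, dtype=float)
    residuals = np.asarray(residual_variances, dtype=float)
    common = loadings.sum() ** 2          # variance attributable to the common factor
    return common / (common + residuals.sum())

# Hypothetical standardized loadings for five items
lam = [0.85, 0.82, 0.80, 0.78, 0.76]
theta = [1 - l ** 2 for l in lam]         # residual variances implied by standardized loadings

print(f"omega = {mcdonalds_omega(lam, theta):.2f}")
```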

At Time 2, the sole measure employed was the use of chatbot-generated texts for academic cheating. Participants were asked: “How often have you utilized artificial intelligence like ChatGPT without indicating it (i.e., not specifying that you used artificial intelligence) in the past 3 months for your seminar papers?” As at Time 1, the scale was from never to more than five times. The “past 3 months” was used to ensure responses were confined to behaviors since the initial questionnaire.

The data are publicly available at https://osf.io/hgd6r/?view_only=294f0c280dae43acad1595d3a71e4f3f.

Results

Descriptive statistics and intercorrelations of all measures are shown in Table 1. As can be seen, attitude, subjective norm, and perceived behavioral control were all positively correlated with the intention to use chatbot-generated texts for academic cheating. To examine whether attitude, subjective norm, and perceived behavioral control independently predict the intention to use chatbot-generated texts for academic cheating, a multiple regression was performed on the data. The overall regression was significant, F(3, 606) = 239.00, p < .001, R² = .54. All predictor variables received a significant regression weight (attitude: β = .44, p < .001; subjective norm: β = .22, p = .001; perceived behavioral control: β = .29, p < .001). Thus, Hypothesis 1 received strong support from the data.

Table 1
Means, Standard Deviations, and Bivariate Correlations

Variable                          M     SD    1       2       3       4       5
1. Attitude                       3.67  1.53
2. Subjective norm                4.30  1.66  .47***
3. Perceived behavioral control   4.12  1.74  .37***  .33***
4. Intention                      3.55  1.83  .64***  .52***  .52***
5. Past chatbot use               1.53  1.27  .38***  .23***  .24***  .37***
6. Chatbot use at Time 2          1.81  1.56  .22**   .27***  .27***  .31***  .33***

Note. N = 610 for all measures except chatbot use at Time 2 (N = 212).
**p < .01. ***p < .001.
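For readers working with the public OSF data, the Hypothesis 1 regression could be reproduced along the following lines. This is a sketch only: the file name and column labels (attitude, subjective_norm, pbc, intention) are assumptions rather than the variable names actually used in the data set, and standardized betas are obtained here by z-scoring all variables before fitting.

```python
# Hypothesis 1: regress intention on attitude, subjective norm, and PBC.
# File name and column names are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("chatgpt_cheating_time1.csv")    # hypothetical filename

predictors = ["attitude", "subjective_norm", "pbc"]
outcome = "intention"

cols = predictors + [outcome]
z = (df[cols] - df[cols].mean()) / df[cols].std()  # z-score so coefficients are betas

X = sm.add_constant(z[predictors])
model = sm.OLS(z[outcome], X, missing="drop").fit()

print(model.summary())   # overall F, R^2, and per-predictor betas and p values
```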

Hypothesis 2 stated that intentions measured at Time 1 should be the direct antecedent of future use of chatbot-generated texts for academic cheating (measured at Time 2) and that this relationship remains significant when controlling for the impact of past usage of chatbot-generated texts for academic cheating. Recall that 610 individuals participated at Time 1, of whom 212 reported on their actual use of chatbot-generated texts for academic cheating at Time 2. The 212 participants were relatively similar to the 398 participants who did not complete the questionnaire at Time 2 in terms of their attitudes, subjective norms, perceived behavioral control, intention, and past behavior, multivariate F(5, 604) = 1.59, p = .161, η² = .01. There was a significant increase in the use of chatbot-generated texts from past behavior (M = 1.40, SD = 1.12) to future behavior (M = 1.81, SD = 1.56), t(211) = 3.78, p < .001, d = 1.60.

To test Hypothesis 2, a multiple regression analysis was conducted using intentions and past behavior to predict the usage of chatbot-generated texts for academic cheating at Time 2. The overall regression was significant, F(2, 209) = 17.14, p < .001, R² = .14. Past behavior received a significant regression weight, β = .24, p < .001. Importantly, intentions did so as well, β = .20, p = .004.¹ Thus, Hypothesis 2 received support from the data.
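The Hypothesis 2 regression can be sketched the same way, restricting the data to the 212 completers and again treating the file name and column labels as assumptions.

```python
# Hypothesis 2: Time 2 chatbot use predicted by Time 1 intention and past behavior.
# A sketch under the same naming assumptions as the Hypothesis 1 example above.
import pandas as pd
import statsmodels.api as sm

df2 = pd.read_csv("chatgpt_cheating_merged.csv")        # hypothetical merged T1/T2 file
cols = ["intention", "past_use", "use_t2"]
z = (df2[cols] - df2[cols].mean()) / df2[cols].std()    # standardized variables

fit = sm.OLS(z["use_t2"],
             sm.add_constant(z[["intention", "past_use"]]),
             missing="drop").fit()
print(fit.params, fit.pvalues, fit.rsquared)            # compare with the reported
                                                        # betas (.20, .24) and R^2 = .14
```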

Structural equation modeling (SEM) was performed on an exploratory (i.e., not preregistered) basis to assess the validity of the full theory of planned behavior model (Figure 1). The model provided a good fit to the data, χ²(97, N = 212) = 196.25, p < .001, comparative fit index = .952, root-mean-square error of approximation = .069. Attitude, β = .44, SE = .12, p < .001, subjective norm, β = .16, SE = .09, p = .041, and perceived behavioral control, β = .36, SE = .13, p = .002, were all significantly related to intentions to use chatbot-generated texts for academic cheating. In terms of predicting the actual usage of chatbot-generated texts (Time 2), perceived behavioral control, β = .32, SE = .17, p = .044, emerged as a significant direct predictor. Intentions, β = .09, SE = .13, p = .489, did not exhibit a significant effect.

Figure 1

Results of Structural Equation Model
Note. PBC = perceived behavioral control.
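A sketch of how the full theory of planned behavior model could be specified is given below, assuming the semopy package and lavaan-style syntax. The indicator names (att1–att5, sn1–sn3, pbc1–pbc2, int1–int5, use_t2) are placeholders for the actual columns in the OSF data, and the article does not state which SEM software was used.

```python
# Exploratory full TPB model, sketched with the semopy package (an assumption).
# Latent factors: attitude (5 items), norms (3), PBC (2), intention (5);
# Time 2 use enters as a single observed outcome.
import pandas as pd
from semopy import Model, calc_stats

desc = """
attitude  =~ att1 + att2 + att3 + att4 + att5
norms     =~ sn1 + sn2 + sn3
pbc       =~ pbc1 + pbc2
intention =~ int1 + int2 + int3 + int4 + int5
intention ~ attitude + norms + pbc
use_t2    ~ intention + pbc
"""

df = pd.read_csv("chatgpt_cheating_merged.csv")   # hypothetical filename
model = Model(desc)
model.fit(df)

print(model.inspect())     # path estimates, standard errors, p values
print(calc_stats(model))   # fit indices such as chi-square, CFI, RMSEA
```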

Discussion

With the advancements in artificial intelligence and natural language processing, language models like ChatGPT have become widely accessible. While their increasing popularity and usage present numerous opportunities, they also come with potential risks. Notably, concerns have arisen about their capacity to facilitate cheating behaviors, especially in academic settings, given their ability to produce text that closely resembles human-generated content.

The present study applied the theory of planned behavior to investigate the antecedents of the intention to use chatbot-generated texts for academic cheating and to explore the relationship between intentions and actual usage over time. As hypothesized (Hypothesis 1), attitudes, subjective norms, and perceived behavioral control significantly predicted participants’ intention to use chatbot-generated texts for academic cheating. Furthermore, the incorporation of past chatbot usage as an additional predictor in the regression analysis revealed that although past behavior significantly influenced intentions, its incremental predictive power was relatively low. Notably, when past chatbot usage was incorporated as an additional predictor, the influence of attitudes, subjective norms, and perceived behavioral control remained significant, thereby reinforcing the notion that these factors primarily drive the intention to engage in a behavior.

Hypothesis 2 was also supported, as intentions were found to be a direct antecedent of future use of chatbot-generated texts for academic cheating. Notably, most prior research applying the theory of planned behavior to dishonest and cheating behavior employed a cross-sectional design in that intentions (and attitudes, subjective norms, and perceived behavioral control) and actual behavior were assessed at the same time (e.g., Ababneh et al., 2022; Juan et al., 2022; Stone et al., 2009; Uzun & Kilis, 2020; Yu et al., 2021; Yusliza et al., 2022), making it difficult to establish directional effects. In contrast, as we used a longitudinal design, stronger conclusions are warranted that intentions are indeed predictive of behavior. Furthermore, by assessing behavior after a significant time interval, the study minimizes potential biases arising from participants’ desire to provide consistent data (i.e., to report that their behavior matches their stated intentions, even if their behavior deviates from those intentions).

Consistent with previous research (e.g., Harding et al., 2007; Passow et al., 2006; Whitley, 1998), our study found a positive relationship between past cheating behavior and future cheating behavior. However, and supporting Hypothesis 2, results also showed that the relationship between intention and future behavior remained significant even after controlling for the impact of past usage of chatbot-generated texts for academic cheating. A notable limitation of previous cross-sectional work on the application of the theory of planned behavior to dishonest and cheating behavior has been the absence of controlling for past behavior. By incorporating past behavior into the analysis, the unique predictive power of intentions in determining future behavior can be isolated.

In an exploratory analysis (i.e., not hypothesized in our preregistration), we ran structural equation modeling to examine whether not only intentions but also perceived behavioral control directly influences behavior. Consistent with previous research (e.g., Beck & Ajzen, 1991; Stone et al., 2009), perceived behavioral control emerged as a significant direct predictor of future chatbot usage for academic cheating. In general, perceived behavioral control directly influences behavior when individuals have the necessary resources and opportunities to act accordingly. However, if actual control is lacking, the impact of perceived behavioral control on behavior may diminish or become less direct (Ajzen, 2020). Given the simplicity of using AI tools like ChatGPT, individuals have high actual control over whether they choose to use chatbot-generated texts for academic cheating, which explains why perceived behavioral control directly influences the future use of chatbot-generated texts for academic cheating.

It is noteworthy that intentions did not exhibit a direct effect on chatbot usage for academic cheating in the structural equation model when controlling for the influence of perceived behavioral control. This finding underscores the idea that intentions alone may not be sufficient to predict behavior; the perceived ease or difficulty of performing the behavior is also crucial.

Theoretical and Practical Implications

Our findings have important theoretical and practical implications. By applying the theory of planned behavior to the domain of academic cheating facilitated by AI, this research extends the scope of the theory beyond its traditional applications. The findings demonstrate that the theory of planned behavior’s framework of attitudes, social norms, and perceived behavioral control is relevant for understanding complex ethical decision-making behaviors in an academic context.

The inclusion of past behavior as a predictor of intentions offers insight into the temporal dynamics of intention formation and subsequent behavior. The study demonstrates that while past behavior can impact intentions, attitudes, subjective norms, and perceived behavioral control remain primary determinants of intentions. This indicates that the theory of planned behavior can effectively account for people’s intentions, even when prior behaviors are considered.

Similarly, despite participants’ prior usage of chatbot-generated texts for academic cheating being a predictor of future behavior, intentions still exerted a significant influence on behavior beyond the impact of past conduct. This finding highlights the independent and unique role of intentions in guiding subsequent actions and strongly supports the key notion of the theory of planned behavior that individuals’ intentions play a critical role in shaping their behavior.

Overall, the findings provide some support for the theory of planned behavior. The model’s effectiveness in predicting intentions to use chatbot-generated texts highlights its applicability in understanding individual behaviors related to AI technology adoption. This validation reinforces the significance of attitudes, subjective norms, and perceived behavioral control in shaping intentions across various contexts, while intentions then serve as the proximal determinant of future behavior. Importantly, the study’s design, with a 3-month interval between measuring intentions and actual behavior, offers insights into the temporal dynamics of intentions and behavior in the context of academic cheating.

With regard to practical implications, the study’s insights can be relevant for educators and policymakers seeking to promote the responsible and effective integration of AI technologies in educational practices. Understanding the factors that influence students’ intentions to cheat can help educators and administrators identify at-risk students early on. By monitoring attitudes, social norms, and perceived behavioral control, educational institutions can implement targeted interventions to prevent cheating behaviors.

Moreover, the study findings suggest that addressing students’ attitudes, social norms, perceived behavioral control, and intentions regarding ethical decision making in educational settings can play a crucial role in reducing cheating behavior. For example, educators may foster academic integrity by promoting ethical values, providing support to develop ethical competencies, and creating a supportive learning environment. In addition, integrating peer influence strategies, such as peer-to-peer discussions and interventions emphasizing the importance of academic honesty, can further reinforce positive norms and discourage cheating. This could encourage students to resist temptation and uphold academic integrity.

Limitations and Future Directions

Despite the significant findings, some limitations should be acknowledged. First, the study relied on self-report measures, which may be subject to social desirability bias. Future research could incorporate objective measures to strengthen the study’s findings. Moreover, attrition led to a reduced sample size at the follow-up assessment, with only 212 out of the initial 610 participants providing data. However, it is noteworthy that the 212 participants who completed the questionnaire at Time 2 were relatively similar to the 398 participants who did not in terms of their attitudes, subjective norms, perceived behavioral control, intention, and past behavior. Nonetheless, it is important to acknowledge the possibility that the participants who dropped out of the study differed in certain regards from those who remained.

Another limitation of this study is its single university setting, which restricts the generalizability of the findings. The sample used may have unique demographic, cultural, or contextual factors that limit the extent to which the results can be applied to a broader population. In addition, the study’s focus on academic cheating narrows its scope to the academic setting. Future research could explore ethical decision-making processes related to AI use in diverse domains beyond academia.

The present study suggests that intentions and perceived behavioral control are primary determinants of using chatbot-generated texts for academic cheating. However, these are certainly not the only predictors. For example, personality traits have consistently emerged as significant predictors of various unethical and immoral behaviors, including cheating. Within the HEXACO model (Honesty–Humility, Emotionality, eXtraversion, Agreeableness, Conscientiousness, Openness to Experience), for instance, a recent meta-analysis confirms a strong negative association between Honesty–Humility and unethical actions like lying, cheating, and stealing (Zettler et al., 2020). Another meta-analysis examining the Big Five personality traits found that Conscientiousness is negatively associated with academic dishonesty (Giluk & Postlethwaite, 2015). Other research focused specifically on the dark side of human personality and found positive connections between all Dark Triad traits (narcissism, Machiavellianism, and psychopathy) and academic misconduct (e.g., Curtis et al., 2022; Smith et al., 2023). Corroborating these previous findings, recent research showed that Honesty–Humility and Conscientiousness (both negatively) and narcissism, Machiavellianism, and psychopathy (all positively) predicted the intention to use chatbot-generated texts for academic cheating (Greitemeyer & Kastenmüller, 2023).

Integrating personality with the theory of planned behavior offers the potential for a more comprehensive understanding of academic cheating behavior. Personality traits may directly influence behavior, regardless of intentions, thus making a unique contribution to predicting academic cheating. They can also act as moderators in the relationship between intentions and behavior. This means that even with strong intentions to cheat, individuals with specific personality traits may be more or less inclined to act on those intentions. For instance, an individual with high conscientiousness may possess a strong intention to cheat (due to other factors like positive attitudes toward cheating or perceived behavioral control), but their strong sense of duty and ethics could diminish the likelihood of following through on those intentions.

Conclusion

In conclusion, the present study shows that the theory of planned behavior can be successfully employed to explain individuals’ intentions to use AI technology for academic cheating and in turn their actual utilization of chatbot-generated texts in a fraudulent manner. By employing a longitudinal design, this study establishes a clear temporal order and provides robust evidence that intentions precede and drive future actions, with the influence of intentions on subsequent behavior persisting over time. Moreover, the finding that intentions remained a significant influence on behavior beyond the impact of past conduct emphasizes the independent and unique role of intentions in guiding subsequent actions and strongly supports the central premise of the theory of planned behavior—that individuals’ intentions play a critical and influential role in shaping their behavior. By recognizing the significant impact of intentions (and perceived behavioral control) on the use of chatbot-generated texts for academic cheating, educators and policymakers can better target interventions to encourage responsible conduct among individuals in academic settings.


Received October 4, 2023
Revision received March 15, 2024
Accepted March 16, 2024