
Social Need Fulfillment Model for Human–AI Relationships

Volume 5, Issue 4, https://doi.org/10.1037/tmb0000141

Published on Oct 21, 2024

Abstract

Humans have diverse social needs (e.g., security, relatedness), which when fulfilled can produce concrete (e.g., pleasure, satiation of a need) and symbolic outcomes (e.g., what this instance of need fulfillment means about one’s relationship with the source of fulfillment). Advances in large language models have exponentially increased the potential that intelligent agents hold for social need fulfillment. Although some evidence shows that modern intelligent agents can meet or facilitate social need fulfillment, other sources suggest that fulfillment generated by intelligent agents is less effective than that generated by humans. In this review, we introduce a model of social need fulfillment in human–artificial intelligence relationships, which proposes that the majority of humans process most of their interactions with intelligent agents automatically at first and then quickly engage in deliberative processing. In these cases, we expect only concrete outcomes of need fulfillment are obtained, as the human rationally considers that machines cannot care about them. However, in some situations (e.g., lonely human, responsive agent), deliberative processing may be bypassed, and both concrete and symbolic outcomes may be obtained, mirroring what occurs in human–human interactions. In these cases, going forward, we expect both types of outcomes will continue to be available from interactions with intelligent agents. We close with a big-picture analysis of the potential that artificial intelligence holds to meet human social needs, including its promise and potential pitfalls.

Keywords: human–machine communication, artificial intelligence, human–computer interaction, social need fulfillment, relationships

Disclosures: None of this work has been presented elsewhere in any form. The authors have no conflicts of interest to disclose. This work was not funded.

Data Availability: This is a theoretical review (i.e., it required no data, statistical analyses, or study materials). As such, no data, analyses, or study materials are available.

Open Access License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.

Correspondence concerning this article should be addressed to Laura V. Machia, Department of Psychology, Syracuse University, 765 Irving Avenue, Suite 352, Syracuse, NY 13244, United States. Email: [email protected]


Video Abstract


The U.S. Surgeon General recently released a report noting that lack of social connection is now an epidemic and that “loneliness and isolation represent profound threats to our health and well-being” (Murthy, 2023, p. 5). This trend is seen worldwide, with problematic rates of loneliness found nearly everywhere (Surkalim et al., 2022). Loneliness is so detrimental to health not just because of the day-to-day wear-and-tear of social isolation but also because of the broader implications of what it means to be isolated as a social being (e.g., perceptions of oneself as incapable of connecting; perceptions of the world as a hostile place). As such, there is a critical need to identify remedies for social disconnection and sources for obtaining social connection more broadly (Holt-Lunstad et al., 2017). Recent advances in large language models have exponentially increased the potential that interacting with intelligent agents holds for social need fulfillment, but theorists diverge in their predictions about whether intelligent agents can provide sufficient need fulfillment (e.g., Nass & Moon, 2000) or not (e.g., Cocking & Kennett, 1998). In the current article, we propose a model that reconciles these perspectives by clarifying the potential outcomes of need fulfillment (i.e., concrete and symbolic) and detailing when, and for whom, an interaction with an intelligent agent can meet social needs comparably to interactions with human sources. Put simply, instead of describing the power of human–intelligent agent interactions, we argue for describing the power of what a given human makes of their interactions with intelligent agents. Our goal is to provide a model that is generative in capturing the potential for intelligent agents to meet humans’ social needs, thus providing an additional avenue for facilitating social connection at scale.

Regarding linguistic choices, “intelligent agent” is the term we will use to refer to technology with which humans can interact and that artificially simulates human intelligence processes (e.g., learning, reasoning). Others have been more specific, speaking about the exact form of the intelligent agent (e.g., chatbots, socially assistive/companion robots), but for the purpose of model building, we will consider all of these under the same umbrella term.

Interdependence Theory and Need Fulfillment

Humans are motivated to behave in ways that ensure their core social needs are fulfilled (Finkel et al., 2014), including their needs for caregiving (i.e., giving support and protection), security (i.e., being supported, protected), sexual intimacy (i.e., engaging in sexual activity or talk), and relatedness (i.e., sharing time and activities; Fraley & Shaver, 2000; Knee et al., 2013). Some of these needs are pervasive, whereas others are specific to particular situations and interaction partners (Rusbult & Van Lange, 2003), yet according to interdependence theory, social need fulfillment broadly provides the most potent, important outcomes derived from relationships (Le & Agnew, 2001). Insofar as a given human relationship provides ample need fulfillment, the partners experience greater well-being (Patrick et al., 2007), secure attachment (La Guardia et al., 2000), and commitment to the relationship (Drigotas & Rusbult, 1992), and the odds of the relationship persisting increase (VanderDrift & Agnew, 2012).

Whereas romantic partners are the typical preferred source of social need fulfillment for adults in most cultures (Finkel et al., 2014), myriad sources can and do fulfill social needs. Friends and family are potent interpersonal sources of social need fulfillment (Carbery & Buhrmester, 1998). Coworkers and casual/nonintimate connections can be as well (Fingerman, 2009). In terms of noninterpersonal sources, there is evidence that vocation and recreation can both fulfill social needs, including work (Baard et al., 2006), participation in sports (Gagné, 2003), volunteerism (Millette & Gagné, 2008), and religion (Wesselmann et al., 2016). Finally, recent research has highlighted the potential of sources that fall outside of the interpersonal/noninterpersonal dichotomy to meet social needs. For example, sources such as pets (Chopik et al., 2023), parasocial relationships (Branch et al., 2013), and importantly, robots (Kolb, 2012) can meet social needs. Some robots (e.g., socially assistive robots, pet and humanoid companion robots) were even designed with the purpose of meeting social needs and are thought to do so similarly to pets (Darling, 2021).

Although there is evidence that nonhuman sources can meet social needs, further explanation of social need fulfillment can help clarify the question of whether and when interactions with nonhuman sources (and especially intelligent agents) are capable of providing sufficient need fulfillment. Interdependence theory defines social need fulfillment by its function, which is to provide two types of outcomes: concrete and symbolic (Rusbult & Van Lange, 2003; Thibaut & Kelley, 1959). Concrete outcomes are the direct experiences of pleasure and pain that accompany need fulfillment and thwarting, respectively. Symbolic outcomes, conversely, rest on the broader implications of the fulfillment (i.e., what this instance of need fulfillment means about the source of fulfillment, about one’s relationship with the source of fulfillment, or about oneself). For example, someone who has their security need met by receiving social support has obtained the direct experience of a fulfilled need, so they will feel satisfied, sated, or pleased by the interaction (concrete outcomes). They also can infer that their partner (the source of fulfillment) cares about them and may continue to fulfill needs for them reliably, or that they are a person worthy of care, and so forth (i.e., symbolic outcomes). Meanwhile, someone whose partner accidentally meets their need in an attempt to meet their own need obtains just the concrete outcomes (VanderDrift & Agnew, 2012), as they cannot infer from the fulfillment whether their partner will be available for future need fulfillment or whether they are a person worthy of care.

Concrete and symbolic outcomes are both vital to well-being and feelings of social connection, but arguably, symbolic outcomes may be more so.1 It is symbolic outcomes that teach people about themselves, about their sources of need fulfillment, and ultimately build social bonds and relationships (VanderDrift & Agnew, 2012). Therefore, it is symbolic outcomes that cumulate into a person feeling secure, lovable, and connected (Rusbult & Van Lange, 2003). A one-off interaction in which someone meets one’s need for companionship—for example, some brief small talk on the bus with a fellow rider—provides a temporary boost of positive affect because it fulfilled an important need (i.e., concrete outcomes are strong). However, when the fellow rider gets off at their stop and goes about their day, one does not necessarily feel any less isolated, nor any more secure that they have built a connection (i.e., symbolic outcomes are weak). On the other hand, a series of sustained and responsive interactions—for example, daily commutes with a fellow rider in which the riders disclose personal feelings to each other and those disclosures are responded to with care—can provide not only concrete outcomes but also symbolic outcomes (e.g., a meaningful connection; a positive sense of self). Symbolic outcomes are uniquely poised to ameliorate loneliness, and as such, understanding whether and when interactions with intelligent agents have the same potential as interactions with humans to provide both concrete and symbolic outcomes is imperative to answering the question of whether intelligent agents can meet humans’ needs effectively.

A Model of Social Need Fulfillment of Human–Artificial Intelligence Relationships

In the rest of this work, we consider a model of intelligent agents’ capacity as sources of social need fulfillment (see Figure 1 for a visual depiction of our model). In this model, we propose that (a) upon first interacting with an intelligent agent, humans will automatically process their interaction as a human interaction and obtain concrete outcomes. (b) Characteristics of the situation, the human, and the intelligent agent influence whether someone will, in a given interaction, reflect on the interaction more deliberatively. (c) If they do not deliberate, they will additionally obtain symbolic outcomes, (d) but if they do deliberate, the nature of their deliberation will dictate whether they will obtain symbolic outcomes or not. Finally (e), those who obtain only concrete outcomes will represent the intelligent agent as primarily utilitarian, whereas those who obtain symbolic outcomes will represent the agent as primarily human and enter future interactions with intelligent agents with that representation being salient. In the sections below, we detail each of these propositions and describe the available empirical evidence for each.

Figure 1

Theoretical Model
Note. The figure depicts that (a) upon first interacting with an intelligent agent, humans automatically process their interaction as a human interaction and obtain concrete outcomes (Phase 1); (b) characteristics of the situation, the human, and the intelligent agent influence whether someone will, in a given interaction, reflect on the interaction more deliberatively (Phase 2); (c) if they do not deliberate (dotted line around the Phase 2 box), they additionally obtain symbolic outcomes from Phase 1; (d) if they do deliberate, the nature of their deliberation dictates whether they obtain symbolic outcomes or not (stop sign with dotted border); and finally, (e) those who obtain only concrete outcomes represent the intelligent agent as primarily utilitarian, whereas those who obtain symbolic outcomes represent the agent as primarily human and enter future interactions with intelligent agents with that representation being salient.
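To make the proposed flow easier to trace, the sketch below encodes propositions (a) through (e) as a minimal, purely illustrative decision procedure. It is not an implementation belonging to the model or its authors; all names (e.g., process_interaction, Representation) and the binary simplifications (treating deliberation and its conclusion as simple flags) are hypothetical choices made only for illustration.

# Minimal illustrative sketch (hypothetical, not the authors' implementation)
# of propositions (a)-(e) in the social need fulfillment model.

from dataclasses import dataclass
from enum import Enum, auto


class Representation(Enum):
    """How the human comes to represent intelligent agents (proposition e)."""
    PRIMARILY_HUMAN = auto()
    PRIMARILY_UTILITARIAN = auto()


@dataclass
class InteractionResult:
    concrete_outcomes: bool
    symbolic_outcomes: bool
    representation: Representation


def process_interaction(deliberation_engaged: bool,
                        concludes_agent_can_care: bool) -> InteractionResult:
    """Trace one interaction through the proposed phases.

    Phase 1 (proposition a): automatic processing yields concrete outcomes.
    Phase 2 (propositions b-d): if deliberation is engaged, symbolic outcomes
    depend on whether the human concludes the agent can care about them.
    Proposition e: the outcomes obtained shape the representation carried
    into future interactions.
    """
    concrete = True  # Phase 1: concrete outcomes are obtained automatically.

    if not deliberation_engaged:
        symbolic = True  # Proposition c: no deliberation, symbolic outcomes follow.
    else:
        symbolic = concludes_agent_can_care  # Proposition d.

    representation = (Representation.PRIMARILY_HUMAN if symbolic
                      else Representation.PRIMARILY_UTILITARIAN)
    return InteractionResult(concrete, symbolic, representation)


# Example: a lonely user with a responsive agent may bypass deliberation,
# obtaining both outcome types and a "primarily human" representation.
print(process_interaction(deliberation_engaged=False, concludes_agent_can_care=False))

# Example: deliberation concluding "a machine cannot care about me" yields
# concrete outcomes only and a "primarily utilitarian" representation.
print(process_interaction(deliberation_engaged=True, concludes_agent_can_care=False))

Of course, this sketch collapses graded psychological processes into binary flags; in the model itself, characteristics of the human, the agent, and the situation jointly influence whether and how deliberation unfolds.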

Automatic and Deliberative Processing of Social Interactions With Intelligent Agents

The first question to consider in building a model of human–intelligent agent need fulfillment is how humans process their interactions with intelligent agents. According to dual-system theories, thinking and reasoning occur either through automatic processes that are outside of explicit awareness, or through more deliberate processes that can be directly verbalized (Rips, 2001; Sloman, 1996). Automatic processes are posited to be cognitively efficient and involve relatively little effort, whereas deliberate processes are believed to be more effortful and place a greater strain on working memory (Evans, 2003, 2008; Smith & DeCoster, 2000). Dual-system theories provide a useful framework for predicting when human interactions with intelligent agents are able to meet humans’ needs.

We propose that when humans first interact with intelligent agents (labeled “Phase 1” in the figure), the social scripts activated by the interaction format (e.g., oral or written conversations) prompt them to automatically feel like they are interacting with a human. In other words, their neurological, emotional, and behavioral responses arise in the same manner as when they interact with other humans. According to the computers as social actors paradigm, people interact with computers as if they are social beings (Nass & Moon, 2000), following the same social rules of interaction (Edwards et al., 2016) and experiencing the same feelings with computers as they do with humans (Reeves & Nass, 1996). Findings from neuroscience additionally suggest that similar neurological mechanisms are activated when humans interact with other humans and with robots (Chaminade et al., 2010). Collectively, the evidence suggests that humans default to thinking of intelligent agents as humans when interacting with them (Reeves & Nass, 1996). As such, automatic processes drive these interactions and lead humans to engage with intelligent agents in many of the same ways that they engage with humans (Xu & Li, 2024).

After the interaction is underway, humans have the potential to engage in deliberative processing (labeled “Phase 2” in the figure). This type of processing is slower, can be verbalized, and requires more cognitive resources (Smith & DeCoster, 2000). For these reasons, deliberative processing is typically more reasoned and less based on mental shortcuts (Corral & Rutchick, 2024; Petty & Cacioppo, 1986). Thus, we argue that humans’ deliberative processing of the interactions with intelligent agents will be based on their conscious beliefs, which might be derived from explicit reasoning about intelligent agents.

For most people, the implication of this idea is that they will be more aware that they are engaging with a machine that cannot care about them (Cocking & Kennett, 1998; Croes & Antheunis, 2021). Indeed, according to much of the research on intelligent agents—including chatbots, virtual assistants, and task-designed robots—people are very commonly consciously aware that they are interacting with a nonhuman partner (Bylieva et al., 2020; Cowan et al., 2017; Kolb, 2012). However, it is also possible that some people might believe that intelligent agents are superior to humans in their abilities (including to care for humans). In sum, regarding the processing of interactions with intelligent agents, we argue that people begin processing automatically, guided by social scripts and the humanlike feel of the interaction. However, people do have the potential to engage in deliberative processing, which will be guided by reasoned beliefs about intelligent agents. In the next section, we will consider the implications of these dual systems for need fulfillment.

Concrete and Symbolic Outcomes as a Result of Interaction Processing

We propose that how a human processes an interaction with an intelligent agent dictates the outcomes of need fulfillment (these processes are depicted by the arrows from “Phase 1” to “Concrete Outcomes” and “Symbolic Outcomes” and the dotted arrow from “Phase 2” to “Symbolic Outcomes”). We argue that when interactions with an intelligent agent are processed fully automatically, they will likely provide both concrete and symbolic outcomes, but when deliberative processing is involved, whether symbolic outcomes are possible depends on the content of that processing.

As described previously, people interact with intelligent agents as if they are humanlike. In other words, interactions with intelligent agents feel interpersonal to humans. If and when humans process these interactions fully automatically, the outcomes they provide are the same as those from human–human interpersonal interactions. In other words, the unexamined “social” interaction with an intelligent agent is experienced as a full-fledged social interaction and activates the possibility for all outcomes experienced in human–human interactions (Reis & Holmes, 2012). Therefore, like human social interactions, interactions with intelligent agents that are processed fully automatically can provide need fulfillment, including both concrete and symbolic outcomes (Rusbult & Van Lange, 2003). For example, someone who has their relatedness need met by an intelligent agent obtains the direct experience of pleasure from a fulfilled need, so they will feel sated or pleased by the interaction (concrete outcomes). Moreover, when this interaction is processed fully automatically, the need fulfillment builds trust, self-esteem, and confidence that the agent can be counted upon to have the self’s best interest in mind in the future (symbolic outcomes). These broader implications of the need fulfillment do not require conscious deliberation, as humans can automatically categorize others as friend or foe, update their perceptions of others as trustworthy, and update their perceptions of themselves as efficacious or lovable based on their experiences. These fundamental human social processes can occur outside of awareness (Bargh & Pietromonaco, 1982). Thus, as long as humans are processing automatically, we expect they will interpret an interaction with an intelligent agent as interpersonal, and in such cases, the interaction has the possibility of affording concrete and symbolic outcomes of need fulfillment.

When people transition to thinking deliberatively about their interaction with an intelligent agent, however, we expect the possible outcomes of the interaction to change. When thinking deliberatively, people place interaction targets into ontological categories (Jipson & Gelman, 2007), which are useful for conjuring heuristics and allowing deliberative processing to require fewer cognitive resources (Petty & Cacioppo, 1986). There is good evidence that intelligent agents constitute their own ontological category, distinct from humans and inanimate objects (Kahn et al., 2011), and this unique categorization means that humans have different expectations for intelligent agent behavior than they do for human behavior (Banks, 2021). Most saliently, a majority of humans believe that machines do not have similar life experiences to humans and therefore cannot care about humans (Cocking & Kennett, 1998; Xu & Li, 2024). As such, these people will not derive broader meaning from the interaction regarding their relationship with the agent or regarding their own worth or lovability as a human. In simple terms, for most humans, deliberatively processing an interaction with an intelligent agent allows the interaction to be need fulfilling in that the interaction itself will produce the concrete outcomes of social needs being sated, but it will not produce many symbolic outcomes that rest on the broader implications of need fulfillment.

Of course, our logic that deliberative thinking will yield fewer symbolic outcomes rests on the assumption that when humans think deliberatively about intelligent agents, they will conclude that the agents are machines that cannot care about them. This scenario might not hold for everyone, in all interactions (as indicated by the dotted lines connecting “Phase 2” to “Symbolic Outcomes” in the figure). Furthermore, people and interactions may differ in their propensity to engage deliberative processing, with some people in some interactions being more likely to process the entire interaction automatically, whereas others might engage deliberative processing at some point (as indicated by the dotted line around the “Phase 2” box in the figure). In the next section, we will consider characteristics of intelligent agents, humans, and the contexts surrounding their interactions that dictate whether a human is likely to deliberatively process the interactions, and if so, whether they are likely to conclude that the agent can or cannot care about them.

Who, When, and With Whom Will Humans Process Deliberatively

We expect that the machine design of the intelligent agent can affect how deliberatively it is processed, with machines that seem more “humanlike” being less likely to trigger deliberative processing. For example, agents that have humanlike physical forms (e.g., robots) are perceived as more humanlike than those without (e.g., chatbots; Lee et al., 2006), which suggests that interactions with agents with physical forms are less likely to be deliberatively processed. A notable exception is agents that physically resemble humans closely but not perfectly, which seem to trigger deliberate processing as well as negative feelings (Mathur et al., 2020). In addition to physical form, intelligent agents that have greater cognitive, relational, and emotional competencies are more likely to be processed as humanlike, in that users show more engagement with such agents (Chandra et al., 2022). Put differently, intelligent agents that are responsive (i.e., accepting, understanding, and nonjudgmental) provide the basis for a humanlike, need fulfilling relationship (Skjuve et al., 2021) and provide no reason to engage in deliberative processing. As such, when intelligent agents act responsively, people not only receive more fulfillment from and liking of the interaction (Hoffman et al., 2014) but also expect the intelligent agent will be a comfort to them in future interactions (Birnbaum et al., 2016). Finally, when intelligent agents disclose emotional content to human users (i.e., sate the human’s need for caregiving), the users experience greater satisfaction and intend to use the intelligent agent again in the future (Park et al., 2023).

We also expect that individuals differ in how much they engage in deliberative processing when interacting with an intelligent agent. Specifically, we expect that people whose social needs are unmet are less likely to engage in deliberative processing about the nature of their interaction with the agent because they may not want to “look behind the curtain” and lose a potential source of fulfillment. People who are especially lonely are more likely to have turned to an intelligent agent for relational need fulfillment (Siemon et al., 2022) and to experience interacting with intelligent agents as feeling like human or social interactions (Lee et al., 2006). In both cases, they interpret the agent’s actions as more humanlike and have greater positive responses to the agent than do those who are less lonely. Socially isolated older adults find that interactions with intelligent agents that are programmed to provide medication reminders do meet their need for companionship and improve their quality of life (Valtolina & Hu, 2021). The behavior of the human also seems to matter, such that people who disclose information to an intelligent agent experience as many emotional, relational, and psychological benefits as they would from interacting with a human partner, including fulfillment of the need for self-affirmation (Ho et al., 2018). Each of these effects depicts both concrete and symbolic outcomes. Whether these outcomes were obtained because the human did not deliberatively process their interaction with the intelligent agent (due to their own personal characteristics), or because they did deliberate but chose to process it as a humanlike interaction, is unclear from the current data. Regardless, these data suggest that characteristics of people can influence whether they obtain concrete and/or symbolic outcomes from an interaction with an intelligent agent.

Finally, there are situations in which people are less likely to engage in deliberative processing with regard to intelligent agents. Again, these situations are ones in which people are in need of sources for their social need fulfillment and may be motivated not to process their interactions with intelligent agents in a particularly reasoned way. One such situation occurred during the COVID-19 pandemic in 2020–2021, when the majority of social life in most places was shut down. During that time, intelligent agents boomed in popularity, and anecdotally, people who had a chatbot “friend” felt more connected (Metz, 2020). Another example comes from military deployments, in which soldiers occasionally use intelligent agents in the form of robots for tasks. In most cases, soldiers view the intelligent agent as equipment only and may feel fondness for the agent, but nothing akin to closeness (Kolb, 2012). However, in some rare cases (e.g., extreme prolonged stress), some soldiers seem to have formed such a significant bond with their intelligent agents that they engage in rituals such as funerals upon the agent’s “death” (Garber, 2013). More commonly, when important sources of need fulfillment are not available, interacting with an intelligent agent feels like a “safe space” to some (Ta et al., 2020), again suggesting that a humanlike, or perhaps even superior-to-humanlike, relationship has formed. In all cases, again, aspects of the situation reduced the propensity to deliberatively process an interaction with an intelligent agent and thus allowed more symbolic outcomes to materialize.

Humans’ Cognitive Representations of Intelligent Agents

Thus far, we have conducted a theoretical analysis of how humans process interactions with intelligent agents and how that affects the need fulfillment outcomes they receive. The final question to consider is how people typically think about and represent intelligent agents. It is important to note that human representation is extremely flexible, as humans have the capacity to represent any given object or concept in many different ways (Chalmers et al., 1992; Corral et al., in press; French, 1997; Medin et al., 1993; Mitchell & Hofstadter, 1990; Shafto et al., 2011). For example, consider a pen: This object can be thought of as a writing instrument, as a human-made tool, as evidence of intelligence, as evidence of civilization, as an artistic expression, as a hair accessory, and so forth. Critically, because humans have representational flexibility, they can adopt any of these representations and can switch between them depending on circumstance (Chalmers et al., 1992).

Regarding intelligent agents, humans interact with intelligent agents for both utilitarian/functional and relational reasons. Both types of interaction seem to be self-reinforcing, such that the more one has a given kind of interaction, the more one thinks of the intelligent agent as useful for that type of interaction (Mou & Xu, 2017). In other words, what one achieves from the interaction forms the basis for representations about what intelligent agents can provide. Regarding our model, this idea suggests that whether someone receives only concrete outcomes from their need fulfilling interactions with intelligent agents or receives both concrete and symbolic outcomes dictates how they represent agents going forward and whether deliberative processing (Phase 2) occurs. (This is depicted by the arrows from “Concrete Outcomes” and “Symbolic Outcomes” to “Representation” in the figure.)

Some people obtain both concrete and symbolic outcomes in their need fulfilling interactions with intelligent agents and thus represent intelligent agents as mostly human. We expect that going forward, they are even less likely to deliberatively process their interactions with agents and, thus, will continue to obtain concrete and symbolic outcomes.2 Conversely, others obtain only concrete outcomes from their need fulfilling interactions and thus represent intelligent agents as mostly utilitarian. In these cases, it is likely that going forward they will likewise expect to gain only concrete outcomes. However, these cases may not require any additional deliberative processing: once someone represents an intelligent agent as utilitarian, that representation can be applied with little or no deliberative thought. As described previously, once a human has identified the appropriate ontological category for an intelligent agent, the associated heuristics become accessible (Banks, 2021). As such, we expect that once someone has obtained a given outcome, they will obtain a similar outcome in the future with little or no deliberative thought (as depicted in the figure by the arrow leading from “Representation” to “Symbolic Outcomes”). This idea is consistent with other recent theorizing, which suggests social scripts are only needed for emergent technologies and that once a user is familiar with a technology, the need for heuristics and scripts is lessened (Heyselaar, 2023).

Summary and Future Directions

In sum, intelligent agents offer a potential solution for social need fulfillment for humans. Our model, as depicted in the figure, holds that interactions with intelligent agents are automatically processed, leading to the potential for concrete and symbolic outcomes of need fulfillment. However, for most people in most situations with most agents, deliberative processing comes online as well, which is likely to limit the symbolic outcomes of need fulfillment. Based on people’s experiences in need fulfilling interactions, they come to represent intelligent agents as primarily human or primarily utilitarian, which has implications for their future interactions. This model provides a roadmap for understanding not only when intelligent agents may be useful for human need fulfillment but also what types of outcomes should be expected from that fulfillment. We close this article by considering the implications of humans relying on intelligent agents for need fulfillment.

Assuming that intelligent agents will become more humanlike over time and will increasingly be able to meet humans’ social needs (and yield both concrete and symbolic outcomes), we propose that there will be diverse (good, bad, and even troubling) consequences. On the positive side, humans who are otherwise not able to meet their social needs could find a reliable source of need fulfillment in intelligent agents. As depicted in our model, some particular characteristics of the human and of the situation could reduce the likelihood of deliberate processing and allow interactions with intelligent agents to be more fulfilling and beneficial. For instance, people who experience chronic social isolation (e.g., the elderly, incarcerated populations; Anderson & Thayer, 2018; Cornwell & Waite, 2009; Schliehe et al., 2022) and people experiencing discrete instances of social isolation (e.g., astronauts on missions with limited communication capabilities) could rely on humanlike intelligent agents to meet social needs. Individuals who have difficulty maintaining close social relationships due to persistent insecurity, personality disorders, substance use disorders, posttraumatic stress disorder, or other mental health challenges that interfere with constructive relational behavior (e.g., Candel & Turliuc, 2019; Colman & Widom, 2004; Lavner et al., 2015; Rodriguez et al., 2014; South, 2021) may also be able to capitalize on need fulfillment offered by intelligent agents, at least during times of particular impairment.

In such cases, intelligent agents may serve as an option to have one’s own needs met without needing to reciprocate and meet another person’s needs. Further, the intelligent agent may have greater capacity (than a human partner) to react responsively even to hurtful behavior or inconsistent interactions (e.g., periods of disconnection during active substance use), allowing for a steadier need fulfillment source. Ideally, responsive experiences with the intelligent agent could provide a corrective experience that enables individuals with difficulty maintaining close relationships to build their capacity for human relationships over time, similarly to how a therapeutic relationship can foster security and enhance one’s capacity for developing better relationships outside of therapy (e.g., Mikulincer et al., 2013; Slade & Holmes, 2019). In our model, the loop back from having formed a representation of intelligent agents to the symbolic outcomes they obtain depicts how this outcome can occur (i.e., people come to think of their interactions with social others as fulfilling and they then are able to obtain fulfillment from them in the future).

Despite these potential benefits, we can also envision negative consequences of relying on intelligent agents for need fulfillment. In the same way that corrective experiences feed back into expectations for fulfilling interactions with social others and promote healthier relationships, these experiences can also feed back into less adaptive patterns. Instead of enabling individuals who have difficulty maintaining human relationships to gradually improve their relational skills and form human relationships, intelligent agents may initially offer such an on-demand, one-sided need fulfillment experience that human users develop unrealistic relational expectations. These expectations may then hinder their ability to thrive in subsequent relationships (be they human–artificial intelligence or human–human). Relationship thriving requires mutual fulfillment (i.e., both parties feel fulfilled and both parties work to fulfill each other’s needs; Machia & Proulx, 2020). As such, people are unlikely to feel that they have a thriving relationship insofar as they perceive that their relationship is one-sided and does not require them to meet their partner’s needs (whether the partner is human or artificial intelligence; Banks, 2023). Finally, even if humans continue to maintain social relationships while having some needs met by intelligent agents, their human relationships may be negatively impacted by the time and attention diverted to interactions with intelligent agents (Machia & Proulx, 2020), consistent with research on “technoference” in relationships (e.g., McDaniel et al., 2020).

Other troubling implications of need fulfillment by intelligent agents relate to the companies that produce the intelligent agents and obtain data from human users. To initiate the processes in our model, humans self-disclose to intelligent agents, meaning that they will share sensitive personal data (e.g., interests, fears, desires, even crimes). These data could be used in myriad unethical or simply undesirable ways, ranging from targeted advertising to coercion. Humans (particularly vulnerable populations) who rely on intelligent agents for psychological need fulfillment may also be susceptible to exploitation or radicalization if “bad actors” abuse users’ relationships with intelligent agents to solicit sensitive financial information, to spread misinformation, or to control users in other ways. These concerns necessitate clear boundaries and safeguards before humans turn to intelligent agents as primary sources of need fulfillment. Nevertheless, we believe that—given appropriate boundaries and safeguards—intelligent agents can be a potent, reliable, cost-effective source of genuine human need fulfillment.


Received April 10, 2024
Revision received July 29, 2024
Accepted July 31, 2024