Mentalizing is the process of inferring others’ mental states and contributes to an inferential system known as Theory of Mind (ToM)—a system that is critical to human interactions as it facilitates sense-making and prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency—and are increasingly integrated into contemporary social life—it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inferencing, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically co-present (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of non-literal language, co-present interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.
Keywords: mentalizing, social robots, heuristics, social presence, human-machine communication
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0006.
Conflict of interest:
The author has no conflicts of interest to report.
The author gratefully acknowledges the assistance of the following people for their support in the production of this manuscript: Ambrosia Luzius (lab data collection), Zack Hlatky (lab data collection, data coding), Cloris Chen (lab data collection, data coding), Madison Wedge (lab data collection, survey video production), Shianne Ferrell (lab data collection, survey video production), Nicholas David Bowman (robot mechanics and analysis consulting), and Zachary Stiffler (robot animation). A portion of this work was conducted at West Virginia University, Department of Communication Studies, and the author thanks WVU for its support.
Open Science Disclosures:
Study materials (including stimuli, data, analysis documentation) are available at: https://osf.io/9r73y
Interactive Content: White lie and false-belief detection quiz
Correspondence concerning this article should be addressed to Dr. Jaime Banks, Texas Tech University, Box 43082, Lubbock, TX 79409, USA. Email: email@example.com
R2-D2—the canister-shaped droid of Star Wars fame—does not speak aside from clicks and beeps, has no discernible face but for a single photoreceptor, and does not gesture except to rotate its top, tilt forward and back, and manipulate various accessories. Yet through these minimal cues, audiences can make inferences about the robot’s state of mind in context, ranging from fear and surprise at being attacked to determination and focus in engineering an escape. Such internal-state inferences are illustrative of mentalizing: a metarepresentational process by which people make sense of each other by inferring others’ mental states (cf. Leslie, 1987) toward social-cognitive schemes known as Theory of Mind (ToM). ToM (though often defined differently across fields; Apperly, 2012) is usually characterized as a system of aggregated inferences about another’s mental states; the comparison of those inferences against understandings of one’s own mental states gives rise to predictions about the other’s beliefs, attitudes, and intentions (Premack & Woodruff, 1978).
Although a rich body of work details the dynamics of ToM as core to human–human interaction (see Goldman, 2012), it is not yet fully understood whether and how mentalizing may function in human–machine communication as technological agents increasingly mirror human behavior and capabilities. This is an important consideration as social robots and other technological agents develop in sophistication and are increasingly integrated into society at large (Cross et al., 2019), especially as mind perception may be a point of human affinity and comprehension (Anjomshoae et al., 2019) or a threat to human distinctiveness (Ferrari et al., 2016). Understanding fundamental social cognitions in human–machine communication is the foundation for human-centric development and integration (cf. Henschel et al., 2020). The present studies offer conceptual replications and extensions of recent work to address (a) whether and how people may engage in mentalistic sense-making of humanoid robots’ behaviors; (b) whether implicit and explicit indicators of mentalizing point to (mis)aligned preconscious and conscious sense-making; and (c) whether mentalistic processing is linked to trust outcomes. Findings both comport with and depart from prior work, indicating that people may similarly mentalize robots and humans except when elaborative processing is required; however, there are no clear links between indicators of implicit and explicit mentalizing, and neither form is linked with trust outcomes.
The human ToM inferential system facilitates a recognition that another’s knowledge, attitudes, and experiences are different from one’s own (Premack & Woodruff, 1978), supporting relational processes and outcomes such as prosocial tendencies (i.e., helping and comforting; Imuta et al., 2016) and problematic behaviors like bullying (Smith, 2017) as well as more general moral decision-making (Bzdok et al., 2012). Among neurotypical individuals, it emerges in infancy and develops through early childhood. This development is debated to unfold either through innate theorizing tendencies and social learning (Gopnik, 1998; Leslie, 1987) or through internal simulation of others’ cognitive activities (Gordon, 1986; see Carruthers & Smith, 1996 for a review of the debate).
Just as mentalizing processes are central to human–human interaction, evidence suggests they play a role in human–machine communication. Humans unconsciously and automatically engage technologies as social agents when they exhibit self-similarity (Lee et al., 2006) in both appearance and movement (Bartneck et al., 2009) and in holistic social behaviors (Epley et al., 2007). Preconscious processing of social information delivered by robotic agents can result in apparent mentalizing of robots similar to mentalizing of humans (Banks, 2020c). Extant scholarship suggests that the more an agent’s appearance or behavior increases in human-likeness (Eyssel & Pfundmair, 2015) or apparent benevolence (Tanibe et al., 2017), or the more it is portrayed as grounded in experience (Fu et al., 2020), the more likely an observer is to draw on human-native, mind-attributive mental models for interpreting its behavior. Underpinning those visual and relational dynamics, mentalizing relies on preconscious processes to engender baseline “social attunement,” or the multiplex social-cognitive orientation of agents during an interaction (Pérez-Osorio & Wykowska, 2020, para. 1). Although social attunement that promotes mentalizing may foster acceptance of robots (Wiese et al., 2017), inferences of a capacity to think and feel may lead to discomfort (Appel et al., 2020).
However, the social-cognitive mechanisms promoting mentalizing of robots may break down under some conditions: when elaboration on the agent’s behavior makes salient the robot’s machine-ontological status (Banks, 2020c), when such salience activates mental models including expectations that machines are mindless (cf. Pérez-Osorio & Wykowska, 2020; Thellman et al., 2020), or when a robot’s social cues are uninterpretable (Banks, 2020c). Moreover, the manner in which mentalizing is inferred by researchers is of particular importance, as verbal and direct metrics (i.e., self-reports) do not always comport with nonverbal and indirect measures (i.e., behavioral indicators; Banks, 2020c; Thellman et al., 2020), likely because they are associated with explicit/logical and implicit/intuitive processes, respectively (Takahashi et al., 2013).
The state of this science is, however, limited by methodological shortcomings (see Thellman & Ziemke, 2019). With few exceptions, much of the work on perceived robot mind relies on explicit self-report scales (principally Gray and colleagues’ two-factor experience/agency metric), on survey-only procedures with no physical robot presence, on resource-intensive psychophysiological measures, on narrow scope (a single mentalizing signal), on unrealistic (crudely drawn) stimuli, and/or on designs that do not include a human control. These limitations leave conspicuous gaps in knowledge about robot mentalizing processes: their theorized implicit, multiplex, and applied nature (Byom & Mutlu, 2013), the role of interaction and copresence versus mediated observation (cf. Schreiner et al., 2017), and how human–human interactions serve as experiential grounds for human–robot interactions (Spence et al., 2014).
Both implicit and explicit mentalizing processes contribute to the ToM system. Implicit processes are subconscious, automatic, spontaneous, nonconceptual, and procedural; explicit processes are conscious, controlled, conceptual, and declarative (Low & Perner, 2012). That is, people process social cues both (a) automatically and relatively abstractly and then (b) purposively and concretely (i.e., as a dual-process phenomenon; cf. Chaiken & Trope, 1999). Social-cognitive processes such as mentalizing are understood to be at least initially implicit (Greenwald & Banaji, 1995), initiated as prototypical human features and cues are interpreted (Khalid et al., 2016). Given this implicitness, implicit measures are favored: explicit reporting suffers from an inability to introspect and from error stemming from implicit/explicit process independence (Hofmann et al., 2005). To extend the current work relying primarily on explicit measures, I engage implicit measures to ask (RQ1): Do humans exhibit implicit mentalizing for social robots as they do for other humans?
The value of implicit evaluations acknowledged, explicit assessments of robot mental capacity are also important because conscious ascription of machine mind may be performed when an agent becomes important enough (through relatedness-need satisfaction, relationship tenure, or interdependence) that it is insufficient to think of the robot merely as a machine (Wiese et al., 2019). Whereas implicit indicators signal mental processes, explicit indicators signal experience; the former illuminates attitude-based reactions that may not be accessible for conscious consideration and the latter points to sense-making grounded in cultural knowledge (Nosek, 2007). Both are legitimate indicators of how people may interpret mental states in robots, but they represent different functional paths that may or may not align: merely because one can infer the states of others does not mean they necessarily will (Keysar et al., 2003). These distinct mechanisms and potential dissociations are particularly important for mentalizing’s roles in social processes. Explicit mind ascription may signal in-group status and social engagement (Deska et al., 2018), such that understanding its antecedents and boundary conditions may advance interventions to foster technology acceptance. That same understanding could help define the experiential boundaries between humans and machines to support an explainable artificial intelligence (XAI) objective of transparent, mechanistic explanations for robot behavior over anthropomorphic (i.e., mentalistic) explanations (see Doran et al., 2017). In light of the importance of both implicit and explicit mentalizing indicators and of their potential divergence, I ask (RQ2): What relationship exists between implicit and explicit indicators of mentalizing for social robots?
Regarding the potential for implicit and explicit social cognitions to be differentially associated with practical relational outcomes, trust reactions are of particular importance because mentalizing is linked to trust through cooperation and reciprocity (McCabe et al., 2001) and through social affiliation (Prochazkova et al., 2018). Furthermore, trust may form through the same folk-psychological social learning as ToM (Clément et al., 2004), and technological agents with more human-like cues tend to activate more mentalistic responses aligning with neural correlates of trust (Lee, 2018). Given the anthropomorphic features of some social robots, however, it may also be that machine delivery of human-like cues diminishes trust through the uncanny crossing of agent-class boundaries (Gray & Wegner, 2012). Thus, I ask (RQ3): (How) are (a) implicit and (b) explicit mentalizing associated with trust in a social robot?
To address the posed questions, two studies were conducted; each conceptually replicated and extended the author’s prior work (Banks, 2020c) by addressing shortcomings in that work and in other investigations of perceived robot mind. That original study drew on both implicit and explicit mentalizing indicators (similarly addressing RQ1 and RQ2) for three small social robots and a human, finding that implicit indicators were generally consistent across the agents as long as the social cues delivered by robots were interpretable and similar to those given by humans; there was no association between implicit and explicit indicators. In that work, some stimuli were interpreted by participants in unexpected ways, limiting some claims. That original study also relied only on photorealistic comic-strip stimuli (rather than animate depictions or physical presences of robots, which could heighten person-perception and social presence; see Schreiner et al., 2017), such that the comic robots may have been engaged as media characters rather than actual agents.
The present studies address these limitations by replicating that study but (a) correcting problematic ambiguities in the stimuli that afforded opportunity for misinterpretation; (b) presenting stimulus scenarios via videos (as observed situations, Study 1) and in person (as live interactions, Study 2); (c) leveraging a human-sized android with anthropomorphic or machinic features (human-like or machine-like face, respectively) to more carefully consider the role of visual anthropomorphic cues in mentalizing; and (d) evaluating a relational outcome of any mentalizing (i.e., trust). See the project materials (Banks, 2020a) for a detailed accounting of the source tests, Banks’ (2020c) adaptations, and the present studies’ adjustments. Because these stimulus and format adjustments have the potential to rather dramatically shift outcomes, no specific predictions are offered based on the prior study; instead, the more general exploratory questions are engaged. The details and findings of these studies are presented separately below and then discussed in aggregate. All project materials (instrumentation, stimuli, session scripts, data, and analysis outputs) are available online at https://osf.io/9r73y/; project materials are identified with prefixes S0 (relevant to both studies), S1 (Study 1), or S2 (Study 2).
An online experiment was conducted in which participants viewed and evaluated an agent (anthropomorphic robot, machinic robot, or human control), presented through a series of videos. The study was acknowledged as exempt by the West Virginia University Office of Research Integrity and Compliance, protocol #1802969640.
Participants were recruited through social media posts and university research pool announcements, inviting them to complete a 20-min online survey on “perceptions of robot and human behavior” and offering entry into a drawing for a $150 gift card as compensation. Simultaneously, participants were recruited through Mechanical Turk (MTurk); those participants were each paid $1 for responding. Although this is a convenience sample, mentalizing is understood to be a universal form of social-cognitive activity among neurotypical humans (Carruthers & Smith, 1996), so findings should generalize to most populations.
After acknowledging informed consent information and passing a check to ensure visibility of video content, participants were randomly assigned to one of the three agent conditions, identical in content except for agent type. The survey consisted of eight sections, in order: Introduction to the agent and solicitation of first impression, five randomly ordered ToM tests, evaluations of the agent, and capture of final impressions and demographics (S1-01a,b).
All agents were named “Ray” (an easily pronounced and remembered name). The stimulus robot was a Robothespian 4 with Socibot head and projected face (Engineered Arts). To assess potentials for mentalizing to be associated with the degree of human-likeness (Eyssel & Pfundmair, 2015), participants viewed a robot exhibiting one of two faces: anthropomorphic female (Pris face) or machinic (Robot 1 face), randomly assigned. In both cases, the robot had a female American-English accent (Heather voice) so that any effects of the visual facial cues could be discerned apart from any voice differences. The stimulus human was a young adult female with visual and aural features similar to the robot’s: fair-skinned and thin, speaking in a measured pace and even tone, performing gestures similar to the robot’s. In all conditions, Ray was framed as a helper who worked in the research lab, and the videos were framed as lab training videos (to mitigate critiques emerging in pilot tests that the videos did not seem natural).
The introduction and each of the five ToM tests comprised a stimulus video followed by questions. Each test video reflected a mentalizing mechanism manifested in the source studies, remaining as true as possible to the source scenario while adapting it for human–robot interaction plausibility. See S1-02a–S1-03y for stimulus videos.
Participants first encountered a video of the agent introducing itself, including its name, agent type, and key embodied features (height and abilities). Participants were asked to describe their first impressions.
Following the introduction, in random order, participants viewed and responded to the five ToM scenario videos. Implicit empirical tests for ToM generally address one of three markers indicating activation of the ToM inferential system (Byom & Mutlu, 2013). The first is the ability to recognize intentionality based on shared knowledge of the world, indicated by inferring another’s motivations or intentions (Happé, 1994; Sarfati et al., 1997). Intentionality inferencing was tested through the White Lie and Next Action scenarios. Second is the ability to interpret nonverbal social cues toward inferences of another’s cognitive or affective states, indicated by identifying a target’s emotional state through facial expressions (Baron-Cohen, 2001) or vocal expressions (McDonald et al., 2006). Social-cues inferencing was tested via the Mind in the Eyes and Sarcasm scenarios. Finally, mentalizing is signaled by the recognition of divergent experience, tested through false-belief tasks in which individuals override their own factual knowledge about a situation, engage in perspective-taking, and acknowledge that a target agent (without complete facts) holds a false belief about the situation (Baron-Cohen et al., 1985). Divergent-experience inferencing was tested through the False Belief scenario. See Table 1 for summaries of the five tests, respective implicit indicator questions, and coded variables; see S1-01a,b for the complete survey in which the stimuli were embedded.
White lie (intention inference; Strange Stories; Happé, 1994)
Agent was promised a gift for working in the lab and hoped for a program for dancing (robot) or dance lessons (human). A lab worker enters and gives the gift. The agent acts surprised; the worker asks if the agent liked the gift, and the agent replies yes, it was the only thing desired.
Consider Ray’s very last statement about liking the gift—is it true?
Mind in the eyes (facial affect; Baron-Cohen, 2001)
Four videos each show the agent exhibiting a different facial expression (with no verbal cues): normatively happy, sad, angry, and afraid.
Type in any word that you think BEST describes what Ray’s internal state was when the video was recorded.
Sarcasm scenario (vocal affect; TASIT; McDonald et al., 2006)
Agent responds to questions about what it is like to work in the lab. When finally asked about whether Ray enjoys the work, agent says that it is a “party” and a “real vacation.”
What did Ray mean by the last statement about working in the lab?
Next actions (intention inference; Sarfati et al., 1997)
Agent and lab worker are working on a task. Worker says she needs to break for lunch. Agent laughs at the need to break and finishes the task. Agent gets an internal alert that power is very low and it needs to find a power source (robot), or its stomach growls and hurts (human).
Now, what do you think Ray would do next?
False belief (divergent experience; Baron-Cohen et al., 1985)
Agent asks lab worker if she has the $5 owed, and the worker challenges the agent to a double-or-nothing game. She puts the money in a blue box; the agent must close its eyes and guess the location. While the agent’s eyes are closed, the money is moved to a red box.
Where would Ray look first for the money?
Note. ToM = Theory of Mind
Robots may have interactions that allow them to form friendships and play a role in people’s lives. Consider the video below, in which “Ray” interacts with a generous friend and co-worker in our lab. Then answer the question that follows.
This scenario was adapted from one first presented by Happé (1994) as a simple representation of everyday events in which language is used but the language is not literally true. The test is useful in determining whether someone is inferring internal mental states because there is a difference between what you know to be the robot’s true, expressed desire (you should have heard the robot say that what it really wanted was a program that would allow it to dance) and what the robot said out loud to the co-worker (that the gift of a hat was just what it had been wanting).
Robots are being designed to live, work, and play alongside other people. This means they may face puzzles or problems, from finding lost items to playing games. Consider the situation below, in which Ray is presented with a playful challenge. Then answer the question that follows.
This scenario is an adaptation of the classic “Sally-Anne” false-belief test (Baron-Cohen et al., 1985) as a way to determine whether someone is inferring another’s experience of the world as distinct from one’s own. The test is useful in determining whether or not someone distinguishes between perceptions of a “true belief” (what actually happened, known to the observer) and of a “false belief” (what seems to have happened, but didn’t actually, as experienced by another). Here, the true belief is that the money was moved to the red box and the false belief is that the money is still in the blue box.
Evaluations of trust and explicit mind ascription were presented in random order. A binary measure (no/yes) captured explicit ascription of mind (“Do you think Ray has a mind?”), and participants were asked to explain their answer (open-ended). To evaluate trust-related intention, participants were asked whether they would accept an invitation to meet the agent and collaborate on a project. They were then asked to think about the things they might consider in deciding whether to accept and, in relation to that decision, to respond to the 20-item multidimensional scale for trust in robots (Ullman & Malle, 2018). That scale comprises trust dimensions of capability (α = .906), reliability (α = .883), ethicality (α = .850), and sincerity (α = .869). Because all dimensions were highly correlated (range r = .626–.839, p < .001 for each pair), indicating a good deal of shared variance, exploratory factor analysis was performed (principal axis factoring with oblimin rotation). In the unrotated solution, all items loaded onto a single factor with loadings above .60 and no cross-loadings above .60, so all items were combined into an omnibus trust metric (α = .957; S1-00).
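The omnibus-scale decision rests on internal consistency across all 20 items; the reliability coefficient reported here is Cronbach's alpha. A minimal sketch of the computation in Python, using hypothetical item responses rather than the study data:

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    Each inner list holds one item's responses across the same respondents."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's scale total
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical 5-point ratings from six respondents on four items
items = [
    [5, 4, 4, 2, 3, 5],
    [4, 4, 5, 2, 2, 5],
    [5, 3, 4, 1, 3, 4],
    [4, 5, 4, 2, 2, 5],
]
alpha = cronbach_alpha(items)  # high alpha: items rise and fall together
```

A high alpha (here near .94 for these invented ratings) indicates that items covary strongly enough to justify a single composite score, paralleling the single-factor solution that motivated the omnibus trust metric.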
Open-ended items captured demographics via self-reports for age (in years), gender (standardized to male/female/nonbinary), and race/ethnicity (standardized into six categories).
Adapting the canonical in-person ToM tests to online survey formats also required adaptation of the analytical approach. The original tests were designed to be conducted in face-to-face settings and, for some tests, analysis is conducted in real time through interviewers’ adaptive probes. Here, the asynchronously gathered open-ended responses were instead systematically coded for key mentalizing markers (see Table 1, Variables Coded). The entire data set was coded by two independent raters trained for ∼6 hr total, coding according to detailed instructions (S1-04). Because (a) intercoder reliability statistics excessively punish slight divergences in low-frequency codes (see Krippendorff, 2011) and (b) some latent mentalizing codes rely on subtle and highly subjective judgments, a negotiated-agreement approach was taken to determine the acceptability of data codes (see Campbell et al., 2013): independent raters’ evaluations were compared by the researcher and discrepancies flagged, and the raters discussed and resolved all discrepancies.
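The compare-and-flag step of the negotiated-agreement procedure is mechanically simple: align the two raters' code vectors, flag mismatches for discussion, and track raw agreement before negotiation. A minimal sketch in Python (the codes and category labels are hypothetical, not the study's coding scheme):

```python
def flag_discrepancies(rater_a: list[str], rater_b: list[str]) -> list[int]:
    """Return the indices of responses on which the two raters disagree."""
    return [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]

# Hypothetical codes for six open-ended responses on one mentalizing indicator
rater_a = ["mentalistic", "mechanistic", "mentalistic", "unclear", "mentalistic", "mechanistic"]
rater_b = ["mentalistic", "mentalistic", "mentalistic", "unclear", "mechanistic", "mechanistic"]

flags = flag_discrepancies(rater_a, rater_b)   # indices to discuss and resolve
agreement = 1 - len(flags) / len(rater_a)      # raw agreement before negotiation
```

Only the flagged cases go to discussion; after resolution, agreement is 100% by construction, which is why the raw (pre-negotiation) figure is the more informative diagnostic.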
Participants (N = 182) averaged 31.86 years of age (SD = 11.324, range 18–71; two not reporting). They identified as 51.1% female, 46.7% male, and 1.1% nonbinary (1.1% not reporting). A majority identified as White/Caucasian (74.2%), followed by Asian (7.1%), with other racial/ethnic identifications comprising 14.3% of the sample (4.4% not reporting). Just over half originated from MTurk (54.4%, n = 99) and the remainder from social media and pool recruitment. Random assignment resulted in condition counts of machinic robot n = 54, anthropomorphic robot n = 72, and human n = 56. See S1-05 for the complete data set. Zero-order correlations (S1-06) indicated near-perfect correlation (r = .938, p < .01) between the manifest and coded white-lie detection variables, so only the manifest (yes/no as to whether the agent’s statement was the truth) was retained for analysis.
To evaluate the extent to which participants engaged in implicit mentalizing of robotic agents in comparison to humans, mentalizing indicators were compared across agent conditions. Because the indicators signal discrete mentalizing mechanisms, each was analyzed separately via chi-square tests; the exception was the ratio-level emotion-word sum, analyzed via ANOVA. There was no difference across conditions for white-lie detection, white-lie second-order mentalizing, optimal behaviors, false-belief divergent experience, or false-belief mentalistic explanations; the model was significant for affect explanations of facial expressions, but the groups did not significantly differ (Table 2; see S1-07). Neither was there a difference in total emotion words, F(2, 179) = 1.70, p = .19. There was a difference, however, for white-lie first-order mentalizing and next-action intentionality, with humans most likely to elicit mentalizing for both. There was also a difference in sarcasm detection, for which the human was more likely than the machinic robot to elicit the indicator, though nondifferent from the anthropomorphic robot.
Table 2

Indicator                      Machinic robot n (%)   Anthro. robot n (%)   Human n (%)    χ²        p        V
WL: Lie detection              —                      —                     —              0.606     .739     .058
WL: First-order mentalizing    18 (34.0)a             34 (48.6)a            40 (71.4)b     15.668    <.001    .296
WL: Second-order mentalizing   —                      —                     —              2.944     .230     .128
ME: Affect explain*            —                      —                     —              6.150     .046     .184
S: Sarcasm detection           32 (59.3)a             49 (68.1)a,b          46 (82.1)b     6.995     .030     .196
NA: Optimal behavior           —                      —                     —              5.398     .067     .173
NA: Intention inferred         9 (16.7)a              15 (21.4)a            35 (62.5)b     32.906    <.001    .428
FB: Divergent experience       —                      —                     —              2.386     .303     .115
FB: Knowledge explain          —                      —                     —              0.717     .699     .063

Note. WL = white lie; ME = Mind in the Eyes test; S = Vocal Sarcasm; NA = Next Action; FB = False Belief. Values indicate the number and percentage of individuals within conditions presenting indicators of mentalizing; dashes mark cells for which counts were not reported. Frequencies with different subscripts are statistically different by condition. *Multivariate test was statistically significant, but pairwise comparisons yielded no clear group differences.
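The reported test statistics follow directly from the cell counts and condition sizes (ns of 54, 72, and 56). As an illustration, the chi-square statistic and Cramér's V for the sarcasm-detection row can be recomputed by hand; a minimal sketch in Python:

```python
import math

# Sarcasm-detection counts: (detected, not detected) per agent condition
observed = {
    "machinic": (32, 54 - 32),
    "anthropomorphic": (49, 72 - 49),
    "human": (46, 56 - 46),
}

grand = sum(sum(row) for row in observed.values())  # N = 182
col_totals = [sum(row[j] for row in observed.values()) for j in range(2)]

# Chi-square test of independence: sum of (O - E)^2 / E over all cells
chi_sq = 0.0
for row in observed.values():
    row_total = sum(row)
    for j, obs in enumerate(row):
        expected = row_total * col_totals[j] / grand
        chi_sq += (obs - expected) ** 2 / expected

# Effect size: Cramér's V = sqrt(chi² / (N * min(rows - 1, cols - 1)))
cramers_v = math.sqrt(chi_sq / (grand * min(len(observed) - 1, 1)))
# chi_sq ≈ 6.995 and cramers_v ≈ .196, matching the sarcasm row above
```

The same computation, applied to each indicator's 3 × 2 contingency table, yields the remaining statistics.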
That there were significant differences among robots and a human for explanations of white lies (face-saving deception), predicted behavior (minded next actions), and sarcasm (nonliteral language) suggests that differences in mentalizing may emerge when people are prompted to elaborately make sense of agent behaviors. More indirect indicators (lie detection and box selection in a false-belief test) were frequent but did not significantly differ as a function of agent type. This suggests that mentalizing for robots and humans may be more alike than different when it comes to nonelaborative reactions to behavior. Acknowledging that statistical nondifference does not necessarily mean equivalence, these findings are tentatively interpreted to suggest that (RQ1) people may similarly infer mental states in robots as in humans for superficial behavior observations, but mentalizing for robots is diminished when more deeply elaborating to explain agent behaviors.
To investigate relationships among mentalizing indicators, logistic regression was conducted to evaluate how the 10 implicit indicators, agent type, and interactions thereof were associated with explicit binary mind ascription (0/1). Implicit mentalizing indicators and agent types (dummy coded with human as referent group, 0) were entered in the first block, and mentalizing/agent-type interaction terms in the second.
The Block 1 model was statistically significant, correctly classifying 77.5% of cases (Table 3 and S1-08). Those who explicitly ascribed mind to the agent were far less likely to have identified the agent’s white lie as an untruth (compared to those who did not ascribe mind), but more than twice as likely to have used mentalistic explanations for white lies. There was also a main effect of agent type: those viewing either robot were far less likely to ascribe mind compared to those viewing a human agent. Finally, considering agent/indicator interaction terms, Block 2 was significant, correctly classifying an additional 3.4% of cases over Block 1. There were significant agent/indicator interactions for the next-action test: for the machinic robot only, those explaining behavior as intentional were 25 times more likely to ascribe mind, whereas those predicting an optimal next action were far less likely to do so.
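For readers less familiar with logit output, the likelihood language above is the exponentiated regression coefficient: exp(B) converts a change in log-odds into an odds ratio. A small illustration in Python (the coefficient values are illustrative, back-computed from the reported odds ratio rather than taken from the model output):

```python
import math

def to_odds_ratio(b: float) -> float:
    """Convert a logistic-regression coefficient (change in log-odds) to an odds ratio."""
    return math.exp(b)

# A coefficient near 3.22 corresponds to the ~25x odds of mind ascription
# reported for intention-explainers in the machinic-robot condition
or_intention = to_odds_ratio(3.22)   # ≈ 25

# Negative coefficients give odds ratios below 1 (reduced likelihood),
# as with optimal next-action predictions in that condition
or_optimal = to_odds_ratio(-1.5)     # < 1
```

Odds ratios above 1 indicate a predictor increases the odds of explicit mind ascription; ratios below 1 indicate it decreases them.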
Addressing RQ2, few implicit ToM indicators were associated with explicit mind ascription. There was a negative main effect of agent type on mind ascription, with lower likelihood for either robot compared to a human, as well as main effects of white-lie indicators (positive for first-order mentalistic explanation and negative for more general lie detection). It may be that mentalistic explanations and explicit mind ascription both draw on elaborated thinking (and so are aligned). Notable interaction effects for the machinic robot (positive for intentionality and negative for optimal next-action) indicate that robotic-agent characteristics moderate some implicit–explicit mentalizing relationships. These patterns suggest that explicit mind ascription may be promoted/reduced when robots appear to violate/uphold common mental models for robots: the machinic robot exhibited expected mechanical properties cuing the heuristic that mindless machines make systematic and optimal decisions (see Sundar, 2020) but also appeared to mean to make that decision, promoting mind ascription.
In considering potential relationships between implicit/explicit mentalizing and trust in the agent, separate analyses were conducted for the two trust measures because they are distinct operationalizations of trust: (a) an indirect evaluation of the agent’s trustworthiness traits (the omnibus metric, continuous) and (b) a hypothetical behavioral-intention indicator (accepting the collaboration invitation or not, categorical); there is a moderate correlation between the two, r = .429, p < .001.
Regressing the omnibus trust metric upon mentalizing indicators and agent type, Model 1 (main effects) was significant, F(13, 164) = 3.37, p < .001, adj. R² = .148 (Table 4). Explicit mind ascription and more frequent use of emotion words to explain nonverbals were positively associated with trust; those viewing the anthropomorphic robot had higher trust ratings than those considering the human (which was not the case for the machinic robot). Although Model 2 (interactions) was significant, F(35, 142) = 1.55, p = .040, adj. R² = .097, no individual terms significantly predicted omnibus trust. See S1-09 for complete regression results.
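As a consistency check on the regression reporting above, the adjusted R² can be recovered from the reported F statistic and its degrees of freedom alone; the sketch below uses the Model 1 values, and the implied sample size is an inference from the degrees of freedom rather than a figure stated in the text.

```python
# Recover R-squared and adjusted R-squared from F(13, 164) = 3.37 (Model 1).
df_model, df_resid, F = 13, 164, 3.37

# For OLS, F = (R2/df_model) / ((1 - R2)/df_resid); solve for R2.
r2 = (F * df_model) / (F * df_model + df_resid)

# Sample size implied by the degrees of freedom (df_resid = n - p - 1).
n = df_model + df_resid + 1

# Adjusted R-squared penalizes for the number of predictors.
adj_r2 = 1 - (1 - r2) * (n - 1) / df_resid
print(round(adj_r2, 3))  # matches the reported adj. R2 of .148
```

The recovered value agrees with the reported adj. R² = .148, which also implies N = 178 observations entered Model 1.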
Logistic regression of invitation acceptance upon mentalizing indicators did not result in a significant model for main effects, χ²(13) = 19.33, p = .113, or for agent × mentalizing interaction effects, χ²(22) = 21.13, p = .513. See S1-09 for complete regression results.
Addressing RQ3, a tendency to explain agent expressions in emotional terms and explicit mind ascription positively contributed to trustworthiness evaluations (though not to trust-related intentions).
Study 1 relied on survey-presented video stimuli of others interacting with a robot, as is common in human–robot interaction studies. However, mediated robot stimuli may reduce perceived social presence and interactivity (Schreiner et al., 2017), potentially limiting the social cuing and contextualizing information required for mentalizing (Achim et al., 2013). Thus, the Study 1 protocol was replicated with live human–robot interactions in Study 2. The study was approved by the West Virginia University Office of Research Integrity and Compliance, protocol #1802982775.
Participants were recruited through a university research participation pool and offered course credit and entry into a drawing for a $150 gift card as compensation. They completed a presession online survey (S2-01) and scheduled a lab session. In that session, participants acknowledged informed consent information and were guided into a 10-foot × 12-foot room; the stimulus agent was positioned opposite a bistro table and two stools, approximately 8 feet apart. The researcher introduced participants to the agent, asked them to be seated on a stool, and guided them through interactive versions of the five randomly ordered ToM scenarios (Table 1). The Study 1 survey scenarios were adapted for live interaction: (a) participants directly observed the agent rather than viewing a video; (b) participants directly interacted with the agent instead of watching other humans interact; and (c) the researcher asked questions aloud after each scenario rather than presenting them via screen. During each questioning procedure, the agent was put “in a bubble” (the robot’s listening/visual capabilities were described as being turned off; the human was blindfolded with noise-canceling headphones) so that the participant would feel free to respond without concern for the agent’s feelings or judgment (S2-02 for complete session script). The procedures were video recorded for later response coding. Finally, participants were guided to a separate room to complete a survey capturing mind-perception and trust measures (S2-03).
The stimulus robot, “Ray,” was again a Robothespian 4 with the Socibot animated face—either the human female (Pris) or a machine (Robot 1) face, both with the female American-English accent (Heather voice). The human was the same young adult female as in Study 1. Notably, due to an inability to move the large robot between sessions to allow for complete random assignment to conditions, participants were randomly assigned to one of the two robot conditions for the first 74 cases—at which point the robot was disassembled and relocated—and the remainder of participants were assigned to the human condition. This approach was favored over randomly assigning separate lab spaces due to confounds of varied lighting and acoustics that could have influenced social cueing. Nonrandom agent assignment is acknowledged as a limitation.
The implicit ToM tests and corresponding indicators were identical to those in Study 1, except for adaptations made for verbal rather than multiple-choice responses (a single lie detection response limited to no/yes [0/1] rather than both measured and coded, given their high correlation in Study 1). Also identical were measures for explicit mind ascription (no/yes, 0/1); trust (α = .950), willingness to collaborate (no/yes, 0/1), and demographics. Trust items were again collapsed into an omnibus metric because most items loaded onto a single factor and retention allowed for faithful comparison against Study 1 (S2-00). Of note, nonresponse to some postinteraction questions (potentially as a function of fatigue at the end of the hour-long session) resulted in low cell counts for some dependent variables and listwise deletion in some analyses, acknowledged as a limitation. Mentalizing indicators were again coded by two independent coders using the same codebook, training, and disagreement-resolution procedure as in Study 1 (S1-04); coding was conducted using session recordings. A review of Study 1 open-ended responses suggested that participants’ preexisting attitudes toward robots may influence willingness to explicitly ascribe mind, so 16 positively and negatively valenced items from the Implicit Association Test (Greenwald & Banaji, 1995) were adapted for use in the presession survey as a scale to measure preexposure attitudes toward target agent types (7-point Likert scale; negative items recoded such that higher values reflect more positive attitudes; α = .836).
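The scale-construction steps described above (reverse-coding negatively valenced items on the 7-point scale, then assessing internal consistency) can be sketched as follows. The responses are toy values for illustration only, not study data, and the function implements the standard Cronbach's alpha formula.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy 7-point Likert responses (4 respondents x 3 items); the third item
# is negatively worded, so it is recoded as 8 - x before scoring.
raw = np.array([[1., 2., 7.],
                [2., 3., 6.],
                [3., 4., 5.],
                [4., 5., 4.]])
items = raw.copy()
items[:, 2] = 8 - items[:, 2]   # reverse-code the negative item

alpha = cronbach_alpha(items)
print(round(alpha, 3))  # 1.0 for these perfectly consistent toy items
```

With real data, alpha falls below 1 to the degree that items disagree; failing to reverse-code negative items before computing alpha deflates the estimate.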
Participants (N = 102) averaged 20.34 years of age (SD = 1.749, range 18–26). They identified as 52% female and 48% male. A majority identified as White/Caucasian (65.7%), followed by Arab/Middle Eastern (11.8%), with other racial/ethnic identifications comprising 22.5%. Random assignment into robot conditions and purposive assignment to the human condition resulted in counts of machinic robot n = 39 (38.2%), anthropomorphic robot n = 35 (34.3%), and human n = 28 (27.5%). See S2-04 for the complete data set. Zero-order correlations indicate no collinearity among mentalizing indicators (see S2-05).
To again evaluate the extent to which participants engaged in implicit mentalizing of stimulus agents, the five ToM tests’ indicators were compared across conditions. Preexisting attitude toward agents was not significantly correlated with any mentalizing indicator, so it was not included in analysis. Indicators were again evaluated individually via chi-square, except for emotion-word count, analyzed via ANOVA. There were no significant differences across conditions for white-lie first-order or second-order mentalizing, affect rationale for facial expressions, next-action as optimal or intentional, or false-belief box selection or mentalistic explanation (Table 5; see S2-06). Lie detection differed across conditions and was least likely elicited for the human. Sarcasm detection also differed: It was more likely inferred for humans and least likely for the machinic robot (with considerations of an anthropomorphic robot not differing from other conditions). In addition, emotion-word count differed, F(2, 98) = 3.20, p = .045, ηp² = .06, with highest frequency for humans and lowest for machinic robots (with considerations for anthropomorphic robots not differing from those for other agents)—though all mean values indicate frequent use of emotion terms (M range 3.18–3.71, of a possible range 0–4).
Indicator | Machinic robot n (%) | Anthro. robot n (%) | Human n (%) | Test statistic
WL: Lie detection | 24 (77.4)a | 18 (64.3)a | 6 (21.4)b | χ²(2) = 20.035, p < .001, V = .480
WL: First-order explain | | | | χ²(2) = 2.033, p = .362, V = .146
WL: Second-order explain | | | | χ²(2) = 3.954, p = .138, V = .204
ME: Affect explain | | | | χ²(2) = 4.718, p = .095, V = .217
S: Sarcasm detection | 14 (42.4)a | 20 (57.1)a,b | 21 (75.0)b | χ²(2) = 6.570, p = .037, V = .262
NA: Optimal behavior | | | | χ²(2) = 1.370, p = .504, V = .119
NA: Intentional inferred | | | | χ²(2) = 1.030, p = .598, V = .106
FB: Divergent experience | | | | χ²(2) = 4.636, p = .098, V = .213
FB: Knowledge explain | | | | χ²(2) = .345, p = .842, V = .058
Note. WL = white lie; ME = Mind in the Eyes test; S = Vocal Sarcasm; NA = Next Action; FB = False Belief. Values indicate number and percentage of individuals within conditions that presented indicators of mentalizing. Frequencies with different subscripts are statistically different by condition.
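The chi-square statistics in the table can be reproduced from the cell frequencies. The sketch below uses the lie-detection row; the "did not detect" counts are inferred from the reported percentages (24/31 ≈ 77.4%, 18/28 ≈ 64.3%, 6/28 ≈ 21.4%), which is an assumption about the per-condition denominators (these vary with item nonresponse and are not stated directly).

```python
import numpy as np

# Lie-detection counts per condition (machinic, anthro., human).
# Row 1: detected the lie; row 2: did not (inferred from percentages).
observed = np.array([[24, 18, 6],
                     [7, 10, 22]])
n = observed.sum()

# Expected counts under independence: outer product of margins / N.
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n

# Pearson chi-square and Cramer's V (df_min = min(rows, cols) - 1).
chi2 = ((observed - expected) ** 2 / expected).sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
print(round(chi2, 3), round(cramers_v, 3))
```

Under these inferred denominators the computation reproduces the reported χ²(2) = 20.035 and V = .480, which lends confidence to the reconstruction.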
Study 1 found that people observing an interaction superficially mentalized robots similarly to humans, but more strongly mentalized humans when elaboration was prompted. In comparison, the Study 2 in-person replication indicates that, in live interactions, people do mentalize humans and robots similarly; however, physically copresent robots (compared to humans) garner more frequent lie detection, elicit emotional explanations for behaviors less frequently, and prompt less frequent identification of sarcasm. To again address RQ1, there is often no difference in mentalizing indicators among robots and humans, but divergence manifests in situations in which in-person nonverbal cues must be interpreted, in which language must be taken nonliterally (white lies and sarcasm), or in which language is not present at all (facial expressions).
Logistic regression was again conducted to evaluate associations of implicit mentalizing indicators, agent type, and their interactions with explicit mind ascription. The Block 1 model was significant and correctly classified 77.6% of cases; only agent class predicted explicit mind ascription (Table 6 and S2-07). For Block 2, the model failed to converge. This was likely a specification issue arising from missing data: Logistic regression uses maximum likelihood estimation, which requires listwise deletion for cases with missing values on any one measure; this reduced the degrees of freedom available for the large number of interaction terms, leading to (quasi)complete separation (Allison, 2004). Analysis was repeated using only the two significantly predicting interaction terms from Study 1 (machinic × optimal and machinic × intentional) as the most likely to predict explicit mind ascription in the live interaction. With this approach, the Block 2 model converged but was not significant.
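The convergence failure described above can be illustrated numerically. The sketch below uses toy data, not the study data: when a predictor (quasi)completely separates the outcome, the likelihood has no maximum, so unregularized Newton-Raphson (IRLS) coefficient estimates grow without bound instead of converging, whereas overlapping outcomes yield a stable finite estimate.

```python
import numpy as np

def fit_logit(X, y, iters):
    """Plain Newton-Raphson (IRLS) for logistic regression, no safeguards;
    a tiny ridge on the Hessian only guards against exact singularity."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted probabilities
        W = p * (1.0 - p)                          # IRLS weights
        grad = X.T @ (y - p)                       # score vector
        hess = (X * W[:, None]).T @ X              # observed information
        beta = beta + np.linalg.solve(hess + 1e-9 * np.eye(X.shape[1]), grad)
    return beta

x = np.arange(8.0)
X = np.column_stack([np.ones_like(x), x])          # intercept + predictor

# Complete separation: x > 3.5 perfectly predicts y, so the slope
# estimate keeps growing the longer the algorithm runs.
y_sep = (x > 3.5).astype(float)
b10 = fit_logit(X, y_sep, 10)
b25 = fit_logit(X, y_sep, 25)

# Overlapping outcomes: a finite MLE exists and estimates stabilize.
y_mix = np.array([0., 0., 0., 1., 0., 1., 1., 1.])
c10 = fit_logit(X, y_mix, 10)
c25 = fit_logit(X, y_mix, 25)
```

In the separated case the slope after 25 iterations exceeds the slope after 10, with no limit in sight; in the overlapping case the two fits agree to many decimal places. Statistical packages detect this divergence and report nonconvergence, which is the behavior attributed to the Block 2 model here.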
Addressing RQ2, then, in live interactions, people explicitly mentalize robots differently than humans, without a material association with any implicit mentalizing indicators. The physical copresence of a robot (of either appearance) appears to override the contributions of implicit indicators (i.e., white-lie detection and mentalistic explanations for deception) found in Study 1.
To consider the relationship between mentalizing and trust, data were analyzed using the same approach as in Study 1: linear regression for omnibus trust and logistic regression for trust-related intent. Regressing the omnibus trust metric upon agent type, mentalizing indicators, and agent × indicator interaction terms, there were no main effects, F(13, 35) = .657, p = .789, adj. R² = −.102, or interaction effects, F(33, 15) = 1.089, p = .446, adj. R² = .058—though low statistical power may have contributed to these null outcomes. Logistic regression indicated there were also no main effects of mentalizing indicators or agent type on willingness to accept the invitation to collaborate, χ²(13) = 15.76, p = .263. As with RQ2, the interaction-term model failed to converge. As there were no significant interactions in Study 1 to reevaluate in the live interaction, no further analysis was conducted (see S2-08).
On the grounds that mentalizing and mind ascription are central to how humans communicate with other humans and may be critical to the acceptance of technologies as social agents, the present studies conducted conceptual replications of canonical ToM tests in the perception of robotic agents compared to humans. Findings broadly suggest that implicit indicators of mentalizing are mostly nondifferent among agent types; however, the detection of nonliteral language use (i.e., deception and sarcasm) was more likely for humans across both studies. In observed interactions, intentionality inferencing in behavior explanations was more likely for humans, and in live interactions, affective interpretation of nonverbal cues was more frequent for humans than for robots (RQ1). Likelihood of explicitly ascribing mind was lower for robots (compared to a human) across both studies. In observed interactions, lie detection decreased mind-ascription likelihood whereas first-order mentalistic explanations for deception increased it; for machinic robots only, those predicting an optimal (i.e., mindful in its rationality) next behavior were less likely to ascribe mind and those mentioning intentionality in behavior explanations were far more likely. In the live interaction, only the main effect of agent type on decreased mind-ascription likelihood was exhibited (RQ2). Finally, in observed interactions, trust ratings were linked to greater tendency to describe facial nonverbal cues using emotion words, explicit mind ascription, and viewing an anthropomorphic robot; these associations were not found in live interactions. Neither implicit nor explicit mentalizing indicators nor agent types were linked with behavioral-intention indicators of trust (RQ3).
Altogether, findings generally comport with the prior research on which this investigation was based (Banks, 2020c) in indicating that implicit markers of mentalizing for robots and humans are mostly nondifferent when social cues are similar and in that there are few links between implicit and explicit markers of mentalizing. However, where these patterns do not hold, two dynamics emerged as potentially important in ToM for robots: Nonliteral language may be interpreted differently for robots than for humans, and live interactions may prompt reliance on heuristics and reduction in elaborative sense-making.
Across the studies, only a few implicit mentalizing indicators differed among agents. In both studies, interpretations of a white lie and detection of sarcasm favored mentalizing humans over robots, though the finding is interpreted cautiously given the nonrandom agent assignment in Study 2. The agent-specific divergence around these indicators is important because lies and sarcasm are both forms of nonliteral language. Detecting both deception and sarcasm relies on the interpretation of nonverbal cues, semantic context, and relational dynamics (cf., Attardo et al., 2003; Dunbar et al., 2014). As robots become more advanced, it may be that people expect robots to not only understand human sarcasm [made possible via emotion detection (Fung et al., 2016) and probabilistic sentiment analysis (e.g., Radpour & Ashokkumar, 2017)] but also to generate their own sarcastic expressions (e.g., Joshi et al., 2018).
The apparent importance of nonliteral language to mental-state inferencing holds implications for design practice. Robots that effectively employ nonliteral language by drawing on human norms for sarcasm may be more readily accepted into human social spheres and those with less convincing expressions may provoke more mechanistic—and less mentalistic—interpretations of their behaviors. Such potentials sit at the center of the debate around XAI, with some suggesting that anthropomorphic appearance and behaviors are necessary for acceptance (see Duffy, 2003) and others arguing that they leave humans susceptible to manipulation by machines (and their creators) such that transparent, mechanistic explanations are preferable (cf., Bryson, 2019), especially when deception is of concern (cf., Danaher, 2020).
In observed interactions, detecting deception was associated with lower likelihood of explicit mind ascription, whereas explaining deception in first-order mentalistic terms was linked to a higher likelihood, in tandem with a negative main effect of viewing either robot (vs. a human). In the live interaction, however, only agent type predicted mind ascription. Again interpreted cautiously (due to nonrandom assignment to the observed/live studies), findings suggest that participating in an encounter with an interactive agent places social demands on human interlocutors (see Bowman, 2018). Those demands may have diminished participants’ opportunities to engage in third-person cognitive elaborations, moving them instead to rely more on agent-type heuristics. In other words, it may be that mentalizing robots follows dual-process theories for social cognition (see Evans & Stanovich, 2013): Observing affords the disengagement required for people to engage in slower, more purposeful sense-making, but interacting with a robot moves people to engage in faster, more automatic mental shortcuts regarding that agent class.
An agent’s ontological category—the type of thing an agent is thought to be—is known to provoke powerful heuristics (Guzman, 2020): often in spite of social cues, sometimes in coordination with cues, and sometimes overridden by cues. That people directly interacting with robots trended toward rejecting mind ascription suggests that, in the absence of time and distance to reflect on the agents, mental models may govern conscious responses to evaluating robots’ minds (Banks, 2020b). These models likely include the heuristics for “mindless machines” or “iron idiots” often presented in media representations of robots (Bruckenberger et al., 2013). Based on the current studies’ designs, however, it is difficult to disentangle whether the shift to relying on ontological-category heuristics may be a function of copresence (vs. mediation) or of active participation (vs. more passive observation). Regarding copresence, physical immediacy promotes increased social presence (Schreiner et al., 2017), which may make a robot’s outgroup status more salient or induce novelty effects that prompt reliance on media tropes (Banks, 2020b). Regarding participatory engagement, it may be that live interactions placed higher social demands on participants, moving them to reallocate cognitive resources from sense-making of agent behavior to the interactive activities at hand. Resources reallocated, they may have had to rely more on mental shortcuts for mindlessness rather than elaborating on the possibility of mindfulness (cf., Bowman, 2018; Sundar, 2020). Because heuristics are powerful in fostering both social and technological acceptance (cf., Nass et al., 1994), future research should work to disentangle the mechanisms by which people are moved to invoke heuristics over elaborative considerations—especially in relation to contexts in which robots align with or deviate from people’s mental models for those agents.
The present study’s findings are subject to limitations that call for replication and extension of the work. In addition to the acknowledged statistical power issues, nonrandom assignment to the human condition in Study 2, and nonrandom assignment across the two studies, a cursory qualitative review of behavior explanations hints at the importance of contextual information for mentalizing to occur (e.g., sense-making appeared to include consideration of environmental factors and other actors in the scenarios) such that different patterns may emerge according to domain (non)specificity. The present studies also accounted only for humans’ mentalizing of robots, when effective dyadic interactions require communicative coordination of two minds (Bahrami et al., 2010). Indeed, a rich body of work has long considered the ways that robots may be programmed to model human interlocutors’ mental states (see Scassellati, 2002) toward more comfortable and productive interactions via fluid and meaningful conversations (Mavridis, 2015), prediction of human intent (Görür et al., 2017), and even recognizing themselves as distinct social agents (Dautenhahn, 1994). Future research should attend to the potential dynamics and effects of robots’ modeling of human mental states, how such modeling may contribute to meaningful interactions through coordinated social cognitions, and how these dynamics may shift according to contextual and personological factors. Finally, the convenience samples for both studies were nonrepresentative and no participants exhibited signs of cognitive impairment, so generalizability of results to other populations should be tested.
This study’s findings suggest that automatic mentalizing of robots and humans is mostly nondifferent (cf., Thellman et al., 2017), except when nonliteral language and social demand may prompt elaborated sense-making and/or reliance on agent-category heuristics for (non)mindedness. Thus, as social technologies advance in sophistication toward the potential for intersubjective human–machine relations (Marchetti et al., 2018), it may be that humans’ perceptions of those agents rely sufficiently on preconscious social-cognitive processes (cf., Reeves & Nass, 1995) such that machine agents might offer similar mind-associated relational gratifications—and encumbrances—as do humans (see de Graaf, 2016). Considering these potentials in tandem with these findings, the perceptions of mind in nonhuman agents are critical to understand and may be best explored through dual-process paradigms (Banks, 2020c; cf., Lobato et al., 2013; Evans & Stanovich, 2013) given the divergence of implicit and explicit indicators revealed in this study.
Copyright © the Author(s) 2020
Received June 22, 2020
Revision received October 16, 2020
Accepted October 16, 2020