Volume 5, Issue 2. DOI: 10.1037/tmb0000131
Reciprocity is central to the formation and maintenance of relationships. Reciprocity and relationship formation change with children’s development and are key aspects in human–robot interaction. So far, it is unclear how children reciprocate and build a relationship with a social robot and how this reciprocity develops with age. In the present study, we collected data from 147 children aged 5–12 years to investigate age differences in reciprocity and relationship formation toward a social robot. To test reciprocity, children completed an Alternated Repeated Ultimatum Game with a social robot and another child. Children also completed a survey on relationship formation to assess robot-related closeness, trust, and social support. Results from a Bayesian linear mixed-effects analysis indicated that children reciprocated toward a robot in much the same way as toward another child. While reciprocity differed across age, with lower values for 8–10-year-olds compared to younger and older children, these age differences in reciprocity were also observed when children interacted with the robot. Children’s relationship formation with a social robot also changed with age. Our findings suggest that established theories from the human–human literature (e.g., age differences in reciprocity) are also relevant for human–robot interaction. Children’s age is an important determinant of how children interact with and perceive robots.
Keywords: human–robot interaction, child–robot interaction, reciprocity, relationship formation, development
Acknowledgments: The authors thank the statistics support at Radboud University and Utrecht University for their support with the analyses and Mieke Oldeman, Fenna Andriessen, Lobke de Bruin, Ann Hogenhuis, and Ineke Heyselaar for their help with the data collection. The authors extend special thanks to Floor Burghoorn and Bernd Figner.
Funding: Funding was received by the Behavioral Sciences Institute, Nijmegen.
Disclosures: The authors have no conflicts of interest to report.
Data Availability: The collected data, including all measures and materials, can be found in the Open Science Framework project (Leisten et al., 2021a; https://osf.io/z9kxb/). The analysis code and game code, including the programming of the robot, can be found on Gitlab (https://gitlab.com/human-plus/reciprocity-project). The authors report all study measures, all manipulations, any data exclusions, and the sample size determination rule.
Preregistration: The study’s procedure and analyses were preregistered on the Open Science Framework (Leisten et al., 2021b; https://osf.io/5djbc).
Open Science Disclosures: The data are available at https://osf.io/z9kxb/. The experimental materials are available at https://gitlab.com/human-plus/reciprocity-project and https://osf.io/z9kxb/. The preregistered design and analysis plan (transparent changes notation) are accessible at https://osf.io/5djbc.
Open Access License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND). This license permits copying and redistributing the work in any medium or format for noncommercial use provided the original authors and source are credited and a link to the license is included in attribution. No derivative works are permitted under this license.
Correspondence concerning this article should be addressed to Luca M. Leisten, Department of Humanities, Social, and Political Sciences, Eidgenössische Technische Hochschule Zurich, 8092 Zurich, Switzerland, or Ruud Hortensius, Department of Psychology, Utrecht University, 3584 CS Utrecht, Netherlands. Email: [email protected] or [email protected]
Reciprocity, fundamental for cooperation and observed across every human culture, is central to social interactions (Fehr & Fischbacher, 2003, 2004; Gouldner, 1960; Wörle & Paulus, 2019). It is crucial to building and maintaining relationships and is differently displayed toward friends and nonfriends (Buunk & Schaufeli, 1999; Clark & Ayers, 1988; Moore, 2009). Reciprocity and the formation of relationships develop with age (Chernyak et al., 2019; Furman & Bierman, 1984). These processes are not only relevant for human–human interaction but have also been identified as key behaviors in human–robot interaction (HRI; Kahn et al., 2007; Lorenz et al., 2016). So far, it is unknown how they are displayed during HRI and how this is impacted by age. Age-related differences might have important implications for the development of engaging and individualized robotics systems. If children across age differ in their perception of and interaction with robots, researchers might need to treat children of different ages as heterogeneous groups to fully capture the dynamics during those interactions. In the present study, we therefore seek to answer these questions and investigate 5–12-year-olds’ reciprocity and relationship formation toward a social robot.
Reciprocity has been defined as the compulsion to return a favor or gift in human relationships (Gouldner, 1960). Over the years, this definition was adapted to include responding to friendly actions by being nice and more cooperative, and to hostile actions by being nastier and even brutal (Fehr & Gächter, 1998). This definition is in line with Falk and Fischbacher’s (2006) theory of reciprocity and has been used in previous work on HRI (Sandoval et al., 2016). Reciprocity can be divided into positive (responding to a positive action with a positive action) and negative (responding to a negative action with a negative action) reciprocity (Bereby-Meyer & Fiks, 2013; Chernyak et al., 2019; Schug et al., 2016). Here, we focus on overall reciprocity, in line with previous work (Sandoval et al., 2021), as this is most relevant to HRI, in which both positive and negative actions might be reciprocated. Importantly, reciprocity, which is related to other concepts such as cooperation (Axelrod, 1984; Fehr et al., 2008), fairness (Bereby-Meyer & Fiks, 2013), and altruism (Kolm, 2006; Murnighan & Saxon, 1998), is a social, normative behavior (Gouldner, 1960; Wörle & Paulus, 2019). It is not altruistic but is elicited by its long-term benefits for the individual through a potential payback by the interaction partner (House et al., 2013). Consequently, it assumes an already developed understanding of social norms and does not include simple mimicry or imitation.
Although there is no universally accepted developmental trajectory, reciprocity in human–human interaction develops with age. While most studies agree that reciprocity starts to develop early, the actual emergence of reciprocal behavior varies between 3 and 8 years, depending on the operationalization and measurement (e.g., Chernyak et al., 2019; Vaish et al., 2018; Warneken & Tomasello, 2013). Children start behaving more fairly at age 7 (Fehr et al., 2008) and show a linear increase in reciprocity until approximately 8 years (Bereby-Meyer & Fiks, 2013; House et al., 2013; Warneken & Tomasello, 2013). Meanwhile, van den Bos et al. (2011) found no differences in reciprocity between 12- and 14-year-old children, adolescents, and adults, suggesting that the development of reciprocity is completed at some point between 8 and 12 years.
Like reciprocity, relationship formation is a crucial aspect in human–human interaction and develops with age (Buunk & Schaufeli, 1999; Clark & Ayers, 1988). During middle childhood (5–12 years), friendships evolve from coordinated fantasy play to emphasizing acceptance, common interests, social support, and commitment (Bigelow, 1977; Bigelow & LaGaipa, 1980; Furman & Bierman, 1984; Parker & Gottman, 1989).
Reciprocity and relationship formation are relevant not only in human–human interaction but also in HRI. Reciprocity potentially improves human–robot collaboration and trust and has been called a “key social mechanism” to build successful companion robots (Burger et al., 2017; Kahn et al., 2007; Lorenz et al., 2016). For instance, reciprocity supports collaboration with a robot during a joint judgment task (Zonca et al., 2021). Despite research on adults’ reciprocity toward robots (e.g., Hsieh et al., 2020; Sandoval et al., 2016, 2021), little is known about the development of reciprocity in child–robot interaction (CRI).
Previous work on the development of reciprocity in CRI only covered related concepts like sharing and prosocial behavior. For instance, 5–6-year-olds apply the same social norms (such as dividing resources equally) to robots as to humans (Di Dio et al., 2020), and 8–10-year-olds’ prosocial behavior is influenced by a robot’s previously exhibited prosocial behavior (Peter et al., 2021). Directly comparing children of two age groups (4–5-year-olds and 8–9-year-olds), Nijssen et al. (2021) found no differences in sharing resources with a robot.
Relationship formation in CRI has been studied, but usually without consideration of potential age differences within samples (Belpaeme et al., 2013; Kruijff-Korbayová et al., 2015; Leite et al., 2009). For example, van Straten et al. (2020) found 7–11-year-olds to perceive closeness, trust, and social support from a robot, but they did not report on potential age differences. Other work investigated 6–7- and 11–12-year-olds and found younger children to spend more time with the robot and maintain their interest for longer, compared to older children (Kanda et al., 2004). While these studies provide important general insights into children’s reciprocity and relationship formation with social robots, they lack consideration of age-related differences and their consequences on CRI.
Our study aims to investigate age-related differences in reciprocity and relationship formation during CRI in middle childhood (5–12 years) and whether children’s reciprocity toward robots differs from their reciprocity toward another child. We hypothesize a nonlinear development of reciprocity across age. Specifically, we hypothesize a linear increase of reciprocity until preadolescence (∼8–10 years), followed by no further increase in reciprocity after approximately 8–10 years (see Supplemental Materials A, Supplemental Figure S1, and Hypothesis 1). This is based on previous work in which children show a linear increase in reciprocity from preschool until approximately 8 years (Bereby-Meyer & Fiks, 2013; House et al., 2013; Rotenberg & Chase, 1992). Additionally, van den Bos et al. (2011) found no increase in general reciprocity between 12 and 22 years, indicating no further development at some point between ∼8 and 12 years. As reciprocity is a social, normative behavior (Gouldner, 1960) and children apply social norms to robots (Di Dio et al., 2020), the development of reciprocity toward children and robots could be similar. However, due to the lack of previous research, age could also be expected to moderate the association between reciprocity and partner type (robot vs. child). This is therefore an open research question with two possible outcomes: a difference in the reciprocity development between robot and child or no difference (Supplemental Materials A, Supplemental Figure S1, and Hypotheses 2.1–2.2).
Regarding relationship formation, we hypothesize that the developmental pattern of relationship formation in CRI is similar to the pattern in human–human interaction (Supplemental Materials A, Supplemental Figure S1, and Hypothesis 3). We expect a nonlinear, quadratic association between age and relationship formation. Children in preadolescence (8–10 years) will show a stronger relationship formation with a social robot than younger children (5–7 years), as they build friendships through games and contests (Parker & Gottman, 1989) and can potentially make better use of a robot’s functionality. Additionally, they place a stronger emphasis on intimacy, common activities, and friends’ personality features (e.g., being nice) for building friendships than younger children (Parker & Gottman, 1989). These are characteristics a robot could potentially fulfill, leading those children to build a stronger relationship with the robot than younger children. After preadolescence (10+ years; World Health Organization, 2021), we expect the slope of relationship formation to turn negative as intimacy, loyalty, and social support become more important for building friendships (Bigelow, 1977; Furman & Bierman, 1984), which might be more difficult to achieve with a robot in a short period of time.
We recruited 147 Dutch-speaking participants (69 girls, 78 boys) from two schools in the Netherlands (School 1: n = 64; School 2: n = 83). Participants were between 5 and 12 years old, with a mean age of 8.31 years (SD = 1.98; see Supplemental Material D for the gender distribution per age). Sample size calculations for mixed model analysis are highly complex and require prior information, which we did not have. Our sample size was therefore based on the class sizes and consent rates of the participating schools. Based on an expected no-consent rate between 50% and 70% and a maximum number of 375 children in the schools, we preregistered an expected sample of 110–180 participants and stopped data collection after collecting data in the two participating schools.
As participants were minors, their parents were informed through information letters and provided active consent prior to the study. Participants and their parents were informed that the subject of the study was to investigate the relationship formation between children and robots but were otherwise naive about the objective of the study. The study was approved by the Ethics Committee of the Faculty of Social and Behavioural Sciences of Utrecht University (protocol number: 21-412).
We used a one-factorial task with the within-subjects factor opponent (robot vs. child) and a relationship survey with subscales for closeness, trust, and social support. The children engaged in a group session (30 min) and an individual session (20 min), in which they played an Alternated Repeated Ultimatum Game (ARUG; Sandoval et al., 2021) against a Cozmo robot and an ostensible child and filled out a survey on relationship formation. Children were told that they were playing against another child but, in reality, played against a computer. This procedure is in line with previous work and has the advantage of controlling the opponent’s game behavior, which is especially relevant when investigating young children. Out of eight reciprocity-related papers, four used a computerized “human” (masked as humans or actual humans; Bereby-Meyer & Fiks, 2013; Chernyak et al., 2019; Di Dio et al., 2020; van den Bos et al., 2011), two used puppets that were played by experimenters, and only two used face-to-face interactions with children (House et al., 2013; Warneken & Tomasello, 2013).
We used a Cozmo robot (Digital Dream Labs; Anki) due to its child-friendly appearance and easily programmable behavior. The robot is a small, palm-sized, social robot toy (size: 5″ × 7.25″ × 10″) with a light-emitting diode display (128 × 64 resolution), movable fork arms, and track-based movement. The Cozmo robot works with an application on a tablet or smartphone and interacts with the user through free play or programmed games; it has face recognition and a distinct character with emotional reactions. It has a large number (>700) of emotional and behavioral animations that are regulated and triggered through the built-in emotion engine (Hortensius et al., 2022). We programmed Cozmo’s behavior using Python (version: 3.7; Python, 2022) and the Cozmo software development kit (App version: 3.4.4).
The ARUG (Sandoval et al., 2021) is a modified version of the classic Ultimatum Game (UG; Güth et al., 1982), used to assess reciprocity and fairness in adult and child populations (e.g., Bereby-Meyer & Fiks, 2013; Sandoval et al., 2016; Sutter, 2007; Wittig et al., 2013). Test–retest reliability of the UG for adults is good (r = .82; Gilam et al., 2019), while test–retest reliability for children has not yet been established. In contrast to the classic version of the UG, the ARUG has players alternate roles every round. This role exchange allows for measuring the interdependencies of cooperation and reciprocity and was therefore chosen for this study. In each round, one player distributes a given number of resources between themselves and a receiver. The receiver can accept or reject the offer. If accepted, the resources are distributed according to the initial offer. If rejected, neither agent receives any resources (Figure 1). To maximize one’s outcome, players should logically always accept any given offer, as rejecting would result in not gaining any resources. In practice, people do not always react in this manner and tend to use social punishment in response to unfair offers (Oosterbeek et al., 2004).
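The payoff rule of a single ARUG round can be sketched as follows (a minimal illustration with hypothetical names, not the study’s game code; offers are written as coin pairs summing to 10):

```python
# Illustrative sketch of one ARUG round (hypothetical names).
def play_round(offer, accepted):
    """Return the coins each player gains this round.
    offer is a (proposer_coins, receiver_coins) pair summing to 10."""
    proposer_coins, receiver_coins = offer
    assert proposer_coins + receiver_coins == 10  # 10 coins per round
    if accepted:
        return proposer_coins, receiver_coins
    return 0, 0  # a rejected offer leaves both players with nothing

print(play_round((7, 3), accepted=True))   # (7, 3)
print(play_round((7, 3), accepted=False))  # (0, 0)
```

The sketch makes the rational-choice argument concrete: any accepted offer yields more than a rejection, so rejecting an unfair offer is costly social punishment rather than payoff maximization.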
To accommodate young children while meeting hygiene regulations due to the COVID-19 pandemic, a virtual–physical version of the ARUG was designed, and wooden coins were used to model the typical resource distribution in the game (Figure 2A). A game board (40 × 60 cm) with five differently colored fields was built (Figure 2B). The distribution options were limited to five, to not overwhelm children with choices and to reduce their decision time. Each field included a possible distribution of physical wooden coins (1 cm), the corresponding numbers written above the coins, and a physical button that could be pressed to indicate an offer and that recorded the response in Python. A smaller yes/no board (9 × 20 cm; Figure 2C) was built to accept or reject offers. The game mechanics (i.e., the distribution of coins after each round) were programmed using Python and were displayed on a computer screen (Figure 2F).
To ensure consistency and account for the influence of the starting player and opponent’s game strategy, we maintained a consistent starting player (ostensible human or Cozmo robot), always started with a 5:5 distribution, and employed a tit-for-tat game strategy, following Sandoval et al. (2016, 2021). In this strategy, the player always reciprocates the previous offer or response. In each round, the starting agent distributed 10 wooden coins between themselves and the receiver, according to the distributions 1:9, 3:7, 5:5, 7:3, or 9:1, by pressing a button (child), by displaying the decision on the game screen (ostensible human), or by driving to the corresponding field and displaying the distribution on its screen (Cozmo robot, Figure 2D). The receiver could then accept or decline the offer by pressing the yes/no button (child; Figure 2C) or by displaying yes/no on the screen (ostensible human and Cozmo robot, Figure 2E). Accepted offers resulted in the distribution of coins based on the offer, while declined offers led to neither player receiving any coins (Figure 2F). The offers were always presented with Cozmo/human on the left and the participant on the right to ensure that mimicry could not be mistaken for reciprocity (i.e., if Cozmo offered 1:9, children could reciprocate with 9:1 or mimic with 1:9). A level screen indicated the current round, followed by screens showing the active player and both players’ scores. In the Cozmo condition, the robot waited after making an offer until the child responded and then performed a happy or sad animation (depending on the decision of the child). During the child’s turn, the Cozmo robot awaited the child’s offer and displayed yes or no on its screen (Figure 2E), followed by a corresponding animation.
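The tit-for-tat offer logic can be sketched as follows (hypothetical names; following the fixed display convention above, offers are written as (Cozmo coins, child coins) pairs; an illustrative sketch, not the study’s game code):

```python
# Illustrative tit-for-tat sketch; pairs follow the fixed display:
# (cozmo_coins, child_coins).
def tit_for_tat_offer(previous_child_offer):
    """Return the opponent's next offer, mirroring the child's
    previous offer (None means this is the opening round)."""
    if previous_child_offer is None:
        return (5, 5)  # the game always opens with a 5:5 split
    cozmo_coins, child_coins = previous_child_offer
    # Reciprocate: keep as many coins as the child kept last round
    return (child_coins, cozmo_coins)

print(tit_for_tat_offer(None))    # (5, 5)
print(tit_for_tat_offer((3, 7)))  # (7, 3): the child kept 7, so Cozmo keeps 7
```

Because the display sides are fixed, mirroring the pair (rather than repeating it) is what distinguishes reciprocation from mimicry in this setup.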
Children’s offers and responses were coded as reciprocal if they corresponded to the previously made offer or response (i.e., offer 9:1 in Round 1, offer 1:9 in Round 2; response yes in Round 1, yes in Round 2), following Sandoval et al. (2021; Table 1). For our exploratory analysis, we additionally coded positive and negative reciprocity. Reciprocity was coded as positive if the child distributed more coins to Cozmo/human than to themselves, following a round in which the child received more coins than their opponent (e.g., Round 1: Cozmo offers 1:9, Round 2: child offers 7:3 or 9:1). Consequently, reciprocity was coded as negative if the child offered fewer coins to Cozmo, following a round in which they were offered less (e.g., Round 1: Cozmo offers 9:1; Round 2: child offers 3:7 or 1:9).
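Applied to offers, this coding scheme might look as follows (a hypothetical helper for illustration only; the actual coding followed Sandoval et al., 2021, and the preregistered rules; offer pairs again follow the fixed (Cozmo coins, child coins) display):

```python
# Hypothetical coding helper, not the study's analysis code.
def code_offer(prev_offer, offer):
    """Code a child's offer relative to the preceding offer.
    Offers are (cozmo_coins, child_coins) pairs."""
    codes = {"reciprocal": offer == (prev_offer[1], prev_offer[0])}
    if offer[0] == offer[1]:
        codes["valence"] = "neutral"   # a 5:5 choice
    elif prev_offer[1] > prev_offer[0] and offer[0] > offer[1]:
        codes["valence"] = "positive"  # received more, then gave more
    elif prev_offer[1] < prev_offer[0] and offer[0] < offer[1]:
        codes["valence"] = "negative"  # received less, then gave less
    else:
        codes["valence"] = "other"
    return codes

# Cozmo offered 1:9 (child received more); child returns 9:1,
# which is both reciprocal and positive
print(code_offer((1, 9), (9, 1)))
```

Note that a mirrored offer is simultaneously reciprocal and valenced (positive or negative), which is why the exploratory analysis could split overall reciprocity into the two subtypes.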
A survey measuring closeness (five items, e.g., “Cozmo is a friend”), trust (four items, e.g., “I feel that I can trust Cozmo”), and social support (four items, e.g., “If I were in trouble I could rely on Cozmo”) was used to assess children’s relationship formation with Cozmo (van Straten et al., 2020). The survey was validated by van Straten and colleagues on a sample of 7–11-year-olds (M = 9.17, SD = 0.85) using a Nao robot (SoftBank Robotics) and showed acceptable to good internal consistency (closeness: α = .84; trust: α = .86; social support: α = .71) and an item difficulty index between .71 and .94. Test–retest reliability has not yet been established. Children could indicate their response on a scale from 1 (does not apply at all) to 5 (applies completely; translated from Dutch). As our sample consisted of younger children, we decided to use smileys instead of increasing bars, as they seemed more intuitive.
Data collection took place in two schools in the Netherlands, facilitated through personal contacts. Children participated in a 30-min group session within their classes, where they were introduced to the Cozmo robot. The session included two groups playing with the Cozmo robot, and one group performing a coloring activity related to the robot. Groups were alternated every 5–7 min, and children could take pictures with the robots.
Following the group session, children were guided to the experiment room in groups of three (on the same or the following day). The room was set up with three separate experimental stations, ensuring privacy and preventing observation between participants (Figure 2A). Children started their individual session by watching a 4-min instructional video, in which they were informed that they would play the game twice, once against the Cozmo robot and once against another human. We did not test whether children believed the human manipulation, but children often asked about the other human during their game (e.g., “Am I playing against my friend?”, “Is the other child playing in a different school?”), making us confident that the manipulation worked. To ensure comprehension, children completed two test rounds against a computer and answered two test questions. If one or two questions were answered incorrectly (one question incorrect: n = 43; two questions incorrect: n = 6), an experimenter explained the game again. The high number of incorrect answers was mainly due to children misunderstanding one of the questions. We solved this by explaining and asking the misunderstood question again after explaining the game a second time. If the experimenter was confident that the child understood the game mechanics, the game was started.
Each child played a total of eight game rounds, four against the Cozmo robot and four against the ostensible human. The order of games was counterbalanced across participants, but technical difficulties resulted in changes for some participants (59% started against the ostensible human, 41% started against the Cozmo robot; n = 20 played more than four rounds, n = 12 played fewer than four rounds). The actions of the ostensible human and the robot were automated using Python. After the game, children completed the relationship formation survey in Qualtrics (Qualtrics, 2022). The order of survey items was randomized. To assist younger children with limited reading abilities and to minimize the involvement of experimenters, all text was read out using text-to-speech software. If children did not understand a question, an experimenter read out the question and answers again. Finally, children received stickers as a reward and were debriefed. The individual session took approximately 20 min.
All data analyses were performed in R (R Core Team, 2020) using RStudio (RStudio Team, 2020). More information on packages and versions can be found in Supplemental Material B.
We performed a confirmatory factor analysis and calculated Cronbach’s α (for each scale and for each age group individually) and the omega higher order coefficient. These calculations can be found in Supplemental Materials C–E.
The data frame contained 1,477 observations of 147 individuals. Missing values due to technical errors or incomplete surveys were present on a trial basis (0.81%) and on an item basis (0.47%–3.25%). More information regarding outliers, distributions, and means can be found in Supplemental Material F. In the ARUG, 858 rounds were recorded and coded. Of these, 601 were nonreciprocal, and 257 were reciprocal (Figure 3A). The distribution of reciprocity by age and condition can be seen in Figure 3B. To ensure the reliability of the ARUG across ages, we ran preliminary analyses (Supplemental Material G). The distribution of means across scale and across age by scale can be seen in Figures 3C and 3D.
After consulting with two statistical experts, we decided to deviate from the preregistered general additive mixed model. The reasons for this are further discussed in Supplemental Material H. We used a Bayesian framework using the brm function of the brms package (version: 2.12.4; Bürkner, 2017, 2018). We followed the advice by Barr et al. (2013) to use a maximal random-effects structure. The nested, repeated measures nature of the data was modeled by including a by-participant and -class random slope and intercept for condition and random slopes for age and its interaction with condition by class (reciprocity model) and by including a random intercept for participant identifier nested within class and a random slope for age within class (relationship model). We used uninformative (default, flat) priors (Student’s t distribution with a mean of 0, scale of 2.5, and 3 degrees of freedom). Follow-up tests were performed by using emmeans and emtrends commands of the emmeans package (Lenth, 2021).
To investigate reciprocity, the model included the polynomial main effects of age (linear, quadratic, and cubic; continuous), the main effect of condition (sum-to-zero coded; human coded as 1, Cozmo as −1), and the interaction between age (linear, quadratic, and cubic) and condition. The model converged, fitting a Bernoulli distribution with four chains and 4,000 iterations. Model diagnostics were checked using the plot and pp_check functions of the bayesplot package (Gabry & Mahr, 2021) and the Rhat value.
To investigate relationship formation, we specified a model including the survey mean as the outcome variable, the polynomial main effects of age (linear, quadratic, and cubic; continuous), the effect of scale (categorical, sum-to-zero coded, closeness and trust coded as 1 and 0, respectively, and social support coded as −1), and the interaction between age and scale (linear, quadratic, and cubic). The model converged, fitting a skewed normal identity distribution with four chains and 4,000 iterations. Model diagnostics were checked using the plot and pp_check functions.
We tested a linear association between reciprocity and relationship formation. The model specifications, results, and discussion can be found in Supplemental Material I. Additionally, we split reciprocity into positive and negative reciprocity. We then tested their association with age in two separate models, which were equivalent to our general reciprocity model. We also calculated chance-level reciprocity by age group (Supplemental Material J). Last, we performed exploratory analyses separately for offer and response reciprocity (Supplemental Material K).
Our analysis showed a quadratic association between age and children’s reciprocity (linear: log odds estimate = −13.25, 95% CI [−27.45, 0.85]; quadratic: log odds estimate = 14.97, 95% CI [1.52, 28.46]; cubic: log odds estimate = 5.71, 95% CI [−6.88, 18.40], Figure 4A, Supplemental Material L). As indicated by Figure 4B, the association between age and reciprocity followed a U-shape, in which the odds of reciprocal behavior decreased until around 8–10 years and increased for children aged 11 and 12. This association across age did not differ between the robot and the human condition (linear: log odds estimate = 1.14, 95% CI [−7.03, 9.02]; quadratic: log odds estimate = 1.25, 95% CI [−6.81, 9.24]; cubic: log odds estimate = −3.32, 95% CI [−10.75, 4.09]; Figure 4D). Likewise, there was no overall difference in reciprocity between the robot and the human condition, regardless of age (log odds estimate = −0.16, 95% CI [−0.40, 0.07]; Figure 4C). These findings provide the first evidence for a general age effect on children’s reciprocity regardless of agent, indicating that across age children showed similar reciprocity toward a social robot as to another child.
We found no association between age and relationship formation (linear: estimate = −1.38, 95% CI [−4.02, 1.36]; quadratic: estimate = 0.04, 95% CI [−2.38, 2.49]; cubic: estimate = 0.36, 95% CI [−1.90, 2.78]; Figures 5A and 5B, Supplemental Material L) but did find an effect of scale on relationship formation (closeness: estimate = 0.07, 95% CI [0.02, 0.13]; trust: estimate = 0.09, 95% CI [0.04, 0.14]; social support: estimate = −0.16, 95% CI [−0.21, −0.11]; Figure 5C). There was a negative quadratic interaction between age and closeness (estimate = −1.10, 95% CI [−2.12, −0.07]) and a positive linear interaction between age and trust (estimate = 1.39, 95% CI [0.34, 2.40]; Figure 5D). Closeness increased between 5 and 8 years and decreased for children above 8 years. Trust increased linearly between 5 and 12 years. There was no interaction effect between social support and age. These results indicate a general effect of children’s age on their relationship formation with a social robot, though the effect depends on the subscale of relationship formation.
We tested potential differences between positive and negative reciprocity. Children showed positive reciprocity in 124 trials and negative reciprocity in 78 trials; 55 trials were neutral trials with a 5:5 choice (Figure 6A). For positive reciprocity, there was a positive quadratic effect of age (linear: estimate = −4.76, 95% CI [−14.07, 4.23]; quadratic: estimate = 9.86, 95% CI [1.19, 18.92]; cubic: estimate = 2.53, 95% CI [−5.56, 10.84]; Figure 6B). There was no difference between the human and the Cozmo condition (estimate = −0.24, 95% CI [−0.53, 0.03]) and no interaction between condition and age (linear: estimate = 1.95, 95% CI [−4.57, 8.29]; quadratic: estimate = 0.84, 95% CI [−5.64, 7.24]; cubic: estimate = −2.06, 95% CI [−8.13, 4.05]). For negative reciprocity, there was no effect of age (linear: estimate = −1.05, 95% CI [−10.94, 9.36]; quadratic: estimate = 3.14, 95% CI [−6.40, 12.81]; cubic: estimate = 4.24, 95% CI [−4.22, 13.16]). There was again no difference between conditions (estimate = −0.04, 95% CI [−0.37, 0.29]) and no interaction between age and condition (linear: estimate = −2.19, 95% CI [−10.59, 5.56]; quadratic: estimate = 0.92, 95% CI [−6.68, 8.56]; cubic: estimate = −1.39, 95% CI [−8.78, 5.94]; Figure 6C). In sum, age had a differential effect on positive and negative reciprocity, with only positive reciprocity differing across age. This effect had the same U-shape as overall reciprocity.
The exploratory analysis of chance-level reciprocity showed that children in the middle of the age range (9-year-olds) reciprocated below chance level, while the youngest children (5-year-olds) reciprocated above chance level. Future studies should further test whether younger children make substantially more positive offers than older children and whether this effect gradually disappears with age, as hinted by our results (Supplemental Material J). Our results suggest that children in the middle of the age range might have used a game strategy other than reciprocity but still consistently used the same strategy toward a human and a robot.
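One simple way to approximate chance-level reciprocity, assuming a child chose uniformly at random (1 of 5 distributions when offering, 1 of 2 responses when accepting/rejecting), is sketched below. These uniform-choice assumptions are ours for illustration; the study’s exact computation is reported in Supplemental Material J.

```python
# Hypothetical chance-level sketch under uniform random play.
def chance_level(n_offer_trials, n_response_trials):
    """Expected proportion of reciprocal trials if every offer
    (5 options) and every response (2 options) were chosen at random."""
    total = n_offer_trials + n_response_trials
    return (n_offer_trials / 5 + n_response_trials / 2) / total

# Four offer and four response trials per opponent
print(chance_level(4, 4))  # 0.35
```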
Our analyses showed that offer and response reciprocity differed across age in distinct ways from overall reciprocity. Offer reciprocity showed no association with age, in contrast to the quadratic relationship found for overall reciprocity. Response reciprocity showed the same quadratic U-shaped association with age as overall reciprocity (Supplemental Material K).
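The linear, quadratic, and cubic age terms reported above correspond to orthogonal polynomial contrasts of age entered into the model. As a minimal sketch under stated assumptions (the function name and the QR-based construction are our illustration, analogous to R's poly(age, 3); the authors' actual analysis used a Bayesian linear mixed-effects model), such contrasts can be built as follows:

```python
import numpy as np

def orthogonal_age_contrasts(ages, degree=3):
    """Construct orthogonal polynomial contrasts (linear, quadratic, cubic)
    for an age predictor, so the three trend terms are uncorrelated."""
    ages = np.asarray(ages, dtype=float)
    # Raw polynomial basis: columns age^0, age^1, ..., age^degree
    raw = np.vander(ages, degree + 1, increasing=True)
    # QR decomposition orthonormalizes the columns in order
    q, _ = np.linalg.qr(raw)
    return q[:, 1:]  # drop the constant column, keep the trend terms

ages = np.arange(5, 13)                    # the study's 5-12-year age range
contrasts = orthogonal_age_contrasts(ages)
gram = contrasts.T @ contrasts             # identity: contrasts are orthonormal
```

The quadratic column takes equal-signed values at the youngest and oldest ages and the opposite sign in the middle, which is exactly the U-shaped (positive quadratic) pattern the age effect describes.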
In this study, we investigated children’s reciprocity toward a social robot across age and compared it to their reciprocity toward a human. As reciprocity is essential for building and maintaining relationships (Buunk & Schaufeli, 1999; Clark & Ayers, 1988), we also assessed children’s relationship formation with a robot across age. Children’s reciprocity changed with age, and this U-shaped pattern was similar for humans and robots. We found the same pattern for positive but not for negative reciprocity. Additionally, we found that children’s age had varying effects on their relationship formation with a social robot, with trust increasing linearly, closeness showing a negative quadratic association, and social support remaining stable across age. Overall, our results suggest that, across age, children reciprocate similarly toward a social robot as toward another human.
We did not observe the positive linear association between age and reciprocity reported in the literature (Bereby-Meyer & Fiks, 2013; Fehr et al., 2008; Schug et al., 2016). A first possible explanation is the lack of a general conceptualization of reciprocity. Previous work often focused on fairness, for example, on the acceptance/rejection of unfair offers (Bereby-Meyer & Fiks, 2013; Schug et al., 2016), the preference for equal shares (Fehr et al., 2008), or the allocations made by participants (Schug et al., 2016). This “fairness” reciprocity might be a different construct than “overall” reciprocity (e.g., Sandoval et al., 2021; Wörle & Paulus, 2019), with the latter showing a different trajectory across age. Second, other studies often investigated reciprocity in a one-time situation and sometimes even defined (negative) reciprocity as a reaction “to hostile actions without expecting material gains either in the present or in the future” (Bereby-Meyer & Fiks, 2013, p. 397). As we were particularly interested in the natural dynamics of reciprocity that unfold over multiple encounters in which both agents can punish or reward previous actions, we used repeated interactions. This repetition might capture a different dynamic and could ultimately reveal a different developmental pattern of reciprocity. To our knowledge, no previous study investigated the entire age range of middle childhood in one experiment, making it unclear how different conceptualizations, scales, and measurements influenced age differences in each of those studies and what the trend of reciprocity would look like in a combined study like ours. A final explanation could be the limited interaction time, which may have left little potential for relationship building and reciprocity.
Against our expectations, reciprocity did not increase with age but showed a positive quadratic association; it decreased until approximately 10 years and increased thereafter. An explanation for the lower reciprocity around 10 years might be the distinction between negative and positive reciprocity (e.g., Bereby-Meyer & Fiks, 2013; Chernyak et al., 2019; Schug et al., 2016). Our exploratory analysis confirms that negative and positive reciprocity differ across age (i.e., positive reciprocity followed a U-shape, while negative reciprocity was stable across age). Interestingly, neither the trajectory of positive nor that of general reciprocity is in line with previous work. Chernyak et al. (2019) found that positive reciprocity only develops after the age of 7. The stable presence of negative reciprocity in our study, on the other hand, is in line with Chernyak et al. (2019), who found negative reciprocity throughout their 4–7-year-old sample. A possible explanation of our findings can be found in Murnighan and Saxon’s (1998) work, which found evidence for an increase of reciprocity approximately between 8 and 12 years. The authors also found that 5–6-year-olds accepted lower offers than older children, hinting at a less strategic and more hedonistic game strategy among younger children. These results could potentially explain why younger children showed higher reciprocity than older children in our sample, causing a decrease in reciprocity between 5 and 8 years and an increase later in childhood.
Our results additionally suggest that children show similar levels of reciprocity toward a robot as toward another human. Reciprocity is a conditioned, social, and normative behavior (Gouldner, 1960; Wörle & Paulus, 2019). It is not altruistic but is elicited by its long-term benefits for the individual through a potential payback by the interaction partner (House et al., 2013). Our findings could thus indicate that children perceived the social robot as a social entity capable of remembering and reciprocating their behavior. This explanation is also plausible considering previous work showing that children and adults can attribute socialness and agency to robots (e.g., Alač, 2016; Brink & Wellman, 2020; Jackson & Williams, 2019; for a review, see Hortensius & Cross, 2018), that adults treat humans and robots alike in the Ultimatum Game ( ), and that children, although indirectly, tend to interact with robots similarly to other humans (e.g., Baxter et al., 2017; Belpaeme et al., 2018; Kanda et al., 2004). On the other hand, our exploratory analysis suggested that middle-aged children reciprocated below chance level and that positive offers decreased with age. This could indicate that middle-aged children used a strategy other than reciprocity but still consistently used the same strategy toward a human and a robot. Strategies that might have influenced children’s game choices include the development of fairness norms and strategic decision making. Potentially, middle-aged children were more focused on fairness or equity (Fehr & Schmidt, 1999), or they made strategic decisions to maximize their own gains rather than reciprocating strictly (Camerer, 2003). This potential explanation requires further research.
Last, our results suggest that the change of reciprocity across age follows a similar trajectory in the human and in the robot condition. The question remains whether this trajectory is supported by the same mechanism. Previous work showed that important concepts such as anthropomorphism and the perception of mental states in robots change across age. For example, 5-year-olds have been found to anthropomorphize robots more than 7- and 9-year-olds (Murnighan & Saxon, 1998). Kahn et al. (2012) found that 9- and 12-year-olds perceive a robot more as a mental and social entity than 15-year-olds do. While these studies highlight age-related differences in important child–robot-related concepts, our results suggest that although children’s treatment of robots changes across age, this change is the same in the human and the robot condition. Even acknowledging the difficulty of operationalizing reciprocity, we observed, at a minimum, that children change their strategy over the years: Younger children seem to favor positive offers, but their behavior might not be intentionally reciprocal. As children grow older, they gradually shift toward more competitive game strategies, markedly reducing their reciprocal behavior. As children grow even older, they potentially start to take the opponent’s offer into account, as we see a small trend toward positive reciprocity again. This behavior seems to be independent of the opponent, at least in a computer-mediated situation.
Contrary to our hypothesis, we found no overall effect of age on relationship formation but a different age effect for each scale. First, closeness had a negative quadratic association with age. This is in line with our expectations, as we predicted children in preadolescence (8–10 years) to show stronger relationship formation with a social robot than both younger and older children. Children in preadolescence (8–10 years) build friendships through games and contests (Parker & Gottman, 1989), which fits well with the game-based interaction they had with the robot. Children in early adolescence (10–14 years) place greater emphasis on loyalty, commitment, and social support (Bigelow & LaGaipa, 1980). As the robot’s social support was perceived to be rather low (compared to the other two scales), it is plausible that children for whom social support mattered more built a weaker relationship with the Cozmo robot.
Second, contrary to our expectations, trust had a positive linear association with age. That is, trust did not decrease after preadolescence but increased further. Reasons for this could be older children’s potentially better technological understanding (i.e., technological trust; van Straten et al., 2018) or their generally greater exposure to technology, either of which could have resulted in higher trust. However, no previous work has investigated the development of trust in a robot during middle childhood.
Last, social support ratings did not change across age. This finding is rather intuitive, considering the items of the social support scale (e.g., “If I were in trouble Cozmo would be willing to help me,” “If I were in trouble Cozmo would stand up for me”). These items assume a very high level of agency of the robot that goes beyond the potentially attributed agency necessary for reciprocal behavior, as outlined above. We believe that while children attributed some agency to the robot, these items may have asked more than the Cozmo robot’s abilities warranted. This is supported by informal comments made by some children during the experiment, such as “But Cozmo cannot speak” or “I cannot take Cozmo home,” as well as by the seemingly lower mean score of social support compared to the other two scales.
While we found evidence that, across age, children reciprocated similarly toward humans and robots, future studies should replicate this finding with other agents and computers (Henschel et al., 2020). This would also make it possible to test the association between children’s age and their agency and social perception of different agents, as previous work showed a possible association (Manzi et al., 2020). In the present study, children did not play with an actual human opponent but with a computerized ostensible human. While this procedure has the advantage of controlling the opponent’s game strategies (especially when testing younger children), it also carries the risk of children perceiving the opponent not as a human but as a computer. Nevertheless, other work has used the same strategy (e.g., Bereby-Meyer & Fiks, 2013; Chernyak et al., 2019; van den Bos et al., 2011), with only a few studies using face-to-face interaction when studying children (e.g., House et al., 2013; Warneken & Tomasello, 2013), and found no differences between face-to-face and computerized opponents. We are therefore confident that our results are not due to children perceiving both conditions as “computer conditions.” Second, while the Ultimatum Game is a good measure of reciprocity, it should be further adapted for easier understanding by, for example, reducing the answer options and making the shared resources more intuitive for children. Last, a common definition of reciprocity is needed to ensure comparability and generalization of results.
To conclude, we found that children showed similar levels of reciprocity toward a social robot as toward a fellow human. The general age effect followed a U-shape, with lower values for 8–10-year-olds. Children’s relationship formation with a social robot was influenced by age, with the directionality of effects differing across closeness, trust, and social support. These findings suggest that possibly similar mechanisms underlie reciprocity in children’s interactions with humans and with robots, and that age is an important consideration when designing robot interventions or studying child–robot interaction (CRI), as age might influence children’s interaction with and perception of social robots.
https://doi.org/10.1037/tmb0000131.supp