2025 Jul 8;79(3):708–718. doi: 10.1177/17470218251359985

Robots to the rescue: Robot discouragement reduces young adults’ risk-taking

Michaela Gummerum 1, Yaniv Hanoch 2, Daniel Hernandez Garcia 3, Angelo Cangelosi 4
PMCID: PMC12901644  PMID: 40626565

Abstract

A large body of evidence shows that peer pressure can increase risky behaviour, with more limited evidence indicating that peer pressure can also reduce risky behaviour. Whether robots can exert a similar influence is an open and important question. To address it, 172 participants completed the balloon analogue risk task under three conditions: control (no robot present), the presence of an encouraging robot and the presence of a discouraging robot. Participants also completed a self-report measure of their risk attitude and one assessing attitudes towards robots. Participants in the robot-discouraging condition exhibited significantly less risky behaviour than those in the robot-encouraging and control conditions: they pumped significantly fewer times, experienced significantly fewer balloon explosions and earned significantly less money. By contrast, the encouraging and control conditions did not differ significantly. Moreover, a more positive impression of the robot strengthened the effect of the robot’s discouraging statements on risk-taking. These results open new possibilities for employing robots in preventive programmes designed to reduce or alter risky behaviour.

Keywords: Human–robot interaction, peer pressure, risk reduction, risk-taking, robots

Introduction

Human–robot interaction (HRI) has moved from the realm of science fiction to reality. Robots tutor children in schools, provide therapeutic care for older adults, act as tour guides in museums and serve customers in restaurants. As an article in The Atlantic stated, ‘For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship’ (Christakis, 2018). Indeed, we have been witnessing a surge of interest in HRI, with its scope and applications constantly broadening. One key question to emerge from this line of work is whether and how robots can influence human behaviour. To investigate this question, we focus on robots’ ability to influence one key human behaviour: risk-taking. Risk-taking is a ubiquitous phenomenon that affects people across the lifespan (Rolison et al., 2017) and across a wide spectrum of contexts, including financial, health (Hanoch et al., 2019), social (Schweizer et al., 2022) and online (White et al., 2015) domains. Gaining a better understanding of how (and whether) robots can influence human risk-taking behaviour can thus have wide-ranging and important applications.

Previous research (Gardner & Steinberg, 2005) has shown that what other humans do and recommend affects individuals’ risky behaviours, especially among young adults. Such peer influence serves as a key factor in starting to smoke (Evans et al., 1978), experimenting with drugs (Dielman et al., 1987), drinking alcohol, engaging in risky sexual behaviours (Potard et al., 2008) and driving riskily (Simons-Morton et al., 2012; for a meta-analytical review, see Powers et al., 2022). In one illustrative study, researchers asked 18- to 24-year-old participants to complete the balloon analogue risk task (BART), a well-validated and extensively used lab-based measure of risk-taking, alone or in triads (Reniers et al., 2017). As predicted, participants who completed the BART alone took fewer risks than those who completed it in groups of three.

Importantly, there is a growing body of work showing that peer influence can work both ways: it can both increase and decrease risky behaviour. In one study using a novel version of the BART that evaluates social rather than financial risk, participants were assigned to one of two conditions in which they viewed others engaging in either high or low risks. Viewing other participants engage in high risks was associated with riskier behaviour, while safer decisions by others were linked to reduced risky behaviour (Tomova & Pessoa, 2018). Others (Osmont et al., 2021), using a somewhat similar modified design, report comparable results. These studies indicate that peers’ choices, whether risky or cautious, affect adolescents’ and adults’ own risk-taking, particularly in situations where explicit information about the probabilities of possible outcomes is missing, as is the case with the BART. These findings mirror those reported in Asch’s (1956) classical conformity paradigm, demonstrating that people adhere to others’ judgements despite knowing that these judgements are incorrect. Research that has attempted to replicate these findings with robots (Brandstetter et al., 2014; Salomons et al., 2018; Vollmer et al., 2018; Xu & Lombard, 2017) has, however, been inconclusive. For example, Xu and Lombard (2017) found that adults conform to an incorrect judgement of a robot majority. By contrast, Vollmer et al. (2018) replicated these findings in children but not in adults.

Other investigations have explored whether robots’ recommendations or explicit encouragement or discouragement influence human judgements generally and risk-taking specifically. Hanoch et al. (2021), using the BART, showed that adult participants who were encouraged by a robot to take more risks exhibited a higher risk-taking tendency than participants in a condition where a robot was present but did not interact with them, and a control condition where participants completed the BART by themselves. Ren and Belpaeme (2022) expanded this line of work in important ways, showing that adding tactile interaction between robot and human to the robot’s encouragement increased risk-taking even further. Di Dio et al. (2023) examined whether interacting with a virtual agent can lead to similar results. In their study, participants completed the BART either alone or in the presence of a human or robot avatar that either encouraged or discouraged risk-taking. Participants in the discouraging condition (both human and robot) took significantly fewer risks than those who played the BART alone. Counter to the researchers’ prediction, no significant differences were found for the encouraging condition.

The above studies (Hanoch et al., 2021; Ren & Belpaeme, 2022) have demonstrated that robots can increase risk-taking behaviour. They have, however, largely failed to examine whether robots can also discourage risky behaviour (Di Dio et al., 2023). This is an important omission, as academics, policymakers and other stakeholders have long been interested in developing interventions to decrease people’s risk-taking tendencies. Indeed, while programmes or interventions designed to increase risk-taking behaviour are rather rare, interventions aimed at reducing risky behaviour, especially among young people, are ubiquitous. The literature is rife with interventions designed, for instance, to reduce risky sexual behaviour in young adults (Jackson et al., 2012), risky driving behaviour (Cutello et al., 2021), smoking among young adults (Flay, 1985) and drug use, to name a few. Whether robots can help reduce people’s risk-taking behaviour is an open question, one that has important theoretical and practical implications.

Drawing on earlier investigations (Di Dio et al., 2023; Hanoch et al., 2021; Ren & Belpaeme, 2022), the present study was designed to empirically examine whether robots can encourage and, more importantly, discourage risk-taking behaviour. To do so, we evaluated participants’ risk-taking tendencies when they completed the BART either in the presence of a discouraging robot, an encouraging robot or with no robot present. We predicted that participants in the discouraging condition would exhibit lower risk-taking behaviour compared to both the control and the encouraging robot condition. Likewise, it was predicted that participants in the encouraging robot condition would show higher risk-taking tendencies compared to the control and the discouraging robot condition.

A second objective of this study was to investigate whether participants’ (positive) impression of the robot might serve as a moderator of the effect of the robot’s encouragement/discouragement on risk-taking. Ren and Belpaeme (2022) found that tactile interaction with a robot was associated with reduced negative impressions of the robot and increased the effect of risk-encouraging statements. Consequently, we expected that the effect of encouraging or discouraging statements by the robot on risk-taking would be increased for participants with positive impressions of the robot.

Method

Participants

Using G*Power (Faul et al., 2009), we conducted an a priori power analysis for Poisson regression, assuming the smallest increase in response rate beyond the base rate reported by Hanoch et al. (2021), Exp(B1) = 1.23 (i.e. a 23% higher response rate for participants exposed to a robot than for participants in the control group), a base rate Exp(B0) = 0.43, mean exposure = 30, alpha = .05, power = 0.80 and a binomially distributed predictor. This analysis indicated a minimum overall sample size of 52; we nevertheless aimed to recruit at least 50 participants per experimental condition. Data were collected from 176 participants between March and November 2023. Participants were recruited from an undergraduate student participant pool at the University of Warwick. The responses of four participants were excluded because of equipment failure. The final sample contained 172 participants (138 females, 27 males, 2 non-binary and 5 who did not indicate their gender; MAge = 18.83, SD = 1.74). Participants received course credit and the chance to win one of 20 £10 shopping vouchers.
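The reported minimum sample of 52 can be sanity-checked with a simple Monte Carlo sketch (our own illustration, not the G*Power procedure used in the study): simulate Poisson counts for two groups of 26 under a 23% rate increase and test the log rate ratio of the group totals at alpha = .05. All function names here are hypothetical.

```python
import math
import random


def poisson_draw(lam, rng):
    """Sample from a Poisson distribution (Knuth's method; fine for small lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1


def simulate_power(n_per_group=26, base_rate=0.43, exposure=30,
                   rate_ratio=1.23, n_sims=1000, seed=42):
    """Estimate power of a two-sided Wald test on the log rate ratio of
    total counts (control vs. robot group) at alpha = .05."""
    rng = random.Random(seed)
    z_crit = 1.96
    rejections = 0
    for _ in range(n_sims):
        c0 = sum(poisson_draw(base_rate * exposure, rng)
                 for _ in range(n_per_group))
        c1 = sum(poisson_draw(base_rate * exposure * rate_ratio, rng)
                 for _ in range(n_per_group))
        se = math.sqrt(1 / c0 + 1 / c1)  # delta-method SE of log(c1 / c0)
        if abs(math.log(c1 / c0) / se) > z_crit:
            rejections += 1
    return rejections / n_sims
```

With these inputs the simulated power lands near the targeted 0.80, consistent with 26 participants per group (52 overall) being sufficient under these assumptions.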

Design

This study used a between-subjects experimental design with three conditions. In the robot-encouraging condition (n = 60), the robot uttered statements encouraging participants to take more risks. In the robot-discouraging condition (n = 61), the robot uttered statements discouraging risk-taking. In the control condition (n = 51), participants engaged in the risk-taking task by themselves without a robot present. The impression of the robot was the moderator variable. Self-reported risk-taking tendencies were collected and used as a control variable.

Measures

Balloon analogue risk task

The BART (Lejuez et al., 2002) was the main measure used to assess participants’ risk-taking. Participants completed 30 trials. In each trial, they inflated a balloon on the computer screen using the computer mouse or spacebar. With each pump, the balloon inflated by one increment (about 0.3 cm in all directions), and 1 penny (U.K. currency) was added to the participant’s ‘temporary money bank’, also shown on the screen, which represented the total earnings for the current balloon. After the first pump, participants could click a ‘Collect reward’ button to ‘cash in’ the winnings for the current balloon: the balloon’s earnings were added to their overall earnings (also displayed on the screen), and the next balloon/trial started automatically. If, however, the balloon exploded after a pump, all earnings for that balloon were lost, and participants moved on to the next balloon without adding to their overall earnings. The explosion point of each balloon (an integer) was determined by a random number generator once, when the balloon was created (not with every pump), drawn from a uniform distribution ranging from 1 to 127; the highest possible number of pumps was 128. Because the explosion point was fixed in advance, the conditional probability that a balloon would explode increased with each pump that was made (1/128, 1/127, etc.). Each participant received a unique series of randomly chosen explosion points for the 30 balloons/trials.
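The trial mechanics described above can be sketched in a few lines (an illustrative reconstruction, not the study’s actual software; the helper names are our own):

```python
import random


def make_explosion_point(rng=random):
    """Draw a balloon's explosion point once, at creation: uniform over
    1-127, as described for this study."""
    return rng.randint(1, 127)


def play_balloon(pumps_attempted, explosion_point, pence_per_pump=1):
    """Resolve one BART trial.

    Returns (pumps_made, exploded, earnings_in_pence). Reaching the
    explosion point pops the balloon and wipes that balloon's temporary bank.
    """
    if pumps_attempted >= explosion_point:
        return explosion_point, True, 0
    return pumps_attempted, False, pumps_attempted * pence_per_pump
```

Because the explosion point is fixed when the balloon is created, the conditional probability that the next pump pops it rises as safe pumps accumulate, which is what makes each extra pump riskier than the last.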

For each trial, four scores were derived: (1) the number of pumps made by the participant; (2) the explosion point of the balloon (randomly determined by the programme; see above); (3) whether the balloon exploded and (4) the participant’s earnings (in U.K. pennies) for that balloon. Pumps, explosions and earnings were summed across the 30 trials.

Self-reported risk-taking

Participants’ self-reported risk-taking attitude was measured (Dohmen et al., 2011) by a single item: ‘How do you see yourself? Are you generally a person who is fully prepared to take risks or do you try to avoid taking risks?’ Participants responded on a Likert-type scale from 0 (not at all willing to take risks) to 7 (very willing to take risks).

Impression of robots

The Godspeed scale (Bartneck et al., 2009) was used to measure participants’ impression of the robot on four subscales: anthropomorphism (5 items; α = .68), animacy (6 items; α = .62), likeability (5 items; α = .89) and perceived intelligence (5 items; α = .79). Items were rated on a 5-point semantic differential scale. There were positive and significant correlations between all subscales, rs(166) = .23 to .56, all ps < .001. Therefore, scores were averaged to create one ‘robot impression’ score (α = .84); higher scores represent more positive impressions of the robot. Previous work has shown that higher scores on the Godspeed scales are associated with more positive impressions of the robot (e.g. Craenen et al., 2018; Tobis et al., 2023).
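For reference, internal-consistency coefficients such as the alphas reported here can be computed from raw item scores with Cronbach’s standard formula; the following is a generic stdlib sketch, not the study’s analysis code:

```python
from statistics import pvariance


def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k lists, each holding one item's scores across participants
    (population variances are used, per the textbook formula).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]     # total score per person
    item_var = sum(pvariance(col) for col in items)      # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))
```

Perfectly parallel items yield an alpha of 1; items that covary weakly pull the coefficient down.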

Robot

One SoftBank Robotics Pepper robot was used in the robot-encouraging and robot-discouraging conditions. Pepper is a medium-sized, 1.21-metre-tall humanoid robot with 25 degrees of freedom, designed primarily for HRI. The robot ran bespoke software and operated without experimenter intervention: its utterances and behaviours were triggered by the software running on the experimenter’s laptop and were scripted to be identical for all participants in a condition (see Supplemental Material). The robot stood on the floor beside the participants’ desks.

Procedure

The study received ethical approval from the University of Warwick Ethics Committee. All participants gave informed consent before taking part in the study.

Participants completed the experiment in the same lab room on the University of Warwick premises and were randomly assigned to one of the three experimental conditions. First, participants were given instructions for the BART. They were told that their earnings in the BART would be transformed into raffle tickets for winning one of 20 £10 vouchers: the more earnings they accrued, the more raffle tickets they received and the higher their chances of winning. In the control condition, instructions were presented on the computer screen and were also given verbally by the experimenter. In the two robot conditions, instructions were provided on screen and by the Pepper robot. Afterwards, the experimenter left the room. Participants in the control condition completed the BART by themselves. In the two robot conditions, the robot was present while participants made their decisions in the BART and provided either risk-encouraging (e.g. ‘Why did you stop pumping?’) or risk-discouraging statements (e.g. ‘I think you are risking too much’; see Supplemental Material for a full list). In the encouraging condition, the robot randomly chose six balloon numbers at the start of the experiment; on these six balloons only, it prompted the participant directly. Specifically, when the participant tried to collect the reward on one of these balloons and the robot had not yet spoken during that balloon, the robot said: ‘Are you sure? Why not try one more?’ If the participant then continued pumping, the robot uttered one of the encouraging sentences (see Supplemental Material). For the remaining balloons (i.e. those not among the randomly chosen six), if the participant tried to collect the reward before 50 pumps, the robot randomly said one of 13 possible encouraging sentences (see Supplemental Material).
The discouraging condition followed exactly the same procedure, except that participants heard discouraging rather than encouraging statements (see Supplemental Material).

After participants completed the BART, the experimenter re-entered the lab and presented participants with the single-item self-assessment of their risk-taking and the Godspeed questionnaire via a Qualtrics link. Because participants in the control condition did not see or interact with the robot, the Godspeed questionnaire was not presented to them. Participants also indicated their age in years and gender (with options not to disclose). At the end of the study, participants were thanked and debriefed verbally and in writing. At the end of data collection, the winning raffle tickets were determined, and vouchers were distributed accordingly.

Results

Preliminary analyses

Participants in the three conditions did not differ by gender, χ2(4) = 7.24, p = .20, or age, F(2, 159) = 0.79, p = .46. However, participants in the three conditions significantly differed in their self-reported risk-taking attitudes, F(2, 159) = 3.25, p = .04, with participants in the control condition reporting higher risk-taking attitudes (M = 4.07, SD = 1.09) than participants in the risk-encouraging (M = 3.52, SD = 1.21) or risk-discouraging (M = 3.52, SD = 1.32) conditions. Therefore, in exploratory analyses, we added self-reported risk-taking attitudes as a covariate and moderator to the subsequent models.

Number of pumps

A Poisson regression (with robust standard errors) indicated that the number of pumps across the 30 trials was significantly lower in the robot-discouraging condition than in the control condition, B = −0.47 [−0.64, −0.31], SE = 0.08, p < .001. The number of pumps did not differ between the robot-encouraging and the control conditions, B = −0.04 [−0.15, 0.08], SE = 0.06, p = .53. Exponentiating the coefficients, participants in the discouraging condition pumped at about 0.63 times the rate of those in the control condition (roughly 37% fewer pumps), whereas the pumping rate in the encouraging condition was essentially unchanged (about 4% lower; see Figure 1a). Figure S1 shows the median number of pumps across the 30 trials by condition.

Figure 1.


Risk-taking behaviour measured by the balloon analogue risk task by condition: (a) number of pumps, (b) number of explosions and (c) payment.

In an exploratory analysis, we added Self-reported Risk-taking as well as the interaction between Self-reported Risk-taking × Condition to the Poisson regression. The difference in the number of pumps between the robot-discouraging and the control conditions remained significant, B = −0.47 [−0.61, −0.30], SE = 0.08, p < .001, and self-reported risk-taking significantly and positively predicted the number of pumps, B = 0.05 [0.02, 0.11], SE = 0.004, p < .001. None of the interaction effects reached statistical significance.
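Because these models use a log link, each reported coefficient translates into a multiplicative effect on the expected count; a minimal conversion sketch (our own illustration):

```python
import math


def rate_ratio(b):
    """Rate ratio implied by a Poisson-regression coefficient (log link)."""
    return math.exp(b)


def pct_change(b):
    """Percentage change in the expected count vs. the reference condition."""
    return (math.exp(b) - 1) * 100
```

For the discouraging condition, rate_ratio(-0.47) ≈ 0.63, i.e. roughly 37% fewer pumps than control; for the encouraging condition, rate_ratio(-0.04) ≈ 0.96, a negligible difference consistent with the non-significant p value.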

Number of explosions

A Poisson regression (with robust standard errors) showed that across the 30 trials, the number of explosions was significantly lower in the robot-discouraging condition than the control condition, B = −0.40 [−0.55, −0.26], SE = 0.07, p < .001. Number of explosions did not differ between the control and the robot-encouraging conditions, B = −0.03 [−0.12, 0.07], SE = 0.05, p = .59 (see Figure 1b).

Exploring the effect of self-reported risk-taking, a Poisson regression including the condition, self-reported risk-taking and the interaction between the two variables indicated that the number of explosions was significantly lower in the robot-discouraging condition, B = −0.41 [−0.85, −0.03], SE = 0.17, p = .02. Self-reported risk-taking approached statistical significance, B = 0.05 [−0.006, 0.11], SE = 0.03, p = .08. No other main or interaction effects reached statistical significance.

Payment (number of points gained)

A Poisson regression (with robust standard errors) indicated that participants in the robot-discouraging condition gained significantly fewer points than participants in the control condition, B = −0.45 [−0.59, −0.31], SE = 0.07, p < .001. There was no significant difference in the number of points gained between the robot-encouraging and the control conditions, B = −0.06 [−0.16, 0.03], SE = 0.05, p = .20 (Figure 1c).

In an exploratory analysis, a Poisson regression with the predictors condition, self-reported risk-taking and their interaction showed that the difference in points gained between the control and robot-discouraging condition remained statistically significant, B = −0.28 [−0.76, −0.19], SE = 0.06, p < .001. Self-reported risk-taking significantly and positively predicted pay, B = 0.04 [0.03, 0.10], SE = 0.01, p < .001. Furthermore, the interaction between the robot-discouraging condition and self-reported risk-taking was significant, B = −0.04 [−0.16, −0.03], SE = 0.01, p = .01. Whereas in the control condition, pay increased with increasing self-reported risk-taking, pay remained stable across levels of self-reported risk-taking in the robot-discouraging condition.

Explosion effect

We investigated whether participants would adjust their behaviour after experiencing an explosion in the previous round by reducing the number of pumps after an explosion. This ‘explosion effect’ was quantified as the number of pumps in the trial after an explosion divided by the number of pumps in the trial before an explosion. An explosion effect <1 denotes a reduction in pumps (compared to pumps in the previous round) after an explosion; an explosion effect >1 denotes an increase in pumps (compared to pumps in the previous round) after an explosion. The median explosion effects were 1.06 in the control, 1.10 in the robot-encouraging and 1.00 in the robot-discouraging conditions. Binomial tests showed that the median explosion effect did not differ significantly from 1 in the control (p = .40) and the robot-discouraging (p = .80) but approached significance in the robot-encouraging condition (p = .053). A Kruskal–Wallis H-test indicated no significant difference in medians across the three conditions, χ2(2) = .58, p = .75.
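The explosion effect defined above can be computed per participant as follows (our own sketch; we read ‘the trial before an explosion’ as the trial preceding the exploding one):

```python
def explosion_effects(pumps, exploded):
    """Per-explosion effects for one participant's trial sequence.

    pumps:    pump counts per trial, in order
    exploded: parallel booleans (did the balloon pop on that trial?)
    For each explosion on an interior trial t, the effect is
    pumps[t + 1] / pumps[t - 1]; values < 1 mean the participant backed off,
    values > 1 mean escalation after the pop.
    """
    effects = []
    for t, popped in enumerate(exploded):
        if popped and 0 < t < len(pumps) - 1 and pumps[t - 1] > 0:
            effects.append(pumps[t + 1] / pumps[t - 1])
    return effects
```

A participant’s median over these ratios then summarises whether they tended to reduce or escalate pumping after experiencing an explosion.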

In exploratory analyses, we investigated whether the robot’s verbal interventions would affect the explosion effect, that is, whether participants increased or decreased their pumps after an explosion. In the robot-encouraging condition, the robot’s verbal statements were positively and significantly correlated with the explosion effect, ρ(548) = .10, p = .02. This indicates that a robot’s risk-encouraging statements were associated with an increase in pumps (compared to pumps in the previous round) after an explosion. However, in the robot-discouraging condition, there was no association between the robot’s verbal statements and the explosion effect, ρ(388) = .03, p = .58. Further exploratory analyses on the effect of the robot’s verbal statements on the number of pumps are shown in the Supplemental Material.

Impression of the robot

We investigated whether participants’ (positive) impression of the robot interacted with the condition in affecting participants’ risk-taking. We conducted three moderated regressions (one for each dependent variable), entering mean-centred Robot Impression, Condition (encouraging, discouraging) and their interaction.
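The setup of these moderated regressions amounts to three predictor columns per participant: a mean-centred moderator, a condition dummy and their product. A generic sketch with hypothetical names:

```python
from statistics import mean


def moderation_design(impression, condition):
    """Build predictor rows for a moderated regression.

    impression: robot-impression scores (the moderator)
    condition:  0/1 dummy codes (e.g. 0 = encouraging, 1 = discouraging)
    Returns (centred_impression, condition, interaction) per participant.
    """
    m = mean(impression)
    return [(x - m, c, (x - m) * c) for x, c in zip(impression, condition)]
```

Mean-centring the moderator makes the condition coefficient interpretable as the group difference at an average robot impression; the simple slopes reported in this section are the moderator’s slope within each condition.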

For number of pumps, Condition, B = −389.49 [−519.20, −259.78], SE = 65.47, p < .001 and the interaction of Robot Impression × Condition, B = −277.56 [−544.27, −10.86], SE = 134.61, p = .041, reached statistical significance. The number of pumps was significantly lower in the discouraging than in the encouraging condition. Simple-slope analysis showed that with an increasing positive impression of the robot, the number of pumps decreased in the discouraging condition, t(112) = −2.02, p = .045. While the number of pumps increased with increasing positive impressions of the robot in the encouraging condition, this was not statistically significant, t(112) = 0.89, p = .37 (Figure 2a).

Figure 2.


Interaction effect of condition (encouraging and discouraging) and impression of robot on (a) number of pumps and (b) number of explosions.

For the number of explosions, Condition, B = −5.49 [−7.28, −3.69], SE = 0.91, p < .001, reached statistical significance with participants in the discouraging condition experiencing fewer explosions. The interaction of Robot Impression × Condition was marginally significant, B = −3.69 [−7.38, 0.006], SE = 1.86, p = .050. Simple-slope analysis revealed that in the discouraging condition, the more positive the impression of the robot, the fewer explosions were experienced, t(112) = −2.44, p = .02. In the encouraging condition, the number of explosions did not change with the impression of the robot, t(112) = 0.35, p = .73 (Figure 2b).

Concerning payment (number of points gained), only the effect of Condition, B = −39.40 [−53.56, −25.24], SE = 7.14, p < .001, reached statistical significance with participants in the discouraging condition receiving less payment than those in the encouraging condition. For neither condition did payment vary by participants’ impression of the robot.

Discussion

Since the pioneering work of Asch (1956), there has been a surge of interest in factors that influence conformity and peer influence on a wide range of behaviours. One domain that has attracted much attention is risk-taking, with research consistently demonstrating that peer influence is linked to an increased tendency to engage in risky behaviour (e.g. Gardner & Steinberg, 2005), such as smoking and substance abuse (Henneberger et al., 2021), sexual behaviour (Widman et al., 2016) and driving (Møller & Haustein, 2014). While much of the literature focuses on the so-called dark side of peer influence, a parallel line of work has shown that peer influence can have a positive impact. Research by Tomova and Pessoa (2018), for example, has demonstrated that peers can also help reduce risky behaviour, especially in cases that involve uncertainty (see also Reiter et al., 2021; Slagter et al., 2023). The notion that robots might be able to exert a similar influence on human behaviour has arisen with the development and introduction of human-like robots. Indeed, researchers have started to investigate whether robots can impact human risk-taking behaviour, though thus far mostly whether they can increase rather than decrease risky behaviour (Hanoch et al., 2021; Ren & Belpaeme, 2022). The present study was designed to address this important gap in the literature.

Our data revealed that robots can impact young adults’ risk-taking and, in fact, reduce risky behaviour: participants who interacted with a robot that discouraged them from taking risks exhibited significantly less risk-taking than those in the control condition and those in the encouraging condition. This is particularly important given the well-documented general increases in risk-taking behaviour and the effects of peer pressure on risk-taking in young people (e.g. Gardner & Steinberg, 2005; Shepherd et al., 2011; Steinberg & Monahan, 2007). Thus, robot peers or other artificial agents could be used to modulate risk-taking behaviour in this age group. While a multitude of research has documented the negative effects of peer influence on young people’s behaviour (e.g. substance abuse, misconduct), our research contributes to findings that show how peer influence can lead young people to engage in positive behaviours (e.g. donating, volunteering; see Laursen & Veenstra, 2021).

Our work also revealed that a positive attitude towards the robot enhanced the robot’s impact, most consistently in the discouraging condition. That is, participants in the discouraging condition who had a more favourable impression of the robot followed its instructions or recommendations more closely. Our findings build on earlier work showing that scores on the Godspeed are positively related to the perceived similarity between the robot’s personality and the participant (Craenen et al., 2018). They are also in line with earlier investigations among young adults, demonstrating that peer influence is impacted by social and cultural closeness (Liu et al., 2017). Indeed, according to the Influence-Compatibility Model, one of the mechanisms through which peers exert influence, particularly in young people, is through establishing positive affiliations, perceived similarity, compatibility and closeness (Laursen & Veenstra, 2021). This has implications for how robot peers and their advice should be designed in programmes or systems that try to curb risky behaviours in young people.

Our results, at first glance, might seem at odds with the growing literature on algorithm aversion, the tendency to reject superior but imperfect algorithms in decision-making (see Burton et al., 2020). However, our work focused only on advice or recommendations given by a robot, not by a human. Thus, we are unable to judge whether similar advice from a human agent would have had a more (or less) pronounced impact on our participants. Moreover, in our study, unlike in studies of algorithm aversion, a real robot was present rather than mere advice; this physical presence renders comparison between the two lines of investigation difficult (but see Di Dio et al., 2023). On the flip side, some studies have shown over-trust in robotic systems, that is, human participants following a robot’s lead or yielding to robots’ social influence even when this was not appropriate and had negative consequences (see Christoforakos et al., 2021; Robinette et al., 2017). With the growing influence of AI and increased HRI, it is imperative to determine what constitutes suitable levels of trust in artificial agents and, practically, to measure participants’ attitudes towards robots and artificial agents in future research.

Counter to previous work (Hanoch et al., 2021; Ren & Belpaeme, 2022), participants in the robot-encouraging condition did not show more risk-taking (number of pumps, balloon explosions) than those in the control condition, although they did take more risks than the robot-discouraging group. First, it is important to note that these findings are compatible with those of Di Dio et al. (2023), who similarly found significant effects when an online avatar discouraged, but not when it encouraged, risk-taking in the BART. Second, participants in the control and the two experimental groups significantly differed in their self-reported risk-taking attitudes, indicating that, unfortunately, randomization of participants to the three conditions was not successful with respect to risk-taking attitudes. Furthermore, self-reported risk-taking attitudes positively predicted risk-taking behaviour in the BART. Indeed, previous research has identified how certain inter-individual tendencies are associated with risk-taking (e.g. Levenson, 1990); sensation-seeking and impulsivity, in particular, have been consistently associated with risky behaviours (Zuckerman & Kuhlman, 2000). In line with this research, our findings indicate that risk-taking behaviour is affected by both personality (e.g. inter-individual differences in sensation-seeking) and situational factors (e.g. peer pressure) and that the differential influence of these variables likely varies across risk-taking domains and the lifespan (see Rolison et al., 2014).

Our work, thus, provides further evidence of the role robots can play in shaping and impacting behaviour that can lead to positive consequences for participants (see Natori et al., 2025; Toh et al., 2016). With multiple prevention programmes designed to reduce risk-taking across domains and particularly among young people, our results point in a promising direction. Our chief aim was to evaluate whether robots can discourage risk-taking, a phenomenon that has received little to no attention. In line with our prediction, our data revealed that robots can serve to discourage and reduce risk-taking tendencies, at least in one domain. Our data also extend earlier work (Jackson et al., 2012) showing that virtual agents (whether robots or humans) can reduce risk-taking behaviour, as well as work showing that human peer pressure can serve to discourage risky behaviour (Osmont et al., 2021; Tomova & Pessoa, 2018). Indeed, most interventions related to risky behaviour are designed to reduce risk-taking, in areas such as financial (e.g. gambling) and health (e.g. smoking).

Robots could potentially play a significant role in aiding and developing novel interventions to tackle a wide range of risky behaviours. Robots could, for example, be incorporated into driving prevention or training programmes for young adults to improve their driving safety (Ivers et al., 2009; Krasniuk et al., 2024). Likewise, they could assist in developing intervention programmes designed to prevent and reduce alcohol, drug and smoking misuse. While the BART is a predictor of a range of risky behaviours (Lejuez et al., 2003, 2004), our results might be most applicable to financial risks. As such, robots might fit well in programmes designed to aid those with problem gambling or other financial risks. To the best of our knowledge, this line of research has not been explored yet. Unlike human peers, robots can be present at all times, engage for long periods and provide consistent messages. Future studies will need to explore these (and other) avenues. Our intuition, however, is not without merit or precedent. Robots have already been assuming a greater role in the classroom, helping to improve learning and pro-social behaviour (Peter et al., 2021), as well as to fend off loneliness (Lederman & Jecker, 2023). We see no reason, therefore, why robots could not be utilized in school-based risk-behaviour interventions.

Several notable limitations of our study must be acknowledged. First, our participants were all students, with a clear majority of female participants. It is unclear, therefore, whether a more representative and diverse sample would have yielded similar results. Data from other studies show that age has little to no impact on attitudes towards robots (Backonja et al., 2018), but that culture might (Nomura et al., 2008). On the other hand, there is a large corpus of data showing that females tend to take fewer risks compared to males (Powell & Ansic, 1997), different cultures tend to exhibit different (financial) risk tendencies (Weber et al., 1998) and age matters for financial risk-taking (Jianakoplos & Bernasek, 2006). At the same time, we know that adolescence is associated with experimentation with a range of risky behaviours, such as alcohol, drugs and gambling (Fontaine et al., 2023). Thus, while our results might not be generalizable to participants from other cultural contexts and of different ages, our sample does capture the risk-taking behaviour of an important age cohort.

Second, demographic factors might also moderate how social influence or conformity affects risk-taking behaviour. For example, with age, people tend to exhibit less conformity (Pasupathi, 1999). Other studies found females to show higher conformity than males (e.g. Eagly et al., 1981). Some personality traits, such as stability or plasticity, are also positively or negatively related to conformity (DeYoung et al., 2002). While we did not measure personality traits in this investigation, previous work (Rossi et al., 2020) has reported a positive association between openness to experience and successful HRI. Moreover, Rossi et al. suggest that anxiety and trust, for instance, can also affect individuals' intention to accept technology. Given that risk-taking has also been linked to personality traits (e.g. Zuckerman & Kuhlman, 2000), future studies should consider examining whether (and how) social influence, conformity and personality traits are linked to the ability of robots to influence risk-taking.

In this study, we used only one measure of risk-taking, namely the BART. While previous work has shown that the BART is a good predictor of other risky behaviours (e.g. smoking), it is unknown whether our results are robust enough to predict whether robots can discourage or encourage other risky behaviours such as gambling, driving and alcohol use. Extending our research paradigm to other types of risky behaviour is thus urgently needed. Furthermore, participants received financial incentives, which could have affected their behaviour, especially in the risk-encouraging condition (Camerer & Hogarth, 1999). Finally, consistent with previous studies (Di Dio et al., 2023; Hanoch et al., 2021; Ren & Belpaeme, 2022) and the original BART set-up (Lejuez et al., 2002), the explosion point of each balloon was determined randomly, so every participant received a different order. Using a pseudo-random sequence of explosion points would make it possible to compare participants' behaviour directly across conditions, for example, how quickly the number of pumps 'recovers' after an explosion.
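To make the suggested design change concrete, the difference between fully random and yoked ("pseudo-random") explosion points can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' task code: the function names and the shared seed are our assumptions, and the 1–128 pump range follows the original BART set-up (Lejuez et al., 2002).

```python
import random

MAX_PUMPS = 128  # upper bound on the explosion point in the original BART

def random_explosion_points(n_balloons, seed=None):
    """Fully random scheme: each participant (each call) gets a
    different sequence of explosion points, as in the present study."""
    rng = random.Random(seed)
    return [rng.randint(1, MAX_PUMPS) for _ in range(n_balloons)]

def yoked_explosion_points(n_balloons, shared_seed=42):
    """Pseudo-random scheme: a fixed shared seed yields an identical
    sequence for every participant, so pump counts on each balloon are
    directly comparable across conditions (e.g. recovery after an
    explosion on balloon k)."""
    rng = random.Random(shared_seed)
    return [rng.randint(1, MAX_PUMPS) for _ in range(n_balloons)]

# Two participants under the yoked scheme see identical explosion points:
p1 = yoked_explosion_points(30)
p2 = yoked_explosion_points(30)
assert p1 == p2
```

Under the yoked scheme, balloon k explodes at the same pump for everyone, so between-condition differences on any given balloon reflect participants' behaviour rather than luck of the draw.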

The scope of HRIs is growing consistently, with novel and important results and applications being reported. As Christakis (2018) argued, and increasing data show, robots will alter human behaviour. What Christakis overlooked is their capacity to meaningfully impact human risky behaviour. Our results indicate this is a clear possibility.

Supplemental Material

sj-docx-1-qjp-10.1177_17470218251359985 – Supplemental material for Robots to the rescue: Robot discouragement reduces young adults’ risk-taking

Supplemental material, sj-docx-1-qjp-10.1177_17470218251359985 for Robots to the rescue: Robot discouragement reduces young adults’ risk-taking by Michaela Gummerum, Yaniv Hanoch, Daniel Hernandez Garcia and Angelo Cangelosi in Quarterly Journal of Experimental Psychology

Footnotes

Data availability statement: Data for the study are available upon request.

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work was partially supported by the UKRI TAS Trust Node and the Horizon/UKRI grants MUSAE and MSCA DN TRAIL.

Ethical consideration: Ethical approval for the study was provided by the Health and Human Ethics Committee of the University of Warwick.

Consent to participate: Informed consent was provided by all individual participants included in the study.

ORCID iD: Michaela Gummerum https://orcid.org/0000-0001-5746-8841

Supplemental material: The Supplemental Material is available at: qjep.sagepub.com.

References

1. Asch S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70, 1–75.
2. Backonja U., Hall A. K., Painter I., Kneale L., Lazar A., Cakmak M., Thompson H. J., Demiris G. (2018). Comfort and attitudes towards robots among young, middle-aged, and older adults: A cross-sectional study. Journal of Nursing Scholarship, 50, 623–633. 10.1111/jnu.12430
3. Bartneck C., Kulić D., Croft E., Zoghbi S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1, 71–81.
4. Brandstetter J., Rácz P., Beckner C., Sandoval E. B., Hay J., Bartneck C. (2014). A peer pressure experiment: Recreation of the Asch conformity experiment with robots. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1335–1340). IEEE. 10.1109/IROS.2014.6942730
5. Burton J. W., Stein M. K., Jensen T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239.
6. Camerer C. F., Hogarth R. M. (1999). The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1), 7–42.
7. Christakis N. (2018). How AI will rewire us: For better or for worse, robots will alter humans’ capacity for altruism, love, and friendship. The Atlantic. https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/
8. Christoforakos L., Gallucci A., Surmava-Große T., Ullrich D., Diefenbach S. (2021). Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Frontiers in Robotics and AI, 8, 640444. 10.3389/frobt.2021.640444
9. Craenen B., Deshmukh A., Foster M. E., Vinciarelli A. (2018). Do we really like robots that match our personality? The case of Big-Five traits, Godspeed scores, and robotic gestures. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 626–631). IEEE.
10. Cutello C. A., Gummerum M., Hellier E., Hanoch Y. (2021). Evaluating safe driving interventions: Taking the fear out of virtual reality. Risk Analysis, 41(9), 1662–1673.
11. DeYoung C. G., Peterson J. B., Higgins D. M. (2002). Higher-order factors of the Big Five predict conformity: Are there neuroses of health? Personality and Individual Differences, 33(4), 533–552.
12. Di Dio C., Manzi F., Miraglia L., Gummerum M., Bigozzi S., Massaro D., Marchetti A. (2023). Virtual agents and risk-taking behavior in adolescence: The twofold nature of nudging. Scientific Reports, 13, 11242. 10.1038/s41598-023-38399-w
13. Dielman T. E., Campanelli P. C., Shope J. T., Butchart A. T. (1987). Susceptibility to peer pressure, self-esteem, and health locus of control as correlates of adolescent substance abuse. Health Education Quarterly, 14(2), 207–221. 10.1177/109019818701400207
14. Dohmen T., Falk A., Huffman A., Sunde U., Schupp J., Wagner G. (2011). Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association, 9(3), 522–550.
15. Eagly A. H., Wood W., Fishbaugh L. (1981). Sex differences in conformity: Surveillance by the group as a determinant of male nonconformity. Journal of Personality and Social Psychology, 40(2), 384–394. 10.1037/0022-3514.40.2.384
16. Evans R., Rozelle R., Mittlemark M., Hansen W., Blane A., Harris J. (1978). Deterring the onset of smoking in children: Knowledge of immediate psychological effects and coping with peer pressure, media pressure, and parent modeling. Journal of Applied Social Psychology, 8(2), 126–135.
17. Faul F., Erdfelder E., Buchner A., Lang A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. 10.3758/BRM.41.4.1149
18. Flay B. R. (1985). Psychosocial approaches to smoking prevention: A review of findings. Health Psychology, 4(5), 449–488. 10.1037/0278-6133.4.5.449
19. Fontaine M., Lemercier C., Bonnaire C., Giroux I., Py J., Varescon I., Le Floch V. (2023). Gambling and aging: An overview of a risky behavior. Behavioral Sciences (Basel), 13(3), 437.
20. Gardner M., Steinberg L. (2005). Peer influence on risk-taking, risk preference, and risky decision-making in adolescence and adulthood: An experimental study. Developmental Psychology, 42(4), 625–635.
21. Hanoch Y., Rolison J., Freund A. (2019). Reaping the benefits and avoiding the risks: Unrealistic optimism in the health domain. Risk Analysis, 39(4), 792–804.
22. Hanoch Y., Arvizzigno F., Hernández García D., Denham S., Belpaeme T., Gummerum M. (2021). The robot made me do it: Human–robot interaction and risk-taking behavior. Cyberpsychology, Behavior, and Social Networking, 24(6), 337–342.
23. Henneberger A. K., Mushonga D. R., Preston A. M. (2021). Peer influence and adolescent substance use: A systematic review of dynamic social network research. Adolescent Research Review, 6(1), 57–73.
24. Ivers R., Senserrick T., Boufous S., Stevenson M., Chen H. Y., Woodward M., Norton R. (2009). Novice drivers’ risky driving behavior, risk perception, and crash risk: Findings from the DRIVE study. American Journal of Public Health, 99(9), 1638–1644. 10.2105/AJPH.2008.150367
25. Jackson C., Geddes R., Haw S., Frank J. (2012). Interventions to prevent substance use and risky sexual behaviour in young people: A systematic review. Addiction, 107(4), 733–747. 10.1111/j.1360-0443.2011.03751.x
26. Jianakoplos N. A., Bernasek A. (2006). Financial risk taking by age and birth cohort. Southern Economic Journal, 72, 981–1001.
27. Krasniuk S., Toxopeus R., Knott M., McKeown M., Crizzle A. M. (2024). The effectiveness of driving simulator training on driving skills and safety in young novice drivers: A systematic review of interventions. Journal of Safety Research, 91, 20–37.
28. Laursen B., Veenstra R. (2021). Toward understanding the functions of peer influence: A summary and synthesis of recent empirical research. Journal of Research on Adolescence, 31, 889–907. 10.1111/jora.12606
29. Lederman Z., Jecker N. S. (2023). Social robots to fend off loneliness? Kennedy Institute of Ethics Journal, 33, 249–276.
30. Lejuez C. W., Aklin M. W., Jones H. A., Richards J. B., Strong D. R., Kahler C. W. (2003). The balloon analogue risk task (BART) differentiates smokers and non-smokers. Experimental and Clinical Psychopharmacology, 11(1), 26–33.
31. Lejuez C. W., Read J. P., Kahler C. W., Richards J. B., Ramsey S. E., Stuart G. L., Strong D. R., Brown R. A. (2002). Evaluation of a behavioral measure of risk-taking: The balloon analogue risk task (BART). Journal of Experimental Psychology: Applied, 8(2), 75–84.
32. Lejuez C. W., Simmons B. L., Aklin W. M., Daughters S. B., Dvir S. (2004). Risk-taking propensity and risky sexual behavior of individuals in residential substance use treatment. Addictive Behaviors, 29, 1643–1647.
33. Levenson M. R. (1990). Risk taking and personality. Journal of Personality and Social Psychology, 58, 1073. 10.1037/0022-3514.58.6.1073
34. Liu J., Zhao S., Chen X., Falk E., Albarracín D. (2017). The influence of peer behavior as a function of social and cultural closeness: A meta-analysis of normative influence on adolescent smoking initiation and continuation. Psychological Bulletin, 143(10), 1082–1115. 10.1037/bul0000113
35. Møller M., Haustein S. (2014). Peer influence on speeding behaviour among male drivers aged 18 and 28. Accident Analysis & Prevention, 64, 92–99.
36. Natori T., Iio T., Yoshikawa Y., Ishiguro H. (2025). Impact of table-top robots on questionnaire response rates at a science museum. International Journal of Social Robotics, 17, 729–742.
37. Nomura T., Suzuki T., Kanda T., Han J., Shin N., Burke J., Kato K. (2008). What people assume about humanoid and animal-type robots: Cross-cultural analysis between Japan, Korea, and the USA. International Journal of Humanoid Robotics, 5(1), 25–46.
38. Osmont A., Camarda A., Habib M., Cassotti M. (2021). Peers’ choices influence adolescent risk-taking, especially when explicit risk information is lacking. Journal of Research on Adolescence, 31(3), 402–416. 10.1111/jora.12611
39. Pasupathi M. (1999). Age differences in response to conformity pressure for emotional and nonemotional material. Psychology and Aging, 14(1), 170–174. 10.1037/0882-7974.14.1.170
40. Peter J., Kühne R., Barco A. (2021). Can social robots affect children’s prosocial behavior? An experimental study on prosocial robot models. Computers in Human Behavior, 120, 106712.
41. Potard C., Courtois R., Rusch E. (2008). The influence of peers on risky sexual behaviour during adolescence. European Journal of Contraception and Reproductive Health Care, 13(3), 264–270. 10.1080/13625180802273530
42. Powell M., Ansic D. (1997). Gender differences in risk behaviour in financial decision-making: An experimental analysis. Journal of Economic Psychology, 18, 605–628.
43. Powers K. E., Schaefer L., Figner B., Somerville L. H. (2022). Effects of peer observation on risky decision-making in adolescence: A meta-analytic review. Psychological Bulletin, 148(11–12), 783–812. 10.1037/bul0000382
44. Reiter A. M., Moutoussis M., Vanes L., Kievit R., Bullmore E. T., Goodyer I. M., Fonagy P., Jones P. B., NSPN Consortium, & Dolan R. J. (2021). Preference uncertainty accounts for developmental effects on susceptibility to peer influence in adolescence. Nature Communications, 12(1), 3823.
45. Ren Q., Belpaeme T. (2022). Tactile interaction with a robot leads to increased risk-taking. In Cavallo F., Fiorini L., He H., Matsumoto Y. (Eds.), Social Robotics. ICSR 2022. Lecture Notes in Computer Science (pp. 120–129). Springer. 10.1007/978-3-031-24667-8_11
46. Reniers R. L. E. P., Beavan A., Keogan L., Furneaux A., Mayhew S., Wood S. J. (2017). Is it all in the reward? Peers influence risk-taking behaviour in young adulthood. British Journal of Psychology, 108(2), 276–295. 10.1111/bjop.12195
47. Robinette P., Howard A. M., Wagner A. R. (2017). Effect of robot performance on human–robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425–436. 10.1109/THMS.2017.2648849
48. Rolison J. J., Wood S., Hanoch Y. (2017). Age and adaptation: Stronger decision updating about real world risks in older age. Risk Analysis, 37, 1632–1643.
49. Rolison J. J., Hanoch Y., Wood S., Liu P. J. (2014). Risk-taking differences across the adult life span: A question of age and domain. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 69(6), 870–880. 10.1093/geronb/gbt081
50. Rossi S., Conti D., Garramone F., Santangelo G., Staffa M., Varrasi S., Di Nuovo A. (2020). The role of personality factors and empathy in the acceptance and performance of a social robot for psychometric evaluations. Robotics, 9(2), 39.
51. Salomons N., Van Der Linden M., Strohkorb Sebo S., Scassellati B. (2018). Humans conform to robots: Disambiguating trust, truth, and conformity. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 187–195). Association for Computing Machinery.
52. Schweizer P. J., Goble R., Renn O. (2022). Social perception of systemic risks. Risk Analysis, 42(7), 1455–1471.
53. Shepherd J. L., Lane D. J., Tapscott R. L., Gentile D. A. (2011). Susceptible to social influence: Risky “driving” in response to peer pressure. Journal of Applied Social Psychology, 41(4), 773–797. 10.1111/j.1559-1816.2011.00735.x
54. Simons-Morton B., Ouimet M., Chen R., Klauer S. G., Lee S. E., Wang J., Dingus T. A. (2012). Peer influence predicts speeding prevalence among teenage drivers. Journal of Safety Research, 43(5), 397–403. 10.1016/j.jsr.2012.10.002
55. Slagter S. K., van Duijvenvoorde A. C., van den Bos W. (2023). Adolescents seek social information under uncertainty. Journal of Experimental Psychology: General, 152(3), 890–905.
56. Steinberg L., Monahan K. C. (2007). Age differences in resistance to peer influence. Developmental Psychology, 43(6), 1531. 10.1037/0012-1649.43.6.1531
57. Tobis S., Piasek-Skupna J., Suwalska A. (2023). The Godspeed Questionnaire Series in the assessment of the social robot TIAGo by older individuals. Sensors, 23(16), 7251. 10.3390/s23167251
58. Toh L. P. E., Causo A., Tzuo P. W., Chen I. M., Yeo S. H. (2016). A review on the use of robots in education and young children. Educational Technology & Society, 19(2), 148–163.
59. Tomova L., Pessoa L. (2018). Information about peer choices shapes human risky decision-making. Scientific Reports, 8, Article 5129. 10.1038/s41598-018-23455-8
60. Vollmer A.-L., Read R., Trippas D., Cangelosi A., Belpaeme T. (2018). Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Science Robotics, 3(20), eaat7111. 10.1126/scirobotics.aat7111
61. Weber E. U., Hsee C. K., Sokolowska J. (1998). What folklore tells us about risk and risk taking: Cross-cultural comparisons of American, German, and Chinese proverbs. Organizational Behavior and Human Decision Processes, 75(2), 170–186.
62. White C., Gummerum M., Hanoch Y. (2015). Adolescents’ and young adults’ online risky behaviours: Insights from Fuzzy-Trace theory. Risk Analysis, 35, 1407–1422.
63. Widman L., Choukas-Bradley S., Helms S. W., Prinstein M. J. (2016). Adolescent susceptibility to peer influence in sexual situations. Journal of Adolescent Health, 58(3), 323–329.
64. Xu K., Lombard M. (2017). Persuasive computing: Feeling peer pressure from multiple computer agents. Computers in Human Behavior, 74, 152–162.
65. Zuckerman M., Kuhlman D. M. (2000). Personality and risk-taking: Common biosocial factors. Journal of Personality, 68(6), 999–1029.


