Abstract
To address manpower shortages, health care leaders recommend technology, including robots, to facilitate and augment processes for delivery of efficient, safe care. Little is known regarding older adults’ perceptions of socially assistive robots (SARs). Using the Unified Theory of Acceptance and Use of Technology framework, a survey was developed and tested for capturing older adults’ likelihood of using SARs. The Robot Acceptance Survey (RAS) comprises three subscales: Performance Expectancy, Effort Expectancy, and Attitude. Older adults completed the RAS pre- and post-experimental procedure with a SAR. Cronbach’s alpha coefficients for the subscales ranged from 0.77 to 0.89. Subscales were sensitive to change, with more positive reactions after exposure to SAR activities. Future studies must identify robotic programming capable of providing cognitive, physical, and social assistance, as well as person-, activity-, situation-, and robot-specific factors that will influence older adults’ acceptance of SARs.
The aging population, with its concomitant medical conditions and physical and cognitive impairments at a time of strained resources, establishes the urgent need to explore advanced technologies that may enhance older adults’ function and quality of life. Non-pharmacological interventions exist that address older adults’ cognitive and physical function as well as their social conditions, and have had inconsistent results; these interventions include physical activity, exercise, social interaction and engagement, cognitive stimulation, music, art therapy, reminiscence therapy, and caregiver interventions (Cohen-Mansfield, Marx, Dakheel-Ali, & Thein, 2015; Goris, Ansel, & Schutte, 2016). Multimodal strategies tailored to the individual appear most successful, but can be resource intensive (Cohen-Mansfield et al., 2015), which is problematic because of health care manpower shortages in a variety of settings (American Health Care Association, 2012; Harrington, Schnelle, McGregor, & Simmons, 2016). To address these shortages, health care leaders recommend technology to facilitate and augment the delivery of efficient, safe care (Hussain, Rivers, Glover, & Fottler, 2012; Institute of Medicine, 2011). Hence, there is an urgent need for efficacious strategies that are tailored to the individual and can exist within various resource-strained environments.
Robotic systems show promise in addressing older adults’ physical, cognitive, and social needs (Bedaf, Gelderblom, & de Witte, 2015). Compared to other interactive technologies, robotic systems can embed novel quantitative metrics; develop sensor-based, non-invasive methods; and incorporate physical movement into realistically embodied interactions (Louie, McColl, & Nejat, 2014; Mann, MacDonald, Kuo, Li, & Broadbent, 2015). Socially assistive robots (SARs) are one type of robot that aid humans through social interaction; they may take the form of pets, companions, service robots, or a combination of companion and service robot (Bedaf et al., 2015).
With the growth and anticipated use of robotic technology, determining how and to what extent older adults accept and respond to SARs is critical for their successful implementation. Human Robot Interaction (HRI) science is a fairly new interdisciplinary field of research; as such, there is lack of consensus on the measurement strategies of individuals’ acceptance and subsequent use of robots (Broadbent, 2017; Dautenhahn, 2007; Sim & Loo, 2015). Moreover, there are numerous dimensions of HRI, further complicating assessment and evaluation strategies (Broadbent, 2017; Dautenhahn, 2007).
Several human factors that influence whether an individual will accept and use SARs are beginning to emerge (Broadbent, 2017; Sim & Loo, 2015). The first is how an individual perceives the robot, which in turn influences his/her intention to use the robot technology (Heerink, Kröse, Evers, & Wielinga, 2009, 2010). The Unified Theory of Acceptance and Use of Technology (UTAUT) framework proposed by Venkatesh, Morris, Davis, and Davis (2003) has been used to varying degrees in HRI studies (Heerink et al., 2009, 2010; Sim & Loo, 2015). Key constructs of the UTAUT are performance expectancy (i.e., will the technology achieve goals?), effort expectancy (i.e., ease of use), social influence (i.e., subjective norms), and facilitating conditions (i.e., infrastructure supports the technology). Moderators include gender, age, experience, and voluntariness of use. Another factor is anxiety, which has been postulated by some to interfere with individuals’ use of and interaction with a robot (Sim & Loo, 2015). Attitude, which encompasses positive or negative feelings toward use of the robot, may also influence individuals’ acceptance (Heerink et al., 2009). The degree to which an individual ascribes human-like attributes and characteristics to the robot (i.e., anthropomorphism) may have an important role in whether he/she reacts positively (Broadbent, 2017).
As part of a larger study examining the use of SARs with older adults, the current authors designed and evaluated an instrument to measure older adults’ perceptions of interacting with a robot. The goal was to evaluate the content validity of the instrument and internal consistency of its major constructs. A second goal was to determine whether individuals’ perceptions changed after exposure and interaction with a SAR, and whether the instrument was sensitive to this change.
METHOD
Design
Development and evaluation of the instrument proceeded in three phases: a conceptual step with content validity examination, a cross-sectional survey, and a pre–post experimental survey. The cross-sectional and pre–post experimental surveys were reviewed and approved by the university institutional review board.
Phase I: Instrument Development and Content Validity Evaluation
Instrument Development
The current authors developed a 35-item Robot Acceptance Survey (RAS) adapted from the UTAUT model and Heerink et al. (2009, 2010). The UTAUT was formulated from several existing technology acceptance tools and posits that several factors can influence one’s intention to use and accept technology. Heerink et al. (2009, 2010) used the UTAUT and other surveys to measure individuals’ acceptance of robots. For the current study, UTAUT items were modified to reflect three UTAUT constructs: performance expectancy (i.e., degree to which the individual believes that the robot improves performance or function), effort expectancy (i.e., degree of ease associated with use of the robot), and attitude (i.e., overall affective reaction to using the robot). Based on the literature, several items were added to the attitude construct that reflected a range of reactions, including enjoyment, anxiety, and sense of human-like attributes within the robot (Heerink et al., 2009, 2010; Joosse, Sardar, Lohse, & Evers, 2013). Each item was mapped to the corresponding construct and formatted using a 7-point Likert response structure (always to never).
Content Validity Determination
Once constructs were selected and items were mapped to the corresponding construct, content validity was evaluated using Lynn’s (1986) approach. An expert panel of nurses (n = 3), physicians (n = 3), and engineers (n = 2) was convened. Nurse experts included two doctoral-prepared and one master’s-prepared nurse who had prior clinical or research experience in introducing technology into the health care setting. All three physicians had additional graduate degrees as well as expertise in introducing technology into the health care setting. Both engineers were doctoral-prepared and conducted research on artificial intelligence and robots.
Each expert received a packet of information explaining the purpose of the instrument, definitions of the constructs, and instructions for completing the survey evaluation. Experts independently rated each item’s relevance to its construct (i.e., performance expectancy, effort expectancy, attitude, and anxiety) and the clarity of the item. Experts were also asked to determine whether content was missing that should be included under the three broad constructs. Ratings ranged from 1 (not at all relevant; not clear) to 4 (very relevant; succinct). For any item rated a 1 or 2, experts were asked to comment on their rating and suggest ways to enhance the item’s relevance or clarity. Items rated a 3 indicated a need for minor revision. A content validity index (CVI) was calculated for each item as the proportion of experts who rated the item a 3 or 4. Several items had a CVI <0.88, the minimum cutoff corresponding to agreement among seven of eight experts. Based on expert panel recommendations, several items underwent revisions and were re-examined. A final instrument CVI of 1.00 was achieved based on these revisions.
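The item-level CVI computation described above is a simple proportion. A minimal sketch (the ratings below are hypothetical; the eight-expert panel and the 3-or-4 agreement rule come from the text):

```python
# Item-level content validity index (CVI): the proportion of experts
# rating an item 3 or 4 on the 4-point relevance scale.
def item_cvi(ratings):
    """ratings: one 1-4 relevance rating per expert for a single item."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Hypothetical ratings from an eight-expert panel; with eight experts,
# at least seven must rate an item 3 or 4 (7/8 = 0.875) to meet the cutoff.
ratings = [4, 4, 3, 4, 2, 4, 3, 4]
print(item_cvi(ratings))  # 0.875
```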
Phase II: Field Test 1
Design
A cross-sectional survey using a simulated approach (Broadbent, 2017) was used to conduct initial examination of the internal consistency of the RAS.
Sample/Setting
The study took place in the waiting room of an emergency department (ED) in a large academic medical center. Participants were any teenagers or adults who were willing and able to complete the survey while in the waiting room.
Data Collection
The initial 35-item instrument was formatted using a 7-point Likert response structure (always to never). Several demographic items were at the end of the survey. Trained research assistants visited the ED waiting room on 20 consecutive days between 10 a.m. and 8 p.m. They explained the study’s goal of using a robot to help screen and triage patients in the ED. Participants were shown a figure of the potential robot and asked if they would be willing to complete a survey regarding their thoughts and perceptions of a robot in the ED setting (i.e., simulated approach). All surveys were voluntary and anonymous.
Data Analyses
Data were entered into REDCap, an encrypted web-based data management system (Harris et al., 2009). All variables were analyzed using SPSS version 22. Descriptive statistics were generated for all variables. Internal consistency was evaluated using Cronbach’s alpha coefficient for the overall instrument. Item analysis was conducted by examining whether deletion of the item diminished the overall Cronbach’s alpha coefficient.
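The internal consistency and item analysis described above can be sketched in pure Python (the study used SPSS; the response data below are hypothetical):

```python
# Cronbach's alpha for a set of survey items, plus "alpha if item deleted"
# for item analysis. All data here are hypothetical illustrations.
def cronbach_alpha(items):
    """items: list of item score lists, one list per item, one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn; a higher value than
    the full-scale alpha flags an item that diminishes overall consistency."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

# Hypothetical responses: three items, five respondents each.
scores = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 4], [2, 2, 3, 4, 5]]
print(round(cronbach_alpha(scores), 2))
print([round(a, 2) for a in alpha_if_deleted(scores)])
```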
Results
Participants completing the first version of the RAS comprised 247 patients and family members, ages 15 to 88 years. Cronbach’s alpha coefficient was 0.95 for the full scale. The Cronbach’s alpha coefficient for each item (if deleted) was examined to determine whether the item diminished the overall alpha. Based on this initial analysis, several items were refined and several items pertaining specifically to anxiety were deleted (Table 1).
TABLE 1.
PRE-POST ROBOT ACCEPTANCE SURVEY RESULTS (N = 19)a
Values are median (interquartile range).

| Item | Pre–Experiment | Post–Experiment | Difference | p Value |
|---|---|---|---|---|
| Performance expectancy | | | | |
| I think the robot could be useful to me. | 4.0 (3, 4) | 3.0 (2, 4) | −1 (−1, 0) | |
| I would trust the robot to give me good advice. | 4.0 (3, 4) | 4.0 (2, 5) | 0 (−1, 1) | |
| I know enough of the robot to make good use of it. | 3.0 (2, 5) | 4.0 (2, 5) | 0 (−1, 1) | |
| I would follow the advice the robot gives me. | 4.0 (2, 5) | 2.0 (2, 4) | −1 (−1, 0) | |
| It’s good to make use of the robot. | 3.0 (2, 4) | 2.0 (1, 3) | −2 (−2, 0) | |
| I felt like the robot understood me. | 4.0 (3, 4) | 2.0 (2, 5) | −2 (−2, 1) | |
| I think the robot could help with many things. | 3.0 (2, 3) | 2.0 (1, 3) | −1 (−1, 0) | |
| Aggregate score | 3.4 (2.8, 4.3) | 2.7 (1.7, 2.7) | −0.7 (−1.0, 0.2) | 0.028 |
| Effort expectancy | | | | |
| Where I live, I would have everything I need to make good use of a robot. | 4.0 (1, 5) | 3.0 (2, 4) | 0 (−2, 1) | |
| I think I would know quickly how to use the robot. | 4.0 (3, 5) | 3.0 (2, 4) | −1 (−2, 0) | |
| I found the robot easy to use. | 4.0 (2, 4) | 1.5 (1, 3) | −2 (−3, 0) | |
| I think I could use the robot without any help. | 4.0 (2, 5) | 2.0 (1, 4) | −1 (−2, 0) | |
| I think I could use the robot if I had a good manual. | 3.0 (1, 4) | 2.0 (1, 4) | −1 (−1, 0) | |
| I think I could use the robot if someone is around to help me.a | 1.0 (1, 2) | 2.0 (1, 2) | 0 (−1, 2) | |
| If I should use the robot, I would be afraid to make mistakes with it.b,c | 2.0 (1, 3) | 3.0 (2, 3) | 1 (0, 1) | |
| If I should use the robot, I would be afraid to break something.b,c | 2.0 (1, 3) | 2.0 (1, 3) | 0 (−1, 0) | |
| Aggregate score (without “b” items) | 3.6 (2.2, 4.8) | 2.7 (1.6, 3.8) | −0.8 (−1.6, 0.1) | 0.005 |
| Attitude | | | | |
| I think it is a good idea to use a robot. | 3.0 (2, 4) | 3.0 (1, 3) | 0 (−2, 1) | |
| I enjoyed the robot talking to me. | 3.0 (1, 5) | 1.0 (1, 2) | −1 (−4, 0) | |
| I considered the robot a pleasant conversation partner.d | 3.0 (2, 4) | 2.0 (1, 4) | −1 (−2, 0) | |
| When interacting with the robot, I felt like I was talking to a real person.d | 4.0 (2, 6) | 3.0 (2, 6) | 0 (−2, 0) | |
| The robot would make my life more interesting. | 3.0 (2, 5) | 2.0 (2, 3) | 0 (−1, 1) | |
| I enjoyed doing things with the robot. | 3.0 (2, 3) | 2.0 (1, 3) | −1 (−2, 0) | |
| I found the robot pleasant to interact with. | 3.0 (2, 4) | 1.0 (1, 2) | −1 (−2, 0) | |
| It would be convenient for me to have a robot. | 4.0 (2, 5) | 3.0 (1, 4) | 0 (−2, 1) | |
| It sometimes felt as if the robot was really looking at me.d | 3.0 (3, 5) | 3.0 (1, 5) | 0 (−2, 1) | |
| I found the robot scary.c | 7.0 (6, 7) | 7.0 (7, 7) | 0 (0, 1) | |
| I found the robot enjoyable. | 3.0 (1, 3) | 1.0 (1, 2) | −1 (−2, 1) | |
| I can imagine the robot to be a living creature.d | 4.0 (3, 7) | 5.0 (2, 7) | 0 (−2, 0) | |
| I found the robot intimidating.b,c | 1.0 (1, 2) | 1.0 (1, 1) | 0 (−1, 0) | |
| I found the robot fascinating. | 2.0 (1, 3) | 1.0 (1, 1) | 0 (−1, 0) | |
| I thought the robot was nice.c,d | 2.0 (1, 3) | 1.0 (1, 2) | −1 (−1, 0) | |
| The robot seemed to have real feelings.d | 6.0 (2, 7) | 5.0 (2, 7) | 0 (−2, 1) | |
| I found the robot boring.b,c | 1.0 (1, 4) | 1.0 (1, 1) | 0 (−3, 0) | |
| Total attitude aggregate score (without “b” items) | 3.5 (2.8, 3.8) | 2.7 (1.6, 3.5) | −0.7 (−1.3, −0.4) | 0.004 |
| Function aggregate score (without “b” items) | 3.1 (2.3, 4.0) | 2.4 (1.7, 3.1) | −0.8 (−1.1, −0.2) | 0.006 |
| Human attributes aggregate score (without “b” items) | 4.6 (3.0, 5.2) | 3.6 (1.8, 4.8) | −0.6 (−1.4, 0.2) | 0.023 |
a Lower scores denote more positive perception.
b Item dropped because of insufficient variability or minimal contribution relative to the other items.
c Items reverse coded.
d Denotes human-like attributes.
Phase III: Field Test 2
Design
A pre–post experimental survey was conducted with the 32-item RAS.
Sample/Setting
All experiments were conducted at the Vanderbilt University School of Engineering laboratory. Community-dwelling older adults were recruited from the greater Nashville area through postings at local YMCAs, libraries, and senior centers; interested individuals contacted the research team. Eligibility criteria included age 65 or older, corrected vision and hearing that allowed the individual to engage in conversation with the SAR, and physical ability to participate in chair exercises. Those with a known diagnosis of dementia had to demonstrate understanding of the research process and the ability to engage with the SAR. Informed consent, or assent with consent of a legally authorized representative, was required. At the end of the experiment, each participant received a $20 gift card.
Human Robot Interaction Experiment
The experimental procedures are fully described elsewhere (Fan et al., 2016). The current article describes the equipment, laboratory setup, and activities.
Equipment
The commercially available robot NAO (https://www.ald.softbankrobotics.com/en) was used for robot-mediated interaction (Figure 1). NAO is a medium-sized humanoid robot with a height of 58 cm and weight of approximately 4.3 kg. Its disadvantages are a lack of facial expressions, an eye gaze fixed relative to head position, and slow movement. Despite these disadvantages, NAO is widely used because of its relatively low cost and capability for custom software development.
Figure 1.
NAO humanoid robot from Aldebaran Robotics.
Laboratory Setup
The engineering laboratory had 500 square feet for assessment and intervention. A separate adjacent area, partitioned by a soundproof wall with a one-way mirror for observation, housed several high-performance desktop computers that the experimenter used to control the robot. Two sets of experiments were devised: in the first, one older adult interacted with the SAR; in the second, two older adults interacted with the SAR (Figure 2). For both experimental procedures, the SAR was placed on a table approximately 6 feet away from participants, who remained seated in straight-back armchairs throughout. One-on-one sessions ranged from 45 to 60 minutes to complete all activities; triadic sessions (i.e., two participants and one robot) lasted approximately 30 minutes. Activities were designed to engage the individual passively or actively in physical, cognitive, and/or social activity, and included general orientation, observing the robot dance, participating in chair exercises, answering math questions, playing a birth-state guessing game, and a form of “Simon Says” requiring cognitive and physical activity.
Figure 2.
Room layouts for (a) one-on-one and (b) triadic human–robot interaction (HRI).
Data Collection
The 32-item instrument (Table 1) resulting from Phase II was administered before and immediately after the HRI experiment. Pre-survey items were worded to reflect older adults’ anticipated reactions. For example, a pre-experiment item was “I would feel like the robot understands me” and the post-experiment item was “I felt like the robot understood me.” A 7-point Likert scale was used for each item, with 1 indicating the most positive response and 7 indicating the most negative response. In addition to the RAS, information was gathered on demographic characteristics and current computer and technology use.
Data Analyses
Data were entered into REDCap (Harris et al., 2009) and analyzed using SPSS version 23. Summaries of central tendency and variability were generated for continuous variables, with frequency and percentage summaries for nominal variables. Psychometric properties of the three subscales (i.e., Performance Expectancy, Effort Expectancy, and Attitude) were re-examined using Cronbach’s alpha coefficient, Cronbach’s alpha if item deleted, and item–subtotal correlations. Because some have asserted that an individual’s attribution of human-like qualities to the robot directly influences intention to use (Heerink et al., 2009), the Attitude subscale was examined further by evaluating eight items pertaining to overall function separately from seven items pertaining to attribution of human-like features. Because the four anxiety items had been mapped to the Effort Expectancy and Attitude constructs, they were re-examined as a separate potential construct and subscale. An overall aggregate score ranging from 1 to 7 was computed for each subscale. Spearman rho correlations were conducted between demographic variables and the three RAS subscales. Comparisons of pre- and post-surveys were analyzed using Wilcoxon signed-rank tests. An alpha of 0.05 was used to determine statistical significance.
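The pre–post comparisons and correlations map onto standard non-parametric tests. A sketch using SciPy (all scores and education values below are hypothetical illustrations, not the study data):

```python
# Wilcoxon signed-rank test on paired pre/post subscale aggregate scores,
# and Spearman rho between post scores and years of education.
# All numbers here are hypothetical, not the study data.
from scipy.stats import wilcoxon, spearmanr

pre = [3.4, 4.0, 2.8, 3.6, 4.2, 3.1, 3.9, 2.9, 3.5, 4.1]
post = [2.7, 3.1, 2.5, 2.9, 3.0, 2.6, 3.2, 2.8, 2.7, 3.3]
education = [12, 16, 14, 18, 12, 20, 16, 14, 12, 18]

stat, p = wilcoxon(pre, post)            # paired, non-parametric
rho, p_rho = spearmanr(post, education)  # rank correlation
print(f"Wilcoxon p = {p:.3f}; Spearman rho = {rho:.2f} (p = {p_rho:.2f})")
```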
RESULTS
Participant Profile
A total of 19 individuals participated in this phase of the study. Eleven individuals (six women and five men) participated in the one-on-one interaction with the SAR. Of those individuals, seven had normal cognition and four had mild cognitive impairment or dementia. Ages ranged from 66 to 94 years (mean age = 82.5 years). Eight additional older adults (four pairs including one with dementia) participated in the dual interaction setting of this study phase. The pairs comprised five women and three men who ranged in age from 70 to 86 years (mean age = 81.3 years). All 19 participants were White and highly educated, with 16 (84%) having a college degree. Ten (53%) individuals indicated they used a computer daily.
Robot Acceptance Survey Analysis
Table 1 displays the pre- and post-experiment item scores and overall subscale scores; the theoretical range is 1 (most positive) to 7 (most negative). Note that the item wording displayed is for the post-RAS only. Several items had low variability and were dropped from further analyses. Each subscale had acceptable standardized Cronbach’s alpha coefficients pre- and post-experiment: 0.88 and 0.86 for Performance Expectancy, 0.83 and 0.85 for Effort Expectancy, and 0.80 and 0.89 for Attitude, respectively. The Human-Attribute Attitude subscale (five items) had a pre-experiment Cronbach’s alpha coefficient of 0.77 and a post-experiment value of 0.89. The four anxiety-related items, examined as a potential subscale, demonstrated Cronbach’s alpha coefficients of 0.67 pre-experiment and 0.82 post-experiment.
Participants’ Perceptions of the Socially Assistive Robot and Activities
Participants tended to be neutral in their perceptions prior to the experiments but became statistically significantly more positive post-experiment on all RAS subscales except the four anxiety items (p < 0.05) (Table 1). Participants were also asked to provide feedback on the robot itself and the activities. Overall, the robot was well accepted: participants rated it as easy to understand (68%), as having a pleasant voice (74%) and a pleasant appearance (86%), reported being able to hear and understand its speech (79%), and said it kept them interested (95%). However, only 63% rated interacting with the robot as comfortable.
Participants recommended keeping activities in the following order: exercises (including Simon Says) (91%), orientation activities (90%), robot dancing to music (73%), math questions (64%), and guessing birth state game (56%). Participants also rated activities as extremely interesting or very interesting: exercises (82%), robot dancing to music (82%), math questions (82%), orientation (80%), and guessing birth state (55%).
Correlations between demographic and computer use variables and RAS scores at the post-experimental assessment point are shown in Table 2. No statistically significant associations were observed for age, gender, or frequency of use of computers or laptops (absolute rs values ranged from 0.01 to 0.38, p > 0.1). There was a statistically significant positive correlation between years of education with global attitude scores (rs = 0.46, p = 0.05). No associations between education and other RAS scores were statistically significant (rs ranged from 0.10 to 0.34, p > 0.1).
TABLE 2.
ASSOCIATIONS BETWEEN DEMOGRAPHIC CHARACTERISTICS AND COMPUTER USE AND POST-EXPERIMENT ROBOT ACCEPTANCE SURVEY SCORES (N = 19)
Values are rs (p value).

| Subscale | Gender | Age (Years) | Education (Years) | Frequency of Computer Use |
|---|---|---|---|---|
| Performance expectancy | −0.15 (0.55) | 0.12 (0.61) | 0.33 (0.18) | 0.17 (0.48) |
| Effort expectancy | −0.13 (0.61) | 0.16 (0.51) | 0.10 (0.69) | 0.29 (0.23) |
| Global attitude | −0.23 (0.34) | −0.03 (0.89) | 0.46 (0.05) | −0.01 (0.98) |
| Attitude on function | −0.38 (0.11) | 0.07 (0.78) | 0.34 (0.15) | 0.09 (0.71) |
| Attitude on human attributes | −0.20 (0.42) | 0.12 (0.61) | 0.25 (0.31) | 0.15 (0.54) |
DISCUSSION
SARs have great potential to augment health care providers’ efforts to engage older adults in physical, cognitive, and social activities. The focus of the current article was to determine the reliability of a self-report survey for capturing older adults’ intentions to use SARs. After a simulated cross-sectional survey followed by pre- and post-experimental SAR surveys, the final instrument comprised 25 items reflecting perceptions of performance expectancy, effort expectancy, and attitudes toward function and human attributes. Overall, the three constructs reflected acceptable internal consistency.
Several items, most notably those associated with anxiety or fear, failed to contribute to the overall scale. This finding is in contrast to others’ reports of Cronbach’s alpha coefficients ranging from 0.70 to 0.85 (Heerink et al., 2010; Louie et al., 2014). The current study’s expert panel had conceptualized the anxiety items as part of the effort expectancy construct and fear items as part of the attitude construct. The four items were re-examined as a separate construct; only after exposure to the robot did the internal consistency coefficient demonstrate acceptable reliability. However, these items were not sensitive to change. The current results may be a function of the small sample. A second explanation for the low anxiety scores may be the experimental procedure. Participants sat away from the robot with no physical contact. All participants knew that research personnel were in the adjacent room in the event they would require assistance. The current sample also comprised self-selected older adults interested in testing a robot. These factors may account for the low variability and lack of contribution to the overall internal consistency of the anxiety items.
A second finding in the current study relates to the degree to which older adults attributed human-like features to the robot. Some have postulated that hedonic factors, such as enjoyment and attractiveness, as well as human-like attributes, are as important as utilitarian factors in determining an individual’s acceptance and use of SARs (Broadbent, 2017; de Graaf & Allouch, 2013; de Graaf, Allouch, & Klamer, 2015; de Graaf, Allouch, & van Dijk, 2015; Joosse et al., 2013). Overall, participants disagreed or were neutral with the items that the commercial robot had human-like attributes. Louie et al. (2014) custom-built a robot to appear as a young man wearing a baseball cap; older adults liked the robot’s voice and facial expressions but only one third liked its life-like appearance. The extent to which the current findings were influenced by the commercial robot used or the one-time interaction remains to be explored in future studies.
LIMITATIONS
There are several limitations of the current study. First, for both phases of testing with participants, sample sizes were small; larger samples are needed to more fully evaluate the psychometric properties of the RAS. There is a pressing need to enhance existing evaluation and assessment methodologies for HRI (Sim & Loo, 2015). Second, the experimental sample comprised volunteers with high levels of education and interest in testing the SAR. Several items, specifically those related to fear or anxiety, need further testing and refinement. Rephrasing these questions with more nuance may enhance their ability to identify potential anxiety or fear. Changes to the experimental procedures may also reveal whether older adults would be apprehensive or anxious when interacting with a robot. The current study used a small, non-intimidating robot and maintained a 6-foot separation between the participant and SAR. Testing a SAR under different experimental conditions, such as positioning the robot within the individual’s personal space, and for repeated or prolonged use, may elicit different and varying perceptions (de Graaf & Allouch, 2013; Joosse et al., 2013). Despite these limitations, the RAS’s acceptable internal consistency and its sensitivity to change upon exposure to the SAR are encouraging.
CONCLUSION
Use of SARs, and how their efficacy and effectiveness are measured, is still in the early stages of development. Future studies must identify robotic architecture capable of providing cognitive, physical, and social assistance, and identify the person-, activity-, situation-, and robot-specific factors that will influence older adults’ acceptance of SARs (Cohen-Mansfield et al., 2011; Damholdt et al., 2015; de Graaf, Allouch, & Klamer, 2015; de Graaf, Allouch, & van Dijk, 2015; Heerink et al., 2009, 2010; Pino, Boulay, Jouen, & Rigaud, 2015). Findings from these studies will direct future work that best uses robotic and task features targeted to older adults with varying needs for physical, cognitive, and/or social assistance.
Acknowledgments
The study was supported by the National Institute on Aging of the National Institutes of Health (NIH) (R21AG050483) and Vanderbilt University (UL1 TR000445) from the National Center for Advancing Translational Sciences/NIH.
Footnotes
The authors have disclosed no potential conflicts of interest, financial or otherwise.
References
- American Health Care Association. American Health Care Association 2012 staffing report. 2012. Retrieved from https://www.ahcancal.org/research_data/staffing/Documents/2012_Staffing_Report.pdf
- Bedaf S, Gelderblom GJ, de Witte L. Overview and categorization of robots supporting independent living of elderly people: What activities do they support and how far have they developed. Assistive Technology. 2015;27:88–100. doi:10.1080/10400435.2014.978916
- Broadbent E. Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology. 2017;68:627–652. doi:10.1146/annurev-psych-010416-043958
- Cohen-Mansfield J, Marx MS, Dakheel-Ali M, Thein K. The use and utility of specific nonpharmacological interventions for behavioral symptoms in dementia: An exploratory study. American Journal of Geriatric Psychiatry. 2015;23:160–170. doi:10.1016/j.jagp.2014.06.006
- Cohen-Mansfield J, Marx MS, Freedman LS, Murad H, Regier NG, Thein K, Dakheel-Ali M. The comprehensive process model of engagement. American Journal of Geriatric Psychiatry. 2011;19:859–870. doi:10.1097/JGP.0b013e318202bf5b
- Damholdt MF, Nørskov M, Yamazaki R, Hakli R, Hansen CV, Vestergaard C, Seibt J. Attitudinal change in elderly citizens toward social robots: The role of personality traits and beliefs about robot functionality. Frontiers in Psychology. 2015;6:1701. doi:10.3389/fpsyg.2015.01701
- Dautenhahn K. Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B. 2007;362:679–704. doi:10.1098/rstb.2006.2004
- de Graaf MMA, Allouch SB. Exploring influencing variables for the acceptance of social robots. Robotics and Autonomous Systems. 2013;61:1476–1486. doi:10.1016/j.robot.2013.07.007
- de Graaf MMA, Allouch SB, Klamer T. Sharing a life with Harvey: Exploring the acceptance of and relationship-building with a social robot. Computers in Human Behavior. 2015;43:1–14. doi:10.1016/j.chb.2014.10.030
- de Graaf MMA, Allouch SB, van Dijk J. What makes robots social? A user’s perspective on characteristics for social human-robot interaction. In: Tapus A, Andre E, Martin JC, Ferland F, Ammi M, editors. Social robotics. Vol. 9388. New York, NY: Springer; 2015. pp. 184–193.
- Fan J, Bian D, Zheng Z, Beuscher L, Newhouse PA, Mion LC, Sarkar N. A Robotic Coach Architecture for Elder Care (ROCARE) based on multi-user engagement models. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2016:1. doi:10.1109/TNSRE.2016.260891
- Goris ED, Ansel KN, Schutte DL. Quantitative systematic review of the effects of non-pharmacological interventions on reducing apathy in persons with dementia. Journal of Advanced Nursing. 2016;72:2612–2628. doi:10.1111/jan.13026
- Harrington C, Schnelle JF, McGregor M, Simmons SF. The need for higher minimum staffing standards in U.S. nursing homes. Health Services Insights. 2016;9:13–19. doi:10.4137/hsi.s38994
- Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics. 2009;42:377–381. doi:10.1016/j.jbi.2008.08.010
- Heerink M, Kröse B, Evers V, Wielinga B. Measuring acceptance of an assistive social robot: A suggested toolkit. 2009. Retrieved from http://ieeexplore.ieee.org/document/5326320/
- Heerink M, Kröse B, Evers V, Wielinga B. Assessing acceptance of assistive social agent technology by older adults: The Almere Model. International Journal of Social Robotics. 2010;2:361–375. doi:10.1007/s12369-010-0068-5
- Hussain A, Rivers PA, Glover SH, Fottler MD. Strategies for dealing with future shortages in the nursing workforce: A review. Health Services Management Research. 2012;25:41–47. doi:10.1258/hsmr.2011.011015
- Institute of Medicine. The future of nursing: Leading change, advancing health. 2011. Retrieved from http://www.nationalacademies.org/hmd/Reports/2010/The-Future-of-Nursing-Leading-Change-Advancing-Health.aspx
- Joosse M, Sardar A, Lohse M, Evers V. BEHAVE-II: The revised set of measures to assess users’ attitudinal and behavioral responses to a social robot. International Journal of Social Robotics. 2013;5:379–388. doi:10.1007/s12369-013-0191-1
- Louie WYG, McColl D, Nejat G. Acceptance and attitudes toward a human-like socially assistive robot by older adults. Assistive Technology. 2014;26:140–150. doi:10.1080/10400435.2013.869703
- Lynn MR. Determination and quantification of content validity. Nursing Research. 1986;35:382–385.
- Mann JA, MacDonald BA, Kuo IH, Li X, Broadbent E. People respond better to robots than computer tablets delivering healthcare instructions. Computers in Human Behavior. 2015;43:112–117. doi:10.1016/j.chb.2014.10.029
- Pino M, Boulay M, Jouen F, Rigaud AS. “Are we ready for robots that care for us?” Attitudes and opinions of older adults towards socially assistive robots. Frontiers in Aging Neuroscience. 2015;7:1–15. doi:10.3389/fnagi.2015.00141
- Sim DYY, Loo CK. Extensive assessment and evaluation methodologies on assistive social robots for modelling human–robot interaction: A review. Information Sciences. 2015;301:305–344. doi:10.1016/j.ins.2014.12.017
- Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quarterly. 2003;27:425–478.