Abstract
Over 1 billion people in the world are estimated to experience significant disability. These disabilities can impact people’s ability to independently conduct activities of daily living, including ambulating, feeding, dressing, taking care of personal hygiene, and more. Mobile and manipulator robots, which can move about human environments and physically interact with objects and people, have the potential to assist people with disabilities in activities of daily living. Although the vision of physically assistive robots has motivated research across sub-fields of robotics for decades, such robots have only recently become feasible in terms of capabilities, safety, and price. More and more research involves end-to-end robotic systems that interact with people with disabilities in real-world settings. In this paper, we survey papers about physically assistive robots intended for people with disabilities from top conferences and journals in robotics, human-computer interaction, and accessible technology, to identify general trends and research methodologies. We then dive into three specific research themes – interaction interfaces, levels of autonomy, and adaptation – and present frameworks for how these themes manifest across physically assistive robot research. We conclude with directions for future research.
Keywords: physically assistive robots, accessibility, user-centered design, human-robot interaction, assistive technology
1. INTRODUCTION
“[Physically assistive robots] would decrease the workload on family members, help with caregiver burnout, and maybe in the future help a disabled person [like me] have more independence.”
– Tyler Schrenk1, 1985–2023
The World Health Organization estimates that 1.3 billion people around the world experience significant disability (1). Whether congenital or acquired through injury, illness, or age, disabilities can impact people’s ability to independently perform activities of daily living (ADLs) and therefore reduce their quality of life. According to the CDC, at least 6 million adults in the US have difficulty doing errands independently (2). While most people with disabilities wish to live independently in their home (3, 4), such difficulties can threaten their ability to do so. Beyond their impact on day-to-day activities, disabilities also take a psychological toll and can lead to mental health challenges (5).
The social model of disability argues that disability is a result of the mismatch between a person’s abilities and their environment (6), and advocates to bridge the gap between our inaccessible world and diverse abilities. Universal design has helped bridge the gap in accessing the digital world, allowing people of many abilities to program computers and access the internet. However, the ability gap in accessing the physical world remains.
Mobile and manipulator robots present a unique opportunity for enabling access to the physical world for people with disabilities as they can sense the environment, navigate to different locations, and/or pick up and rearrange objects. Many activities of daily living that are difficult or impossible due to a person’s impairment—such as independently feeding or ambulating—are physically possible for a robot to perform (Fig. 1A). However, developing robots that safely and robustly perform these tasks in diverse environments, with diverse user impairments and preferences, is challenging. Many open questions remain as to how robots should be designed, what user interfaces to use, what levels of autonomy they should have, and more. These questions have fueled research in physically assistive robots (PARs).
Figure 1.

A. Common domains of assistance, exemplifying the different types of robots: mobile (12), mobile manipulator (13, 14, 15), and manipulator (16, 17). (First, second, and fourth images: Reprinted from (12) (CC BY 4.0). ©2012 IEEE. Reprinted, with permission, from (14). ©2019 IEEE. Reprinted, with permission, from (13).) B. Number of papers in this review by year published.
In this paper, we survey papers about physically assistive robots intended for people with disabilities from top conferences and journals in robotics, human-computer interaction, and accessible technology. Three trends motivated this survey. First, over the past decade the number of papers researching PARs has increased several-fold (Fig. 1B). Yet, PAR research has been siloed by domain of assistance, e.g., robot-assisted feeding and robot-assisted navigation, and there is little dialogue about takeaways that cut across these domains. Second, the formative studies that highlight the needs and preferences of people with disabilities tend to be published in venues focused more on human factors and do not always reach the roboticists capable of meeting those needs. Finally, physically assistive robots are increasingly being deployed in real-world settings (8, 9, 10, 11), which is a welcome advancement but makes it more important to have conversations within the field about safety, robustness, working with people with disabilities, and more. Our goal with this survey is to fuel progress in PARs by: (1) highlighting existing research; (2) inspiring more roboticists to apply their skills towards PARs; and (3) systematizing methods so researchers can more easily work with people with disabilities.
1.1. Relation to Other Survey Papers
Newman et al. (18) present a survey of physically and socially assistive robotics in general. Our work differs from theirs by focusing on people with disabilities, who have specific needs and constraints that must be taken into account when developing assistive robots.
Within survey papers focused on assistive robots for people with disabilities, Matarić and Scassellati (7) focus on socially assistive robots while two survey papers (19, 20) focus on physically assistive robots. Although we report on some similar themes to the latter two papers, their surveys were written before the last decade’s drastic increase in PAR papers (Fig. 1B). Mohebbi (21) reviews the human-robot interaction of physically assistive robots; while we have a section dedicated to interaction interfaces (Sec. 5.1), we also focus on other topics such as the methods used in user studies.
Finally, some surveys focus on assistive robots for particular populations—people with quadriplegia (22), older adults (23), and people with visual impairments (24); our paper brings together work focused on multiple types of disabilities and domains of assistance, to facilitate meaningful dialogue across the field of physically assistive robots. We also note that there are several recent surveys in prosthetics and rehabilitation robots (25, 26, 27, 28, 29, 21), which are beyond the mobile/manipulator robot scope of this survey.
2. SURVEY METHODOLOGY
We began by curating a list of top conferences and journals in robotics and assistive technology (Fig. 2). From those venues, we searched for full papers whose title, abstract, or keywords had “robot” and either: “assistive,” “accessibility,” “disability,” “impairment,” or forms thereof. This resulted in 1981 papers. We then screened the title and abstract for the following inclusion criteria: The paper involves (1) a PAR for people with disabilities or older adults, (2) a user study, and (3) a mobile, manipulator, or mobile manipulator robot.
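The two-part keyword query described above can be sketched as a simple filter. The regular expressions below are an illustrative reconstruction of the inclusion query, not the exact search strings used in our database searches:

```python
import re

# Hypothetical sketch of the inclusion query: a paper's title, abstract,
# or keywords must contain "robot" AND at least one accessibility-related
# term (or a morphological form thereof, e.g., "assistance", "disabled").
ROBOT = re.compile(r"\brobot", re.IGNORECASE)
ACCESS = re.compile(r"\b(assist\w*|access\w*|disab\w*|impair\w*)", re.IGNORECASE)

def matches_query(text: str) -> bool:
    """Return True if the text satisfies both halves of the search query."""
    return bool(ROBOT.search(text)) and bool(ACCESS.search(text))

print(matches_query("An assistive robot arm for feeding"))  # True
print(matches_query("A survey of mobile robots"))           # False
```

In practice such keyword filters over-retrieve (here, 1981 candidates), which is why title/abstract and full-text screening against the inclusion criteria follow.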
Figure 2.

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram for this paper. We screened 1981 papers and include 87 in this review.
We aligned our interpretations of the above criteria by having a random selection of 60 papers tagged by two or three researchers and discussing any differences until we reached consensus. The rest of the papers were split amongst the three researchers for tagging. 135 papers remained after this title and abstract screening. We then conducted full-text screening. At this stage, we also removed works that had a rehabilitation focus, given existing surveys devoted to recent trends in rehabilitation robotics (21, 29). This resulted in 87 papers included in this review. Fig. 2 shows the entire pipeline.
While reading the papers, we iteratively met to converge upon dimensions along which the papers are similar/different that would be of interest to the PAR research community. These dimensions are: Descriptive Statistics (Sec 3), Types of User Studies (Sec 4), Interaction Interface (Sec 5.1), Levels of Autonomy (Sec 5.2), and Adaptation (Sec 5.3). For each dimension, we developed discrete codes by describing and clustering the works (bottom-up), and then identifying existing frameworks that the codes mapped to (top-down).
3. DESCRIPTIVE STATISTICS ABOUT THE PAPERS
3.1. Domain of Assistance
For every paper, we coded the domain(s) of assistance that the PAR helped the user with. This classification drew upon Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL), a framework for classifying the skills and activities necessary to live independently (30). We then compared the proportion of PARs that focus on each (I)ADL to the proportion of people who need assistance with that (I)ADL (31), in Fig 3A.
Figure 3.

A. The proportion of people who need assistance with each (I)ADL versus the proportion of PAR papers that assist with that (I)ADL. B. Papers in this review by the target population’s age (inner) and disability (outer).
There are three spikes amongst PAR research, for (I)ADLs focused on navigation, feeding, and doing housework. For the navigation domain, we characterized works that focused on navigating in any environment (e.g., fall prevention (32, 33), standing assistance (34, 35)) as “getting around,” and works that focused on navigating in environments outside the home (e.g., guide robots for people who have visual impairments (12, 36, 9, 37)) as “going out.” For the housework domain, we classified all “pick-and-place” works, which focus on assisting with the general manipulation of objects, as housework. However, such works can also help with going out (e.g., opening doors) and managing medication (e.g., bringing medication to a user). Note that even if the proportion of PAR research is similar to or greater than the proportion of people who need assistance, that does not mean our work is done; formative studies have found numerous ways in which all PARs must be improved (38, 39, 40, 41, 15).
Some (I)ADLs have a high user need for assistance but proportionately little PAR research—dressing, bathing / grooming, and managing medications. Extending the existing research in these realms (Table 1) would be a fruitful direction for future work. There are also some (I)ADLs that have no papers from this survey. Some, such as difficulty toileting and difficulty getting out of bed, may require special hardware (42, 43) that goes beyond the mobile/manipulator focus of this survey. Others, such as difficulty managing money or using the phone, are better served by non-robotic solutions or socially assistive robots (SARs) than PARs (44, 45).
Table 1.
Domain of assistance and type of study for all papers in this review
| Domain of Assistance | Formative | Summative: Data Set | Summative: Interaction Interface | Summative: Level of Autonomy | Summative: Specific Functionality | Summative: Whole System (In Lab) | Summative: Whole System (In Context) |
|---|---|---|---|---|---|---|---|
| Getting Around | () | (67) | (34, 68) | () | (67, 32, 69, 33, 70, 35, 71) | (72, 10) | (73, 10, 74) |
| Going Out | (50, 9) | () | (54) | (36, 12) | (54, 49, 59) | (9, 75) | (37) |
| Eating | (39, 38, 40, 76, 77) | (78) | (79, 17, 40) | (17, 80) | (78, 58, 81, 82, 66, 83, 84) | (85, 86, 8, 47) | (8) |
| Dressing | () | (87) | () | () | (87, 88, 48, 89, 13, 84) | () | () |
| Bathing / Grooming | () | (14) | () | () | (90) | (47) | (14) |
| Taking Medicine | (62, 15) | () | (15) | () | () | (91) | (62) |
| Pick-and-Place / Housework | (41, 15, 40, 76) | () | (15, 63, 92, 93, 94, 95, 96, 57, 53, 56, 97, 98, 99) | (100, 101, 102, 16, 52, 103, 104, 105, 106, 107) | (108, 65, 109) | (91, 92, 51, 55, 110) | (111, 64) |
| Playing | (112) | () | (112) | () | () | (113) | (114, 115) |
| Working | () | () | () | () | () | () | (46) |
3.2. Target Population
We coded the target population age for each paper as one of: “children,” “elderly,” or “unspecified age” (which was typically adults across ages). We also coded the target population’s disability (if any) as one of: “motor impairment,” “visual impairment,” or “other2.” Fig 3B presents this data. The bulk of PAR research is motivated by three target populations: people with motor impairments, people who are blind or low-vision, and older adults. This drastically differs from the target populations of Socially Assistive Robots (SARs) research: people with autism, people with dementia, and older adults (7).
4. USER STUDIES IN PAR RESEARCH
For every work, we coded the type of study, number of participants with and without disabilities, what was being evaluated, and the methods used. We coded the type of study as either “formative,” “summative,” or “both.” Formative studies take place in the early stages of research and help “form” the design for the system, while summative studies take place near the end of system development and help evaluate, or “sum up,” the system. Fig 4 and Table 1 show the distribution of papers along these metrics. 14 papers (17%) included a formative study, with the rest including only summative studies3 (Fig. 4A).
Figure 4.

A. Papers included in this review by type of study (inner) and whether they included users with disabilities (outer). B. How many participants with(out) disabilities each paper had.
4.1. Involvement (or Lack Thereof) of Participants With Disabilities
Half of the papers involved no participants with disabilities, while the other half involved at least one4 (Fig 4A). Notably, nearly all formative works involved people with disabilities. This is crucial to ensure that the early decisions that are made in a research area are informed by the needs of the target population. In contrast, the majority of summative evaluations involved only participants without disabilities. Some works framed these evaluations as “preliminary,” “pilot,” or “proof-of-concept” (51, 52, 53, 54, 55, 56), giving the impression that an evaluation with participants with disabilities is forthcoming. We found a few instances amongst the reviewed papers with a follow-up evaluation with participants with disabilities, e.g., (15) followed up on (57), (17) followed up on (58). In other cases, researchers claimed to simulate disability amongst non-disabled participants through blindfolds (59, 54), braces (48, 13, 60), or intentional falls (e.g., to simulate older adults falling) (32, 33). Although simulations can be a rapid way of testing capabilities of a robotic system, they are considered problematic in the disability studies literature and should always be complemented with studies involving the target population (61).
About a quarter of works involve participants with and without disabilities (Fig 4B). Some participants without disabilities were caregivers (39, 62), occupational therapists (39), and other stakeholders (9, 63, 64). In other cases, researchers ran a large-sample study with people without disabilities to collect statistical insights, followed by a small-sample study with people with disabilities to collect qualitative insights (65, 15, 66, 8, 10).
4.2. Formative Studies
Involvement of target users in formative research is particularly critical to ensure that researchers: (a) work on problems that are actually important to the target users; and (b) are aware of user constraints and preferences that should be taken into account when developing assistive technologies. This was reflected in the proportion of formative research in our survey that involved people with disabilities. On the other hand, the proportion of formative research to summative research was small, with only five papers that involved solely formative studies (39, 41, 38, 50, 76) and five that included a formative study and summative study (112, 62, 9, 15, 40). This is in contrast with other research focused on (non-robotic) technology for people with disabilities. For example, a recent survey of technology for people with visual impairments found more formative than summative research (118). One reason for this finding could be the lack of familiarity with formative research methods in the robotics community and the emphasis on quantitative findings.
Dataset collection for training a model was rare in the PAR literature, with only four papers (14, 87, 78, 67), despite the popularity of the approach in the robotics community. In all cases, the data was collected to model a component of the system, e.g., for gait tracking (67), force prediction (14), failure prediction (87), and bite timing prediction (78). None of the papers reported on generalizable formative insights based on the collected data.
A variety of formative research methods were exemplified in the papers: surveys (41, 76), interviews (38, 76, 62, 9, 111), group interviews (41, 112, 76), contextual inquiry (39, 76), participatory design (50, 76), observational studies (40, 15), workshops (111, 76), and ethnography (76). Some papers combined methods. For example, Beer et al. conducted a written survey with older adults to assess the tasks they would like assistance with, and then followed up with a group interview to understand why they held those preferences (41).
Formative studies on PARs contribute insights that other researchers can use when designing, developing, and/or evaluating similar PARs. The findings from formative research can be presented as design constraints (62) or guidelines (38, 50, 119), evaluation frameworks (39), limitations of existing systems (15, 40), participants’ concerns and potential opportunities (9, 112, 77), and directions for future work (41, 38). Note that some works conducted a formative study to understand the users’ needs and then a summative study to evaluate the resultant system (9, 62). Further note that some summative studies can also yield formative insights such as users’ preferences on the system’s form factor (36).
4.3. Summative Studies
Summative studies either evaluate a specific component of the system (the middle three columns of Table 1) or the whole system (the last column of Table 1), and gather quantitative and/or qualitative data to conduct that evaluation.
4.3.1. What is being evaluated?.
Studies evaluating a system component focused on the:
Interaction Interface: how users send and receive information to/from the robot.
Level of Autonomy: how much of the sensing, planning, and acting of the system is done by the robot versus the user.
Specific Functionality: any robot functionality that does not fall into the above two categories, such as domain-specific functionality.
These studies typically compare the specific component of their system to one or more baselines, which are either state-of-the-art approaches (67, 105, 16, 78, 65, 34, 80, 52, 96, 108, 94, 93) or variants of their component with some subcomponents systematically removed, i.e., ablation studies (97, 89, 54, 48, 103, 59, 51, 112). Most of these studies are within-subjects, where each participant experiences every condition, which is better when there is high variance across participants (120), such as with participants with disabilities.
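Within-subjects designs like those above typically counterbalance the order in which participants experience conditions, to mitigate carryover and fatigue effects. A minimal sketch using a cyclic Latin square (each condition appears once per participant and once per ordinal position; the condition names are hypothetical, not drawn from any surveyed paper):

```python
def latin_square(conditions):
    """Cyclic Latin square: row i is the condition list rotated by i.
    Assign participants to rows round-robin to balance order effects."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Hypothetical conditions for a levels-of-autonomy comparison:
orders = latin_square(["teleop", "shared", "autonomous"])
for participant, order in enumerate(orders, start=1):
    print(participant, order)
```

Note that a plain cyclic square does not balance first-order carryover (each condition is always preceded by the same other condition); balanced (Williams) designs address this when carryover is a concern.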
Studies that evaluate the whole system sometimes move beyond the lab and into the user’s context-of-use. Of these, some are field studies, which involve running a structured study in the context-of-use (115, 14, 114, 74, 46, 73), while others are deployments, which involve letting users freely interact with the robot in the context-of-use (64, 62, 8, 10, 37, 14, 111). Note that most whole-system evaluations are non-comparative. This may be due to the large amount of resources required to develop an entire second system for comparison.
4.3.2. What data is being collected?.
Most summative studies in this review gathered quantitative data, which can further be divided into objective and subjective metrics. Objective metrics are often task-specific, such as task completion time (105, 15, 109, 96, 36, 100), the number of mode switches (102, 106), success rate (68, 108, 79), and classification accuracy (90, 78, 87), among others. Subjective metrics often focus on user preferences regarding different versions of the robot. Many researchers create their own Likert-scale questions that focus on topics such as usability (96, 79, 100, 105, 80, 74), preference (100, 78, 36), satisfaction (8, 93, 102), feeling of control and safety (97, 65), and more. Others use standardized subjective metrics, such as the System Usability Scale (36, 91), NASA-TLX (15, 10, 12), and Psychosocial Impact of Assistive Devices (36). Note that objective and subjective metrics have complementary benefits—objective metrics are not impacted by biases in self-reporting, but subjective metrics are more grounded in users’ preferences (121)—resulting in many studies that use both (36, 12, 15, 79, 93, 100, 111).
Multiple summative studies paired quantitative data with qualitative data. Qualitative data can help to understand nuances of user preferences, gain insights into additional features users want, or contextualize quantitative results (122). To gather qualitative data, several summative studies held semi-structured interviews (112, 76, 62) or focus groups (9, 37) after participants interacted with the robot, while others had participants share thoughts, insights, and reactions while interacting with the robot (15).
4.4. Suggestions for Physically Assistive Robot (PAR) User Studies
First, we caution PAR researchers to not over-generalize from evaluations involving people without disabilities, as “there is not yet enough evidence supporting the generalization of findings from non-disabled subjects to the [target] population”(123). Further, it is different to live with versus simulate an impairment: “putting on a blindfold for half an hour...can’t give you the full experience of living with a visual impairment for...40 years” (124). While we acknowledge the challenges in running large-sample in-person studies with people with disabilities, alternatives exist (6), including remote studies (15, 65, 39, 17), video studies (65, 38), and working with a community researcher (38, 76).
Second, when using objective metrics (e.g., accuracy, efficiency) we call on PAR researchers to justify why those metrics align with user preferences. There is often the implicit assumption that users want their assistive robot to optimize the metric that researchers are measuring, but prior work has shown that is not always the case (17, 16). As opposed to assuming an objective metric aligns with user preferences, it is important to work with users to identify objective metrics that align with their preferences.
Third, we recommend PAR researchers use standardized scales, such as the System Usability Scale (125) or NASA-TLX (126), for whole system evaluations. Because most whole-system evaluations are non-comparative, it becomes difficult to compare research systems across different labs and papers. Standardized metrics can address this, since they are designed to work across a variety of technologies and have standard interpretations of their numeric scores (125, 127). In addition to the above standardized subjective metrics, standardized objective metrics—that measure the user’s performance on a benchmark task—can facilitate comparisons across works and create a universal interpretation of performance (e.g., the ARAT test used in (128)).
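For reference, the System Usability Scale yields a 0–100 score from ten alternating positively and negatively worded five-point Likert items; this standard scoring procedure can be sketched as:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses (each 1-5). Odd-numbered items are positively worded
    (contribution = response - 1); even-numbered items are negatively
    worded (contribution = 5 - response). The sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Best possible responses (agree with positive items, disagree with negative):
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A score around 68 is commonly treated as average usability, which is what gives SUS its cross-system interpretability.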
Fourth, we call for more formative research involving people with disabilities (PwDs) to inform the development of PARs. Formative research can be especially impactful if its findings are synthesized into open problems for robotics (e.g., (38)), allowing other researchers to work on important challenges even without direct involvement by PwDs. Frameworks for describing assistance tasks and user requirements in detailed, structured ways, like SPARCS (129), can further increase the impact of formative work. Another avenue for accelerating progress based on formative research is the creation of robotics benchmarks and simulations for physical assistance. Choi et al. created a list of household objects used by people with ALS (130), allowing researchers who work on pick-and-place tasks to focus on the objects most frequently needed by this user group. Ye et al. conducted formative research with motor-limited individuals, caregivers, and healthcare professionals to inform the design of RCareWorld (131)—a simulation environment with realistic human models representing different disabilities, home environments, and common assistance scenarios.
Finally, we call for more in-context research, particularly deployments. Unfortunately, there is a trend of relegating findings from in-context deployments of PARs to a small section within the paper (10, 14, 8). Although some may argue that small-sample deployments lack the statistical power of large-sample studies, we note that there is a large body of work in the experimental design and statistical analysis of “n-of-1” studies that could add methodological rigor to PAR deployments (132).
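As one sketch of how n-of-1 methods could add rigor to small-sample deployments, a randomization (permutation) test can compare per-session outcomes between two conditions experienced by a single participant. The data below are illustrative only, not drawn from any paper in this review:

```python
import random

def randomization_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of means between
    per-session scores under condition a vs. condition b, for one
    participant. Returns an approximate p-value."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel sessions at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= abs(observed):
            count += 1
    return count / n_perm

# Illustrative per-session satisfaction scores under two robot configurations:
p = randomization_test([8, 9, 7, 9, 8], [5, 6, 4, 6, 5])
print(p < 0.05)
```

Such tests make minimal distributional assumptions, which suits the short, autocorrelation-prone time series that deployments produce; the n-of-1 literature (132) offers more sophisticated designs (e.g., alternating treatments with washout periods).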
5. OVERARCHING THEMES
5.1. Interaction Interface
One overarching theme across these works is the interaction interface that allows users to send and receive information from the robot. Some works explicitly focused on understanding the tradeoffs between different interfaces for different individuals (93) or in different contexts (17, 38). Even those works that did not explicitly focus on interaction interfaces still made design decisions as to which interface(s) were best suited to their application. This section provides an overview of the interfaces that are commonly used and tradeoffs amongst them, based on the Senses and Sensors Taxonomy (133).
5.1.1. Input Interfaces.
The Senses and Sensors Taxonomy (133) differentiates between direct processing, or sensors that directly measure electrical stimuli sent from the brain, and indirect processing, or sensors that measure the outcome of those stimuli.
A small number of works use direct processing, such as electromyography (EMG) or electroencephalogram (EEG), to convert the user’s neural signals into inputs to the robot. Most works used EMG or EEG to teleoperate a robot in the pick-and-place domain of assistance (97, 110). Others combined EMG/EEG with another input modality, such as muscle contraction (79, 94), brain signals (82), or eye gaze (103), to teleoperate the robot.
A larger set of papers involve indirect processing through modalities of vision, audition, touch, and kinesthetic inputs. The vision modality contains sensors that see user inputs and send them to the robot. One common application is detecting whether the user is ready for the robot to move towards their face in robot-assisted feeding (86, 38) or robot-assisted drinking (85). Another common application for vision is detecting an object that a user wants the robot to acquire, e.g., using a laser pointer (93) or a gaze tracker (86, 92, 56). Yet another application is to have users control the robot entirely with vision inputs (51).
The audition modality contains sensors that hear user inputs and send them to the robot. This includes interfaces that allow the user to give vocal commands to teleoperate a robot arm (100, 96). This also includes systems where the user uses voice to specify the object they want the robot to acquire, such as a specific bite of food (17). While audition sensors have the benefit of not requiring any body motion on the part of users, they may not work well in noisy settings (10) or social settings (38, 17).
The touch modality contains sensors that feel user inputs through direct contact and send them to the robot. This includes traditional methods of interacting with technologies, such as a mouse and keyboard (91, 57, 15, 101), joystick (96, 108, 113, 8, 63), or a touchscreen (93, 17, 50). This also includes custom force-torque sensors used for robot-assisted navigation (36) or robot-assisted drinking (85).
The kinesthetic modality contains sensors that feel user motion and send them to the robot. This includes using inertial measurement units (IMUs) to sense users’ head (99, 79, 94) and upper body movements (95, 53), or using rotary sensors (72) or pressure sensors (34), for teleoperation. Ranganeni et al. (12) use a force-torque sensor to detect when the user twists the robot’s handle, turning the robot accordingly.
5.1.2. Output Interfaces.
Output interfaces are often used by the robot to communicate information to the user about its state, the state of the environment, or its feedback on how the user is completing the task. Compared to the number of PAR papers that incorporate input interfaces, few explicitly incorporate output interfaces. Papers that use the vision modality often display the robot’s camera feed to the user for teleoperation (15, 110, 97) or interaction (101, 86). Papers that use the audition modality use verbalization to greet the user (50, 91), provide feedback on what direction the user should move in (36), or give the user information on what the robot will be doing (37, 17, 54). Those that use the touch modality use haptic vibrations to convey to the user what direction the robot will move in (12), the direction the user should move (54, 68), or the distance to obstacles (10). Those that use the kinesthetic modality adjust the position of a walker to help users restore their balance (32), adjust the force profile of a walker to help users stand up (35), or guide a user’s hand to their target (112, 92, 50). Note that some works also incorporate multi-modality, such as using verbal instructions to tell users who are blind where to find the robot arm and then kinesthetically guiding their arm to the target (112).
5.1.3. Future Work on Interaction Interfaces.
The observations above about interaction interfaces in prior PAR research point towards several opportunities for further research.
First, there has been comparatively less focus on output interaction interfaces than input interaction interfaces. This is despite the fact that research has shown that users’ trust in robots, comfort around robots, and ability to help robots improve if the robot transparently communicates its current state and future intent to them (134, 135, 136). Therefore, we call on future research to investigate what output information users want to receive from their PAR, and how that information improves the user experience. Note that robot motion is an implicit output interface that can expressively communicate the robot’s intent (137, 135), but was not investigated by any works in this survey.
Second, we note that some input interaction interfaces require additional devices (92, 97, 79, 93). However, past research has demonstrated that users want to limit the number of additional devices they have to work with in order to use an assistive technology (38). Therefore, we call for future research on how PARs can effectively integrate with assistive technology interfaces that users already use (e.g., sip-and-puff straws, button arrays, screen readers, etc.). PAR research that utilizes smartphones (37, 17, 10) or computers (57) as an interaction interface is one approach to this problem, as those devices already integrate with numerous assistive technologies.
Third, although some works focus on comparing interaction interfaces, they mostly evaluate preferences aggregated across all participants. Yet, the reality is that the preferred interaction interface can vary drastically across individuals and contexts-of-use (93, 38, 17). Further, users with different disabilities may need very different interfaces from one another. Therefore, we call for future research to investigate in what ways users’ interface preferences vary with the individual and/or their context(s), and how we can provide a superset of interfaces—and a smooth experience of switching between them—to cover these various preferences and abilities.
5.2. Levels of Autonomy
Another overarching research theme is levels of autonomy (LoA). Autonomy is “the extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot)” (138). This section provides an overview of LoA in PAR research, following the five guidelines in Beer et al.’s framework for levels of autonomy in HRI (138).
5.2.1. Determining autonomy: What task is the robot to perform?
Beer et al. state that a key consideration for determining the LoA of a robot is the impact of failures on its task (138). With PARs, the impact of failures is often high; a failure in robot-assisted feeding can result in choking or cuts, and a failure in robot-assisted navigation can result in collisions or falls. Therefore, there have been multiple efforts to enable robots to detect, predict, and/or avoid failures. In the case of robot-assisted feeding or shaving, this includes: stopping as soon as an anomalous force is detected (17, 14, 66), as soon as the user winces or makes other anomalous movements (90, 66), or as soon as an anomalous sound is detected (66). In the case of robot-assisted navigation, this includes predicting other pedestrians’ motion and avoiding them (49, 9, 37), or predicting when users are becoming unbalanced and changing the robot’s force profile to support them (33, 32, 35). There are also standardized methods for hazard analysis that have been applied to robot-assisted dressing (139).
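The stop-on-anomaly strategies above amount to monitoring sensed signals against safety thresholds. The following sketch is purely illustrative—the class name, signal set, and threshold values are our assumptions, not code from any surveyed system:

```python
# Illustrative threshold monitor in the spirit of the anomaly-based
# stopping strategies surveyed above. All names and limits are
# hypothetical, not drawn from any cited system.
from dataclasses import dataclass

@dataclass
class AnomalyMonitor:
    force_limit_n: float = 15.0   # assumed max tolerated contact force (N)
    sound_limit_db: float = 80.0  # assumed max tolerated sound level (dB)

    def should_stop(self, force_n: float, sound_db: float) -> bool:
        """Return True if any sensed signal exceeds its safety threshold."""
        return force_n > self.force_limit_n or sound_db > self.sound_limit_db

monitor = AnomalyMonitor()
assert monitor.should_stop(force_n=20.0, sound_db=40.0)     # anomalous force
assert not monitor.should_stop(force_n=2.0, sound_db=40.0)  # nominal operation
```

Real systems, of course, use learned or model-based detectors rather than fixed thresholds; this only captures the stop-on-anomaly control flow.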
Note, however, that automated ways of detecting and avoiding failure place accountability for system success on the robot, not the user. Users may not be comfortable with this. Studies have revealed that users want full control to stop their PAR at any time, e.g., by pressing an accessible emergency stop button in robot-assisted feeding (38, 39), or by letting go of or ceasing to push the robot in robot-assisted navigation (50, 12). After stopping the robot, the user can teleoperate it and decide when it continues (140). In addition to giving users control to stop the robot at any time, another approach is giving users sole control to move the robot when near safety-critical areas, e.g., the robot only moves towards the user’s face if they are continuously facing it or pressing on a force-torque sensor (85).
5.2.2. Determining autonomy: What aspects of the task should the robot perform?
Beer et al. divide tasks into three primitives: sensing, planning, and acting (138). Within PARs, which primitives the robot should perform is heavily influenced by the target population’s impairments. PARs for people with visual impairments assist with “sensing” the environment to account for the user’s reduced ability to independently do so (50, 12, 36, 37). PARs for elderly people who are sighted assist with “acting,” adjusting their force profile to account for the user’s reduced ability to independently maintain balance (33, 32, 35). PARs for people with motor impairments assist with “acting,” acquiring items and moving them to the user’s face to account for their reduced ability to independently do so (58, 85, 14).
While the user’s impairment can influence which aspects of the task they need assistance with, users also have preferences over which aspects of the task they want control over. Users often want control to set the robot’s goal. For example, in robot-assisted feeding users often want to select the bite the robot will feed them (39, 38), and in robot-assisted navigation users often want to set the goal the robot is navigating them to (50, 37). In addition, users sometimes want control over how the robot achieves the task. Works in robot-assisted feeding have shown that some users want control over when the robot feeds them (38, 39, 17), and a work in robot-assisted navigation found that some users want control over which direction the robot turns at a junction (12). These works serve as important reminders that just because a PAR can do something autonomously does not mean that it should, a topic investigated in Bhattacharjee et al. (17).
5.2.3. Determining autonomy: To what extent can the robot perform those aspects of the task?
Researchers can aim to automate as much of the assistive task as users are willing to have robots perform. However, achieving robust, generalizable robot autonomy in unstructured human environments is extremely challenging. What is possible to automate heavily depends on robot hardware (sensors, actuators, compute) and the state-of-the-art algorithms of the day. When robust robot autonomy is not feasible, including the human-in-the-loop (e.g., giving users control to stop the robot (36, 85)) can enable the robot to reliably complete its assistive task. Alternatively, one can modify the user’s environment to make tasks easier to automate (141); e.g., attaching towels to make drawers easier to manipulate (142), or attaching fiducials to make light switches easier to perceive (143).
Although the three questions for determining LoA restrict the levels that are available in a given situation, there may still be multiple options. Making as many LoAs available on a robot as possible is advisable, as it allows for customization based on user preferences, different interfaces for different users (care recipient versus caregiver), and context-dependent LoA switching (e.g., falling back on lower levels of autonomy when unexpected failures occur).
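The fallback behavior described above can be sketched as a small policy over an ordered set of discrete LoAs. Everything here—the level names and the one-step fallback/recovery rule—is a hypothetical illustration, not a mechanism from any cited work:

```python
# Hypothetical context-dependent LoA switching: yield one level of
# autonomy toward the user on failure, and recover one level at a time
# toward the user's preferred LoA once operation is nominal again.
LEVELS = ["teleoperation", "shared_control", "full_autonomy"]  # low -> high

def next_level(current: str, failure: bool, preferred: str) -> str:
    i = LEVELS.index(current)
    if failure and i > 0:
        return LEVELS[i - 1]  # fall back: give the user more control
    if not failure and current != preferred:
        return LEVELS[min(i + 1, LEVELS.index(preferred))]  # recover gradually
    return current

assert next_level("full_autonomy", failure=True, preferred="full_autonomy") == "shared_control"
assert next_level("shared_control", failure=False, preferred="full_autonomy") == "full_autonomy"
```

A deployed system would also gate these transitions on user confirmation, since (as discussed above) users want control over when the robot resumes.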
5.2.4. Categorizing autonomy.
A variety of LoAs are exemplified in the PAR literature (Fig. 5). In robot-assisted navigation for people with visual impairments, although the robot has to be autonomous in sensing, there are a range of autonomy levels it can take on for planning and acting. Some robots autonomously plan and execute their route (36). Others autonomously plan but share execution with the user, e.g., having the user push while the robot steers (36, 12). Some yield part of the planning autonomy to users, letting them select the direction to turn (12). Yet other robots fully yield execution to the user; the robot suggests a direction, but the user is the sole agent pushing and steering the robot (59).
Figure 5. Case studies of the levels of autonomy used in three different domains of assistance: robot-assisted navigation (36, 12, 59), robot-assisted feeding (17, 85), and pick-and-place (63, 15).
In robot-assisted feeding for people with motor impairments, some robots acquire the bite and move it to the user’s mouth autonomously (17). Others let the user influence planning, by specifying high-level guidelines for how the robot should acquire the bite (17). Others let the user influence acting, by controlling how much the robot tilts a drink glass (85).
In pick-and-place for people with motor impairments, some works have the user teleoperate the robot, doing the sensing and planning themselves while controlling its base and arm motion (15). Others have both the robot and the user sense the environment, have the robot present discrete grasping strategies to the user, and then have the robot autonomously grasp the item (63).
As indicated by this range in levels of autonomy for PARs, there is not one LoA that is strictly better than others. Multiple works have found that users’ preferences for LoA vary based on environmental and individual-level factors (12, 36, 17).
5.2.5. Influence of autonomy on human-robot interaction.
The level of autonomy of a PAR affects users’ feelings of comfort, trust, and safety. Some works found that users feel more comfortable when they have more control over their PAR (12, 36). Others found that users have safety concerns regarding interacting with a fully autonomous robot (17, 38). Another work found that users lose trust in a PAR that fails while operating autonomously, such as by colliding with an obstacle (12). Yet another work found that not just the level of autonomy, but also the level of transparency, influences users’ experience of the robot (101).
5.2.6. Future Work on Levels of Autonomy.
Despite the finding that users value a variety of LoAs and will use them in different contexts (12, 36, 17), most PAR papers focus on just one LoA. Further, despite the finding that the LoA has an important impact on user experience (Sec. 5.2.5), most PAR papers do not justify why their LoA is a good match for the task and target population. Therefore, we call for more PAR research that investigates the tradeoffs across different levels of autonomy, and provides guidelines on how to determine the most suitable level(s) of autonomy based on the PAR’s domain of assistance, target population, and context(s) of use.
5.3. Adaptation
Another overarching research theme is adaptation. We define adaptation as a process that changes the functionality, interface, or distinctiveness of a system to increase its relevance to an individual in a particular context (144). Note that this process is also referred to as “personalization” or “customization” in the literature—we opt for “adaptation” as it is one of the recommended principles of ability-based design (145).
5.3.1. The Need for Adaptation.
The need for adaptation is motivated by diversity in users’ impairments, preferences, and contexts-of-use. Studies reveal that users want to customize their PAR’s interaction interface, level of autonomy, and other specific functionality.
Regarding adaptation of interaction interfaces, one work found that users with greater mobility preferred a different interface for telling a robot to pick up an object than users with less mobility (93). Other works found that users’ preferred interface for interacting with a robot-assisted feeding system depended on whether they were in a social context (17, 38).
Regarding adaptation of levels of autonomy, some studies found that users’ desired level of autonomy when using a robotic navigation aid was both context-dependent (e.g., whether the environment is familiar or unfamiliar) (36, 12) and individual-dependent (12). Bhattacharjee et al. (17) found that users with higher mobility impairment preferred higher levels of autonomy than users with lower levels of impairment. Yet another work found that age could impact users’ preferred level of autonomy when interacting with PARs (101).
Regarding adaptation of specific functionality, Chugo et al. (35) found that the support profile users desired from a robotic walker differed based on their level of motor impairment. Choi et al. (109) found that how a robot should deliver items varies based on the user’s posture and body type. Azenkot et al. (50) found that users with visual impairments had different preferred speeds for robot-assisted navigation systems. Works in robot-assisted feeding have found that users’ preferred bite size, bite timing, bite transfer motion, bite transfer speed, and more varied based on their impairment(s), preferences, and social context (39, 38).
5.3.2. Adaptation in PARs.
We draw upon the questions in Fan and Poole’s (144) classification scheme to characterize adaptation in PARs.
5.3.2.1. What is adapted?
There are several approaches to adapting interaction interfaces. Some studies found that, partly due to the large variance in end-users’ ability levels, the sensors used in input interfaces need to be calibrated per user (94, 34, 86, 95). Another study developed multiple interfaces: one for people with fine motor skills and another for people without (63). Yet another study leveraged existing adaptation in the user’s assistive technology ecosystem, by allowing them to use their own screen-reading applications to customize the speech rate (37). Note that companies such as Kinova have for years provided the ability to interact with their devices through a variety of interfaces.
Regarding adapting levels of autonomy, Zhang et al. (36) let users of a robotic navigation aid choose whether the robot operates in full or partial autonomy, and found that users preferred less robot autonomy in environments that were less controlled (e.g., outdoor environments). These findings were mirrored by Ranganeni et al. (12).
Multiple works allowed users to adapt specific functionalities of the robot. One work enabled older adults to program custom skills on their robot, such as “raise the tray when the microwave is on” (91). Another work allowed users to customize a parameter controlling how much the robot followed its own policy versus the user’s inputs (107). Another study customized how close the robot brings an object to a user, based on the user’s self-declared mobility level (65). Yet another work allowed the user to customize the robot’s speed, speech, proximity to the user, and model of the user’s movements (84).
5.3.2.2. Who does the adaptation?
Works that allow the user to adapt the robot focus on providing the user knobs to tune the robot’s functionality. In the case of Saunders et al. (91), those knobs consisted of an entire domain-specific language designed for customizing that PAR. In other cases, researchers designed multiple discrete modes and let the user select one (36, 63). In yet another case, researchers exposed a continuous parameter to the user and let them adjust it (107).
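The continuous-parameter knob mentioned above is often formalized in the shared-control literature as a linear blend between the robot's autonomous command and the user's input; whether the cited system uses exactly this form is not specified here, so treat the following as a generic sketch with illustrative names:

```python
# A minimal sketch of a continuous "autonomy knob": alpha mixes the
# robot's autonomous command with the user's input. alpha = 1.0 is full
# autonomy; alpha = 0.0 is pure teleoperation. Names are illustrative,
# not taken from any cited system.
def blend_command(u_robot: float, u_user: float, alpha: float) -> float:
    assert 0.0 <= alpha <= 1.0, "alpha must lie in [0, 1]"
    return alpha * u_robot + (1.0 - alpha) * u_user

# a user-tuned alpha of 0.7 weights the robot's policy more heavily
assert blend_command(u_robot=1.0, u_user=0.0, alpha=0.7) == 0.7
```

The appeal of exposing a single scalar like this is that users can adjust their level of control smoothly, rather than choosing among a few discrete modes.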
Works that use shared control to customize the robot typically have the user provide some data during a calibration phase, then have the robot adapt its behavior based on that data. This includes calibrating the sensitivity of sensors (94, 34), asking users to self-report their mobility level (65), and asking users to move through their full range of arm motion (48, 13).
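Per-user sensor calibration of this kind can be sketched as recording a short baseline of readings during the calibration phase and setting the trigger threshold relative to that user's own signal range. This is a generic illustration under assumed names and a made-up margin parameter, not the calibration procedure of any cited work:

```python
# Hypothetical per-user calibration: threshold = baseline mean plus k
# standard deviations, so that only input clearly above the user's own
# baseline triggers the interface. k is an illustrative tunable margin.
from statistics import mean, stdev

def calibrate_threshold(baseline: list[float], k: float = 3.0) -> float:
    """Set a user-specific trigger threshold from calibration readings."""
    return mean(baseline) + k * stdev(baseline)

readings = [0.9, 1.1, 1.0, 1.2, 0.8]   # example calibration-phase readings
threshold = calibrate_threshold(readings)
assert threshold > max(readings)       # baseline activity will not trigger
```

The same structure applies whether the signal is muscle activity, joystick deflection, or head motion: the adaptation happens once, outside of main execution, which is why such systems fall into the "calibration phase" category discussed below.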
Works where the robot adapts to the user have the robot observe or predict some attribute about the user and change its behavior accordingly. Erickson et al. (89) track the distance between the robot and the user’s body in order to adjust the robot’s motion to the user’s contours. Ondras et al. (78) use information about when the user last took a bite and the gaze of co-diners to predict when to feed the user.
5.3.2.3. When does the adaptation take place?
A variety of works adapt the robot during its main execution. This includes works that allow users to select one of multiple modes for robot behavior (36, 63), works where the user iteratively modifies a parameter (107), and works where the robot tracks and adapts to attributes of the user (89, 78). On the other hand, all works that involve a calibration phase adapt the robot outside of main execution (94, 34, 65, 48, 13, 53, 86, 95). Further, works that involve the user pre-programming robot actions also involve adapting outside of main execution (91).
5.3.3. Future Work on Adaptation.
Although the broader field of assistive technology has had considerable focus on adaptation, summarized in Wobbrock et al. (145), it has been a smaller focus of PAR research. This presents several exciting directions for future work.
First, certain application domains tend to focus on specific types of adaptation. For example, research into interface adaptation was largely in the domain of pick-and-place (94, 34, 63), although the need has also been established in robot-assisted feeding (39, 38). Similarly, research into LoA adaptation was largely in the application domain of robotic navigation aids for people with visual impairments (36, 12), although the need has also been established in robot-assisted feeding (39) and pick-and-place (101). We call for more cross-domain research in adaptation, particularly to investigate under what conditions insights on adaptation can be transferred across domains.
Second, although there are works across the spectrum of “who does the adaptation,” there are no works to the best of our knowledge that provide guidelines on how to decide who should do the adaptation for a particular robot, user, domain, or context. The same applies to guidelines regarding when the adaptation takes place. We call for research into user perspectives regarding who should do the adaptation, when it should be done, and how that varies across the application domain, user, and context.
WHAT IS A PHYSICALLY ASSISTIVE ROBOT (PAR)?
A physically assistive robot (PAR) is a robot that provides assistance to humans through physical interaction. PARs include robots that help feed users, dress users, help users move, pick up and move objects for users, replace limbs (e.g., prosthetics), rehabilitate limbs, augment the body (e.g., exoskeletons), and more.
This contrasts with a socially assistive robot (SAR), a robot that provides assistance to humans through social interaction. Examples of SARs that are not PARs include robots that help provide autism therapy to children, serve as social companions to elderly people, and motivate their users to exercise (7).
SUMMARY POINTS.
Domains of Assistance (Sec. 3.1): There have been three main foci in PAR research: navigation, feeding, and general pick-and-place.
Involvement of Participants with Disabilities (Sec. 4.1): Nearly all formative works involved people with disabilities, while about half of summative evaluations involved solely participants without disabilities.
In-Context Deployments (Sec. 4.3.1): The few in-context deployments of PARs that have been done tend to be relegated to small sections within a paper, preventing the community from learning about and benefiting from the many research, engineering, and logistical decisions required to deploy a system.
Quantitative Metrics (Sec. 4.3.2): Most summative evaluations gather task-specific objective data (e.g., completion time, number of mode switches, success rate), and/or subjective data based on custom questionnaires measuring usability, satisfaction, feelings of safety, etc.
Interaction Interfaces (Sec. 5.1): PAR research covers a variety of input interfaces, from brain-computer interfaces to vision-based interfaces to touchscreens to kinesthetic interfaces. In contrast, comparatively less work has focused on output interfaces for the robot to communicate to the user.
Levels of Autonomy (Sec. 5.2): Most PAR research uses a single level of autonomy, despite the fact that past work has revealed that users’ preferred level of autonomy varies with the individual and context.
Adaptation (Sec. 5.3): Several studies have found that users want their interactions with PARs to be customized to their impairment, their preferences, and their context. Although some work has investigated adaptation, that work is segmented across application domains, and few works investigate tradeoffs across who is doing the adaptation and when it takes place.
FUTURE ISSUES.
Domains of Assistance (Sec. 3.1): We call on researchers to study under-researched (I)ADLs such as dressing, bathing / grooming, and managing medication. This also includes conducting formative studies to ensure the design and development of PARs in these domains is rooted in user needs (Sec. 4.2).
Involvement of Participants with Disabilities (Sec. 4.4): We call on researchers to include more participants with disabilities in their works. In addition to in-person studies, other ways to do so can include remote studies, video studies, or working with a community researcher.
In-Context Deployments (Sec. 4.4): We call on researchers to conduct and publish more in-context deployments. Experimental design theory for “n-of-1” studies can be used to add methodological and statistical rigor to PAR deployments (132).
Quantitative Metrics (Sec. 4.4): We call on researchers to use standardized quantitative metrics such as the System Usability Scale and NASA-TLX when evaluating systems, to facilitate comparisons across PAR research. We also call on researchers to work with users to ensure that objective metrics they gather align with users’ desires for system functionality.
Interaction Interfaces (Sec. 5.1.3): We call on researchers to investigate the desired output information users want to receive from their PARs, as well as how PARs’ input and output interfaces can integrate with users’ existing assistive technology ecosystem.
Levels of Autonomy (Sec. 5.2.6): We call on researchers to be intentional about which level(s) of autonomy they use and justify why that is suitable for the task(s), user(s), and context(s). We further call for more research on the tradeoffs between levels of autonomy, in order to derive guidelines for how to determine the most suitable level(s) of autonomy for a PAR.
Adaptation (Sec. 5.3.3): We call on researchers to investigate users’ preferences regarding the different forms of adaptation—what is adapted, who does the adaptation, and when it takes place—and how that varies across domain of assistance, user, and context.
PARs in Society: Developing PARs that are widely used requires engaging with government regulations (146), ethics (147), and factors that influence technology adoption (148). Therefore, we call for more research that places PARs within the context of the political, economic, and social systems that impact their usage.
ACKNOWLEDGMENTS
This paper is dedicated to the memory of Tyler Schrenk, a dear collaborator, advocate, and friend. We thank Tapomayukh Bhattacharjee for his constructive feedback on this paper. This work was partially funded by the Robert E. Dinning Career Development Professorship and the National Science Foundation awards IIS-1924435 and DGE-1762114.
Glossary
- Mobile Robot
a robot that can move its own base (e.g., a robotic vacuum cleaner)
- Manipulator
a robot that can manipulate objects; for instance, by picking them up and moving them around (e.g., a robotic arm)
- Mobile Manipulator
a robot that can move its base and manipulate objects (e.g., a humanoid robot)
- Physically Assistive Robot (PAR)
a robot that provides assistance to humans through physical interaction
- Activity of Daily Living (ADL)
“skills required to manage one’s basic physical needs” (30)
- Instrumental Activity of Daily Living (IADL)
“more complex activities related to the ability to live independently in the community” (30)
- Formative Study
a type of study that takes place in the early stages of system development and helps form the design for the system (116, 117).
- Summative Study
a type of study that takes place near the end of system development and helps one evaluate, or sum up, the system (116, 117)
- Interaction Interface
how users send information to and receive information from the robot. This includes the modality that is used for interaction, e.g., vision, audition, touch, etc.
- Adaptation
a process that changes the functionality, interface, or distinctiveness of a system to increase its relevance to an individual in a particular context (144, 145)
Footnotes
DISCLOSURE STATEMENT
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
Works with a target population of “other” either focused generally on “people with disabilities” (46, 47) or used a different form of categorization, e.g., “people in skilled nursing facilities” (48).
All papers that collected data and trained a model were considered both formative—for the data collection and analysis—and summative—for the model evaluation.
LITERATURE CITED
- 1.2023. Disability. World Health Organization [Google Scholar]
- 2.Stevens AC, Carroll DD, Courtney-Long EA, Zhang QC, Sloan ML, et al. 2016. Adults with one or more functional disabilities—united states, 2011–2014. Morbidity and Mortality Weekly Report 65(38):1021–1025 [DOI] [PubMed] [Google Scholar]
- 3.Fausset CB, Kelly AJ, Rogers WA, Fisk AD. 2011. Challenges to aging in place: Understanding home maintenance difficulties. Journal of Housing for the Elderly 25(2):125–141 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Hammarberg T 2012. The right of people with disabilities to live independently and be included in the community. Issue Paper, Council of Europe Commissioner for Human Rights [Google Scholar]
- 5.Cree RA, Okoro CA, Zack MM, Carbone E. 2020. Frequent mental distress among adults, by disability status, disability type, and selected characteristics—united states, 2018. Morbidity and Mortality Weekly Report 69(36):1238. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Mankoff J, Hayes GR, Kasnitz D. 2010. Disability studies as a source of critical inquiry for the field of assistive technology. In International ACM SIGACCESS conference on Computers and accessibility (ASSETS), pp. 3–10. ACM [Google Scholar]
- 7.Matarić MJ, Scassellati B. 2016. Socially assistive robotics. Springer handbook of robotics:1973–1994 [Google Scholar]
- 8.Song WK, Song WJ, Kim Y, Kim J. 2013. Usability test of KNRC self-feeding robot. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), pp. 1–5. IEEE [Google Scholar]
- 9.Kayukawa S, Sato D, Murata M, Ishihara T, Kosugi A, et al. 2022. How Users, Facility Managers, and Bystanders Perceive and Accept a Navigation Robot for Visually Impaired People in Public Buildings. In IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 546–553. IEEE [Google Scholar]
- 10.Grzeskowiak F, Devigne L, Pasteau F, Dutra GSV, Babel M, Guégan S. 2022. SWALKIT: A generic augmented walker kit to provide haptic feedback navigation assistance to people with both visual and motor impairments. In International Conference on Rehabilitation Robotics (ICORR), pp. 1–6. IEEE [Google Scholar]
- 11.Nguyen V 2021. Increasing independence with stretch: A mobile robot enabling functional performance in daily activities
- 12.Ranganeni V, Sinclair M, Ofek E, Miller A, Campbell J, et al. 2023. Exploring levels of control for a navigation assistant for blind travelers. arXiv preprint arXiv:2301.02336 [Google Scholar]
- 13.Zhang F, Cully A, Demiris Y. 2019. Probabilistic real-time user posture tracking for personalized robot-assisted dressing. IEEE Transactions on Robotics 35(4):873–888 [Google Scholar]
- 14.Hawkins KP, King CH, Chen TL, Kemp CC. 2012. Informing assistive robots with models of contact forces from able-bodied face wiping and shaving. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 251–258. IEEE [Google Scholar]
- 15.Cabrera ME, Bhattacharjee T, Dey K, Cakmak M. 2021. An exploration of accessible remote tele-operation for assistive mobile manipulators in the home. In IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1202–1209. IEEE [Google Scholar]
- 16.Mehta SA, Parekh S, Losey DP. 2022. Learning latent actions without human demonstrations. In International Conference on Robotics and Automation (ICRA), pp. 7437–7443. IEEE [Google Scholar]
- 17.Bhattacharjee T, Gordon EK, Scalise R, Cabrera ME, Caspi A, et al. 2020. Is more autonomy always better? exploring preferences of users with mobility impairments in robot-assisted feeding. In ACM/IEEE international conference on human-robot interaction, pp. 181–190. ACM [Google Scholar]
- 18.Newman BA, Aronson RM, Kitani K, Admoni H. 2022. Helping people through space and time: Assistance as a perspective on human-robot interaction. Frontiers in Robotics and AI:410 [Google Scholar]
- 19.Brose SW, Weber DJ, Salatin BA, Grindle GG, Wang H, et al. 2010. The role of assistive robotics in the lives of persons with disability. American Journal of Physical Medicine & Rehabilitation 89(6):509–521 [DOI] [PubMed] [Google Scholar]
- 20.Chung CS, Wang H, Cooper RA. 2013. Functional assessment and performance evaluation for assistive robotic manipulators: Literature review. The journal of spinal cord medicine 36(4):273–289 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Mohebbi A 2020. Human-robot interaction in rehabilitation and assistance: a review. Current Robotics Reports 1:131–144 [Google Scholar]
- 22.Orejuela-Zapata JF, Rodríguez S, Ramírez GL. 2019. Self-help devices for quadriplegic population: a systematic literature review. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27(4):692–701 [DOI] [PubMed] [Google Scholar]
- 23.Petrie H, Darzentas J. 2017. Older people and robotic technologies in the home: perspectives from recent research literature. In International conference on pervasive technologies related to assistive environments, pp. 29–36. ACM [Google Scholar]
- 24.Kandalan RN, Namuduri K. 2020. Techniques for constructing indoor navigation systems for the visually impaired: A review. IEEE Transactions on Human-Machine Systems 50(6):492–506 [Google Scholar]
- 25.Bajaj NM, Spiers AJ, Dollar AM. 2015. State of the art in prosthetic wrists: Commercial and research devices. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 331–338. IEEE [Google Scholar]
- 26.Lara-Barrios CM, Blanco-Ortega A, Guzmán-Valdivia CH, Bustamante Valles KD. 2018. Literature review and current trends on transfemoral powered prosthetics. Advanced Robotics 32(2):51–62 [Google Scholar]
- 27.Sarajchi M, Al-Hares MK, Sirlantzis K. 2021. Wearable lower-limb exoskeleton for children with cerebral palsy: A systematic review of mechanical design, actuation type, control strategy, and clinical evaluation. IEEE Transactions on Neural Systems and Rehabilitation Engineering 29:2695–2720 [DOI] [PubMed] [Google Scholar]
- 28.Xiloyannis M, Alicea R, Georgarakis AM, Haufe FL, Wolf P, et al. 2021. Soft robotic suits: State of the art, core technologies, and open challenges. IEEE Transactions on Robotics 38(3):1343–1362 [Google Scholar]
- 29.Maciejasz P, Eschweiler J, Gerlach-Hahn K, Jansen-Troy A, Leonhardt S. 2014. A survey on robotic devices for upper limb rehabilitation. Journal of neuroengineering and rehabilitation 11(1):1–29 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Edemekong PF, Bomgaars D, Sukumaran S, Levy SB. 2019. Activities of daily living [Google Scholar]
- 31.Taylor DM. 2018. Americans with disabilities: 2014. US Census Bureau:1–32 [Google Scholar]
- 32.Geravand M, Rampeltshammer W, Peer A. 2015. Control of mobility assistive robot for human fall prevention. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 882–887. IEEE [Google Scholar]
- 33.Ruiz-Ruiz FJ, Giammarino A, Lorenzini M, Gandarias JM, Gómez-De-Gabriel JH, Ajoudani A. 2022. Improving standing balance performance through the assistance of a mobile collaborative robot. In International Conference on Robotics and Automation (ICRA), pp. 10017–10023. IEEE [Google Scholar]
- 34.Chen Y, Paez-Granados D, Kadone H, Suzuki K. 2020. Control interface for hands-free navigation of standing mobility vehicles based on upper-body natural movements. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 11322–11329. IEEE [Google Scholar]
- 35.Chugo D, Kawazoe S, Yokota S, Hashimoto H, Katayama T, et al. 2017. Pattern based standing assistance adapted to individual subjects on a robotic walker. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1216–1221. IEEE [Google Scholar]
- 36.Zhang Y, Li Z, Guo H, Wang L, Chen Q, et al. 2023. ” I am the follower, also the boss”: Exploring Different Levels of Autonomy and Machine Forms of Guiding Robots for the Visually Impaired. In CHI Conference on Human Factors in Computing Systems, pp. 1–22. ACM [Google Scholar]
- 37.Kayukawa S, Sato D, Murata M, Ishihara T, Takagi H, et al. 2023. Enhancing Blind Visitor’s Autonomy in a Science Museum Using an Autonomous Navigation Robot. In Conference on Human Factors in Computing Systems (CHI), pp. 1–14. ACM [Google Scholar]
- 38.Nanavati A, Alves-Oliveira P, Schrenk T, Gordon EK, Cakmak M, Srinivasa SS. 2023. Design Principles for Robot-Assisted Feeding in Social Contexts. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), HRI ‘23, pp. 24–33. New York, NY, USA: ACM [Google Scholar]
- 39.Bhattacharjee T, Cabrera ME, Caspi A, Cakmak M, Srinivasa SS. 2019. A community-centered design framework for robot-assisted feeding systems. In International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), pp. 482–494. ACM [Google Scholar]
- 40. Al-Halimi RK, Moussa M. 2016. Performing complex tasks by users with upper-extremity disabilities using a 6-DOF robotic arm: a study. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25(6):686–693
- 41. Beer JM, Smarr CA, Chen TL, Prakash A, Mitzner TL, et al. 2012. The domesticated robot: design guidelines for assisting older adults to age in place. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 335–342
- 42. Güldenpfennig F, Mayer P, Panek P, Fitzpatrick G. 2019. An autonomy-perspective on the design of assistive technology experiences of people with multiple sclerosis. In CHI Conference on Human Factors in Computing Systems, pp. 1–14. ACM
- 43. Sivakanthan S, Blaauw E, Greenhalgh M, Koontz AM, Vegter R, Cooper RA. 2021. Person transfer assist systems: a literature review. Disability and Rehabilitation: Assistive Technology 16(3):270–279
- 44. Halbach T, Solheim I, Ytrehus S, Schulz T. 2018. A mobile application for supporting dementia relatives: a case study. In Transforming our World Through Design, Diversity and Education. IOS Press
- 45. Sääskilahti K, Kangaskorte R, Pieskä S, Jauhiainen J, Luimula M. 2012. Needs and user acceptance of older adults for mobile service robot. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 559–564. IEEE
- 46. Chang PH, Park SR, Cho GR, Jung JH, Park SH. 2005. Development of a robot arm assisting people with disabilities at working place using task-oriented design. In International Conference on Rehabilitation Robotics (ICORR), pp. 482–487. IEEE
- 47. Erickson Z, Gu Y, Kemp CC. 2020. Assistive VR Gym: Interactions with real people to improve virtual assistive robots. In IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 299–306. IEEE
- 48. Zhang F, Cully A, Demiris Y. 2017. Personalized robot-assisted dressing using user modeling in latent spaces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3603–3610. IEEE
- 49. Jin P, Ohn-Bar E, Kitani K, Asakawa C. 2019. A-EXP4: Online Social Policy Learning for Adaptive Robot-Pedestrian Interaction. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5086–5093. IEEE
- 50. Azenkot S, Feng C, Cakmak M. 2016. Enabling building service robots to guide blind people: a participatory design approach. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 3–10. ACM
- 51. Jiang H, Wachs JP, Duerstock BS. 2013. Integrated vision-based robotic arm interface for operators with upper limb mobility impairments. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1–6. Seattle, WA: IEEE
- 52. Quere G, Hagengruber A, Iskandar M, Bustamante S, Leidner D, et al. 2020. Shared control templates for assistive robotics. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1956–1962. IEEE
- 53. Chau S, Aspelund S, Mukherjee R, Lee MH, Ranganathan R, Kagerer F. 2017. A five degree-of-freedom body-machine interface for children with severe motor impairments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3877–3882. IEEE
- 54. Agrawal S, West ME, Hayes B. 2022. A Novel Perceptive Robotic Cane with Haptic Navigation for Enabling Vision-Independent Participation in the Social Dynamics of Seat Choice. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9156–9163. IEEE
- 55. Oyama E, Yoon WK, Wakita Y, Tanaka H, Yoshikawa M, et al. 2012. Development of evaluation indexes for assistive robots based on ICF. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 221–227. IEEE
- 56. Cio YSLK, Raison M, Menard CL, Achiche S. 2019. Proof of concept of an assistive robotic arm control using artificial stereovision and eye-tracking. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27(12):2344–2352
- 57. Cabrera ME, Dey K, Krishnaswamy K, Bhattacharjee T, Cakmak M. 2021. Cursor-based Robot Tele-manipulation through 2D-to-SE2 Interfaces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4230–4237. IEEE
- 58. Gallenberger D, Bhattacharjee T, Kim Y, Srinivasa SS. 2019. Transfer depends on acquisition: Analyzing manipulation strategies for robotic feeding. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 267–276. IEEE
- 59. Wachaja A, Agarwal P, Zink M, Adame MR, Möller K, Burgard W. 2015. Navigating blind people with a smart walker. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6014–6019. IEEE
- 60. Paulo J, Peixoto P, Nunes UJ. 2017. ISR-AIWalker: Robotic walker for intuitive and safe mobility assistance and gait analysis. IEEE Transactions on Human-Machine Systems 47(6):1110–1122
- 61. Burgstahler S, Doe T, et al. 2004. Disability-related simulations: If, when, and how to use them in professional development. Review of Disability Studies: An International Journal 1(2)
- 62. Motahar T, Farden MF, Sarkar DP, Islam MA, Cabrera ME, Cakmak M. 2019. SHEBA: A low-cost assistive robot for older adults in the developing world. In IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1–8. IEEE
- 63. Quintero CP, Ramirez O, Jägersand M. 2015. VIBI: Assistive vision-based interface for robot manipulation. In IEEE International Conference on Robotics and Automation (ICRA), pp. 4458–4463. IEEE
- 64. Cook AM, Bentz B, Harbottle N, Lynch C, Miller B. 2005. School-based use of a robotic arm system by children with disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13(4):452–460
- 65. Ardón P, Cabrera ME, Pairet E, Petrick RP, Ramamoorthy S, et al. 2021. Affordance-aware handovers with human arm mobility constraints. IEEE Robotics and Automation Letters 6(2):3136–3143
- 66. Park D, Kim H, Hoshi Y, Erickson Z, Kapusta A, Kemp CC. 2017. A multimodal execution monitor with anomaly classification for robot-assisted feeding. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5406–5413. IEEE
- 67. Chalvatzaki G, Papageorgiou XS, Tzafestas CS, Maragos P. 2018. Augmented human state estimation using interacting multiple model particle filters with probabilistic data association. IEEE Robotics and Automation Letters 3(3):1872–1879
- 68. Pan YT, Shih CC, DeBuys C, Hur P. 2018. Design of a Sensory Augmentation Walker with a Skin Stretch Feedback Handle. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 832–837. IEEE
- 69. Andreetto M, Divan S, Fontanelli D, Palopoli L. 2017. Harnessing steering singularities in passive path following for robotic walkers. In IEEE International Conference on Robotics and Automation (ICRA), pp. 2426–2432. IEEE
- 70. Chalvatzaki G, Papageorgiou XS, Tzafestas CS. 2017. Towards a user-adaptive context-aware robotic walker with a pathological gait assessment system: First experimental study. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5037–5042. IEEE
- 71. Chen X, Ragonesi C, Galloway JC, Agrawal SK. 2011. Training toddlers seated on mobile robots to drive indoors amidst obstacles. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19(3):271–279
- 72. Jin N, Kang J, Agrawal S. 2015. Design of a novel assist interface where toddlers walk with a mobile robot supported at the waist. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 577–582. IEEE
- 73. Efthimiou E, Fotinea SE, Vacalopoulou A, Papageorgiou XS, Karavasili A, Goulas T. 2019. User centered design in practice: adapting HRI to real user needs. In ACM International Conference on PErvasive Technologies Related to Assistive Environments, pp. 425–429
- 74. MacNamara S, Lacey G. 2000. A smart walker for the frail visually impaired. In IEEE International Conference on Robotics and Automation (ICRA), vol. 2, pp. 1354–1359. IEEE
- 75. Chuang TK, Lin NC, Chen JS, Hung CH, Huang YW, et al. 2018. Deep trail-following robotic guide dog in pedestrian environments for people who are blind and visually impaired: learning from virtual and real worlds. In IEEE International Conference on Robotics and Automation (ICRA), pp. 5849–5855. IEEE
- 76. Arevalo Arboleda S, Pascher M, Baumeister A, Klein B, Gerken J. 2021. Reflecting upon Participatory Design in Human-Robot Collaboration for People with Motor Disabilities: Challenges and Lessons Learned from Three Multiyear Projects. In PErvasive Technologies Related to Assistive Environments Conference, pp. 147–155. ACM
- 77. Ljungblad S. 2023. Applying “designerly framing” to understand assisted feeding as social aesthetic bodily experiences. ACM Transactions on Human-Robot Interaction 12(2):1–23
- 78. Ondras J, Anwar A, Wu T, Bu F, Jung M, et al. 2022. Human-robot commensality: Bite timing prediction for robot-assisted feeding in groups. In Conference on Robot Learning. PMLR
- 79. Lauretti C, Cordella F, Di Luzio FS, Saccucci S, Davalli A, et al. 2017. Comparative performance analysis of M-IMU/EMG and voice user interfaces for assistive robots. In International Conference on Rehabilitation Robotics (ICORR), pp. 1001–1006. London: IEEE
- 80. Losey DP, Srinivasan K, Mandlekar A, Garg A, Sadigh D. 2020. Controlling assistive robots with learned latent actions. In IEEE International Conference on Robotics and Automation (ICRA), pp. 378–384. IEEE
- 81. Fang Q, Kyrarini M, Ristic-Durrant D, Gräser A. 2018. RGB-D camera based 3D human mouth detection and tracking towards robotic feeding assistance. In PErvasive Technologies Related to Assistive Environments Conference, pp. 391–396
- 82. Perera CJ, Lalitharatne TD, Kiguchi K. 2017. EEG-controlled meal assistance robot with camera-based automatic mouth position tracking and mouth open detection. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1760–1765. IEEE
- 83. Ricardez GAG, Takamatsu J, Ogasawara T, Alfaro JS. 2018. Quantitative comfort evaluation of eating assistive devices based on interaction forces estimation using an accelerometer. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 909–914. IEEE
- 84. Canal G, Torras C, Alenyà G. 2021. Are preferences useful for better assistance? A physically assistive robotics user study. ACM Transactions on Human-Robot Interaction (THRI) 10(4):1–19
- 85. Goldau FF, Shastha TK, Kyrarini M, Gräser A. 2019. Autonomous multi-sensory robotic assistant for a drinking task. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 210–216. IEEE
- 86. Schultz JR, Slifkin AB, Yu H, Schearer EM. 2022. Proof-of-Concept: A Hands-Free Interface for Robot-Assisted Self-Feeding. In International Conference on Rehabilitation Robotics (ICORR), pp. 1–6. IEEE
- 87. Kapusta A, Yu W, Bhattacharjee T, Liu CK, Turk G, Kemp CC. 2016. Data-driven haptic perception for robot-assisted dressing. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 451–458. IEEE
- 88. Erickson Z, Clever HM, Turk G, Liu CK, Kemp CC. 2018. Deep haptic model predictive control for robot-assisted dressing. In IEEE International Conference on Robotics and Automation (ICRA), pp. 4437–4444. IEEE
- 89. Erickson Z, Collier M, Kapusta A, Kemp CC. 2018. Tracking human pose during robot-assisted dressing using single-axis capacitive proximity sensing. IEEE Robotics and Automation Letters 3(3):2245–2252
- 90. Grice PM, Lee A, Evans H, Kemp CC. 2012. The wouse: A wearable wince detector to stop assistive robots. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 165–172. IEEE
- 91. Saunders J, Syrdal DS, Koay KL, Burke N, Dautenhahn K. 2015. “Teach me–show me”: end-user personalization of a smart home and companion robot. IEEE Transactions on Human-Machine Systems 46(1):27–40
- 92. Shafti A, Orlov P, Faisal AA. 2019. Gaze-based, context-aware robotic system for assisted reaching and grasping. In International Conference on Robotics and Automation (ICRA), pp. 863–869. IEEE
- 93. Choi YS, Anderson CD, Glass JD, Kemp CC. 2008. Laser pointers and a touch screen: intuitive interfaces for autonomous mobile manipulation for the motor impaired. In International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), pp. 225–232. ACM
- 94. Baldi TL, Spagnoletti G, Dragusanu M, Prattichizzo D. 2017. Design of a wearable interface for lightweight robotic arm for people with mobility impairments. In International Conference on Rehabilitation Robotics (ICORR), pp. 1567–1573. IEEE
- 95. Jain S, Farshchiansadegh A, Broad A, Abdollahi F, Mussa-Ivaldi F, Argall B. 2015. Assistive robotic manipulation through shared autonomy and a body-machine interface. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 526–531. IEEE
- 96. Poirier S, Routhier F, Campeau-Lecours A. 2019. Voice Control Interface Prototype for Assistive Robots for People Living with Upper Limb Disabilities. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 46–52. Toronto, ON, Canada: IEEE
- 97. Tidoni E, Gergondet P, Fusco G, Kheddar A, Aglioti SM. 2016. The role of audio-visual feedback in a thought-based control of a humanoid robot: a BCI study in healthy and spinal cord injured people. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25(6):772–781
- 98. Zhang J, Zhuang L, Wang Y, Zhou Y, Meng Y, Hua G. 2013. An egocentric vision based assistive co-robot. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 1–7. IEEE
- 99. Jackowski A, Gebhard M, Thietje R. 2017. Head motion and head gesture-based robot control: A usability study. IEEE Transactions on Neural Systems and Rehabilitation Engineering 26(1):161–170
- 100. House B, Malkin J, Bilmes J. 2009. The VoiceBot: a voice controlled robot arm. In SIGCHI Conference on Human Factors in Computing Systems, pp. 183–192. Boston, MA, USA: ACM
- 101. Olatunji SA, Oron-Gilad T, Markfeld N, Gutman D, Sarne-Fleischmann V, Edan Y. 2021. Levels of automation and transparency: interaction design considerations in assistive robots for older adults. IEEE Transactions on Human-Machine Systems 51(6):673–683
- 102. Herlant LV, Holladay RM, Srinivasa SS. 2016. Assistive teleoperation of robot arms via automatic time-optimal mode switching. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 35–42. ACM
- 103. Wang Y, Xu G, Song A, Xu B, Li H, et al. 2018. Continuous shared control for robotic arm reaching driven by a hybrid gaze-brain machine interface. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4462–4467. IEEE
- 104. Wang MY, Kogkas AA, Darzi A, Mylonas GP. 2018. Free-view, 3D gaze-guided, assistive robotic system for activities of daily living. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2355–2361. IEEE
- 105. Jeon HJ, Losey DP, Sadigh D. 2020. Shared autonomy with learned latent actions. Robotics: Science and Systems (RSS)
- 106. Gopinath DE, Argall BD. 2020. Active intent disambiguation for shared control robots. IEEE Transactions on Neural Systems and Rehabilitation Engineering 28(6):1497–1506
- 107. Gopinath D, Jain S, Argall BD. 2016. Human-in-the-loop optimization of shared autonomy in assistive robotics. IEEE Robotics and Automation Letters 2(1):247–254
- 108. Vu DS, Allard UC, Gosselin C, Routhier F, Gosselin B, Campeau-Lecours A. 2017. Intuitive adaptive orientation control of assistive robots for people living with upper limb disabilities. In International Conference on Rehabilitation Robotics (ICORR), pp. 795–800. IEEE
- 109. Choi YS, Chen T, Jain A, Anderson C, Glass JD, Kemp CC. 2009. Hand it over or set it down: A user study of object delivery with an assistive mobile manipulator. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 736–743. IEEE
- 110. Fukuda O, Tsuji T, Kaneko M. 1997. An EMG controlled robotic manipulator using neural networks. In IEEE International Workshop on Robot and Human Communication (RO-MAN), pp. 442–447. IEEE
- 111. Bajones M, Fischinger D, Weiss A, Puente PDL, Wolf D, et al. 2019. Results of field trials with a mobile service robot for older adults in 16 private households. ACM Transactions on Human-Robot Interaction (THRI) 9(2):1–27
- 112. Bonani M, Oliveira R, Correia F, Rodrigues A, Guerreiro T, Paiva A. 2018. What my eyes can’t see, A robot can show me: Exploring the collaboration between blind people and robots. In International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), pp. 15–27. ACM
- 113. Kronreif G, Prazak B, Mina S, Kornfeld M, Meindl M, Furst M. 2005. PlayROB: robot-assisted playing for children with severe physical disabilities. In International Conference on Rehabilitation Robotics (ICORR), pp. 193–196. IEEE
- 114. Hansen ST, Bak T, Risager C. 2012. An adaptive game algorithm for an autonomous, mobile robot: A real world study with elderly users. In IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 892–897. IEEE
- 115. Cook AM, Meng MH, Gu JJ, Howery K. 2002. Development of a robotic device for facilitating learning by children who have severe disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering 10(3):178–187
- 116. Lindblom J, Alenljung B, Billing E. 2020. Evaluating the user experience of human–robot interaction. Human-Robot Interaction: Evaluation Methods and Their Standardization:231–256
- 117. Hartson R, Pyla PS. 2012. The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Elsevier
- 118. Brulé E, Tomlinson BJ, Metatla O, Jouffrais C, Serrano M. 2020. Review of quantitative empirical evaluations of technology for people with visual impairments. In Conference on Human Factors in Computing Systems (CHI), pp. 1–14. ACM
- 119. Pascher M, Baumeister A, Schneegass S, Klein B, Gerken J. 2021. Recommendations for the Development of a Robotic Drinking and Eating Aid: An Ethnographic Study. In Human-Computer Interaction–INTERACT 2021: 18th IFIP TC 13 International Conference, Bari, Italy, August 30–September 3, 2021, Proceedings, Part I, pp. 331–351. Springer
- 120. Gergle D, Tan DS. 2014. Experimental research in HCI. Ways of Knowing in HCI:191–227
- 121. Zimmerman M, Bagchi S, Marvel J, Nguyen V. 2022. An analysis of metrics and methods in research from human-robot interaction conferences, 2015–2021. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 644–648. ACM
- 122. Sofaer S. 1999. Qualitative methods: what are they and why use them? Health Services Research 34(5 Pt 2):1101
- 123. Bedrosian J. 1995. Limitations in the use of nondisabled subjects in AAC research. Augmentative and Alternative Communication 11(1):6–10
- 124. Tigwell GW. 2021. Nuanced perspectives toward disability simulations from digital designers, blind, low vision, and color blind people. In Conference on Human Factors in Computing Systems (CHI), pp. 1–15. ACM
- 125. Lewis JR. 2018. The system usability scale: past, present, and future. International Journal of Human–Computer Interaction 34(7):577–590
- 126. Hart SG, Staveland LE. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, vol. 52. Elsevier
- 127. Grier RA. 2015. How high is high? A meta-analysis of NASA-TLX global workload scores. In Human Factors and Ergonomics Society Annual Meeting, vol. 59, pp. 1727–1731. Los Angeles, CA: SAGE Publications
- 128. Grice PM, Kemp CC. 2019. In-home and remote use of robotic body surrogates by people with profound motor deficits. PLoS ONE 14(3):e0212904
- 129. Madan R, Jenamani RK, Nguyen VT, Moustafa A, Hu X, et al. 2022. SPARCS: Structuring physically assistive robotics for caregiving with stakeholders-in-the-loop. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 641–648. IEEE
- 130. Choi YS, Deyle T, Chen T, Glass JD, Kemp CC. 2009. A list of household objects for robotic retrieval prioritized by people with ALS. In IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 510–517. IEEE
- 131. Ye R, Xu W, Fu H, Jenamani RK, Nguyen V, et al. 2022. RCareWorld: A Human-centric Simulation World for Caregiving Robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 33–40. IEEE
- 132. Tate RL, Perdices M. 2015. N-of-1 trials in the behavioral sciences. The Essential Guide to N-of-1 Trials in Health:19–41
- 133. Augstein M, Neumayr T. 2019. A human-centered taxonomy of interaction modalities and devices. Interacting with Computers 31(1):27–58
- 134. Baraka K, Veloso MM. 2018. Mobile service robot state revealing through expressive lights: formalism, design, and evaluation. International Journal of Social Robotics 10:65–92
- 135. Szafir D, Mutlu B, Fong T. 2014. Communication of intent in assistive free flyers. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 358–365. ACM
- 136. Tellex S, Knepper R, Li A, Rus D, Roy N. 2014. Asking for help using inverse semantics. In Robotics: Science and Systems (RSS)
- 137. Dragan AD, Lee KC, Srinivasa SS. 2013. Legibility and predictability of robot motion. In ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 301–308. ACM
- 138. Beer JM, Fisk AD, Rogers WA. 2014. Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction 3(2):74–99
- 139. Delgado Bellamy D, Chance G, Caleb-Solly P, Dogramadzi S. 2021. Safety assessment review of a dressing assistance robot. Frontiers in Robotics and AI 8:667316
- 140. Jevtić A, Valle AF, Alenyà G, Chance G, Caleb-Solly P, et al. 2018. Personalized robot assistant for support in dressing. IEEE Transactions on Cognitive and Developmental Systems 11(3):363–374
- 141. Cakmak M. 2017. What if your robot designed your next home? In Human Computer Interaction Consortium (HCIC). HCIC
- 142. Nguyen H, Kemp CC. 2008. Bio-inspired assistive robotics: Service dogs as a model for human-robot interaction and mobile manipulation. In IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pp. 542–549. IEEE
- 143. Nguyen H, Ciocarlie M, Hsiao K, Kemp CC. 2013. ROS Commander (ROSCo): Behavior creation for home robots. In IEEE International Conference on Robotics and Automation (ICRA), pp. 467–474. IEEE
- 144. Fan H, Poole MS. 2006. What is personalization? Perspectives on the design and implementation of personalization in information systems. Journal of Organizational Computing and Electronic Commerce 16(3–4):179–202
- 145. Wobbrock JO, Kane SK, Gajos KZ, Harada S, Froehlich J. 2011. Ability-based design: Concept, principles and examples. ACM Transactions on Accessible Computing (TACCESS) 3(3):1–27
- 146. Caleb-Solly P, Harper C, Dogramadzi S. 2021. Standards and regulations for physically assistive robots. In IEEE International Conference on Intelligence and Safety for Robotics (ISR), pp. 259–263. IEEE
- 147. Torras C. 2019. Assistive robotics: Research challenges and ethics education initiatives. Dilemata (30):63–77
- 148. Mossfeldt Nickelsen NC. 2019. Imagining and tinkering with assistive robotics in care for the disabled. Paladyn, Journal of Behavioral Robotics 10(1):128–139
