Abstract
Tele-operated social robots (telerobots) offer an innovative means of allowing children who are medically restricted to their homes (MRH) to return to their local schools and physical communities. Most commercially available telerobots have three foundational features that facilitate child–robot interaction: remote mobility, synchronous two-way vision capabilities, and synchronous two-way audio capabilities. We conducted a comparative analysis between the Toyota Human Support Robot (HSR) and commercially available telerobots, focusing on these foundational features. Children who used these robots and these features on a daily basis to attend school were asked to pilot the HSR in a simulated classroom for learning activities. The HSR has three additional features that are not available on commercial telerobots: (1) a pan-tilt camera, (2) mapping and autonomous navigation, and (3) a robot arm and gripper that allow children to “reach” into remote environments. Participants were therefore also asked to evaluate the use of these features for learning experiences. To expand on earlier work on the use of telerobots by remote children, this study provides novel empirical findings on (1) the capabilities of the Toyota HSR for robot-mediated learning similar to commercially available telerobots and (2) the efficacy of novel HSR features (i.e., pan-tilt camera, autonomous navigation, robot arm/hand hardware) for future learning experiences. We found that among our participants, autonomous navigation and arm/gripper hardware were rated as highly valuable for social and learning activities.
Keywords: Social Robotics, Robot-Mediated Learning, Health, Access, Equity, Virtual Inclusion
Introduction
A large and growing population of children who are medically restricted to their homes (MRH) are socially isolated and physically segregated (Ahumada-Newhart and Eccles 2020). As advances in pediatric medicine have changed the outcome of many once-fatal childhood illnesses, increasing numbers of children are surviving these illnesses (e.g., cancer, heart disease, immunodeficiency disorders). As a result, millions of children and adolescents in the US now live with these medical conditions for long periods of time (Sexson and Madan-Swain 1993). It is estimated that as many as 2.5 million US children experience severe disruption to academic attendance due to symptoms or treatments for illness (US Census Bureau 2020; CDC 2016). These children are unable to physically attend school, which deprives them of the academic as well as social-emotional experiences needed for healthy development. The impact of missing school on academic development is easy to see, but attending school also provides the social milieu needed for children to mature socially and emotionally (Durlak et al. 2011).
In the US, MRH children are typically offered home instruction services for 4 to 5 hours/week (Disability Rights California 2012). Although home instruction may provide some additional academic lessons, it provides little opportunity for behavioral or social emotional learning. Possibly as a result, many children who are MRH experience loneliness and depression (Bennett 1994; Weitzman 1986), which can undermine their medical recovery as well as their normal development. Currently, children who are MRH are missing opportunities for inclusive educational experiences and social interactions critical for behavioral and socio-emotional development. Telerobots are a promising innovative technology that may change this situation.
Telerobots provide children who are MRH with much-needed autonomy in navigating physical classroom and school environments (Ahumada-Newhart and Olson 2019; Ahumada-Newhart, Warschauer, and Sender 2016). The ability to move independently in the local environment is particularly valuable for children attending school, as it reduces the social burden on classmates to help move the robot body throughout the classroom and school. The burden of social debt has been covered in the literature for adult use of telepresence technologies (e.g., with wearable and movable free-standing devices) by Rae et al. (Johnson et al. 2015). Similarly, in studies on telerobots in the classroom, classmates complained when the telerobot lost connectivity and had to be carried or pushed on a cart (Ahumada-Newhart and Olson 2017; Ahumada-Newhart, Warschauer, and Sender 2016). Moreover, research has also identified the need for additional robot features such as an arm/hand and navigation to facilitate a more immersive experience (Ahumada-Newhart and Olson 2019).
In our study, the Toyota Human Support Robot (HSR) was explored as a robot system for this use case, as it has similar basic features to commercially available telerobots in the areas of vision, hearing, and mobility, with the addition of innovative features such as full pan-tilt vision, mapping and semiautonomous navigation, and robot arm/gripper hardware. In this article, we outline related work, system details for the HSR, methods for our study, results of this study, and a discussion of the findings. We suggest that autonomy, mobility, and manipulation capabilities are features that could benefit remote children using telerobots to attend school.
Literature Review
Traditional Telerobots
Much work has been done on telepresence robots in corporate settings (Johnson et al. 2015; Kristoffersson, Coradeschi, and Loutfi 2013; Lee and Takayama 2011; Takayama and Go 2012; K. M. Tsui, Desai, et al. 2011; Desai et al. 2011), health care environments (Kristoffersson, Coradeschi, and Loutfi 2013; K. M. Tsui and Yanco 2007), academic conferences (Neustaedter et al. 2016; Rae and Neustaedter 2017), and aging in place (Broekens, Heerink, and Rosendal 2009; Sabelli, Kanda, and Hagita 2011; Tsai et al. 2007; K. M. Tsui, Desai, et al. 2011). In addition, work has been done on candidate requirements and survey questions for physical avatar systems to guide researchers during evaluation of existing systems (Riek 2007). Our contributions to the literature build on earlier work with children in schools (Ahumada-Newhart and Eccles 2020; Ahumada-Newhart and Olson 2017; Ahumada-Newhart 2014) to test and measure additional robot design features beyond traditional vision, audio, and mobility that are currently available. Our study pilots and measures the efficacy of the HSR’s features to improve robot-mediated learning experiences for children who are users of telerobots for school attendance.
For an immersive experience, telerobots need quality camera features that allow the remote user to view objects and people in the physical environment as if the remote person were present in the local environment. Earlier work has reported the need for camera pan-tilt features (Ahumada-Newhart and Olson 2019; Desai et al. 2011; Venolia et al. 2010), camera zoom capabilities (Ahumada-Newhart and Olson 2019; Johnson et al. 2015; Rae, Mutlu, and Takayama 2014; Rae and Neustaedter 2017; Venolia et al. 2010), and head movement separate from the body (Ahumada-Newhart and Olson 2019; Desai et al. 2011; Sirkin et al. 2011; K. Tsui, Norton, et al. 2011). Head movement independent of the robot body produces behavior that more closely mimics the physical movements of physically present children, allowing classmates to view expressive gestures such as their peer turning their “head” toward objects or people they find interesting. In addition, these features contribute to increased anthropomorphization (i.e., ascription of human qualities to the robot) and acceptance of the embodied HSR by classmates (Ahumada-Newhart, Warschauer, and Sender 2016).
Research has identified the value of arm/hand hardware for gesturing and pointing (Adalgeirsson and Breazeal 2010). Earlier work has also identified that remote children on a robot needed help with activities like opening doors and pushing elevator buttons (Ahumada-Newhart and Olson 2019). Commercially available telerobots do not have arms or hands to allow students to open doors or push elevator buttons. In research studies, a student was reported as “crashing” into an Americans with Disabilities Act (ADA)–compliant door button to gain access to a school building (Ahumada-Newhart and Olson 2019). Arm/hand hardware may greatly improve accessibility for remote students, as these features may allow for increased access to entrances and different floors of school buildings.
Toyota Human Support Robot (HSR)
The Toyota HSR runs the Robot Operating System (ROS), a standard open-source software platform used by many commercial and research robots (Quigley et al. 2009; Liu et al. 2019). ROS modularizes the sensor/actuator drivers and the software controllers in a distributed framework, which simplifies the integration of custom task-specific software modules. Importantly, ROS provides straightforward transformation of spatial coordinates in the environment among the frames of reference of the different parts of the robot.
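For readers unfamiliar with this mechanism, the short sketch below (ours, for illustration only) shows how a point expressed in the HSR's RGB-D camera frame can be transformed into the map frame with the standard rospy/tf API; the camera frame name is an assumption based on common HSR configurations, not taken from our software.

    # Illustrative sketch only: transforming a point between ROS coordinate frames.
    # The camera frame name is an assumed HSR convention.
    import rospy
    import tf
    from geometry_msgs.msg import PointStamped

    rospy.init_node("frame_transform_example")
    listener = tf.TransformListener()

    point_in_camera = PointStamped()
    point_in_camera.header.frame_id = "head_rgbd_sensor_rgb_frame"  # assumed frame name
    point_in_camera.header.stamp = rospy.Time(0)  # use the latest available transform
    point_in_camera.point.x, point_in_camera.point.y, point_in_camera.point.z = 0.1, 0.0, 1.2

    listener.waitForTransform("map", point_in_camera.header.frame_id,
                              rospy.Time(0), rospy.Duration(4.0))
    point_in_map = listener.transformPoint("map", point_in_camera)
    rospy.loginfo("Point in map frame: %s", point_in_map.point)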
The HSR has an eight-degrees-of-freedom (8-DoF) body in addition to its pan-tilt head. Among other things, this allows the HSR to move its base omnidirectionally and perform complex arm maneuvers that are not possible with other telerobots. The HSR base, torso (lift), and arm have 3-DoF, 1-DoF, and 4-DoF mobility, respectively (Yamamoto et al. 2018).
The HSR (Figure 1) provides an array of sensors of multiple modalities to capture inputs from the environment and other agents. The “head” of the HSR mounts a microphone, a depth-sensing Red Green Blue (RGB-D) camera, a wide-angle camera, and a higher-resolution stereo RGB camera, which together capture sound, images, and the depth map of the scene. The image and sound inputs can be used for telepresence communication and for autonomous object and speech recognition. The RGB-D camera and stereo camera scan the three-dimensional structures of dynamic indoor and outdoor environments, respectively. Although these sensors are all front-facing, the 320-degree pan and 120-degree tilt allow the robot to capture visual input from a large part of the three-dimensional environment. The hand of the HSR mounts an additional camera and a force sensor, providing visual input and measuring the force at the wrist while gripping an object.
Figure 1:

HSR during a Study Session
In addition, the HSR achieves both seated and standing heights, ideal for participating in classroom settings. The HSR weighs 37 kg with a max speed of 0.8 km/hour. Figure 2 presents an overview of the HSR’s front and side views with added specifications and dimensions.
Figure 2:

HSR Specifications
Source: Toyota Motor Corporation (Toyota 2015)
Technical Innovation and HSR System Details
Remote Access
We developed a graphical user interface (GUI) to allow remote children to control the HSR and interact with the teacher and classmates via robot-mediated learning activities. The interface program runs on a local computer connected to the HSR via a high-bandwidth wireless network, since it uses large amounts of real-time input data from the multiple sensors mounted on the HSR. The remotely located student logs into this local computer via a freely available screen-sharing application called TeamViewer (Drugarin, Draghici, and Raduca 2016) and is able to view the GUI on the screen of their home device. The student also logs into the robot directly using another free application, Google Hangouts (Bolton 2013), to interact with the teacher and other students. Google Hangouts allows the remote student to appear on the HSR’s face screen to facilitate real-time audio and visual communication, as shown in Figure 3. The remote student has both applications open on their home device to control the HSR and interact in the local environment. TeamViewer and Google Hangouts were used for rapid prototyping in this pilot study. It should be noted that future versions of the GUI will not rely on third-party software for accessing the robot controls or the audio/video feed.
Figure 3:

Process for Remote Access of the HSR
Graphical User Interface (GUI)
The HSR user interface (i.e., the GUI in Figures 4 and 5) was developed by our research team based on earlier work on development of a brain-inspired network of schema consolidation to help the HSR predict location of objects based on context (Hwu, Kashyap, and Krichmar 2020), and a recurrent neural model inspired by the smooth-pursuit eye movement in primates to track visual targets predictively (Kashyap et al. 2018).
Figure 4:

The CARL-SR Graphical User Interface as Seen by Remote Children
Figure 5:

Arm and Grasp Hardware and Control Window of the GUI as Seen by Remote Children
Source: Toyota Motor Corporation (Toyota 2015)
The CARL-SR user interface and Google Hangouts allow the remotely located student to (1) perform easy two-way audio interaction with the teacher and other students, (2) see through the cameras mounted on HSR, (3) navigate to any location in the classroom by either manual driving or autonomous driving avoiding obstacles, (4) reach out to any object in the room autonomously by clicking on the camera image, and (5) pick and place a hand-graspable object of weight less than 1.2 kg at any location in the classroom. Figure 4 depicts the main GUI seen by the remotely located student when they first log in using TeamViewer. The GUI is broken into the following sections to facilitate navigation of the system:
Color image from the RGB-D sensor that can be switched to the high-resolution camera,
Map of the classroom for auto navigation by clicking at target location,
Button to activate click-at-target feature,
Buttons to manually drive the robot base and head,
Buttons to zoom in and out on the camera image,
Button to switch between RGB-D and high-resolution camera,
Buttons to move the arm (i.e., raise their hand to gain attention, lower their hand),
Buttons to adjust the height of the robot body, and a button to open the arm/grasp control window.
The GUI, which is written in Python, was developed using Python libraries pyQt and rospy. The software is publicly available on GitLab.1 We refer to different Python functions and routines in the following sections where appropriate. This will explain our interface implementation approach and allow interested readers to follow the algorithmic logic. However, it should be noted that many of the ROS libraries and robot commands are specific to the Toyota HSR and will not execute on other systems.
The GUI is initiated by calling the carlsr_ui.py routine from a computer command line. The Python software carlsr_ui.py contains callback functions for each of the pyQT buttons, widgets, and sliders shown in Figures 4 and 5. For example, when a user clicks the up arrow in the Move Base section (see Figure 4[d]), the clicked_forward function in button_control.py is called and a forward movement command is published in ROS. Other buttons in the GUI work similarly.
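As a concrete illustration of this wiring, the sketch below shows the general pyQt-button-to-ROS-publisher pattern; it is not the actual button_control.py code, and the velocity topic name is an assumption based on common HSR setups.

    # Illustrative pattern only: a pyQt button whose callback publishes a ROS command.
    # The velocity topic name is an assumed HSR convention.
    import rospy
    from geometry_msgs.msg import Twist
    from PyQt5.QtWidgets import QApplication, QPushButton

    rospy.init_node("carlsr_ui_sketch", anonymous=True)
    vel_pub = rospy.Publisher("/hsrb/command_velocity", Twist, queue_size=1)

    def clicked_forward():
        """Publish a small forward velocity when the 'up' button is pressed."""
        cmd = Twist()
        cmd.linear.x = 0.1  # a gentle forward speed in meters per second
        vel_pub.publish(cmd)

    app = QApplication([])
    forward_button = QPushButton("Move Base: forward")
    forward_button.clicked.connect(clicked_forward)  # pyQt signal -> Python callback
    forward_button.show()
    app.exec_()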
Cameras
On the HSR, the remote child can switch between standard-resolution and high-resolution cameras based on task requirements (Figure 4[a, f]). The high-resolution camera is better suited to reading text than to navigation, since its output data size is larger. Manually driving the robot forward, rotating the omnidirectional base, and pan-tilt of the head can be accomplished using the buttons shown in Figure 4(d). When a user clicks on the up button in the Move Head section (see Figure 4[d]), the clicked_up function in button_control.py is called and a head_pan_joint and head_tilt_joint movement command is published in ROS that causes the camera to tilt upward. When the user clicks on the “<” button in the Move Head section, a head_pan_joint and head_tilt_joint movement command is published in ROS that causes the camera to pan leftward. The down and “>” buttons in the GUI work similarly.
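For illustration, publishing such a pan/tilt command amounts to sending a joint trajectory for head_pan_joint and head_tilt_joint, roughly as sketched below; the controller topic name is our assumption and not necessarily the one used in button_control.py.

    # Illustrative sketch: commanding the HSR head pan/tilt joints.
    # The controller topic name is an assumed HSR convention.
    import rospy
    from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

    rospy.init_node("head_move_sketch")
    head_pub = rospy.Publisher("/hsrb/head_trajectory_controller/command",
                               JointTrajectory, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    def move_head(pan_rad, tilt_rad):
        """Send the head to the given pan/tilt angles (radians) over one second."""
        traj = JointTrajectory()
        traj.joint_names = ["head_pan_joint", "head_tilt_joint"]
        point = JointTrajectoryPoint()
        point.positions = [pan_rad, tilt_rad]
        point.time_from_start = rospy.Duration(1.0)
        traj.points = [point]
        head_pub.publish(traj)

    move_head(0.5, -0.3)  # pan to one side and tilt slightly downward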
Arm
The arm of the HSR allows active interaction with the remote environment, which can facilitate a more natural experience for remote children. The arm can reach from the ground to 1.35 m above the ground, and up to 0.45 m in the forward direction. A two-fingered gripper attached to the end of the arm can lift objects weighing up to 1.2 kg. The maximum gripper opening is 135 mm, and the maximum gripping force is 40 N. The arm control window for grasping objects, shown in Figure 5, is hidden by default to maintain a user-friendly GUI for students in all grades K-12. When a child is ready to operate the arm/hand feature, they can select the “Grasp Controls” button (Figure 4[i]).
Grasp Controls/Gripper
The participating remote children in our study found the HSR arm to be highly useful for classroom interactions. A camera inside the gripper assists in approaching and grasping objects (Figure 5[a]). When a student is ready to operate the Grasp Controls, the arm and grasp control window of the GUI opens (Figure 5), presenting the following elements to the remote child:
Image obtained by the camera mounted on the hand of HSR,
Buttons to move the hand in four directions,
Buttons to move the arm in four directions,
Buttons to close and open the gripper for grasping objects.
The GUI for grasp control can be found in the grasping_ui.py software. Similar to the main GUI software, each grasp control button in Figure 5 is linked to a callback function that publishes the appropriate ROS message to move the arm or hand joint of the HSR in the desired direction.
Pointing
The ability to point at objects in the classroom may be valuable for assessment and engagement in learning activities. Manual navigation to objects of interest is time-consuming and can be monotonous. Therefore, the CARL-SR GUI provides a click-at-target feature that allows the remote student to use the arm/hand hardware to point at any object from close proximity just by clicking the object’s image on the home device’s screen. The student activates this feature by clicking the button shown in Figure 4(c). After activation, any object of interest observed through the RGB-D camera image can be selected; the robot then autonomously drives into close proximity and points at the object with the gripper. The autonomous navigation to the object, avoiding obstacles, works in the same way as the map-based navigation described in the Mapping section. The location of the selected object in the classroom is obtained by converting the three-dimensional coordinate of the object in the RGB-D camera frame to the base frame of reference, which is then used for autonomous navigation. This feature is also useful when the student wants to pick up an object, as the click-at-target feature automatically positions the arm close to the object’s three-dimensional location. The Python software in get_click_xyz.py contains the routines to map GUI display coordinates to real-world coordinates. The following pseudocode describes the algorithm for click-at-target.
    function click_at_target(pixel_x, pixel_y):
        # Get the depth of the clicked pixel from the RGB-D camera's depth map
        depth_xy = get_depth(pixel_x, pixel_y)

        # Convert pixel_x, pixel_y, depth_xy into 3D coordinates Xc, Yc, Zc centered
        # at the RGB-D camera using the inverse intrinsic parameters of the camera,
        # which are calculated from the camera's physical attributes and are
        # constant for each camera.
        Zc = depth_xy * 0.001                      # convert mm to meters
        Xc = Zc * ((pixel_x - cx_d) * fx_inv_d)
        Yc = Zc * ((pixel_y - cy_d) * fy_inv_d)

        # Transform Xc, Yc, Zc into 3D coordinates X, Y, Z in the mapped classroom
        # frame ("map") using the ROS transform function
        X, Y, Z = ROS::transformPoint("map", (Xc, Yc, Zc))

        # Calculate the map location Xt, Yt that is 0.5 meters away from X, Y,
        # is closest to the current HSR location, and is unoccupied.
        # Move the HSR to Xt, Yt using ROS.
        robot_pose = get_pose()
        if robot_pose.x > X:
            Xt = X + 0.5
        else:
            Xt = X - 0.5
        if robot_pose.y > Y:
            Yt = Y + 0.5
        else:
            Yt = Y - 0.5
        ROS::go_to_mapXY(Xt, Yt)

        # Move the HSR arm to a close, safe position near X, Y, Z facing the
        # target object and open the gripper using ROS
        # Fill ROS message
        traj = JointTrajectory()
        traj.joint_names = ["arm_lift_joint", "arm_flex_joint", "arm_roll_joint",
                            "wrist_flex_joint", "wrist_roll_joint"]
        p = JointTrajectoryPoint()
        p.positions = [Z - 0.6, -0.3, 0.0, -1.5, 0.0]  # lift the arm relative to the object height Z
        p.time_from_start = rospy.Duration(5)
        traj.points = [p]
        # Publish ROS message
        publish(traj)
        ROS::open_gripper()
Given a pixel coordinate clicked by the user, the function click_at_target() performs a sequence of functions. The first function is twoD_to_threeD(), which uses camera calibration parameters such as the focal length and optical center to transform the pixel coordinates into camera coordinates. A ROS listener transforms these camera coordinates into real-world coordinates. The second function is go_to_mapXY(), in which the robot navigates toward the real-world coordinates at an offset such that the robot is standing directly behind the object of interest. The third function, point_arm(), extends the arm at the height of the desired object. Finally, the gripper is opened to complete the gesture.
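To make the first step concrete, the fragment below sketches the standard pinhole back-projection that a routine like twoD_to_threeD() performs; the intrinsic parameters (fx, fy, cx, cy) would come from the RGB-D camera's calibration (e.g., a CameraInfo message), and the function shown is our illustration rather than the exact project code.

    # Illustrative sketch of the back-projection step performed by twoD_to_threeD().
    # fx, fy, cx, cy are the camera intrinsics (focal lengths and optical center).
    def twoD_to_threeD(pixel_x, pixel_y, depth_mm, fx, fy, cx, cy):
        """Back-project a pixel plus its depth into camera-frame coordinates (meters)."""
        z = depth_mm * 0.001          # the depth map is reported in millimeters
        x = z * (pixel_x - cx) / fx   # horizontal offset from the optical center
        y = z * (pixel_y - cy) / fy   # vertical offset from the optical center
        return x, y, z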
Grasping
In order to pick up and place objects in the classroom, the remote student can drive the arm, the hand, and the gripper from the grasp control window, shown in Figure 5, which can be opened by clicking the button in Figure 4(i). To facilitate grasping (e.g., picking up objects), the view from a camera mounted in the hand is also provided, shown in Figure 5(a). Before performing maneuvers through the grasp-control window, the hand can be brought closer to the object by the automatic click-at-target feature described earlier. The HSR is capable of picking up slim, thin, and wide objects, with stable grasping of larger objects (Figure 6). Clicking on the Grasp button in Figure 5(d) causes the clicked_hand_close() routine in arm_control.py to be called. Likewise, clicking on the Release button causes the clicked_hand_open() routine to be called. These routines publish the appropriate ROS commands to open and close the HSR’s gripper.
Figure 6:

HSR Picking up Objects
Source: Toyota Motor Corporation (Toyota 2015)
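The sketch below illustrates the general shape of the open/close callbacks just described; the gripper topic, joint name, and joint positions are assumptions for illustration rather than the values used in arm_control.py.

    # Illustrative sketch of gripper open/close callbacks (not the actual arm_control.py).
    # Topic, joint name, and joint positions are assumed/illustrative values.
    import rospy
    from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

    rospy.init_node("gripper_sketch")
    gripper_pub = rospy.Publisher("/hsrb/gripper_controller/command",
                                  JointTrajectory, queue_size=1)

    def _move_gripper(position_rad):
        traj = JointTrajectory()
        traj.joint_names = ["hand_motor_joint"]
        point = JointTrajectoryPoint()
        point.positions = [position_rad]
        point.time_from_start = rospy.Duration(1.0)
        traj.points = [point]
        gripper_pub.publish(traj)

    def clicked_hand_open():
        _move_gripper(1.0)   # spread the fingers (Release button)

    def clicked_hand_close():
        _move_gripper(0.0)   # close the fingers around the object (Grasp button)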
Audio
Audio issues and the importance of hearing clearly via the robot have been noted in the existing literature (Ahumada-Newhart and Olson 2019; Lee and Takayama 2011; Neustaedter et al. 2016; Paepcke et al. 2011; K. M. Tsui, Desai, et al. 2011). Real-time audio communication is valuable in maintaining or establishing connections with peers and teachers. The HSR has a four-capsule microphone array on the top of its head, two separate microphones by the RGB-D camera, and one speaker near its base. Figure 7 provides a visual of where the cameras and microphones are located on the HSR “head.” The microphones on the robot allow the remote student to hear the classroom, and the speaker allows the remote student to speak, whisper, or yell (e.g., at lunch, in the gym, at assemblies).
Figure 7:

HSR Placement of Cameras and Microphones
Source: Toyota Motor Corporation (Toyota 2015)
Mapping
The mobility of the HSR extends beyond traditional mobility features to include mapping of the environment, autonomous navigation, and obstacle avoidance. The ROS library provides support for autonomous path planning and navigation in a familiar environment. For this to work, a spatial map of the environment where the robot is located must first be created using a simultaneous localization and mapping (SLAM) procedure (Durrant-Whyte and Bailey 2006). By utilizing data from the laser rangefinder built into the HSR, it is possible to simultaneously estimate a precise spatial map of the environment and the robot’s 2D pose using laser-based SLAM methods, such as Hector SLAM (Kohlbrecher et al. 2013). When a spatial goal is presented to the ROS autonomous navigation module, the prebuilt map and current pose of the robot are utilized to estimate an efficient trajectory to the goal location while avoiding obstacles (Figure 8). Routines for navigating to a location on the map in Figure 4(b) can be found in map_navigation.py and, in particular, the function go_to_mapXY() handles moving the HSR to a location clicked on the map with the mouse. The following pseudocode describes the algorithm for moving the HSR to a location on the map.
Figure 8:

ROS (Robot Operating System)
Source: Toyota Motor Corporation (Toyota 2015)
    function go_to_mapXY(target_x, x_offset, target_y, y_offset):
        # Adjust X, Y locations by the desired offset
        x = target_x + x_offset
        y = target_y + y_offset

        # Fill ROS message
        goal = PoseStamped()
        goal.header.stamp = time_now()
        goal.header.frame_id = "map"
        goal.pose.position = Point(x, y, 0)
        # Orient the robot to face the target from the offset position
        quat = quaternion_from_euler(0, 0, atan2(target_y - y, target_x - x))
        goal.pose.orientation = Quaternion(*quat)

        # Publish ROS message
        goal_pub.publish(goal)
The algorithm allows a target X and target Y location to be specified, along with an offset that accounts for the desired positioning of the HSR base. The heading from the offset position toward the target is converted into a quaternion orientation, and the resulting pose is published as the navigation goal. Actual navigation is handled by the internal ROS autonomous navigation module.
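As a hypothetical usage example, a click corresponding to a desk at map coordinates (2.0, 1.5), approached from half a meter away along the x-axis, would translate into a call such as the following (coordinates invented for illustration):

    # Hypothetical call into the pseudocode routine above: stop 0.5 m short of a
    # desk at map coordinates (2.0, 1.5), facing the desk.
    go_to_mapXY(target_x=2.0, x_offset=-0.5, target_y=1.5, y_offset=0.0)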
Obstacle Avoidance
HSR features an inertial measurement unit (IMU) to calculate acceleration and rotation during movement and a laser rangefinder to measure the distance to obstacles. These sensors are important for navigation and obstacle avoidance. During navigation, the current pose is constantly updated using SLAM and IMU data, and trajectories are recalculated when new obstacles are presented. Furthermore, the ROS autonomous navigation module enables dynamic obstacle avoidance in real time by using the depth cloud from the RGB-D sensor mounted on the HSR. In a densely occupied classroom, manually driving a remote robot can be difficult, particularly during group activities. Autonomous navigation and dynamic obstacle avoidance allow remote children to navigate the robot smoothly and safely between locations while participating in classroom activities. Safety was a priority for the Toyota HSR design. Thus, obstacle avoidance is a built-in feature of the ROS movement libraries designed for the HSR. Commands to move the robot or arm can be interrupted by low-level controllers that attempt to move around obstacles or pause until the obstacle moves out of the way. Once obstacles clear, the HSR continues to its desired position.
Safety Features
The HSR comes with built-in safety features that rely on additional sensors, such as a bumper sensor to detect contact with other objects and a magnetic sensor to avoid no-entry zones marked with magnetic tape, as well as a hand-held STOP button (Figure 9). The HSR GUI allows the remote student to autonomously drive the robot to any location in the classroom while avoiding moving obstacles on the way. This feature takes advantage of the autonomous navigation module of ROS. A spatial map containing the static structure of the classroom, depicted in Figure 4(b), is built using Hector SLAM (Kohlbrecher et al. 2013). The white region on the map shows the open space in the classroom where autonomous navigation is allowed without colliding with other objects. When the user clicks any pixel within the white region, the pixel is mapped to the target location in the classroom using the map. The current robot location and the target location are sent to the ROS navigation module, which finds a valid trajectory to the target location that avoids static obstacles and autonomously drives the robot. If obstacles captured by the RGB-D camera or the laser scanner are encountered on the planned route, the robot stops, finds an alternate trajectory to the target within the open space, and restarts the navigation. The autonomous navigation process continues until the robot arrives at the destination or a new target location is clicked.
Figure 9:

Safety Feature—Obstacle Avoidance Sensors; Magnetic Tape Barriers; Hand-Held STOP Button
Source: Toyota Motor Corporation (Toyota 2015)
Although caution must be taken during operation, as with any other robot of its size, the HSR is a relatively safe robot to operate in a classroom-like environment. When the HSR body comes into direct contact with an object, the bumper sensor detects the collision and the system goes into an inactive state. It is also possible to mark danger zones that the robot should avoid, such as stairs or doorways, with magnetic tape. The magnetic sensor at the base detects contact with such tape and sends the HSR into an inactive state. Furthermore, a hand-held wireless stop button is provided for local interactants to stop the HSR immediately. The HSR must always operate within range of the wireless stop button to remain active.
Methodology
Study Design
Our study design consisted of cross-case analyses between all participants to evaluate the effectiveness of traditional telerobot features for learning and an exploratory, within-subject study to measure the value of three novel features: (1) pan-tilt movement of the camera, (2) mapping/semiautonomous navigation, and (3) arm movement and manipulation. Our study participants were children who used commercially available telerobots to attend traditional, in-person public schools, or aides and administrators who used such robots to assist those children. These participants piloted the use of the HSR in our simulated in-person classroom environment. Data were collected through informal interviews, online surveys, and field notes.
Control Condition
In order to participate in the study, a participant must have used a commercially available telerobot at school for a minimum of two weeks. All participants used a Double2 or VGo robot. At the time of our study, neither model of these robots had a pan-tilt camera, mapping/autonomous navigation, or arm/hand hardware. In our study, participants were asked about their experiences with their robot’s design features that allowed for vision, hearing/speaking, and mobility.
Experimental Condition
To compare vision, audio, and mobility capabilities of their VGo or Double2 robot to the HSR, participants engaged in a robot-mediated language lesson that included activities to explore basic and novel robot features and then answered questions on our online survey. To evaluate the novel features of the HSR, the robot-mediated language lesson also included exploration of the following features: pan-tilt camera, autonomous navigation, and robot arm/hand features. Participants answered questions on our online survey regarding these features but did not compare these features to other robots as they were not available on other robot models. Instead, survey questions on these features focused on the usability and perceived benefit of having these features on a robot.
A link to our survey was emailed to participants shortly after each lab session. On the survey, participants rated the following topics: introductory materials, vision, hearing, mobility, camera mobility, autonomous navigation, and arm/hand hardware. Responses used a scale of 1 (Excellent) to 5 (Terrible), or 0 (Not Applicable) (Table 1). All participants completed the robot-mediated language lesson and online survey.
Table 1:
Survey Response Values and Descriptions
| Value | Rating | Description |
|---|---|---|
| 1 | EXCELLENT | Same as being there in person |
| 2 | GOOD | Minor issues but didn’t disrupt the experience |
| 3 | AVERAGE | Some issues but easily managed on my own |
| 4 | POOR | Many issues, not able to work around it every time, needed assistance |
| 5 | TERRIBLE | Major issues, disrupted the experience many times |
| 0 | NA | Did not use this feature |
Hypotheses
We formulated the following hypotheses and used them as the basis for our study framework, which encompasses the features and analytical parameters we considered crucial to our study goals:
Hypothesis 1: The pan-tilt camera would significantly improve user experience by allowing participants to move only the robot head without moving the body.
This hypothesis was based on an earlier work that identified student and teacher need for the remote student to remain stationary (i.e., sitting) during instruction with the ability to move the head when needed, similar to local classmates (Ahumada-Newhart and Olson 2019).
Pan
Both students and teachers complained of noise, distraction, and slowness when having to move the entire robot body to look to the left and right.
Tilt
While the VGo does have a camera with a tilt feature, the Double2 robot does not. Both students and teachers complained that (when operating a Double2) to view papers on a desk, the student had to move away from the desk as the camera was at a set height and could not look down. Traditionally, in-person students move toward the desk to view objects on the desk. Moving away from the desk is counterintuitive to how students typically move to view objects on a desk.
Hypothesis 2: The navigation feature would reduce cognitive load on participants and facilitate learning course content and interacting with their teacher and classmates.
Our rationale for this hypothesis is based on earlier work that identified student struggles to “walk” and engage in conversation with peers. Participants felt that they could not do both at the same time as controlling the mobility of the robot took much concentration. One student reported that he had to stop “walking” to say “hi” or talk to classmates.
Hypothesis 3: The arm/hand hardware would create increased levels of engagement and expression.
This expectation also arose from earlier research that identified students’ need to “raise their hand” to gain attention, similar to other students in the classroom (Ahumada-Newhart and Olson 2019). Earlier work also reported that some students who use a VGo robot blink its lights to gain attention, and students who use a Double2 may raise the entire head or call out. Even with these options, students still expressed a desire to “raise their hand,” as they felt teachers did not always see the light (VGo) or notice the raising of the entire head (Double2) (Ahumada-Newhart and Olson 2019).
Study Interaction
Our study was centered on a robot-mediated introductory language lesson. A member of our team, an experienced Spanish instructor, created a 20- to 30-minute introductory lesson that allowed for basic alphabet and vocabulary exercises to explore the above-mentioned features of the HSR. The Spanish language lesson was similar to a validated elementary school Spanish lesson that was designed for in-person instruction in a traditional public school. As study participants were members of traditional classrooms, and neither parents nor children requested any learning accommodations, the language lesson was not modified for learning differences. Specific variations between the validated, in-person lesson and the robot-mediated lesson were: shortened instruction time, student was remote and participated via the telepresence robot, and “classmates” in the lab consisted of one to two in-person members of the study team.
Participants
Recruitment
Participants were recruited via a printed or digital flyer that was distributed to parents and aides through district-approved methods. The flyer informed potential participants of our study and the technologies we would be exploring. After our district partners confirmed parent consent and child assent to share their contact information with the research team, a member of our research team contacted the parent or aide of the remote child to schedule a time to access the HSR and participate in a robot-mediated 20- to 30-minute Spanish lesson.
Participants had experienced being remote learners and using a robot to attend school in grades K-12 or were aides for children who were remote learners but, due to physical limitations, could not drive the robot for themselves. Student participants ranged in age from 10 to 18 years, and adults did not provide their age but were certified school aides or administrators. We did not restrict the number of participants. However, as our study was completely voluntary, conducted outside of school time (during the academic year), and participants were located across the US in different time zones, our sample was small with a total of just nine participants. Six participants were female and three were male (Table 2).
Table 2:
Participants
| Participants (N = 9) | |
|---|---|
| Children: Grades 5–8 | n = 2 |
| Children: Grades 9–12 | n = 2 |
| Adults (aides/administrators) | n = 5 |
| Model of Robot Used for School Attendance | |
| Double2 Users | 6 |
| VGo Users | 3 |
Informed Consent
All participants were provided with study information sheets approved by our university institutional review board (IRB) and local school district external research approval boards. Study information sheets were read aloud by the interviewer before each study, and ample time was provided for questions about the study. Child participants received parental consent and gave verbal assent before the study was conducted. Participants were made aware that they could withdraw from the study at any time and did not have to complete the entire study.
Procedure
Our team selected a language lesson as an ideal setting for comparison of the traditional telerobot features and evaluation of the HSR’s novel features. The robot-mediated language lesson presented new words and concepts for all participants, as none of the participants had taken Spanish classes prior to our study.
TeamViewer and Google Hangouts
Participants were emailed links to instructional videos after they scheduled their session in the lab. A 2-minute video provided instructions on how to connect to the HSR, and a 6-minute video provided instructions on how to control the HSR through the GUI. In addition, participants received written instructions via email on the applications (apps) needed to access the HSR (i.e., Google Hangouts, TeamViewer) and how to operate these apps for their session. Participants were asked to view the video or read the instructions before logging into the HSR. The security and online safety of the participants were considered in selecting TeamViewer as the control interface, as it transfers data exclusively via secure channels. TeamViewer includes end-to-end encryption based on RSA (4,096 bits) and AES (256 bits) (Murphy 2016). Google Hangouts was selected for the audio/video communications, as it transmits both audio and video but does not store any audio/video communications. In addition, data are encrypted both in-transit and at-rest on the Google Hangouts platform (Google 2022; Bolton 2013).
Script
Our team also developed a script for the language lesson that incorporated all six feature areas: audio, vision, mobility, pan-tilt camera, autonomous navigation, and arm/hand hardware. This script underwent iterative refinement after each participant to improve the flow of activities, but the activities and interactions remained consistent across all participants.
Setting
This study was conducted in a simulated classroom we constructed in our lab. The room had a teacher’s desk, four student desks, and visuals on the walls. During study sessions, two to four members of the research team were present. One researcher led robot-mediated language lessons in Spanish, one researcher monitored the remote participants’ activities via the shared screen on TeamViewer, and two researchers participated as classmates during the lesson.
Robot Training
Once remote participants had successfully logged in via both Google Hangouts and TeamViewer to control the HSR, the research team introduced themselves and reviewed how to control the HSR's movement using the arrow keys, similar to both the VGo and Double2. After participants demonstrated that they could move the robot, they were directed to a full-length mirror to gain awareness of their presence on the HSR. Prior work has reported mixed findings on mirror use by telepresence robot users (Takayama and Harris 2013). However, we observed that for experienced users who had not physically seen the HSR before, the mirror helped them understand how the HSR functioned. From the mirror, the participant was asked to turn around and face an eye chart for a vision test. After successful navigation of the basic vision, audio, and mobility features, participants were then instructed on the autonomous navigation feature. Once participants had successfully navigated around the classroom, the lead researcher began the Spanish lesson.
Language Lesson
Sessions in our lab, including the Spanish lesson, followed a written script for consistency across all sessions. In the Spanish lesson, the names of five fruits were written on the dry-erase board, and these same fruits were present on the “teacher’s” desk (albeit in artificial, plastic forms). To explore the participants’ ability to hear and speak, the researcher said the names of the fruit aloud in Spanish and the participant repeated the names of the fruit aloud while reading the words on the dry-erase board. Once participants could repeat the names of the fruits and identify the fruits, they were asked if they would like to try the arm/hand hardware. A researcher instructed them on how to operate the arm/hand feature, and once participants were comfortable, the lesson moved forward to assessment.
The five pieces of plastic fruit were placed on the teacher’s desk approximately 6 in. apart. Each piece of fruit had been introduced in the earlier portion of the lesson. Participants were asked to use the click-at-target feature to point to their favorite fruit, and “classmates” had to guess which fruit they selected. After mastering “pointing” to the fruit, participants were asked to grasp the fruit. Once the fruit had been successfully grasped, the participants were asked to carry and deliver the fruit to a classmate.
Evaluation of Traditional Telerobot Capabilities
Participants did not report any vision or hearing difficulties prior to participating in our study. However, we tested vision and hearing before each session to ensure that participants’ robot-mediated capabilities would meet expectations for a traditional student in a classroom. Challenges with in-person mobility were also not reported and did not play a role in our evaluation of robot-mediated mobility features.
Vision
Vision and hearing were tested first to ensure that participants would be able to participate visually without technical barriers. A Snellen eye chart and standard testing procedures (i.e., standing distance from the chart) were used to test the remote child’s vision via the robot. Participants were asked to read the top letter of the eye chart and work their way down the chart until they were not able to read the eye chart letters clearly. Participants were then asked to move the robot closer to the chart until they could read the line for 20/20 vision. To gauge the quality of vision for viewing objects outside the classroom, participants were asked to look outside the window in our lab and describe what they saw. Our lab is on the second floor, and participants had potential views of trees next to the building, buildings across the street, and, weather permitting, mountains in the distance. To gauge the quality of vision during instruction, participants were also asked to view Spanish alphabet flashcards above a dry-erase board and writing on the dry-erase board.
Hearing
Hearing and speaking were tested through the replication of sounds necessary when learning the names of fruits in a new language. Hearing was also tested by a researcher speaking directly to the front of the robot and asking simple questions, such as “What is your name? What kind of robot do you use at school?” Similar questions were then asked with the researcher standing outside the robot’s field of view to the right and left of the robot to assess hearing from different angles.
Mobility
Mobility was evaluated by asking the student to turn right, left, complete a full turn, move toward the instructor, explore the classroom, then navigate manually toward a mirror so they could see themselves. After the student was comfortable with their manual navigation, the research team introduced the “click-at-target” navigation feature of the robot. Students used this mobility feature to move toward the instructor’s desk, back to their desk, and navigate toward other people in the classroom.
Evaluation of Novel HSR Capabilities
Camera/Pan-Tilt
As all participants were experienced drivers of telerobots, very little guidance on vision was needed when they first logged on to the HSR. Participants already knew how to control robot and/or camera movement with keyboard directional arrows or directional arrows on the user interface. In our study, most of the guidance was provided on the pan-tilt feature of the HSR “head” where the cameras are located. The ability to move the head independently of the body was a novel experience for all participants. Participants were also instructed on how to use the zoom capabilities of the camera.
Mapping/Semiautonomous Navigation
After testing for quality of vision and basic navigation (i.e., turning, looking around, moving forward, raising/lowering body and arm), participants were then shown a physical map of the room that matched the visual on their screen. They received brief instructions on how to operate the autonomous navigation by clicking on the white areas of the map. Participants were informed that the red dot represented their location within the map. Once the participant mastered navigating the classroom using the digital map instead of the arrow controls, participants were asked to achieve the following tasks:
“Sit” alongside peers in the classroom facing the “teacher” and dry-erase board
Navigate to the teacher’s desk for one-on-one instruction
Approach the board and read words
Return to their desk
Arm/Gripper Hardware
Once vision and mobility were explored, participants were asked if they would like to pilot the arm/hand hardware. Once they assented to using this feature, instruction was given on how to access and operate the arm/hand controls. Participants were asked to achieve the following tasks:
Raise their hand
Point at a piece of fruit
Pick up a piece of fruit
Hold the piece of fruit while “walking”
Deliver the fruit to a classmate
Measurement
We evaluated participant experiences using an online survey that was completed after each session in our lab. To evaluate the effectiveness of the HSR and GUI, a survey was sent to participants after their session. The survey questions asked participants to rate the HSR and interface features on a scale of 1 (Excellent) to 5 (Terrible), or (0) Not Applicable. The survey captured data on experiences with introductory materials for accessing and controlling the robots, model of robot currently used in school, and evaluation of all traditional robot features (Table 3) as well as the novel features of the HSR robot (Table 4). Our survey also captured open-ended replies that allowed participants to describe how they imagined students would use the HSR’s novel features on the robots that they currently use.
Table 3:
Traditional Telerobot Features
| Vision | Hearing | Mobility |
|---|---|---|
| Ability to see objects and people in the classroom | Ability to hear the teacher | Ability to move around the classroom |
| Ability to see material on a dry-erase board | Ability to hear classmates | Stability (never falling or crashing on its own) |
| Ability to look up and down using the camera (VGo) | Ability to be heard | Digital map feature to navigate the classroom |
| | | Obstacle avoidance feature |
Table 4:
HSR Novel Features
| Vision | Autonomous Navigation | Arm/Gripper Hardware |
|---|---|---|
| Ability to look up and down using the camera | Getting to the teacher’s desk | Pointing |
| Ability to look left and right using the camera | Returning to your desk | Grasping |
| | Moving toward the mirror/window | Raising hand to get attention |
Results
Instructional Materials
Accessing and piloting the HSR was a new experience for all participants. We provided instruction via two different methods—written instructions in pdf format and video instructions. To evaluate the effectiveness of the introductory materials and application software used to access the HSR, survey questions asked participants to rate these tools on a scale of 1 (Excellent) to 5 (Terrible), or N/A (see Figure 10).
Figure 10:

HSR Application Software and Instructional Materials
Video instructional materials were the most used and most highly rated tool for accessing the HSR, with 67% of participants rating them as “Excellent.” In contrast, 33% of participants rated the written materials as “Excellent” and 44% rated them as “Good.” Both versions of the instructional materials received an “Average” rating from 11% of participants. Based on these findings, we believe future studies should continue to provide instructional materials in both formats to meet the needs of diverse learners and families.
Application Software
For the two applications used in our study, TeamViewer and Google Hangouts, participants had a marginally better experience with Google Hangouts (44% “excellent”) than with TeamViewer (33% “excellent”). It is unclear whether familiarity with Google products facilitated the use of this application. However, 88% of participants reported an excellent, good, or average experience with the applications and felt competent to control the HSR. Three participants provided the following feedback for suggested improvements:
I think a short video tutorial on the user controls would be helpful.
Screenshots are REALLY helpful. I would include as many as possible.
The TeamViewer site was blocked by our firewall, so I had to find a different way to access the download.
Traditional Capabilities: Vision, Hearing, Mobility, Camera Mobility (HSR)
Figure 11 represents survey comparison of HSR and other robots (Double2 or VGo) used by participants. The red line of the box plots denotes the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not to be considered outliers, and outliers are plotted as red plusses.
Figure 11:

Survey Comparison of HSR and Other Robots Used by Participants (Double2 or VGo)
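The statistical comparisons reported in the subsections below use one-tailed Wilcoxon rank-sum tests. For readers who wish to replicate this style of analysis, the snippet below shows how such a test can be computed with SciPy (version 1.7 or later for the alternative argument); the ratings listed are placeholders, not our study data.

    # Illustrative one-tailed Wilcoxon rank-sum test with placeholder ratings
    # (1 = Excellent ... 5 = Terrible, so lower is better).
    from scipy.stats import ranksums

    hsr_ratings = [1, 1, 2, 2, 1, 3, 2, 1, 2]            # placeholder values
    other_robot_ratings = [2, 3, 2, 3, 2, 4, 3, 2, 3]    # placeholder values

    # Test whether HSR ratings tend to be lower (better) than the other robots' ratings.
    statistic, p_value = ranksums(hsr_ratings, other_robot_ratings, alternative="less")
    print(f"W = {statistic:.2f}, one-tailed p = {p_value:.3f}")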
Vision: How Well Participants Could See via the Robots
To evaluate the quality of vision on the HSR, we asked the participants to compare their ability to see with the HSR versus other robots they use in the classroom. Participants were asked how well they see (1) objects and people in the classroom and (2) material on a dry-erase board. These two features were analyzed separately, as they are consistently available features across all robots used by our participants. Participants reported the HSR’s vision was slightly better than other robots that they have used (Figure 11 Vision; p < 0.06; one-tailed Wilcoxon Rank-Sum Test). Four participants rated the HSR vision as “excellent” for ability to see objects and people in the classroom, and two participants rated it “excellent” for ability to see material on the dry-erase board. Six to seven participants rated the HSR as good or excellent overall. Most Double2 and VGo users rated the vision on their robots as “good.” One participant rated the robot as (0) not applicable. It is unclear if this response was an error or if that participant did not use the camera on the robot.
Hearing: How Well Participants Could Hear Teacher and Others
We asked the participants to rate their ability to hear with the HSR versus other robots they use in the classroom. Participants were asked how well they hear (1) the teacher and (2) others in the classroom. Participants reported the HSR’s hearing was significantly better than other robots they have used (Figure 11 Hearing; p < 0.02; one-tailed Wilcoxon Rank-Sum Test).
HSR
Overall, 7 to 8 participants rated the HSR’s capabilities for hearing the teacher and classmates as “excellent” or “good.” One participant rated the ability to hear the teacher as “poor” and the ability to hear others as “terrible,” but, as these responses came from the same participant, it was unclear whether this reflected an issue with the microphones on the HSR or Wi-Fi issues with Google Hangouts during their session.
Double2/VGo
In contrast, only one Double2/VGo user provided an “excellent” response in one category of hearing: hearing the teacher. Seven participants rated their ability to hear the teacher and others as “good” or “average,” and one participant responded that their ability to hear others was “poor.” One participant did not rate their hearing capabilities on their robot and their response was recorded as a zero.
Mobility: How Well the Robot Navigated the School and Lab Environments
We asked the participants to rate how well the HSR moved around the classroom compared to other robots they use in the classroom. They were asked to rate the robot’s (1) ability to move, (2) stability, and (3) ability to avoid obstacles. They reported the HSR’s mobility was significantly better than other robots they have used (Figure 11 Mobility; p < 0.03; one-tailed Wilcoxon Rank-Sum Test).
HSR
Out of nine participants (N = 9), seven participants rated the HSR’s capability to move about the classroom as “excellent” or “good” with one “average” and one “poor” response. Seven participants felt the stability on the HSR was “excellent” with two participants describing the stability as “good.” Obstacle avoidance on the HSR was viewed as mostly excellent, good, or average with just one response rating it as “poor.” One participant did not feel they used this feature. However, the obstacle avoidance feature was in use during all lab sessions.
Double2/VGo
In contrast, for Double2/VGo mobility, there were no “excellent” responses; seven participants rated it as “good,” one rated it “average,” and no responses ranked it as “poor.” For Double2/VGo stability, participants’ ratings were excellent (2) and good (2), with most participants rating it “average” (4). One participant did not answer this question, and their response was recorded as a zero. For obstacle avoidance, three participants were VGo users and were able to respond about obstacle avoidance on their school robot; they rated it as “good” (2) and “average” (1). One participant rated the Double2’s obstacle avoidance as “poor,” perhaps because the Double2 did not have this feature.
Camera Mobility
Comparison of camera mobility was limited to the VGo, as it has tilt (but no pan) camera capabilities. The Double2 robot does not have any camera pan-tilt capabilities. The HSR has a head that moves independently for full pan-tilt capabilities of the camera. We asked participants to evaluate the HSR’s pan-tilt “head” with camera. Participants were asked to rate the robot’s ability to look (1) up and down and (2) left and right.
HSR and VGo
Participants reported the HSR’s head/camera mobility was significantly better than other robots they have used (Figure 11, Camera Mobility; p < 0.00001; one-tailed Wilcoxon Rank-Sum Test).
HSR Novel Features: Manipulation and Autonomous Navigation
Unlike other robots used in the classroom by participants, the HSR has an arm and a hand that act as a gripper. During the HSR experiments, participants were asked to locate and retrieve items with the HSR. Participants seemed to value this capability with the median survey response being “good” (Figure 12, Manipulation). The HSR is also capable of autonomous navigation. During the HSR experiments, participants clicked on a map of the classroom to move to different locations (teacher’s desk, window, whiteboard, etc.). Most of the participants ranked the navigation capability to be “good” or “excellent” (Figure 12, Navigation). They did not compare these features to the robots they used to attend virtual classes because, at the time of this study, these features were not available on any commercially available robots.
Figure 12:

Survey Responses for HSR’s Arm and Gripper (Manipulation) and for HSR’s Autonomous Navigation
In the following, we detail survey responses to perceived usability and value of these features. Figure 12 represents survey responses for HSR’s arm and gripper (Manipulation) and for HSR’s autonomous navigation (Navigation). These features were unique to the HSR. The red line of the box plots denotes the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not to be considered outliers, and outliers are plotted as red plusses.
Discussion
The results of our study and online survey allow us to evaluate both traditional and novel features of HSR for social interaction and learning. In addition, these findings allowed us to evaluate the implications of these features for future assistive robotics and telepresence research.
Hypothesis 1 was supported by our study results. Most participants felt the pan-tilt camera significantly improved the user experience by allowing them to move only the robot’s head without moving its body. When asked, “Are there any features of the HSR that you feel are an improvement over other robots used to attend school?” three participants mentioned the vision/pan-tilt feature and provided the following feedback:
Having a hand to use, vision, and moving around.
The hand, head movement.
The robot was much more stable than the Double I used, the robot’s ability to tilt its “head” up and down would be very useful for school settings, especially interaction with others, and the map function allowed for me to have a concept of the robot in the actual classroom space.
Hypothesis 2 was not supported by study results, although most participants rated navigation as “good” or “excellent” (see Figure 12). Because of the present study’s design, we were not able to gauge the reduction in cognitive load that would, in turn, facilitate ease of learning and interacting with teachers and classmates. However, we were able to evaluate experiences in our lab and the perceived value of this feature. When asked, “If your school robot had auto-navigation, for what do you think students could use it?” four participants provided the following feedback:
Adaptive movement for students with disabilities or injuries preventing them from traditional controls.
To get around places without having to keep pressing the arrow keys and accidently bumping into things.
Moving around the classroom; moving into position to play games like this or that—or 4 corners; moving quickly to a location in the classroom.
It would be useful to be able to get from classroom to classroom or to a corner of a classroom for something like a group assignment.
Hypothesis 3 was supported by study results. The arm/hand hardware increased engagement and expression for most participants. Seven out of eight participants rated this feature as “excellent” or “good” for pointing at objects in the classroom, eight out of nine rated it as “excellent” or “good” for grasping objects, and six out of seven rated it as “excellent” or “good” for raising their hand to gain attention. When asked, “If your school robot had a hand, for what do you think students could use it?” four participants provided the following feedback:
Labs, manipulative objects in lessons, interactions with a smart board.
Pressing elevator button, picking things up, throwing things, raising hand.
If the school robot had a hand, the elevator could be accessed and the student would then be able to access all of their classes.
Other than raising your hand, I have no idea.
Strengths and Limitations
Our study makes the following contributions to the interdisciplinary fields of human–robot interaction and child–robot interaction: (1) a child-centered study that allows for evaluation of novel robot features within the social contexts of learning, (2) empirical data from real-world users on the experience of novel features not typically found on commercially available telerobots (i.e., pan-tilt camera, autonomous navigation, and arm/hand hardware), and (3) technological development of a custom user interface, as shown by our HSR GUI. Our work encompasses best practices commonly used in learning contexts, such as providing introductory materials in multiple formats (Erwin and Guintini 2000) and creating an environment that reacts in a way familiar to the user (Biocca, Harms, and Burgoon 2003) to afford a sense of autonomy and competence (Ryan and Deci 2002). Our research team provided introductory instructional materials in both written and video formats to prepare participants for operating a telerobot prototype they had never seen in person. Another strength of our study was the use of real-world users to evaluate the features of the HSR. Because of their daily experience using other telerobots, our participants were able to easily navigate the traditional features of vision, audio, and mobility before exploring the novel features. Our study design and results can inform industry partners and robot designers on methods to incorporate and evaluate the efficacy of additional features, including the ones addressed in this article, in the design and development of future robots. Our study can also inform future users and consumers on the perceived value of these features for accomplishing desired robot-mediated tasks in learning environments.
A limitation of our study is its small sample of participants. Children who use robots to attend school remotely must, like their peers, keep up with school, home, and community activities while also managing the demands of their medical conditions, including symptoms, treatments, and frequent doctor’s appointments. Given these demands and the limited participant pool, scheduling additional sessions was a challenge. Another limitation is that, although our participants were real-world users, they did not pilot the HSR in a real-world classroom or have robot-mediated interactions with other children. Every effort was made to physically simulate a classroom environment, but, due to IRB restrictions, the “classmates” in our study were few and consisted of adult members of our research team. During our study, some participants struggled with accessing Google Hangouts or TeamViewer on their home devices. Future studies should include testing of these applications before the lab sessions, as much time was spent troubleshooting these issues.
Conclusions and Future Work
Our study encompassed an interdisciplinary effort to evaluate telerobot design features for the remote child population. We designed a child-centered study that allowed for evaluation of novel robot features within the social contexts of learning and provided empirical data on real-world users’ experience of the HSR’s novel features (i.e., pan-tilt camera, autonomous navigation, and arm/hand hardware). In addition, our team designed and deployed a custom user interface, the CARL-SR GUI, to facilitate use of these novel features. Our results indicate that the HSR’s novel features are promising for improving the robot-mediated virtual inclusion of remote children in traditional schools. Participants reported positive feedback on the pan-tilt camera, autonomous navigation, and arm/hand hardware. Future work with real-world users in real-world classrooms will allow further evaluation of the efficacy of these features.
Acknowledgment
The project described was supported by the National Science Foundation; National Robotics Initiative Award #2136847; National Center for Research Resources; National Center for Advancing Translational Sciences; National Institutes of Health, through Grant TL1 TR001415; and Toyota Motor North America. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF, the NIH, or Toyota Motor North America. We thank all our participants for their time and valuable feedback.
Footnotes
Informed Consent
This research study was approved by the university’s Institutional Review Board (IRB). Ethical considerations were met by following all relevant IRB protocols, including guidelines for recruitment, voluntary participation, and anonymization of data. Study information notices were sent to parents and children providing information on the study activity. Parents provided consent before the study, and the lead researcher obtained verbal assent from child participants before each study session. No compensation was provided to participants.
Conflict of Interest
The authors declare that there is no conflict of interest.
Contributor Information
Veronica Ahumada-Newhart, University of California Davis, USA.
Hirak J. Kashyap, University of California Irvine, USA.
Tiffany Hwu, University of California Irvine, USA.
Yi Tian, University of California Irvine, USA.
Lara Mirzakhanian, University of California Irvine, USA.
Mikayla Minton, University of California Irvine, USA.
Steven Seader, University of California Irvine, USA.
Sarah Hedden, Toyota Motor North America, USA.
Douglas Moore, Toyota Motor North America, USA.
Jeffrey L. Krichmar, University of California Irvine, USA.
Jacquelynne S. Eccles, University of California Irvine, USA.
REFERENCES
- Adalgeirsson Sigurdur Orn, and Breazeal Cynthia. 2010. “MeBot: A Robotic Platform for Socially Embodied Telepresence.” Paper presented at the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2010), Osaka, Japan, March 2–5, 2010.
- Ahumada-Newhart Veronica. 2014. “Virtual Inclusion via Telepresence Robots in the Classroom.” CHI’14 Extended Abstracts on Human Factors in Computing Systems, Ontario, Canada, April 26–May 1, 2014: 951–956. 10.1145/2559206.2579417.
- Ahumada-Newhart Veronica, and Eccles Jacquelynne S. 2020. “A Theoretical and Qualitative Approach to Evaluating Children’s Robot-Mediated Levels of Presence.” Technology, Mind, and Behavior 1 (1): 1–35. 10.1037/tmb0000007.
- Ahumada-Newhart Veronica, and Olson Judith S. 2017. “My Student Is a Robot: How Schools Manage Telepresence Experiences for Students.” Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, ACM, New York, 342–347. 10.1145/3025453.3025809.
- Ahumada-Newhart Veronica, and Olson Judith S. 2019. “Going to School on a Robot: Robot and User Interface Design Features That Matter.” ACM Transactions on Computer-Human Interaction (TOCHI) 26 (4): 1–28. 10.1145/3325210.
- Ahumada-Newhart Veronica, Warschauer Mark, and Sender Leonard. 2016. “Virtual Inclusion via Telepresence Robots in the Classroom: An Exploratory Case Study.” International Journal of Technologies in Learning 23 (4): 9–25. 10.18848/2327-0144/CGP/v23i04/9-25.
- Bennett David S. 1994. “Depression among Children with Chronic Medical Problems: A Meta-Analysis.” Journal of Pediatric Psychology 19 (2): 149–169. 10.1093/jpepsy/19.2.149.
- Biocca Frank, Harms Chad, and Burgoon Judee K. 2003. “Toward a More Robust Theory and Measure of Social Presence: Review and Suggested Criteria.” Presence: Teleoperators & Virtual Environments 12 (5): 456–480. 10.1162/105474603322761270.
- Bolton Robbie. 2013. “Google Hangouts.” Journal de l’Association des bibliothèques de la santé du Canada [Journal of the Canadian Health Libraries Association] 34 (1): 39–40. 10.5596/c13-002.
- Broekens Joost, Heerink Marcel, and Rosendal Henk. 2009. “Assistive Social Robots in Elderly Care: A Review.” Gerontechnology 8 (2): 94–103. https://ii.tudelft.nl/~joostb/files/Broekens%20et%20al%202009.pdf.
- CDC (Centers for Disease Control and Prevention). 2016. “National Health Interview Survey.” National Center for Health Statistics. https://www.cdc.gov/nchs/nhis/shs/tables.htm.
- Desai Munjal, Tsui Katherine M., Yanco Holly A., and Uhlik Chris. 2011. “Essential Features of Telepresence Robots.” Paper presented at the 2011 IEEE Conference on Technologies for Practical Robot Applications, Woburn, MA, April 11–12, 2011.
- Disability Rights California. 2012. “Special Education Rights and Responsibilities: Information on the Rights of Students with Significant Health Conditions.” https://serr.disabilityrightsca.org/serr-manual/chapter-14-information-on-the-rights-of-students-with-significant-health-conditions/.
- Drugarin C. V. Anghel, Draghici Silviu, and Raduca Eugen. 2016. “Team Viewer Technology for Remote Control of a Computer.” Analele Universităţii “Eftimie Murgu” Reşiţa 23 (1): 61–66.
- Durlak Joseph A., Weissberg Roger P., Dymnicki Allison B., Taylor Rebecca D., and Schellinger Kriston B. 2011. “The Impact of Enhancing Students’ Social and Emotional Learning: A Meta-Analysis of School-Based Universal Interventions.” Child Development 82 (1): 405–432. 10.1111/j.1467-8624.2010.01564.x.
- Durrant-Whyte Hugh, and Bailey Tim. 2006. “Simultaneous Localization and Mapping: Part I.” IEEE Robotics & Automation Magazine 13 (2): 99–110. 10.1109/MRA.2006.1638022.
- Erwin Elizabeth J., and Guintini Margaret. 2000. “Inclusion and Classroom Membership in Early Childhood.” International Journal of Disability, Development and Education 47 (3): 237–257. 10.1080/713671117.
- Google. 2022. “How Classic Hangouts Protects Your Privacy & Keeps You in Control.” Hangouts Help. Google Support. https://support.google.com/hangouts/answer/10505731?hl=en.
- Hwu Tiffany, Kashyap Hirak J., and Krichmar Jeffrey L. 2020. “A Neurobiological Schema Model for Contextual Awareness in Robotics.” Paper presented at the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, July 19–24, 2020.
- Johnson Steven, Rae Irene, Mutlu Bilge, and Takayama Leila. 2015. “Can You See Me Now? How Field of View Affects Collaboration in Robotic Telepresence.” Paper presented at the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM, New York, February 2015.
- Kashyap Hirak J., Detorakis Georgios, Dutt Nikil, Krichmar Jeffrey L., and Neftci Emre. 2018. “A Recurrent Neural Network Based Model of Predictive Smooth Pursuit Eye Movement in Primates.” Paper presented at the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, July 8–13, 2018.
- Kohlbrecher Stefan, Meyer Johannes, Graber Thorsten, Petersen Karen, Klingauf Uwe, and von Stryk Oskar. 2013. “Hector Open Source Modules for Autonomous Mapping and Navigation with Rescue Robots.” In Robot Soccer World Cup.
- Kristoffersson Annica, Coradeschi Silvia, and Loutfi Amy. 2013. “A Review of Mobile Robotic Telepresence.” Advances in Human-Computer Interaction 2013: 1–18. 10.1155/2013/902316.
- Lee Min Kyung, and Takayama Leila. 2011. “‘Now, I Have a Body’: Uses and Social Norms for Mobile Remote Presence in the Workplace.” Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, Canada, May 7–12, 2011.
- Liu Shaoshan, Liu Liangkai, Tang Jie, Yu Bo, Wang Yifan, and Shi Weisong. 2019. “Edge Computing for Autonomous Driving: Opportunities and Challenges.” Proceedings of the IEEE 107 (8): 1697–1716. 10.1109/JPROC.2019.2915983.
- Murphy David. 2016. “TeamViewer Introduces New Security Measures to Thwart Hacks.” PC Magazine, June 4, 2016. https://www.pcmag.com/news/teamviewer-introduces-new-security-measures-to-thwart-hacks.
- Neustaedter Carman, Venolia Gina, Procyk Jason, and Hawkins Daniel. 2016. “To Beam or Not to Beam: A Study of Remote Telepresence Attendance at an Academic Conference.” Paper presented at the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, February 27–March 2, 2016.
- Paepcke Andreas, Soto Bianca, Takayama Leila, Koenig Frank, and Gassend Blaise. 2011. “Yelling in the Hall: Using Sidetone to Address a Problem with Mobile Remote Presence Systems.” Paper presented at the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, October 16–19, 2011.
- Quigley Morgan, Conley Ken, Gerkey Brian, Faust Josh, Foote Tully, Leibs Jeremy, Wheeler Rob, and Ng Andrew Y. 2009. “ROS: An Open-Source Robot Operating System.” In ICRA Workshop on Open Source Software, vol. 3. http://robotics.stanford.edu/~ang/papers/icraoss09-ROS.pdf.
- Rae Irene, and Neustaedter Carman. 2017. “Robotic Telepresence at Scale.” Paper presented at the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, May 6–11, 2017.
- Rae Irene, Mutlu Bilge, and Takayama Leila. 2014. “Bodies in Motion: Mobility, Presence, and Task Awareness in Telepresence.” Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, Toronto, Canada, April 26–May 1, 2014.
- Riek Laurel D. 2007. “Realizing Hinokio: Candidate Requirements for Physical Avatar Systems.” Paper presented at the ACM/IEEE International Conference on Human-Robot Interaction, Arlington, VA, March 10–12, 2007.
- Ryan Richard M., and Deci Edward L. 2002. “Overview of Self-Determination Theory: An Organismic Dialectical Perspective.” In Handbook of Self-Determination Research, edited by Ryan Richard M. and Deci Edward L., 2:3–33. New York: University of Rochester Press.
- Sabelli Alessandra Maria, Kanda Takayuki, and Hagita Norihiro. 2011. “A Conversational Robot in an Elderly Care Center: An Ethnographic Study.” Paper presented at the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, March 6–9, 2011.
- Sexson Sandra B., and Madan-Swain Avi. 1993. “School Reentry for the Child with Chronic Illness.” Journal of Learning Disabilities 26 (2): 115–137. 10.1177/002221949302600204.
- Sirkin David, Venolia Gina, Tang John, Robertson George, Kim Taemie, Inkpen Kori, Sedlins Mara, Lee Bongshin, and Sinclair Mike. 2011. “Motion and Attention in a Kinetic Videoconferencing Proxy.” In Human-Computer Interaction—INTERACT 2011, vol. 6946, edited by Campos P., Graham N., Jorge J., Nunes N., Palanque P., and Winckler M. Berlin: Springer.
- Takayama Leila, and Harris Helen. 2013. “Presentation of (Telepresent) Self: On the Double-Edged Effects of Mirrors.” Paper presented at the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, March 3–6, 2013.
- Takayama Leila, and Go Janet. 2012. “Mixing Metaphors in Mobile Remote Presence.” Paper presented at the ACM 2012 Conference on Computer Supported Cooperative Work, Seattle, WA, February 11–15, 2012.
- Toyota. 2015. “Toyota Shifts Home Helper Robot R&D into High Gear with New Developer Community and Upgraded Prototype.” July 16, 2015. https://global.toyota/en/detail/8709541.
- Tsai Tzung-Cheng, Hsu Yeh-Liang, Ma An-I, King Trevor, and Wu Chang-Huei. 2007. “Developing a Telepresence Robot for Interpersonal Communication with the Elderly in a Home Environment.” Telemedicine and e-Health 13 (4): 407–424. 10.1089/tmj.2006.0068.
- Tsui Kate, Norton Adam, Brooks David, Yanco Holly, and Kontak Daniel. 2011. “Designing Telepresence Robot Systems for Use by People with Special Needs.” Paper presented at the International Symposium on Quality of Life Technologies: Intelligent Systems for Better Living, Toronto, Canada, June 6–7, 2011.
- Tsui Katherine M., and Yanco Holly A. 2007. “Assistive, Rehabilitation, and Surgical Robots from the Perspective of Medical and Healthcare Professionals.” Paper presented at the AAAI 2007 Workshop on Human Implications of Human-Robot Interaction. Gold Coast, AU: Springer. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=5a973573fb322e3c79f6163d91c423d4ba77ad16.
- Tsui Katherine M., Desai Munjal, Yanco Holly A., and Uhlik Chris. 2011. “Exploring Use Cases for Telepresence Robots.” Paper presented at the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, March 6–9, 2011.
- US Census Bureau. 2020. “QuickFacts on U.S. Population.” https://www.census.gov/quickfacts/fact/table/US/PST045216.
- Venolia Gina, Tang John, Cervantes Ruy, Bly Sara, Robertson George, Lee Bongshin, and Inkpen Kori. 2010. “Embodied Social Proxy: Mediating Interpersonal Connection in Hub-and-Satellite Teams.” Proceedings of the CHI Conference on Human Factors in Computing Systems. 10.1145/1753326.1753482.
- Weitzman Michael. 1986. “School Absence Rates as Outcome Measures in Studies of Children with Chronic Illness.” Journal of Chronic Diseases 39 (10): 799–808. 10.1016/0021-9681(86)90082-2.
- Yamamoto Takashi, Nishino Tamaki, Kajima Hideki, Ohta Mitsunori, and Ikeda Koichi. 2018. “Human Support Robot (HSR).” Paper presented at the ACM SIGGRAPH 2018 Emerging Technologies, Vancouver, Canada, August 12–16, 2018: 1–2.
