Abstract
Autonomous robotic assisted surgery (RAS) systems aim to reduce human errors and improve patient outcomes by leveraging robotic accuracy and repeatability during surgical procedures. However, full automation of RAS in complex surgical environments is not yet feasible, and collaboration with the surgeon is required for safe and effective use. In this work, we use our Smart Tissue Autonomous Robot (STAR) to develop and evaluate a shared control strategy for collaboration between the robot and a human operator in surgical scenarios. We consider 2D pattern cutting tasks with partial blood occlusion of the cutting pattern, performed with a robotic electrocautery tool. For this surgical task and RAS system, we i) develop a confidence-based shared control strategy, ii) assess the pattern tracking performance of manual and autonomous control and identify confidence models for the human and the robot as well as a confidence-based control allocation function, and iii) experimentally evaluate the accuracy of our proposed shared control strategy. In our experiments on porcine fat samples, by combining the best elements of the autonomous robot controller with the complementary skills of a human operator, our proposed control strategy improved the cutting accuracy by 6.4% while requiring the operator to control the robot for only 44% of the task time, compared to pure manual control.
I. INTRODUCTION
Advances in robotic and camera technology have led to dramatic improvements in medical robots. Robotic assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgery (MIS) approach, improving patient outcomes by reducing collateral damage and patient recovery times [1]. State-of-the-art systems for robotic assisted surgery are based on a tele-operated paradigm. The commercially successful da Vinci Surgical System (Intuitive Surgical, Sunnyvale, California) [2] is used for a wide range of surgical procedures in urology, gynecology, cardiothoracic, and general surgery. Another tele-operated system is the Raven surgical robot developed at the University of Washington [3]. Autonomous control algorithms for RAS aim to leverage robotic accuracy and repeatability for surgical procedures. Autonomous RAS has the potential to reduce human errors, deliver improved patient outcomes independent of the surgeon's training and experience, and enable remote surgeries without high-bandwidth network connections [4]. Autonomous RAS with pre-planned functionality was introduced in bony orthopedic procedures (e.g. ROBODOC, Caspar, and CRIGOS), radiotherapy, and cochlear implants [5], [6]. Efforts in automating deformable and unstructured soft tissue surgeries include knot tying, needle insertion, and executing predefined motions [7]–[9].
However, full automation in complex surgical environments is not yet infallible, and safe operation requires surgeon supervision and the ability for the surgeon to take over control. Our goal is to perform complex surgical procedures collaboratively between robot and surgeon with the highest possible degree of autonomy, while ensuring safe operation at all times. A critical element in realizing an effective collaboration between the autonomous robot controller and the human operator is therefore designing algorithms that make the autonomous system "self-aware" of the limitations of its automation capabilities. Such algorithms aim to maximize the level of automation of a RAS system interacting with complex environments, while minimizing expected errors by assigning to the robot only those variables it is confident it can control more accurately than its human collaborator.
Shared control strategies have been implemented outside of medicine in controlled environments. The shared control strategy takes the following general form
U(t) = α(t)M(t) + (1 − α(t))A(t)    (1)
where manual control commands from a human operator (i.e. M(t)) are combined with autonomous control commands (i.e. A(t)) with complementary scales (i.e. α(t) ∈ [0, 1] and 1 − α(t), respectively) to form the total control input to the robot (i.e. U(t)). Examples of the control commands include position, velocity, and acceleration profiles, and force/torque.
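As a minimal illustration of Eqn. (1), the sketch below blends a manual and an autonomous command for a given α; the three-axis Command type and the velocity-command interpretation are assumptions for illustration only, not part of the STAR implementation.

```cpp
#include <array>
#include <cstddef>

// Hypothetical 3-axis Cartesian command (illustrative only).
using Command = std::array<double, 3>;

// Blend manual and autonomous commands per Eqn. (1):
// U(t) = alpha(t) * M(t) + (1 - alpha(t)) * A(t), with alpha in [0, 1].
Command blend(const Command& manual, const Command& autonomous, double alpha) {
    Command u{};
    for (std::size_t i = 0; i < u.size(); ++i)
        u[i] = alpha * manual[i] + (1.0 - alpha) * autonomous[i];
    return u;
}
```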
Various methods have been proposed for defining the function α(t), as shown in Fig. 1. In these methods, α(t) is determined on the fly, as the robotic control task is performed, based on an independent variable x chosen to fulfill certain performance criteria. For example, α(t) can dynamically change as a function of tracking accuracy [10], proximity to obstacles and/or desired locations [11], the accuracy of predicting human intentions in controlling the robot [12], the level of manipulation precision [13], and the human's trust in the robot's autonomous controller [14], similar to the functions shown in Fig. 1.a-d. It has also been shown that a fixed level of α(t) (Fig. 1.e) helps people with disabilities pick and place objects more easily via an assistive robotic arm [15]. Fig. 1.f shows an example of a human-like autonomy allocation in which the operator chooses manual control when the difference between his/her self-confidence and his/her trust in the autonomous system exceeds a high threshold, and vice versa [16]. A suitable candidate for x is the confidence level in the manual and autonomous controls and their dynamic uncertainties, which is consistent with human-like decision-making [16] and hence reduces operator confusion and resistance [17]. However, the choice/design of α requires identification experiments for both manual and autonomous controls to reveal their strengths and weaknesses.
Fig. 1: Examples of the control allocation function α.
In this work, we develop and evaluate a "self-aware" confidence-based shared control strategy for our Smart Tissue Autonomous Robot (STAR) [18] that combines the best elements of the autonomous robot controller with the complementary skills of a human operator. More specifically, we develop a shared control strategy with automated control allocation between the autonomous and manual controls. We consider 2D pattern cutting surgical tasks, including the effects of partial blood occlusion of the cutting pattern as a common disturbance during surgery. We use an electrocautery system in STAR to perform consistent and accurate surgical incisions [19]. For the given RAS system and surgical task, our contributions are the following: i) we develop a confidence-based shared control strategy, ii) we assess the performance of each control resource and identify the confidence models for the human (i.e. manual control) and the robot (i.e. autonomous control) as well as the confidence-based allocation function α(t), and iii) we experimentally evaluate the accuracy of our proposed shared control strategy via multiple tracking tests as well as cutting tests on porcine fat samples, and compare it to single-mode control strategies. Our proposed shared control strategy is depicted in Fig. 2. To our knowledge, this study represents the first successful demonstration of confidence-based shared control in a complex surgical task such as performing electrocautery on tissue samples with blood occlusions.
Fig. 2: Block diagram of our proposed confidence-based shared control strategy for STAR.
II. METHODS
A. Testbed
The testbed for implementing our confidence-based shared control strategy is shown in Fig. 3. A 7-DOF lightweight KUKA arm (KUKA LWR 7+) is used as the surgical robot. Since we consider a pattern cutting task in this paper, we use the electrocautery prototype developed in [19] to perform the cuts on porcine fat samples. We built this prototype by modifying a commercially available 2-DOF laparoscopic grasper, the Radius T (Tuebingen Scientific GmbH, Tübingen, Germany). The quick-release interface of the tool is electrically isolated from the shaft, while two conductors within the center of the tool are electrically coupled to an electro-surgical generator (ESG). To use the electrocautery tool, a needle electrode is inserted into the quick-release interface and a cutting waveform is selected on the ESG. When the operator activates a foot pedal, the cutting waveform passes from the needle electrode to the target tissue and returns through a grounding pad underneath the sample. The electrical signal vaporizes tissue in contact with the electrode, cutting the sample. As shown in Fig. 3, a Point Grey Chameleon RGB camera provides visual feedback to the operator. This camera is also used by the autonomous controller to detect the desired incision trajectory via the OpenCV computer vision library [20]. A Sensable Phantom Omni haptic device allows the operator to control the robot manually. The control system, including the planning algorithms, robot controllers, computer vision, and control allocation strategies, is integrated via the Robot Operating System (ROS) [21].
Fig. 3: Experimental testbed.
B. Control System
In this section, we describe how the autonomous and manual control inputs to the robot (i.e. A(t) and M(t) in Fig. 2 and Eqn. (1)) are implemented during the experiments. The block diagram of the autonomous controller is shown in Fig. 4. In this control loop, real-time video frames from the RGB camera are processed to detect the reference trajectory to cut (e.g. a circular pattern). Edge and contour detection algorithms in OpenCV are used to detect the reference cutting trajectory in each test. The reference trajectory is then converted from the image frame to the Cartesian robot frame using a homography transformation. The resulting reference and the real-time position of the robot are used by the trajectory generator and planner to produce multiple equidistant waypoints, starting from the point on the trajectory closest to the robot. Smooth time-based trajectories are produced between the waypoints using the Reflexxes Motion Libraries [22]. The Kinematics and Dynamics Library (KDL) in Open Robot Control Systems (OROCOS) [23] is used to transform the task-space trajectories of the robot to joint-space trajectories, which are the final output of the high-level autonomous control (i.e. A(t)). Finally, the IIWA stack developed in [24] applies the low-level controllers of the robot to follow the desired joint-space trajectories.
Fig. 4: The autonomous control loop.
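The following sketch illustrates the vision front-end of this loop under simplifying assumptions: Canny edge detection, a largest-contour heuristic, and a precomputed homography H stand in for the actual STAR detection pipeline, whose exact parameters are not specified here.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect the reference cutting contour in a camera frame and map it to
// the robot plane with a homography (a sketch, not the STAR pipeline).
std::vector<cv::Point2f> detectReferenceTrajectory(const cv::Mat& frame,
                                                   const cv::Mat& H) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);  // edge detection (assumed thresholds)

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
    if (contours.empty()) return {};

    // Keep the largest contour as the candidate cutting pattern.
    std::size_t best = 0;
    for (std::size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best]))
            best = i;

    // Map the image-frame contour to the robot plane via the homography
    // H (e.g. estimated offline with cv::findHomography).
    std::vector<cv::Point2f> img(contours[best].begin(), contours[best].end());
    std::vector<cv::Point2f> robot;
    cv::perspectiveTransform(img, robot, H);
    return robot;
}
```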
The block diagram of the manual control is shown in Fig. 5. In this control loop, the human operator is responsible for using the visual feedback from the camera to detect the desired trajectory, plan, and follow it using the Phantom Omni haptic device. Because the workspace of the haptic device is limited, the position feedback from the robot is used together with the position commands of the device to determine the reference positions of the robot in the task space. This is done simply by identifying the initial position of the robot when the test starts and adding the displacements read from the haptic device to produce the reference position of the robot in the Cartesian task space. Similar to the autonomous control, inverse kinematics determines the joint-space command (i.e. M(t)) and the IIWA stack guarantees its accurate tracking.
Fig. 5: The manual control loop.
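A minimal sketch of this incremental mapping, assuming three-axis Cartesian positions and an optional workspace scale factor (both illustrative):

```cpp
#include <array>
#include <cstddef>

using Vec3 = std::array<double, 3>;

// Robot reference = robot pose at test start + (scaled) displacement of
// the haptic stylus from its own start pose. The scale is an assumption.
struct HapticMapper {
    Vec3 robotStart;     // robot Cartesian position when the test starts
    Vec3 hapticStart;    // haptic stylus position at the same instant
    double scale = 1.0;  // workspace scaling (assumed)

    Vec3 reference(const Vec3& hapticNow) const {
        Vec3 ref{};
        for (std::size_t i = 0; i < 3; ++i)
            ref[i] = robotStart[i] + scale * (hapticNow[i] - hapticStart[i]);
        return ref;  // task-space command, then passed to inverse kinematics
    }
};
```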
For the shared control mode described in this paper, the haptic device allows the operator to switch between autonomous and manual control via the push buttons on the stylus, as shown in Fig. 6. We developed a graphical user interface (GUI) to inform and prepare the user for the suggested control mode and mode switches (determined via the α(t) function) by highlighting the parts of the trajectory that should be done in manual mode. The rest of the trajectory should be completed autonomously.
Fig. 6: GUI and mode switches.
C. Surgical Task
We chose the second task (i.e. pattern cutting) from the Fundamentals of Laparoscopic Surgery (FLS) as a benchmark in this work. FLS is the standard training method in laparoscopic surgery and includes five subtasks of increasing difficulty [25]. In the pattern cutting task, an operator is asked to cut a 5 cm diameter circular pattern. Surgical proficiency is defined as two consecutive samples in which all cuts are within 2 mm of the circular pattern and the total cutting time is less than 98 seconds. To simulate typical occlusions observed in surgical environments, we prepared multiple test samples under different random pseudo-blood covered conditions (such as the samples shown in Fig. 7). Pseudo-blood is used for consistency, ease of storage, and contamination concerns, as the optical properties of real blood vary widely with physiological conditions such as osmolarity and haematocrit [26]. In practice, pseudo-blood effectively obscures cutting trajectories in RGB images and is similar in appearance and consistency to real blood.
Fig. 7: Different test samples: a) no blood occlusion, and disturbance by blood from b) small, c) medium, d) large, and e-f) multiple blood stains.
D. Identification of Human and Robot Performances
The first step towards designing an effective shared control strategy is to identify the strengths and weaknesses of each control resource (i.e. the manual control performed by human and autonomous control performed by robot). The factors affecting human performance include the camera angle and the dissimilarities between the kinematics of the control device and the robot. The robot performance will be affected by the random failures in detecting the desired cutting trajectory as well as slight imprecision in the calculation of the robot tool location via the robot kinematic chain. Next, we explain the data collection and processing steps for the identification process.
1) Data Collection:
A laser pointer, shown in Fig. 8.a, is initially used to identify the tracking accuracy of the autonomous and manual controls in the identification tests. Using the laser pointer, the desired cutting trajectory is followed under manual and autonomous control (in manual control, the operator uses the laser spot as a visual cue to guide the robot along the desired trajectory). The laser pointer is used in our initial experiments because it allows fast collection of multiple data sets from different test conditions, as it does not require any time-consuming tissue preparation; this is especially helpful because a reliable identification process requires multiple data sets. Moreover, we can quickly replicate the paper test samples used with the laser pointer and keep the test conditions identical across test modes, which is hard to achieve when working with soft tissue samples. Finally, it is also easier to initially evaluate the tracking behavior of different control modes, since doing so requires only a simple post-processing algorithm, as explained in Section II-D.2. During the experiments, the robot motion was constrained to a plane parallel to the X-Y plane of the samples at a fixed height (61.43 cm) and orientation to minimize laser-pointing inaccuracies. As detailed in Section III-D, we examine the hypotheses and results obtained from the laser-pointer tests against a set of tests using real tissues and our electrocautery tool.
Fig. 8: Steps of post-processing the tracking data using the laser pointer mounted on the KUKA LWR shining on the circular pattern: a) obtaining a video frame and mapping the 10 cm by 10 cm test area to a 500 by 500 pixel frame using corner detection and perspective transformation, b) tracking the green laser blob compared to the reference circle, and c) detecting the blood stain size and locations.
2) Post-processing:
The tracking performance is recorded during each experiment via the Point Grey camera, and the video is post-processed to assess the tracking accuracy relative to the desired trajectory. A C++ program was written using OpenCV to detect the laser pointer and the location and size of blood blobs during each experiment, as shown in Fig. 8. We first use perspective transformations to obtain a top view of the desired trajectory (Fig. 8.a); each pixel represents 0.2 mm on the trajectory plane in our experiments. Using the new image frame, we track the location of the laser pointer during the experiment using color thresholding and blob detection (Fig. 8.b). A similar method is used for finding the location of blood stains (Fig. 8.c). The position of the laser pointer is compared to the desired track in the video frames and the tracking error is recorded.
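A sketch of this post-processing step is shown below; the test-area corner points are assumed to be given, and the HSV bounds for the green laser blob are illustrative values, not those of our C++ implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Warp each frame to a top view of the 10 cm x 10 cm test area
// (500 x 500 px, i.e. 0.2 mm/px) and find the green laser spot.
cv::Point2f trackLaser(const cv::Mat& frame,
                       const std::vector<cv::Point2f>& corners) {
    // Map the detected test-area corners to a 500 x 500 px frame.
    std::vector<cv::Point2f> dst = {
        {0.f, 0.f}, {500.f, 0.f}, {500.f, 500.f}, {0.f, 500.f}};
    cv::Mat H = cv::getPerspectiveTransform(corners, dst);
    cv::Mat top;
    cv::warpPerspective(frame, top, H, cv::Size(500, 500));

    // Threshold the green laser blob in HSV space (assumed bounds).
    cv::Mat hsv, mask;
    cv::cvtColor(top, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(45, 80, 80), cv::Scalar(85, 255, 255), mask);

    // Use the blob centroid as the laser position; 1 px = 0.2 mm.
    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 <= 0) return {-1.f, -1.f};  // laser not found in this frame
    return {static_cast<float>(m.m10 / m.m00),
            static_cast<float>(m.m01 / m.m00)};
}
```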
III. EXPERIMENTS AND RESULTS
A. Preliminary Manual and Autonomous Control Tests
Using the testbed, data collection, and processing techniques explained in Section II, we conducted 22 experiments with the laser pointer following circular patterns and assessed the tracking error under different control modes, test conditions, and sample orientations. This included four experiments for each of the samples a-e in Fig. 7 with different-size blood stains, and two for sample f (since it is symmetric). A graduate student from the University of Maryland was trained as the operator for our tests. The results, shown in Fig. 9, indicate that in the ideal case, i.e. with no blood occlusion of the track, the autonomous controller outperforms the manual control (0.49 ± 0.33 mm vs. 0.83 ± 0.56 mm). However, as the complexity (i.e. the size and number of blood stains) increases, the error of the autonomous controller grows, while the manual control accuracy remains in the same range throughout the experiments.
Fig. 9: Tracking error for the pattern following task with manual and autonomous controls under different test conditions.
Representative examples of two test cases, with no blood stains and with multiple blood stains, are shown in Figs. 10 and 11. As can be seen in Fig. 10, when the trajectory is not blocked by blood, the autonomous controller accurately detects and follows the cutting pattern. On the other hand, as shown in Fig. 11, blood stains on the desired pattern locally decrease the tracking accuracy near the blood-covered regions. This happens because the desired trajectory is not detected correctly by the boundary detection algorithms and no prior information about the shape of the trajectory is utilized. From these results, we conclude that the local performance of the autonomous controller on the non-occluded parts of the track is superior to the manual control accuracy, and vice versa on the occluded parts. Therefore, a careful design of a shared control strategy can take advantage of the local strengths of both controllers and result in a more accurate control system. We achieve this by identifying confidence models for the autonomous and manual controls in the vicinity of blood stains. These models provide insight on how and when to switch control modes to improve the overall task performance.
Fig. 10: Example trajectories of autonomous and manual control under ideal test conditions (no blood stains).
Fig. 11: Example trajectories of autonomous and manual control under complex test conditions (multiple blood stains).
B. Identification of Confidence Models and Autonomy Allocation Function
In order to perform a uniform analysis of the performance of the manual and autonomous controllers near the blood stains, regardless of blood location and size, we normalize the raw experimental data. We define the normalized track coordinate d̄ as a measure for identifying the intersection of the desired track with a blood stain, as shown in Fig. 12. When we enter the blood stain from a clean part of the track we have d̄ = −1, in the middle of the stain d̄ = 0, and when we leave the blood d̄ = 1. Using this definition, the intersection of the track with blood stains of different sizes is normalized based on their sizes (larger/smaller blobs cause larger/smaller occlusions on the track). We used blob detection algorithms in OpenCV to find the location and size of blood stains on the track and normalized their intersections. We also identified the tracking error along d̄ for each experiment and normalized it based on the blob sizes.
Fig. 12: Example of the definition of start, middle, and end points of blood occlusion on the track.
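Assuming d̄ is computed by rescaling arc length along the track per stain (consistent with the definition above), a minimal sketch is:

```cpp
// Normalized proximity measure d̄: arc length along the track, shifted
// and scaled per blood stain so that d̄ = -1 at stain entry, 0 at its
// middle, and +1 at its exit. Variable names are illustrative.
double normalizedProximity(double s,        // arc length of current point
                           double sEntry,   // arc length at stain entry
                           double sExit) {  // arc length at stain exit
    const double sMid = 0.5 * (sEntry + sExit);
    const double halfWidth = 0.5 * (sExit - sEntry);  // half the occluded span
    return (s - sMid) / halfWidth;  // < -1 before the stain, > 1 after it
}
```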
Using the data collected from the experiments and the above-mentioned normalization, we analyzed the local performance of the autonomous and manual controllers based on proximity to the blood stains. The results are shown in Fig. 13. In this figure, the normalized tracking errors are first plotted versus d̄. Then, using the Curve Fitting Toolbox in MATLAB [27], different curves are fitted to the collected data. As can be seen in Figs. 13.a-b, a linear function and a skewed Gaussian function describe the local behavior of the manual control and the autonomous control near the blood blobs, respectively. The fitted function for the manual control is

y_M(d̄) = a_M d̄ + b_M,

with a_M = 0.002 and b_M = 0.061, and for the autonomous control it is a skewed Gaussian y_A(d̄) with amplitude a_A = 0.206, width b_A = 0.213, and skewness c_A = 1.257. As can be seen in Fig. 14.c, these fitted functions suggest that the manual control is more effective in the neighborhood of blood stains, while the autonomous controller becomes more effective as we leave this region.¹ Therefore, using the fitted curves in Figs. 13.a-c, we define the confidence models for the human (i.e. manual mode) and the robot (i.e. autonomous mode) respectively as C_M = 1 − y_M and C_A = 1 − y_A.
Fig. 13: Fitted curves for normalized errors vs. the proximity to blood d̄.
Fig. 14: Confidence models.
Once the confidence models for the human and robot in the pattern cutting task are identified, an autonomy allocation strategy needs to be determined based on these functions such that the pattern cutting accuracy is maximized. We use the confidence models to locally select the most reliable control resource as the control task is performed. The confidence models and the autonomy allocation α identified in the experiments are shown in Fig. 14. Since the confidence in manual control C_M is more or less constant, we choose α, as well as the decision thresholds for switching to manual or autonomous control, as a function of the confidence in autonomous control C_A. We design the switches such that the mode in which we have more confidence is chosen. According to the α function and the C_A model shown in Fig. 14, as we approach the blood stain (i.e. as d̄ approaches 0 from negative values), the confidence in autonomous control gradually decreases from 1 until it reaches a low threshold τ_l (here τ_l = 0.93). Below this threshold, we are locally more confident in the manual control than in the autonomous control. As we approach the middle of the blood stain on the track (i.e. d̄ = 0), the confidence in autonomous control reaches a minimum level (here 0.79) and then starts increasing until it reaches a higher decision threshold τ_u, after which the autonomous control is again more reliable than the manual control (here τ_u = 0.94). Therefore, at τ_u, switching back to autonomous control (i.e. α = 0) results in better tracking performance. Next, we test this hypothesis with a set of preliminary experiments with the laser pointer as well as a set of cutting tests on real porcine fat samples with our robotic electrocautery tool.
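A minimal sketch of this switching logic, treating the two thresholds as a hysteresis band on C_A and applying α as a binary allocation (struct and member names are illustrative):

```cpp
// Switch to manual control (alpha = 1) when confidence in the
// autonomous mode drops below tau_l, and back to autonomous (alpha = 0)
// once it rises above tau_u > tau_l. The gap between the two thresholds
// avoids chattering between modes near a single crossing point.
struct AllocationPolicy {
    double tauLow = 0.93;   // switch to manual below this C_A
    double tauHigh = 0.94;  // switch back to autonomous above this C_A
    bool manual = false;    // current mode (alpha = manual ? 1 : 0)

    double alpha(double cAutonomous) {
        if (!manual && cAutonomous < tauLow) manual = true;
        else if (manual && cAutonomous > tauHigh) manual = false;
        return manual ? 1.0 : 0.0;
    }
};
```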
C. Shared Control Tests with Laser Pointer
Using the confidence models and the control allocation strategy identified in Section III-B, as well as the GUI in Fig. 6, we performed 18 tests with the laser pointer using the blood-stained samples in Fig. 7 (i.e. similar to the tests in Section III-A, but excluding the clean track, which does not require shared control). In order to implement the shared control strategy efficiently, when the operator is not controlling the robot (i.e. in autonomous mode), a tracking controller smoothly moves the haptic device to track the current position of the robot. This reduces the operator's confusion regarding the current location of the robot and the direction of motion when returning to control of the robot. The results are summarized in Fig. 15. As can be seen in Fig. 15.a, the average tracking error of the shared control is generally lower than that of the single-mode manual and autonomous controls. However, since the shape of the large blood stain in Fig. 7.d is more irregular than the other samples, the switching policy does not accurately suggest the switching thresholds, which depend on estimating the blood location and its intersections with the track. This results in a larger error for the shared control on the large-blood-stain sample compared to pure manual control. The overall average tracking error for the laser tests on blood-stained samples with the three control strategies is shown in Fig. 15.b. These results indicate that pure autonomous control is the worst choice when a blood stain is present on the track (error of 1.272 ± 1.439 mm). Shared control demonstrated a 7.4% improvement in tracking accuracy compared to pure manual control (0.731 ± 0.559 mm for S vs. 0.789 ± 0.566 mm for M). This improved performance is obtained while the operator needs to control the robot only 21.85% of the time, compared to 100% for manual control (Fig. 15.c).
Fig. 15: Comparison of the tests with the laser pointer between pure manual (M), pure autonomous (A), and shared (S) control strategies.
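A minimal sketch of the haptic-follow behavior described above, assuming a simple proportional servo on the stylus position with an illustrative gain:

```cpp
#include <array>
#include <cstddef>

using Vec3 = std::array<double, 3>;

// While in autonomous mode, servo the stylus toward the pose
// corresponding to the robot's current position, so the operator starts
// from a consistent pose when taking over. Gain and units are assumed.
Vec3 hapticFollowForce(const Vec3& stylusPos, const Vec3& robotPosInHaptic,
                       double kp = 0.5 /* assumed proportional gain */) {
    Vec3 f{};
    for (std::size_t i = 0; i < 3; ++i)
        f[i] = kp * (robotPosInHaptic[i] - stylusPos[i]);
    return f;  // commanded to the haptic device's motors each cycle
}
```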
D. Electrocautery on Porcine Fat Samples
The results of performing cuts on the porcine fat samples using our robotic electrocautery system are shown in Fig. 16. For these tests, similar samples were made using porcine fat layers, a circular cutting pattern, and two blood stains (large and medium sizes). Using these samples, five experiments were conducted: one pure autonomous control test (i.e. A), two pure manual control tests (i.e. M-1 and M-2), and two shared control tests (i.e. S-1 and S-2). In order to analyze the accuracy of the cuts in the different test conditions, the performance of the robot was recorded during each experiment. The first frame (before the start of the cut) and the last frame (after finishing the cut) of each video were extracted. The initial desired trajectory was manually marked on the first frame and labeled the "desired cut". Similarly, using the video, the cut trajectory was marked manually on the last frame and labeled the "performed cut". The resulting marked frames were then used in a C++ program that identifies the desired and performed cuts in the two frames using contour detection algorithms in OpenCV. Then, for all pixels of the performed-cut contour, errors relative to the desired-cut contour are calculated; a sketch of this computation is given below. In our analysis, a 76 mm by 76 mm area of the test sample holder frame is mapped to a 500 by 500 pixel image so that a uniform measurement system is used for all the tests. Using this mapping, the final error of the performed cut is calculated in millimeters. The results of this analysis are summarized in Table I. As can be seen in this table and Fig. 16, the pure autonomous controller results in the worst tracking error (on average 3.212 mm). This is compatible with our results using the laser pointer under the complex multiple-blood-stain test conditions. The confidence-based shared control strategy proposed in this paper (i.e. S-1 and S-2) resulted in lower and more consistent tracking errors than the pure manual tests (i.e. M-1 and M-2), as shown by the average and standard deviation values. Among the four tests with manual and shared controls, the average cutting error for the combined S-1 and S-2 tests was 6.4% lower than for the combined M-1 and M-2 tests (i.e. 0.991 mm for shared vs. 1.057 mm for manual). These improvements are also consistent with our predictions from the previous tests with the laser pointer. During S-1 and S-2, the operator controlled the robot 44.01% of the time. Our previous research on shared control strategies via a thorough human subject study [14] indicates that the combined effect of performance improvement and operator workload reduction leads to operator satisfaction with the control system.
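A brute-force sketch of this per-pixel error computation, under the stated 76 mm to 500 px mapping (function and variable names are illustrative):

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// For every pixel of the performed-cut contour, take the distance to
// the nearest pixel of the desired-cut contour, then average and
// convert to millimeters (76 mm mapped to 500 px, i.e. 0.152 mm/px).
double meanCutErrorMm(const std::vector<cv::Point>& performedCut,
                      const std::vector<cv::Point>& desiredCut) {
    const double mmPerPixel = 76.0 / 500.0;
    if (performedCut.empty() || desiredCut.empty()) return -1.0;
    double sum = 0.0;
    for (const auto& p : performedCut) {
        double best = std::numeric_limits<double>::max();
        for (const auto& q : desiredCut) {
            const double dx = p.x - q.x, dy = p.y - q.y;
            best = std::min(best, std::hypot(dx, dy));  // nearest neighbor
        }
        sum += best;
    }
    return mmPerPixel * sum / static_cast<double>(performedCut.size());
}
```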
Fig. 16: Results of electrocautery on porcine fat samples. Top row shows the desired cuts and bottom row shows the performed cuts.
TABLE I:
Tracking errors in the sample cuts
| Error (mm) | A | M-1 | M-2 | S-1 | S-2 |
| avg | 3.212 | 0.881 | 1.234 | 0.866 | 1.116 |
| std | 2.141 | 0.662 | 1.114 | 0.565 | 0.899 |
IV. DISCUSSION
Although the overall outcome of shared control is better than the other control strategies, sudden switches from autonomous to manual control can sometimes lead to increased transient error due to the operator's lack of situational awareness when returning to the control loop. In such situations, the operator suddenly applies a large command that is not consistent with the preceding autonomous control commands, causing an initial impulsive motion until the operator regains effective control. This case did not occur frequently in our tests, but if not addressed properly, it remains a potential source of reduced performance under shared control. Examples of successful and unsuccessful control mode switches in the tests with the laser pointer are shown in Fig. 17, labeled Test-a and Test-b, respectively. As can be seen in Test-a, the operator handles the switching between control modes effectively near the blood stains when the confidence-based strategy suggests switching from α = 0 to α = 1, reducing the average tracking error to 0.6 mm compared to pure manual or autonomous control. However, in Test-b, at the switching moment near the blood stain, the operator mishandles the switch, which results in a high transient error and eventually leads to a larger average error (0.77 mm) compared to pure manual control (0.67 mm). Therefore, identifying the effects of switching on the tracking accuracy and their corresponding costs provides more insight and potential for further improving the performance of the proposed confidence-based shared control strategy.
Fig. 17: Examples of successful and unsuccessful shared control switching and their pure manual and autonomous control counterparts. Test-a (successful) is with multiple blood stains and Test-b (unsuccessful) is with a small stain.
Finally, in this study a static scenario was considered by factoring out dynamic changes in the environment, e.g. growth of blood stains and changes in the visibility of the desired cutting pattern as the cuts proceed. In our experiments, we set a fixed depth of cut and used a high power setting on the electrocautery tool to perform the incision in one pass. In clinical electrocautery scenarios, however, completing the cut requires multiple passes of the electrocautery tool while gradually increasing the depth of cut. In this case, as we continue cutting the trajectory at different depths, the originally marked desired trajectory deforms and might no longer be identifiable by the robot; real-time recalculation of the trajectory is then needed, and its absence results in a dynamic growth of the autonomous controller's error. Such dynamics will affect the confidence models as the surgical procedure progresses. Hence, accounting for the dynamic deterioration of the performance of the autonomous or manual controllers will add a new dimension to the proposed confidence-based shared control strategies.
V. CONCLUSION
In this work, we developed a confidence-based shared control strategy for our smart tissue autonomous robot. We demonstrated how the confidence models for manual and autonomous controls can be obtained for a 2D surgical cutting task. We also identified a confidence-based control allocation function for the shared control system and tested our strategy in multiple tracking and cutting experiments. The results indicate the potential to improve the overall task performance of robotic assisted surgery systems via an effective shared control strategy, while reducing the work time of the operator compared to pure manual control. Our future work will include minimizing the effects of control switches on the performance of the shared control system, as well as including dynamic changes of the surgical task environment in the confidence models.
ACKNOWLEDGEMENTS
The authors would like to thank the University of Maryland Robotics Realization Laboratory (RRL) and Dr. Sarah Bergbreiter for the use of lab space and the KUKA IIWA robot. We would also like to thank Mr. Ivan Penskiy and Mr. Rishabh Agarwal for their help during the use of the RRL.
*Research reported in this paper was supported by National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award numbers 1R01EB020610 and R21EB024707. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
¹Note that y_A only represents a model describing the local behavior of the autonomous controller. Since, in general, there exists an innate constant tracking error due to factors such as sensing noise and actuation inaccuracies, a global error model will have a static offset, e.g. y′_A = y_A + y_offset. The skewness of the function may be related to the fact that the trajectory was always followed in a counter-clockwise direction.
REFERENCES
- [1]. Mercante G, Ruscito P, Pellini R, Cristalli G, and Spriano G, "Transoral robotic surgery (TORS) for tongue base tumours," Acta Otorhinolaryngologica Italica, vol. 33, no. 4, p. 230, 2013.
- [2]. Kang CM, Chi HS, Kim JY, Choi GH, Kim KS, Choi JS, Lee WJ, and Kim BR, "A case of robot-assisted excision of choledochal cyst, hepaticojejunostomy, and extracorporeal Roux-en-y anastomosis using the da Vinci surgical system," Surgical Laparoscopy, Endoscopy & Percutaneous Techniques, vol. 17, no. 6, pp. 538–541, December 2007.
- [3]. Lum MJH, Friedman DCW, Sankaranarayanan G, King H, Fodero K, Leuschke R, Hannaford B, Rosen J, and Sinanan MN, "The RAVEN: Design and Validation of a Telesurgery System," The Int. J. of Robotics Research, vol. 28, no. 9, pp. 1183–1197, September 2009.
- [4]. Moustris GP, Hiridis SC, Deliparaschos KM, and Konstantinidis KM, "Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature," The Int. J. of Medical Robotics and Computer Assisted Surgery, vol. 7, no. 4, pp. 375–392, 2011.
- [5]. Kazanzides P, Mittelstadt B, Musits B, Bargar W, Zuhars J, Williamson B, Cain P, and Carbone E, "An integrated system for cementless hip replacement," IEEE Engineering in Medicine and Biology Magazine, vol. 14, no. 3, pp. 307–313, May 1995.
- [6]. Hagag B, Abovitz R, Kang H, Schmitz B, and Conditt M, "RIO: Robotic-Arm Interactive Orthopedic System MAKOplasty: User Interactive Haptic Orthopedic Robotics," in Surgical Robotics. Springer US, 2011, pp. 219–246.
- [7]. Knoll A, Mayer H, Staub C, and Bauernschmitt R, "Selective automation and skill transfer in medical robotics: a demonstration on surgical knot-tying," The Int. J. of Medical Robotics and Computer Assisted Surgery (MRCAS), vol. 8, no. 4, pp. 384–397, December 2012.
- [8]. Iyer S, Looi T, and Drake J, "A single arm, single camera system for automated suturing," in 2013 IEEE Int. Conf. on Robotics and Automation (ICRA), May 2013, pp. 239–244.
- [9]. Kang H and Wen J, "Robotic assistants aid surgeons during minimally invasive procedures," IEEE Engineering in Medicine and Biology Magazine, vol. 20, no. 1, pp. 94–104, January 2001.
- [10]. Chipalkatty R, Droge G, and Egerstedt MB, "Less is more: Mixed-initiative model-predictive control with human inputs," IEEE Trans. on Robotics, vol. 29, no. 3, pp. 695–703, 2013.
- [11]. Loizou SG and Kumar V, "Mixed initiative control of autonomous vehicles," in Proceedings 2007 IEEE Int. Conf. on Robotics and Automation, IEEE, 2007, pp. 1431–1436.
- [12]. Dragan AD and Srinivasa SS, "A policy-blending formalism for shared control," The Int. J. of Robotics Research, vol. 32, no. 7, pp. 790–805, 2013.
- [13]. Kofman J, Wu X, Luu TJ, and Verma S, "Teleoperation of a robot manipulator using a vision-based human-robot interface," IEEE Trans. on Industrial Electronics, vol. 52, no. 5, pp. 1206–1219, 2005.
- [14]. Saeidi H, Wagner JR, and Wang Y, "A mixed-initiative haptic teleoperation strategy for mobile robotic systems based on bidirectional computational trust analysis," IEEE Trans. on Robotics, vol. 33, no. 6, pp. 1500–1507, December 2017.
- [15]. Kim D-J, Hazlett-Knudsen R, Culver-Godfrey H, Rucks G, Cunningham T, Portee D, Bricout J, Wang Z, and Behal A, "How autonomy impacts performance and satisfaction: Results from a study with spinal cord injured subjects using an assistive robot," IEEE Trans. on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 42, no. 1, pp. 2–14, 2012.
- [16]. Gao J and Lee JD, "Extending the decision field theory to model operators' reliance on automation in supervisory control situations," IEEE Trans. on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 36, no. 5, pp. 943–959, 2006.
- [17]. Abi-Farraj F, Osa T, Peters NPJ, Neumann G, and Giordano PR, "A learning-based shared control architecture for interactive task execution," in 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), IEEE, 2017, pp. 329–335.
- [18]. Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, and Kim PCW, "Supervised autonomous robotic soft tissue surgery," Science Translational Medicine, vol. 8, no. 337, pp. 337ra64–337ra64, May 2016.
- [19]. Opfermann JD, Leonard S, Decker RS, Uebele NA, Bayne CE, Joshi AS, and Krieger A, "Semi-autonomous electrosurgery for tumor resection using a multi-degree of freedom electrosurgical tool and visual servoing," in 2017 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2017, pp. 3653–3660.
- [20]. Bradski G, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.
- [21]. Quigley M, Conley K, Gerkey B, Faust J, Foote T, Leibs J, Wheeler R, and Ng AY, "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, vol. 3, Kobe, Japan, 2009, p. 5.
- [22]. Reflexxes Motion Libraries for online trajectory generation. [Online]. Available: http://www.reflexxes.ws/
- [23]. Smits R, "KDL: Kinematics and Dynamics Library," http://www.orocos.org/kdl.
- [24]. Hennersperger C, Fuerst B, Virga S, Zettinig O, Frisch B, Neff T, and Navab N, "Towards MRI-based autonomous robotic US acquisitions: a first feasibility study," IEEE Trans. on Medical Imaging, vol. 36, no. 2, pp. 538–548, 2017.
- [25]. Ritter EM and Scott DJ, "Design of a proficiency-based skills training curriculum for the fundamentals of laparoscopic surgery," Surgical Innovation, vol. 14, no. 2, pp. 107–112, 2007.
- [26]. Roggan A, Friebel M, Doerschel K, Hahn A, and Mueller GJ, "Optical properties of circulating human blood in the wavelength range 400–2500 nm," J. of Biomedical Optics, vol. 4, no. 1, pp. 36–47, 1999.
- [27]. "MATLAB Curve Fitting Toolbox." [Online]. Available: https://www.mathworks.com/products/curvefitting.html
