Author manuscript; available in PMC: 2023 Aug 2.
Published in final edited form as: Eur Urol Focus. 2021 Jul 8;7(4):696–705. doi: 10.1016/j.euf.2021.06.009

Rethinking Autonomous Surgery: Focusing on Enhancement Over Autonomy

Edoardo Battaglia a, Jacob Boehm a, Yi Zheng a, Andrew R Jamieson b, Jeffrey Gahan b, Ann Majewicz Fey a,b,*
PMCID: PMC10394949  NIHMSID: NIHMS1903557  PMID: 34246619

Abstract

Context:

As robotic assisted surgery is increasingly used in surgical care, the engineering research effort towards surgical automation has also increased significantly. Automation promises to enhance surgical outcomes, offload mundane or repetitive tasks, and improve workflow; however, we must ask an important question - should autonomous surgery be our long-term goal?

Objective:

In this paper, we provide a brief tutorial on the engineering requirements for automating control systems and briefly summarize technical challenges in automated robotic surgery. As an alternative approach, we review sensing and modeling techniques to capture real-time human behaviors in ways that can be understood by and integrated into the robotic control loop for enhanced shared or collaborative control.

Evidence acquisition:

We performed a nonsystematic search of the English language literature up to March 25, 2021. We included original studies related to automation in robot-assisted laparoscopic surgery and human-centered sensing and modeling.

Evidence synthesis:

Four comprehensive review papers present techniques for automating portions of surgical tasks. Sixteen studies relate to human-centered sensing technologies and 23 relate to computer vision and/or advanced AI or machine learning methods for skill assessment. Twenty-two studies evaluate or review the role of haptic or adaptive guidance during a learning task, with only a few applied to robotic surgery. Finally, only three studies discuss the role of some form of training on patient outcomes, and none evaluated the effects of full or semi-autonomy on patient outcomes.

Conclusions:

Rather than focusing on autonomy, which eliminates the surgeon from the loop, research centered on more fully understanding the surgeon’s behaviors, goals, and limitations could enable a superior class of collaborative surgical robots that could be more effective and intelligent than automation alone.

Patient Summary:

We reviewed the literature for studies on automation in surgical robotics and human behavior modeling in human-machine interaction. The main application is to enhance the ability of surgical robotic systems to collaborate more effectively and intelligently with human surgeon operators.

Keywords: Robotic Surgery, Automation, Surgeon-in-the-Loop

1. Introduction

Since the word “robot” was first popularized in a Czech science fiction play in 1921, we have seen incredible advances in the technology and applications of robotics and automation. Robotic assisted surgery is particularly beneficial in urology, where surgical dexterity is paramount for complex procedures. For example, robotic assistance can lead to fewer complications in cystectomy [1], along with other potential benefits [2]. Prostatectomy and nephrectomy have also been shown to benefit from robotic assistance [3, 4].

However, commercially available surgical robots have fallen short of the ultimate vision of a robotic system capable of sensing its environment and performing actions, simple or complex, fully autonomously. As the technology continues to improve, it is natural to wonder if surgical robots will one day be fully autonomous, eliminating the need for a surgeon.

Within the surgical world, any surgery performed with a robot is termed robotic assisted surgery (RAS). Rather than aiming for full autonomy, which may be misguided and is currently not possible, we envision a more realistic and beneficial model: pushing forward the level of assistance offered by the robot, bringing it to the level of robotic enhanced surgery (RES).

In this review, we first provide a short tutorial on the challenges of fully automating a system from an engineering controls perspective. We briefly overview the current state of surgical robotics research as it relates to various levels of autonomy. We also focus on summarizing technologies and techniques to enhance the intelligence of the robotic system as it relates to understanding the surgeon operator. We hope to convey that the design of more collaborative and adaptive robot partners that leverage surgeon strengths and help overcome limitations might be a more feasible and effective near-term goal.

2. Evidence Acquisition

We performed a nonsystematic search of the English language literature using Google Scholar, Scopus, and the PubMed-MEDLINE database up to March 25, 2021. Keyword and title searches were done on topics related to robotic surgery, ranging from general searches such as “autonomous robotic surgery” to more specific terms such as “force measurements in robotic surgery”, “surgical skill assessment”, “physiological sensing”, or “computer vision for robotic surgery”. We included original studies related to automation in robot-assisted surgery and human-centered sensing and modeling, as well as previous reviews on relevant topics.

3. Evidence Synthesis

3.1. Towards Autonomous Surgery: What Do We Need?

In robotic control theory, the behavior of a robot is governed by a theoretical control framework, applied to the physical world through a variety of sensors and actuators. While a rigorous description of automated control is beyond the scope of this paper, we introduce some basic requirements for truly autonomous surgical robots. Figure 1 shows how robotic surgery can be described as a closed-loop control problem with three major elements: a model of expertise, physical reality, and measured reality (the approximate representation of the real world obtained through sensors).

Figure 1:

Closed Loop Control for Automated Surgery. Control begins with a model of desired behavior, such as surgical expertise. The physical surgeon-robot behavior is then measured using sensors and compared to the original model of expertise. Errors are used to provide some form of feedback to the surgeon or robot to enhance performance in near real-time.

The model of expertise represents the intelligence of the robotic system. It contains a mathematical model of the desired outcome (e.g., a surgical outcome), which is used to generate a control action for the physical robot. The physical reality block is composed of all the physical actors involved in robotic assisted surgery: the patient, the surgeon, and the robotic hardware. Finally, the measured reality block is a representation of how the robot can see and understand the physical reality through sensor measurements. The goal of all modern control systems is to find the error between the measured reality and the model of expertise in order to generate meaningful and effective feedback that pushes the physical reality closer to ideal behavior, minimizing errors. The frequency of this control loop must be fast enough to enable meaningful and stable control (Fig. 2). For all control systems, it is essential that each element of the control feedback loop is fully defined in a mathematically rigorous way to ensure safety and effectiveness of the overall system.
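
As a minimal illustration of this error-driven loop, the sketch below closes the loop around a hypothetical one-dimensional tool coordinate; the proportional gain, plant response, and 1 kHz rate are illustrative assumptions rather than parameters of any real surgical system.

```python
def control_step(reference, measurement, gain=2.0):
    """One iteration of the Figure 1 loop: compare measured reality
    against the model of expertise and return a corrective action."""
    error = reference - measurement   # model of expertise vs. measured reality
    return gain * error               # proportional feedback action

# Illustrative 1-D example: drive a tool-tip coordinate toward a target.
position, target, dt = 0.0, 1.0, 0.001    # dt = 1 ms, i.e., a 1 kHz control loop
for _ in range(5000):
    action = control_step(target, position)
    position += action * dt               # "physical reality" responds to the action
print(f"final position: {position:.4f}")  # converges toward the 1.0 target
```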

Figure 2:

Overview of technology available for measurements in robotic surgery. Automated surgery or intelligent feedback to the surgeon requires sensors which provide measurement data fast enough for computer control.

3.2. Levels of Autonomy in Robotic Surgery

One of the great strengths of robotic control systems is that the level or degree of autonomy for a given system can be a design choice, and not necessarily a binary one. Borrowing from classifications developed for self-driving cars, Yang et al. classify autonomy for medical robots on a scale of 0 to 5, with 0 corresponding to no autonomy, with the surgeon remaining in full control, and 5 to a system fully capable of performing entire surgeries with no human input [57] (Figure 3). Two recent review papers use similar classification methods for surgical robot autonomy, reviewing both academic research results and commercially available surgical robotic platforms [58, 59]. Yip and Das provide an overview of commercially available or otherwise well-known surgical robots [58], while Attanasio et al. provide a comprehensive review summarizing the current state of automating specific types of surgical procedures (e.g., knot tying, supervised suturing, organ and tumor segmentation, ablation, etc.) across a variety of surgical specialties, from urology to orthopedics [59]. The majority of surgical robotic systems that are either in, or nearing, clinical use fall on either end of the autonomy spectrum, rather than the middle. Arguably, design at the ends of the spectrum represents an easier technological challenge - the engineering problem either simplifies to eliminating any intelligence in the robotic system (such as the da Vinci Surgical System, Intuitive Surgical), or to fully eliminating the most unpredictable and dynamic element of the control loop - the human operator (e.g., ROBODOC for supervised autonomous orthopedic surgery or Cyberknife for radiological treatments [58]).
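
For reference, this taxonomy can be written down directly as an ordinal scale; the intermediate level names below reflect our reading of Yang et al.'s classification [57] and are paraphrased rather than quoted.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of autonomy for medical robots, after Yang et al. [57]."""
    NO_AUTONOMY = 0           # surgeon in full control (e.g., pure teleoperation)
    ROBOT_ASSISTANCE = 1      # robot provides guidance while the surgeon operates
    TASK_AUTONOMY = 2         # robot performs specific surgeon-initiated subtasks
    CONDITIONAL_AUTONOMY = 3  # robot proposes strategies; surgeon selects and supervises
    HIGH_AUTONOMY = 4         # robot makes decisions under surgeon oversight
    FULL_AUTONOMY = 5         # entire procedures performed with no human input

assert AutonomyLevel.FULL_AUTONOMY > AutonomyLevel.NO_AUTONOMY  # ordinal scale
```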

Figure 3:

Levels of autonomy possible in robotic surgery range from no autonomy, with the surgeon in full control, to full automation with no human input. Copyright ©2017, American Association for the Advancement of Science. Reproduced with permission from [57].

Another important technical distinction is that automatic behavior, where the robot executes a rigid predetermined behavior, is very different from autonomous behavior, wherein the robot is able to modify its behavior in real time and change its planning to react to unexpected events [59]. This ability to deal with the unexpected is the ultimate benchmark for comparing the performance of a human with that of a robot. While Artificial Intelligence is developing rapidly and has shown applications in improving diagnostic capabilities [60], the technology still lacks the level of sophistication required for true autonomy, and this remains a major technical challenge in the design of all robotic systems, not just those designed for surgery.

Finally, beyond technical challenges, there are significant regulatory, legal, and ethical concerns associated with the deployment of autonomous surgical robots [57]. When errors and patient harm occur, who bears the legal responsibility for autonomous surgical robots - the surgeon, the hospital, the robot, or the engineer who designed the robot [61]?

With these considerations in mind, it is clear that keeping a human in the surgical robot control loop is critical, at least in the near term. However, to enable natural and seamless collaboration between humans and robots, the robotic system needs more information about the surgeon’s intent and ability to carry out the intended task. In the next sections, we review the different aspects necessary for the control framework of semi-autonomous, collaborative surgical robots and highlight technologies and techniques to better model surgeon behavior and skill levels in ways that can be integrated into the real-time robot control loop.

3.3. Measured Reality - Using Sensors to Quantify Surgical Expertise

In this section we review technology available to quantitatively measure the surgeon, robot, and patient environment. Broadly speaking, sensing for robotic assisted surgery can be divided into three categories: motion-related sensing, which obtains information on the movement of the actors involved and the forces they exchange; physiological sensing, which records physiological information from the human in the loop; and vision-based sensing, which provides the robot with a more generalized, high-level understanding of the environment, often involving advanced processing techniques such as machine learning. As robotic control systems tend to operate between 100–1000 Hz, it is important that the sensors used to measure the real world can be sampled at similar speeds.
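
A toy compatibility check makes this timing constraint explicit; the sensor rates listed below are rough illustrative values, not specifications of any particular device.

```python
# Illustrative sensor update rates in Hz (assumed values, not measurements).
sensor_rates = {"joint encoders": 1000, "force/torque": 500,
                "endoscope video": 60, "GSR": 4, "EEG": 256}

CONTROL_LOOP_HZ = 200   # hypothetical loop rate within the 100-1000 Hz range

for name, rate in sensor_rates.items():
    status = ("usable directly" if rate >= CONTROL_LOOP_HZ
              else "needs prediction/interpolation between samples")
    print(f"{name:>16}: {rate:>5} Hz -> {status}")
```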

3.3.1. Motion-related Sensing

Robots sense and take commands using simple variables, such as position and velocity, to accomplish tasks. While turning these measurements into metrics that can define a model of good surgery is challenging, obtaining the measurements themselves is relatively straightforward. For example, force sensors can be embedded in surgical tools [33], while kinematic sensors can be embedded in the joints of the robotic system [32]. Measurements on the human surgeon can also be useful for human-robot collaboration as well as for developing better models of surgical expertise. In this case, measurements of kinematics can be obtained, for example, through wireless sensors [34], electromagnetic sensors [35], or optical and camera trackers [36].
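
Given positions sampled by any of these trackers, basic kinematic quantities follow from finite differences, as in this minimal sketch on a synthetic trajectory.

```python
import numpy as np

def tool_kinematics(positions, dt):
    """Estimate velocity and acceleration from an Nx3 array of sampled
    tool positions using finite differences."""
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration

# Hypothetical 100 Hz recording of a tracked instrument tip (synthetic data).
dt = 0.01
t = np.arange(0.0, 2.0, dt)
positions = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
velocity, acceleration = tool_kinematics(positions, dt)
```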

3.3.2. Physiological Sensing

Physiological measurements obtained from human surgeons have been linked to expertise level, workload, stress, and other factors. For example, eye motions can be used to classify surgical expertise levels [18]. The number of eye blinks, measured via electrooculography (EOG), serves as an indicator of stress and concentration level during training [19], while galvanic skin response (GSR, i.e., changes in skin electrical conductance) can be used to estimate cognitive load, attention, and emotional state [20]. Surface electromyography (EMG) measures the electrical signals from active muscles, which can reveal the underlying motor patterns, physical effort, and motion intent [15, 16]. Heart rate and its variability have been shown to capture dynamic workload, emotion, and cumulative stress [17]. Finally, electroencephalography (EEG) can quantify human emotion, perception, cognition, and technical skills [21]. These sensing technologies are promising in that they measure directly from the human operator; however, wearable sensing can be cumbersome and the interpretation of these data can be challenging.
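
Such signals are typically reduced to per-window summary features before being linked to workload or skill; the sketch below assumes a one-dimensional signal such as GSR, with an illustrative sampling rate and window length.

```python
import numpy as np

def windowed_features(signal, fs, window_s=5.0):
    """Per-window summary statistics for a 1-D physiological signal
    (e.g., GSR), usable as inputs to a workload or stress estimator."""
    n = int(window_s * fs)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.stack([windows.mean(axis=1),      # tonic level
                     windows.std(axis=1),       # variability
                     np.ptp(windows, axis=1)],  # peak-to-peak response
                    axis=1)

# Hypothetical 4 Hz GSR recording, one minute long (synthetic data).
gsr = 2.0 + 0.1 * np.random.randn(240)
features = windowed_features(gsr, fs=4)   # 12 windows x 3 features
```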

3.3.3. Vision-Based Sensing

The field of computer vision (CV) aims to transform visual input stimuli into meaningful mathematical representations that can be manipulated by downstream algorithms to execute various higher-level tasks, such as object detection. Analogous to the human visual system, CV-based sensing for robotic-oriented surgical analysis can provide a tremendous amount of information moment-to-moment to guide the formation and refinement of dynamic models of the operating theatre. Although the ultimate goal of visually “perceiving” real-time surgeries at the level of an expert surgeon (or superior) remains distant, significant technical progress continues to be made, piecewise, on enabling quantitative, vision-based feedback for guiding and informing robotic surgery. Video-based methods have been proposed for a variety of relevant objectives [22], including characterization of tool articulation and kinematics [27, 28], surgical procedure phase and step recognition [29], action/gesture/task classification [26], and assessment of surgical skill [30, 31, 23]. Notably, at the heart of state-of-the-art approaches to surgical video analysis is Deep Learning (DL), a sub-field of machine learning involving models that can automatically learn multiple layers of data representation to capture increasingly complex patterns in a hierarchical fashion [24]. Progress in DL research has been the most important technological development in recent years for advancing CV and Artificial Intelligence (AI) in general [25].
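
To give a flavor of DL-based video analysis, the toy 3D convolutional network below maps a short video clip to skill-class logits, in the spirit of (but much smaller than) the models used for video-based skill assessment [30]; the architecture, input shape, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinySkillNet(nn.Module):
    """Toy 3-D CNN: a clip of video frames -> skill-class logits."""
    def __init__(self, n_classes=3):   # e.g., novice / intermediate / expert
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))   # pool over time and space
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clips):          # clips: (batch, 3, frames, H, W)
        x = self.features(clips).flatten(1)
        return self.classifier(x)

logits = TinySkillNet()(torch.randn(2, 3, 16, 64, 64))  # two 16-frame toy clips
```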

3.4. Modeling Expertise - Defining Surgical Mastery Quantitatively

A second key aspect of a collaborative control framework is the internal model of expertise that gives the robotic system a reference or ideal trajectory for good surgical behavior. This includes both a quantitative understanding of what surgical expertise is and the formulation of a concrete plan to perform necessary tasks. Combined, these steps represent the control action that will be executed by the robotic system to achieve ideal performance.

3.4.1. Kinematic Modeling of Expertise

As discussed in the previous sections, modern techniques for motion tracking facilitate data collection and analysis during surgical procedures that can be used to quantitatively define good performance for robotic systems. In the research community, the fundamental movements of surgery, referred to as bases of movement, can define the underlying structure and building blocks of surgical movement. Bases of surgical movements have primarily been created by learning from demonstration [9], where machine learning techniques are used to teach a robot how to move based on data collected from humans. Characterizing surgical movements can aid in the assessment of surgical skill, such as expertise level [5] or surgical style [11]. For instance, a statistical analysis of jerk, typically related to the smoothness or crispness of a movement, can distinguish experts from novices [6]. More complex analysis can be done with larger data sets by implementing machine learning techniques to assess surgical skill [67, 14].
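
As a concrete example of a jerk-based smoothness metric, the sketch below computes mean squared jerk from a sampled trajectory; the implementation and synthetic trajectories are illustrative and do not reproduce the analysis of [6].

```python
import numpy as np

def mean_squared_jerk(positions, dt):
    """Jerk is the third derivative of position; lower mean squared jerk
    indicates smoother, more expert-like motion (cf. [6])."""
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(acceleration, dt, axis=0)
    return float(np.mean(np.sum(jerk ** 2, axis=1)))

# Compare a smooth trajectory with a noisier, novice-like one (synthetic).
dt = 0.01
t = np.arange(0.0, 2.0, dt)
smooth = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
shaky = smooth + 0.002 * np.random.randn(*smooth.shape)
print(mean_squared_jerk(smooth, dt), "<", mean_squared_jerk(shaky, dt))
```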

3.4.2. Post-completion Task-level Metrics of Expertise

Another way to quantify surgical expertise is through task-level metrics such as completion time, path length, economy of volume, and mistakes made during execution. These types of metrics are often used to evaluate surgical training outcomes, and their construct validity and ability to identify high levels of expertise have been extensively studied in the literature [7, 8]. In addition to providing post-training evaluation, post-task metrics can also be used to provide benchmarks for surgical proficiency or as a metric for optimization by a robotic control system.
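
A minimal sketch of how such post-completion metrics might be computed from a recorded trajectory; the bounding-box proxy for economy of volume and all numerical values are illustrative assumptions.

```python
import numpy as np

def task_metrics(positions, dt, error_count=0):
    """Post-completion metrics from a recorded Nx3 tool trajectory."""
    steps = np.diff(positions, axis=0)
    extent = positions.max(axis=0) - positions.min(axis=0)
    return {"completion_time_s": (len(positions) - 1) * dt,
            "path_length": float(np.sum(np.linalg.norm(steps, axis=1))),
            "economy_of_volume": float(np.prod(extent)),  # bounding-box proxy
            "errors": error_count}                        # counted externally

# Hypothetical 50 Hz recording of a training task, plus one observed mistake.
trajectory = np.cumsum(0.01 * np.random.randn(500, 3), axis=0)
print(task_metrics(trajectory, dt=0.02, error_count=1))
```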

3.4.3. Real-Time Metrics of Expertise

While post-completion task evaluation is useful for quantifying proficiency, it cannot be used to evaluate skill during a procedure. Real-time evaluation is necessary for any level of robotic automation. Real-time evaluation of expertise is challenging and still an open research topic. Some recent techniques compare tool trajectories to “optimal” trajectories [12] or use streamed kinematic data to classify stylistic behavior [13]. Other work has been done to facilitate the extraction of the most relevant information during surgery for expertise evaluation, thus reducing the memory and computational effort needed [10]. While these results are promising, advances in the field are still not at a stage where such information can be integrated into a completely automated robotic loop.
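
Conceptually, real-time evaluation amounts to re-scoring a sliding window of streamed kinematic data, as in the 1–2 second windows of [67]; in the sketch below a stand-in smoothness score takes the place of a trained model.

```python
import collections
import numpy as np

def stream_skill_scores(samples, score_fn, window=200, stride=50):
    """Slide a window over streamed kinematic samples and emit a skill
    score every `stride` samples, in the spirit of [67, 13]."""
    buffer = collections.deque(maxlen=window)
    for i, sample in enumerate(samples):
        buffer.append(sample)
        if len(buffer) == window and i % stride == 0:
            yield i, score_fn(np.asarray(buffer))

# Stand-in score: negative mean |jerk|; a trained classifier would go here.
smoothness = lambda w: -float(np.mean(np.abs(np.diff(w, n=3, axis=0))))
stream = np.cumsum(0.01 * np.random.randn(1000, 6), axis=0)  # synthetic kinematics
for index, score in stream_skill_scores(stream, smoothness):
    pass  # in a real system: feed the score to the control loop or trainee display
```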

3.4.4. Control Actions

Assuming that the robot has a correct representation of good surgery, the control actions represent the planned steps necessary to achieve it. This is a challenging aspect of robotic surgery, and the one that, together with real-time evaluation of expertise, represents the biggest barrier to automated and semi-automated surgery. Indeed, while it is possible to automate some specific tasks within a surgical procedure [68], to the best of our knowledge there are no fully automated surgical robots.

3.5. Physical Reality - Evaluation of Feedback Effectiveness

The final step towards designing effective and collaborative intelligent surgical robots is to ensure feedback provided to the surgeon is intuitive, natural, and effective. Because of the uniqueness and complexity of the human perceptual system, it is a major technical challenge to design universally effective feedback.

Many surgical robot research prototypes and surgical robotic simulators have evaluated the effects of adding reflective haptic feedback (e.g., providing users with a force model of the surgical environment) and found improvements in performance [37, 39, 40, 41]. Force feedback can also improve the development of psychomotor skills for early surgical trainees [38, 42]. However, reflective force feedback lacks the fidelity needed to accurately represent the surgical environment and is often perceived negatively by the user [43].
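
In its simplest form, reflective feedback renders a spring-damper model of tool-tissue contact back to the surgeon's hand, as sketched below; the gains are illustrative, and the gap between such simple models and real tissue behavior is precisely the fidelity problem noted above [43].

```python
def reflected_force(penetration_m, velocity_m_s, k=300.0, b=2.0):
    """Spring-damper contact model rendered to the master handle.
    k, b are illustrative stiffness/damping gains, not tissue parameters."""
    if penetration_m <= 0.0:
        return 0.0                # tool not in contact: no reflected force
    return -(k * penetration_m + b * velocity_m_s)  # oppose tissue penetration

print(reflected_force(penetration_m=0.003, velocity_m_s=0.01))  # toy values, in newtons
```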

An alternative approach to reflective haptic feedback is guidance haptic feedback. In guidance haptic feedback, the goal is not to simulate the patient tissue properties, but rather to enhance motor learning through haptic or tactile motion cues - cues that become critical in human-robot collaborative environments. Studies have shown the effectiveness of haptic feedback in developing motor skills [44, 45, 46] and guiding movement [47, 48, 49]. A common approach to training motor skills with haptic feedback involves recording an expert’s movements and having a novice follow those movements, with haptic feedback provided if they deviate from the intended path [50]; however, if feedback gains are too strong, learning can be negatively impacted [51]. Instead, haptic guidance designed to be less restrictive and exploratory can allow the user to discover new movement strategies [52].
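
A minimal sketch of this record-and-follow scheme: a spring-like force pulls the trainee toward the nearest point on an expert's recorded path, and capping its magnitude is one simple way to keep the guidance less restrictive; the gain, force cap, and path are illustrative assumptions.

```python
import numpy as np

def guidance_force(tool_pos, expert_path, gain=50.0, max_force=3.0):
    """Pull the trainee toward the nearest point on an expert's recorded
    path [50]; the cap limits overly strong guidance, which can hurt learning [51]."""
    nearest = expert_path[np.argmin(np.linalg.norm(expert_path - tool_pos, axis=1))]
    force = gain * (nearest - tool_pos)   # spring toward the reference path
    magnitude = np.linalg.norm(force)
    if magnitude > max_force:
        force *= max_force / magnitude    # saturate to stay gentle and exploratory
    return force

path = np.stack([np.linspace(0.0, 1.0, 100)] * 3, axis=1)  # toy straight-line path
print(guidance_force(np.array([0.50, 0.52, 0.48]), path))
```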

One opportunity for guidance haptic feedback is in the domain of adaptive training. Adaptive training is typically used in video gaming, rehabilitation, medical simulation, and industrial training as a way to optimize learning by providing trainee-specific content [53]. Typically, some adaptive variable (e.g., performance) is measured in real-time and used to adapt the learning environment in real-time (e.g., increasing task difficulty) [54, 55]. The first surgical robot adaptive training study, published in 2018, used haptic assistance-as-needed to keep a ring centered during a rail-following task; the authors reported faster, though not statistically significant, learning curves in 8 novices with assistance compared to 8 novices without assistance [56]. The haptic assistance in this study was directly related to the task (e.g., computed from position differences between the ring and rail), and only one haptic gain was evaluated. Despite these limitations, this study paves the way for personalized and adaptive feedback in surgical robotics.
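
One simple way to realize assistance-as-needed is to adapt the guidance gain to measured performance between trials; the sketch below is a generic illustration of that idea, not the controller used in [56].

```python
def adapt_gain(gain, tracking_error, target_error=0.005,
               rate=0.1, g_min=0.0, g_max=100.0):
    """Assistance-as-needed: raise the haptic gain when the trainee's error
    exceeds a target band, and fade it out as performance improves (cf. [56]).
    All thresholds and rates are illustrative assumptions."""
    if tracking_error > target_error:
        gain *= 1.0 + rate      # struggling -> more assistance
    else:
        gain *= 1.0 - rate      # succeeding -> fade assistance out
    return min(max(gain, g_min), g_max)

gain = 50.0
for error in [0.02, 0.012, 0.006, 0.004, 0.003]:   # error shrinking over trials
    gain = adapt_gain(gain, error)
    print(f"error={error:.3f} -> gain={gain:.1f}")
```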

Finally, the ultimate evaluation of any surgical feedback technique is its impact on patient outcomes. In general, there is a paucity in the literature of studies that relate patient outcomes to surgical training techniques [69], with limited work demonstrating some improvements to patient outcomes with simulation-based training [70, 71]; however, to our knowledge, there are no papers relating the effects of autonomy levels or guidance cues to patient outcomes. Such studies will be critical for the future clinical adoption of semi- or fully autonomous surgical robots.

4. Conclusions

To enable full autonomy, it is critical to define the ideal behavior of a system, measure how well the physical system is following that behavior, and provide meaningful and effective feedback to the system to minimize any errors in near real-time. In surgical robotics, the literature remains sparse on all of these aspects of autonomous control, making the road to fully autonomous surgery a significant engineering challenge. However, if robotic systems could instead be designed to better understand and leverage the intelligence of the surgeon operator, they could be more effective and natural collaborators in the delivery of surgical care, paving the path from robotic assisted surgery towards true robotic enhanced surgery. Solving open challenges in surgeon-robot interaction, such as predicting surgeon intent, measuring expertise levels, and determining competency during task execution, while providing effective and natural guidance to the surgeon operator, could help accelerate the clinical adoption of more intelligent and collaborative surgical robots.

Figure 4:

Examples of real-time measurement systems for human behavior modeling in robotic surgery [62, 63, 64] and human-computer interaction [65], including video data, human-centric EMG, EEG, and GSR measurements, as well as inertial measurement and position tracking.

Figure 5:

Examples of surgical video analysis techniques, including detection and characterization of surgical instrument articulation and movement [27, 28] and surgical phase recognition [29, 66].

Figure 6:

Examples of near-real-time surgical skill prediction and feedback systems, including (a) a framework capable of classifying surgical expertise from kinematic data in a 1–2 second sliding window, reproduced from [67], and (b) a stylistic detection method that computes stylistic deficiencies every 0.25 seconds [13].

Table 1:

Summary of Prior Work

Topic | Subtopic | Reference Numbers | Evaluation Type (when present) | Open Issues
Model of Expertise | Expertise Metrics | Research [5, 6, 7, 8] | Non-randomized Control Trial [5, 6, 7, 8] | No ground truth for surgical expertise
Model of Expertise | Data-Driven Modeling | Non-systematic Review [9], Research [10, 11, 12, 13, 14] | Technical Validation [10, 11, 12, 13, 14] | Sparse data available for model training; black-box algorithms do not easily translate to training strategies
Measured Reality | Physiological | Non-systematic Review [15, 16, 17], Research [18, 19, 20, 21] | Validation of Measurements [18, 20, 21], Crossover Trial [19] | Baseline data collections and wearable sensors are always required
Measured Reality | Vision-Based | Non-systematic Review [22, 23, 24, 25], Systematic Review [26], Research [27, 28, 29, 30, 31] | Technical Validation [27, 28, 29, 30, 31] | Persisting challenges in image segmentation; black-box algorithms do not easily translate to training strategies
Measured Reality | Motion-Based | Non-systematic Review [32], Research [33, 34, 35, 36] | Validation of Measurements [33], Validation of Assessment [34, 35, 36] | Augmenting robot sensing capabilities
Physical Reality | Reflective Haptic Feedback | Non-systematic Review [37], Systematic Review [38], Research [39, 40, 41, 42, 43] | Randomized Crossover Trial [42], Crossover Trial [39, 40, 41, 43] | Existing methods lack realism; methods to enhance fidelity not feasible for real-time human interaction due to computational complexity
Physical Reality | Haptic Movement Guidance | Research [44, 45, 46, 47, 48, 49, 50, 51, 52] | Demonstration Only [44], Crossover Trial [45, 50, 51], Randomized Concurrent Control Trial [46, 52], Psychophysics (accuracy) [47, 48], Randomized Crossover Trial [49] | Paucity in the literature for guidelines on effective feedback strategies; variability across human learners
Physical Reality | Adaptive Training Guidance | Non-systematic Review [53], Tutorial [54], Opinion [55], Research [56] | Randomized Concurrent Control Trial [56] | Paucity in the literature; lack of methods for unstructured (i.e., not pre-defined) movement tasks

5. Acknowledgements

This work was supported by the National Institutes of Health under award number R01EB030125, and the National Science Foundation under award numbers #1846726 and #2024839. The views in this paper are solely those of the authors and do not necessarily represent the official views of the funding agencies.

References

• [1] Ahmed K, Khan SA, Hayn MH, Agarwal PK, Badani KK, Balbay MD, Castle EP, Dasgupta P, Ghavamian R, Guru KA, et al., Analysis of intracorporeal compared with extracorporeal urinary diversion after robot-assisted radical cystectomy: results from the international robotic cystectomy consortium, European Urology 65 (2) (2014) 340–347.
• [2] Brodie A, Kijvikai K, Decaestecker K, Vasdev N, Review of the evidence for robotic-assisted robotic cystectomy and intra-corporeal urinary diversion in bladder cancer, Translational Andrology and Urology 9 (6) (2020) 2946.
• [3] Wu S-Y, Chang S-C, Chen C-I, Huang C-C, Latest comprehensive medical resource consumption in robot-assisted versus laparoscopic and traditional open radical prostatectomy: a nationwide population-based cohort study, Cancers 13 (7) (2021) 1564.
• [4] Carbonara U, Simone G, Minervini A, Sundaram CP, Larcher A, Lee J, Checcucci E, Fiori C, Patel D, Meagher M, et al., Outcomes of robot-assisted partial nephrectomy for completely endophytic renal tumors: a multicenter analysis, European Journal of Surgical Oncology 47 (5) (2021) 1179–1186.
• [5] Judkins TN, Oleynikov D, Stergiou N, Objective evaluation of expert and novice performance during robotic surgical training tasks, Surgical Endoscopy and Other Interventional Techniques 23 (3) (2009) 590–597.
• [6] Ghasemloonia A, Maddahi Y, Zareinia K, Lama S, Dort JC, Sutherland GR, Surgical skill assessment using motion quality and smoothness, Journal of Surgical Education 74 (2) (2017) 295–305.
• [7] Dulan G, Rege RV, Hogg DC, Gilberg-Fisher KM, Arain NA, Tesfay ST, Scott DJ, Proficiency-based training for robotic surgery: construct validity, workload, and expert levels for nine inanimate exercises, Surgical Endoscopy 26 (6) (2012) 1516–1521.
• [8] Hung AJ, Jayaratna IS, Teruya K, Desai MM, Gill IS, Goh AC, Comparative assessment of three standardized robotic surgery training methods, BJU International 112 (6) (2013) 864–871.
• [9] Ravichandar H, Polydoros AS, Chernova S, Billard A, Recent advances in robot learning from demonstration, Annual Review of Control, Robotics, and Autonomous Systems 3 (2020) 297–330.
• [10] Anh NX, Nataraja RM, Chauhan S, Towards near real-time assessment of surgical skills: a comparison of feature extraction techniques, Computer Methods and Programs in Biomedicine 187 (2020) 105234.
• [11] Ershad M, Rege R, Fey AM, Meaningful assessment of robotic surgical style using the wisdom of crowds, International Journal of Computer Assisted Radiology and Surgery 13 (7) (2018) 1037–1048.
• [12] Jiang J, Xing Y, Wang S, Liang K, Evaluation of robotic surgery skills using dynamic time warping, Computer Methods and Programs in Biomedicine 152 (2017) 71–83.
• [13] Ershad M, Rege R, Fey AM, Automatic and near real-time stylistic behavior assessment in robotic surgery, International Journal of Computer Assisted Radiology and Surgery 14 (4) (2019) 635–643.
• [14] Chmarra MK, Klein S, De Winter JC, Jansen FW, Dankelman J, Objective classification of residents based on their psychomotor laparoscopic skills, Surgical Endoscopy 24 (5) (2010) 1031–1039.
• [15] Ison M, Artemiadis P, The role of muscle synergies in myoelectric control: trends and challenges for simultaneous multifunction control, Journal of Neural Engineering 11 (5) (2014) 051001.
• [16] Lobo-Prat J, Kooren PN, Stienen AH, Herder JL, Koopman BF, Veltink PH, Non-invasive control interfaces for intention detection in active movement-assistive devices, Journal of NeuroEngineering and Rehabilitation 11 (1) (2014) 168.
• [17] Thayer JF, Hansen AL, Saus-Rose E, Johnsen BH, Heart rate variability, prefrontal neural function, and cognitive performance: the neurovisceral integration perspective on self-regulation, adaptation, and health, Annals of Behavioral Medicine 37 (2) (2009) 141–153.
• [18] Richstone L, Schwartz MJ, Seideman C, Cadeddu J, Marshall S, Kavoussi LR, Eye metrics as an objective assessment of surgical skill, Annals of Surgery 252 (1) (2010) 177–182.
• [19] Berguer R, Smith WD, Chung YH, Performing laparoscopic surgery is significantly more stressful for the surgeon than open surgery, Surgical Endoscopy 15 (10) (2001) 1204–1207.
• [20] Shi Y, Ruiz N, Taib R, Choi E, Chen F, Galvanic skin response (GSR) as an index of cognitive load, in: CHI ’07 Extended Abstracts on Human Factors in Computing Systems, ACM, 2007, pp. 2651–2656.
• [21] Guru KA, Esfahani ET, Raza SJ, Bhat R, Wang K, Hammond Y, Wilding G, Peabody JO, Chowriappa AJ, Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff, BJU International 115 (1) (2015) 166–174.
• [22] Ward TM, Mascagni P, Ban Y, Rosman G, Padoy N, Meireles O, Hashimoto DA, Computer vision in surgery, Surgery.
• [23] Ma R, Vanstrum EB, Lee R, Chen J, Hung AJ, Machine learning in the optimization of robotics in the operative field, Current Opinion in Urology 30 (6) (2020) 808–816. doi: 10.1097/MOU.0000000000000816.
• [24] LeCun Y, Bengio Y, Hinton G, Deep learning, Nature 521 (7553) (2015) 436–444. doi: 10.1038/nature14539.
• [25] Padoy N, Mascagni P, Srivastav V, Alapatt D, Artificial Intelligence in Surgery: Understanding the Role of AI in Surgical Practice (chapter: Neural Networks and Deep Learning), Vol. 1, McGraw-Hill Education / Medical, 2021.
• [26] van Amsterdam B, Clarkson M, Stoyanov D, Gesture recognition in robotic surgery: a review, IEEE Transactions on Biomedical Engineering (2021).
• [27] Colleoni E, Moccia S, Du X, De Momi E, Stoyanov D, Deep learning based robotic tool detection and articulation estimation with spatio-temporal layers, IEEE Robotics and Automation Letters 4 (3) (2019) 2714–2721.
• [28] Lee D, Yu HW, Kwon H, Kong H-J, Lee KE, Kim HC, Evaluation of surgical skills during robotic surgery by deep learning-based multiple surgical instrument tracking in training and actual operations, Journal of Clinical Medicine 9 (6). doi: 10.3390/jcm9061964.
• [29] Bar O, Neimark D, Zohar M, Hager GD, Girshick R, Fried GM, Wolf T, Asselmann D, Impact of data on generalization of AI for surgical intelligence applications, Scientific Reports 10 (1) (2020) 22208. doi: 10.1038/s41598-020-79173-6.
• [30] Funke I, Mees ST, Weitz J, Speidel S, Video-based surgical skill assessment using 3D convolutional neural networks, International Journal of Computer Assisted Radiology and Surgery 14 (7) (2019) 1217–1225. doi: 10.1007/s11548-019-01995-1.
• [31] Khalid S, Goldenberg M, Grantcharov T, Taati B, Rudzicz F, Evaluation of deep learning models for identifying surgical actions and measuring performance, JAMA Network Open 3 (3) (2020) e201664. doi: 10.1001/jamanetworkopen.2020.1664.
• [32] Kuo C-H, Dai JS, Robotics for minimally invasive surgery: a historical review from the perspective of kinematics, in: International Symposium on History of Machines and Mechanisms, Springer, 2009, pp. 337–354.
• [33] Kim U, Lee D-H, Yoon WJ, Hannaford B, Choi HR, Force sensor integrated surgical forceps for minimally invasive robotic surgery, IEEE Transactions on Robotics 31 (5) (2015) 1214–1224.
• [34] Kirby GS, Guyver P, Strickland L, Alvand A, Yang G-Z, Hargrove C, Lo BP, Rees JL, Assessing arthroscopic skills using wireless elbow-worn motion sensors, JBJS 97 (13) (2015) 1119–1127.
• [35] Datta V, Mackay S, Mandalia M, Darzi A, The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model, Journal of the American College of Surgeons 193 (5) (2001) 479–485.
• [36] Oropesa I, Sánchez-González P, Chmarra MK, Lamata P, Fernández A, Sánchez-Margallo JA, Jansen FW, Dankelman J, Sánchez-Margallo FM, Gómez EJ, EVA: laparoscopic instrument tracking based on endoscopic video analysis for psychomotor skills assessment, Surgical Endoscopy 27 (3) (2013) 1029–1039.
• [37] Basdogan C, De S, Kim J, Muniyandi M, Kim H, Srinivasan MA, Haptics in minimally invasive surgical simulation and training, IEEE Computer Graphics and Applications 24 (2) (2004) 56–64.
• [38] Van der Meijden OA, Schijven MP, The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: a current review, Surgical Endoscopy 23 (6) (2009) 1180–1190.
• [39] Tholey G, Desai JP, Castellanos AE, Force feedback plays a significant role in minimally invasive surgery: results and analysis, Annals of Surgery 241 (1) (2005) 102–109.
• [40] Wottawa CR, Genovese B, Nowroozi BN, Hart SD, Bisley JW, Grundfest WS, Dutson EP, Evaluating tactile feedback in robotic surgery for potential clinical application using an animal model, Surgical Endoscopy 30 (8) (2016) 3198–3209.
• [41] King C-H, Culjat MO, Franco ML, Lewis CE, Dutson EP, Grundfest WS, Bisley JW, Tactile feedback induces reduced grasping force in robot-assisted surgery, IEEE Transactions on Haptics 2 (2) (2009) 103–110.
• [42] Ström P, Hedman L, Särnå L, Kjellin A, Wredmark T, Felländer-Tsai L, Early exposure to haptic feedback enhances performance in surgical simulator training: a prospective randomized crossover study in surgical residents, Surgical Endoscopy and Other Interventional Techniques 20 (9) (2006) 1383–1388.
• [43] Gwilliam JC, Mahvash M, Vagvolgyi B, Vacharat A, Yuh DD, Okamura AM, Effects of haptic and graphical force feedback on teleoperated palpation, in: 2009 IEEE International Conference on Robotics and Automation, IEEE, 2009, pp. 677–682.
• [44] Boulanger P, Wu G, Bischof WF, Yang XD, Hapto-audio-visual environments for collaborative training of ophthalmic surgery over optical network, in: 2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006), 2006, pp. 21–26. doi: 10.1109/HAVE.2006.283801.
• [45] Feygin D, Keehner M, Tendick R, Haptic guidance: experimental evaluation of a haptic training method for a perceptual motor skill, in: Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2002), 2002, pp. 40–47. doi: 10.1109/HAPTIC.2002.998939.
• [46] Jantscher WH, Pandey S, Agarwal P, Richardson SH, Lin BR, Byrne MD, O’Malley MK, Toward improved surgical training: delivering smoothness feedback using haptic cues, in: 2018 IEEE Haptics Symposium (HAPTICS), 2018, pp. 241–246. doi: 10.1109/HAPTICS.2018.8357183.
• [47] Stanley AA, Kuchenbecker KJ, Evaluation of tactile feedback methods for wrist rotation guidance, IEEE Transactions on Haptics 5 (3) (2012) 240–251. doi: 10.1109/TOH.2012.33.
• [48] Norman SL, Doxon AJ, Gleeson BT, Provancher WR, Planar hand motion guidance using fingertip skin-stretch feedback, IEEE Transactions on Haptics 7 (2) (2014) 121–130. doi: 10.1109/TOH.2013.2296306.
• [49] Basu S, Tsai J, Majewicz A, Evaluation of tactile guidance cue mappings for emergency percutaneous needle insertion, in: IEEE Haptics Symposium, 2016, pp. 106–112.
• [50] Yang X, Bischof WF, Boulanger P, Validating the performance of haptic motor skill training, in: 2008 Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2008, pp. 129–135. doi: 10.1109/HAPTICS.2008.4479929.
• [51] Shadmehr R, Brashers-Krug T, Mussa-Ivaldi FA, Interference in learning internal models of inverse dynamics in humans, in: Advances in Neural Information Processing Systems, 1995, pp. 1117–1124.
• [52] Gibo TL, Abbink DA, Movement strategy discovery during training via haptic guidance, IEEE Transactions on Haptics 9 (2) (2016) 243–254.
• [53] Vaughan N, Gabrys B, Dubey VN, An overview of self-adaptive technologies within virtual reality training, Computer Science Review 22 (2016) 65–87.
• [54] Kelley CR, What is adaptive training?, Human Factors 11 (6) (1969) 547–556.
• [55] Charles D, Kerr A, McNeill M, McAlister M, Black M, Kcklich J, Moore A, Stringer K, Player-centred game design: player modelling and adaptive digital games, in: Proceedings of the Digital Games Research Conference, Vol. 285, 2005, p. 00100.
• [56] Enayati N, Okamura AM, Mariani A, Pellegrini E, Coad MM, Ferrigno G, De Momi E, Robotic assistance-as-needed for enhanced visuomotor learning in surgical robotics training: an experimental study, in: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2018, pp. 6631–6636.
• [57] Yang G-Z, Cambias J, Cleary K, Daimler E, Drake J, Dupont PE, Hata N, Kazanzides P, Martel S, Patel RV, et al., Medical robotics—regulatory, ethical, and legal considerations for increasing levels of autonomy, Science Robotics 2 (4) (2017) 8638.
• [58] Yip M, Das N, Robot autonomy for surgery, in: The Encyclopedia of Medical Robotics: Volume 1, Minimally Invasive Surgical Robotics (2019) 281–313.
• [59] Attanasio A, Scaglioni B, De Momi E, Fiorini P, Valdastri P, Autonomy in surgical robotics, Annual Review of Control, Robotics, and Autonomous Systems 4.
• [60] Checcucci E, De Cillis S, Granato S, Chang P, Afyouni AS, Okhunov Z, et al., Applications of neural networks in urology: a systematic review, Current Opinion in Urology 30 (6) (2020) 788–807.
• [61] O’Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, Holzinger K, Holzinger A, Sajid MI, Ashrafian H, Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery, The International Journal of Medical Robotics and Computer Assisted Surgery 15 (1) (2019) e1968.
• [62] Wang Z, Kasman M, Martinez M, Rege R, Zeh H, Scott D, Fey AM, A comparative human-centric analysis of virtual reality and dry lab training tasks on the da Vinci surgical platform, Journal of Medical Robotics Research 4 (03n04) (2019) 1942007.
• [63] Ershad M, Rege R, Fey AM, Meaningful assessment of robotic surgical style using the wisdom of crowds, International Journal of Computer Assisted Radiology and Surgery (2018) 1–12.
• [64] Ershad M, Rege R, Fey AM, Automatic and near real-time stylistic behavior assessment in robotic surgery, International Journal of Computer Assisted Radiology and Surgery 14 (4) (2019) 635–643.
• [65] Wang Z, Majewicz Fey A, Human-centric predictive model of task difficulty for human-in-the-loop control tasks, PLOS ONE 13 (4) (2018) 1–21.
• [66] Garrow CR, Kowalewski K-F, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S, Müller-Stich BP, Nickel F, Machine learning for surgical phase recognition: a systematic review, Annals of Surgery, Publish Ahead of Print. doi: 10.1097/SLA.0000000000004425.
• [67] Wang Z, Fey AM, Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery, International Journal of Computer Assisted Radiology and Surgery 13 (12) (2018) 1959–1970.
• [68] He Y, Zhao B, Qi X, Li S, Yang Y, Hu Y, Automatic surgical field of view control in robot-assisted nasal surgery, IEEE Robotics and Automation Letters 6 (1) (2020) 247–254.
• [69] Cox T, Seymour N, Stefanidis D, Moving the needle: simulation’s impact on patient outcomes, Surgical Clinics 95 (4) (2015) 827–838.
• [70] Sroka G, Feldman LS, Vassiliou MC, Kaneva PA, Fayez R, Fried GM, Fundamentals of laparoscopic surgery simulator training to proficiency improves laparoscopic performance in the operating room—a randomized controlled trial, The American Journal of Surgery 199 (1) (2010) 115–120.
• [71] Zendejas B, Cook DA, Bingener J, Huebner M, Dunn WF, Sarr MG, Farley DR, Simulation-based mastery learning improves patient outcomes in laparoscopic inguinal hernia repair: a randomized controlled trial, Annals of Surgery 254 (3) (2011) 502–511.
