Abstract
Artificial intelligence (AI) has made significant progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Much research is currently being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient, and thereby to obtain better outcomes for patients. In this paper, we review basic AI research regarding surgery and discuss the potential for implementing AI technology in gastric cancer surgery. At present, research and development are focused on AI technologies that assist the surgeon's understanding and judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are being considered for the efficient storage of surgical videos and for education and, in the future, for use in systems that objectively evaluate the skill of surgeons. At this time, it is not considered practical, including from an ethical standpoint, to let AI make intraoperative decisions or move forceps automatically. Current AI research on surgery has various limitations, and it is desirable to develop practical systems that will truly benefit clinical practice in the future.
Keywords: Artificial intelligence, Gastric cancer, Surgery, Minimally invasive surgery
INTRODUCTION
Artificial intelligence (AI) has made remarkable progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Much research is currently being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient, and thereby to obtain better outcomes for patients [1,2,3,4,5,6]. Publications of promising basic research on AI technology related to surgery have been increasing in recent years. Many recent operations are performed under an endoscopic view (e.g., thoracoscopic, laparoscopic, and robotic surgery), so-called minimally invasive surgery (MIS), in which images are easily recorded and shared digitally. This makes MIS well suited to AI research, because the imaging data can be used as training datasets for machine learning [1,2]. Furthermore, robotic surgery, in which the movement of surgical instruments is mechanically controlled, is also considered to have potential for the future development of autonomous surgery. Surgery for gastric cancer is generally regarded as one of the more complex oncological procedures. Moreover, the incidence of gastric cancer varies by region, which may be one reason for disparities in surgical quality between institutions and countries. It would therefore be significant if AI technology could make safer and more efficient surgery universally achievable. Additionally, if surgical quality could be objectively assessed by an AI-supported system, this would be helpful for education and for the dissemination of these procedures.
However, since gastric cancer surgery is not a simple procedure, as mentioned above, and involves many diverse surgical steps, including lymph node dissection around the pancreas and major vessels as well as intricate intestinal anastomoses, there are not many AI research papers on gastric cancer surgery that provide direct clinical feedback. In this paper, we review basic AI research in the surgical field, including other areas of surgery, and discuss the current potential for implementing AI technology in gastric cancer surgery. We leave the detailed explanation of the principles of AI methodology and technology to other specialized reports [1,2,5,6,7].
TARGETS OF AI TECHNOLOGY IMPLEMENTATION IN SURGERY
How AI technology can be used in surgery to improve clinical outcomes for patients is a major challenge. Surgeons perform surgery based on the clinical diagnosis obtained by imaging examinations, together with their knowledge, experience, ethics, sense of mission, and sometimes their own intuition [4]. Surgical procedures in MIS arguably consist of four repeated steps, performed unconsciously for the most part: showing the operative field (exposure), looking at it to understand the anatomy, making decisions, and moving the hands (Fig. 1). At this time, it is not considered practical, including from an ethical standpoint, to let AI make intraoperative decisions or move forceps automatically. In addition, the stomach is not a fixed organ like the brain or the eyeball, so the hurdles remain high for using AI technology to autonomously expose the surgical field, perform dissection, or anastomose the intestinal tract. Therefore, at present, research and development are focused on AI technologies that assist the surgeon's judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are being considered for the efficient storage of surgical videos and educational purposes and, in the future, for use in systems that objectively evaluate the skill of surgeons.
Fig. 1. Intraoperative surgeon’s process. Surgeons unconsciously repeat the following four steps: showing the operative field, recognizing the surgical anatomy, deciding the dissection line, and finally moving their hands.
AI = artificial intelligence.
INSTRUMENT DETECTION
General
In an object detection task, snapshots of surgical images are used for deep learning (DL) by computers. A typical application of object detection is AI-assisted endoscopic diagnosis, such as the detection of polyps or early gastric cancer. In the field of surgery, several publications have reported successful surgical instrument detection in intraoperative images. Bamba et al. [8] reported instrument detection in laparoscopic colorectal surgery using DL techniques: in test videos of laparoscopic colorectal resection, forceps of every type were recognized with a high accuracy of more than 90%. Kitaguchi et al. [9] also established an instrument detection model using DL in laparoscopic colorectal surgery. They reported that the accuracy of instrument detection decreases when the conditions (e.g., type of surgery) are changed, which indicates a limit to the generalizability of DL across different surgical settings. Madad Zadeh et al. [10] reported AI recognition of organs and instruments in laparoscopic gynecologic surgery; accuracy was 84.5%, 29.6%, and 54.5% for the uterus, ovaries, and surgical instruments, respectively.
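As a purely illustrative sketch of the frame-wise detection pipeline underlying such studies, the following Python code applies a generic torchvision Faster R-CNN to a single video frame. The instrument classes, the (commented-out) weight file, and the confidence threshold are hypothetical and are not taken from any of the cited models.

```python
# Illustrative frame-wise surgical instrument detection with a generic
# object detector (not the models used in [8-10]).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical label map for a fine-tuned detector (class 0 = background)
INSTRUMENT_CLASSES = {1: "grasper", 2: "energy_device", 3: "clip_applier"}

model = fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None,
    num_classes=len(INSTRUMENT_CLASSES) + 1,
)
# model.load_state_dict(torch.load("instrument_detector.pth"))  # hypothetical weights
model.eval()

def detect_instruments(frame: torch.Tensor, score_threshold: float = 0.9):
    """Return (label, score, box) tuples for one RGB frame in [0, 1], CHW."""
    with torch.no_grad():
        output = model([frame])[0]  # dict with "boxes", "labels", "scores"
    results = []
    for label, score, box in zip(output["labels"], output["scores"], output["boxes"]):
        if score >= score_threshold:
            results.append((INSTRUMENT_CLASSES[int(label)], float(score), box.tolist()))
    return results

# Example: one dummy 720p frame (untrained weights, so output is arbitrary)
print(detect_instruments(torch.rand(3, 720, 1280)))
```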
Gastric cancer surgery
Yamazaki et al. [11] were the first to report on the recognition of surgical instruments in laparoscopic gastrectomy, with a precision of 0.87 and a recall of 0.83. They also reported that the percentage of time each surgical instrument (e.g., energy devices and clip forceps) was used in laparoscopic gastrectomy differed depending on whether the surgeon held the skill qualification of the Japan Society for Endoscopic Surgery (JSES) [12]. The study suggested that a surgeon's skill could be objectively assessed from the frequency of device use.
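To illustrate the kind of analysis underlying [12], the following sketch aggregates per-frame instrument detections into usage-time percentages; the frame data and instrument names are invented toy values, not data from the study.

```python
# Sketch: aggregate per-frame instrument detections into usage-time
# percentages, as a basis for comparing device use between skill levels.
from collections import Counter

def usage_percentages(frame_detections: list[set[str]]) -> dict[str, float]:
    """frame_detections[i] = set of instrument names visible in frame i."""
    counts = Counter()
    for instruments in frame_detections:
        counts.update(instruments)  # each instrument counted once per frame
    n_frames = len(frame_detections)
    return {name: 100.0 * c / n_frames for name, c in counts.items()}

# Example: 4 sampled frames from a toy gastrectomy video
frames = [
    {"energy_device"},
    {"energy_device", "grasper"},
    {"grasper"},
    {"clip_applier"},
]
print(usage_percentages(frames))
# {'energy_device': 50.0, 'grasper': 50.0, 'clip_applier': 25.0}
```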
ANATOMY RECOGNITION
General
Anatomical recognition has the potential to develop into AI-supported surgery in the future. It is directly related to navigation surgery in that it guides the surgeon to perform surgery appropriately. It is expected to reduce human error and the burden on the surgeon, and to have significant educational value, especially for novice surgeons. Kitaguchi et al. [13] successfully recognized the inferior mesenteric artery in laparoscopic sigmoid colon and rectal resection with a Dice coefficient of 0.798, indicating high accuracy. Igaki et al. [14] developed AI-based total mesorectal excision plane navigation for laparoscopic colorectal surgery. Sato et al. [15] developed an AI model to identify the recurrent laryngeal nerve in thoracoscopic esophagectomy with a Dice coefficient of 0.58, which was superior to the recognition of general surgeons. Preservation of the recurrent laryngeal nerve is one of the most important aspects of esophageal cancer surgery, and such AI-supported techniques have the potential to improve the quality of surgery. Park et al. [16] reported that, whereas the assessment of colonic blood flow to predict ischemia-related anastomotic complications during laparoscopic colorectal surgery has traditionally been performed by indocyanine green angiography, AI-based real-time microperfusion analysis showed accurate and consistent performance. The accuracy of intraoperative ultrasound for liver lesions is highly operator dependent and has a steep learning curve; Barash et al. [17] designed and evaluated an AI system for identifying focal liver lesions in intraoperative ultrasound and reported an overall classification accuracy of 74.6%.
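The Dice coefficient cited above is a standard overlap metric for evaluating segmentation against a ground-truth annotation. A minimal sketch of its computation on binary masks follows; the toy masks are illustrative.

```python
# Sketch: Dice coefficient between a predicted and a ground-truth binary
# segmentation mask, the overlap metric cited for anatomy recognition above.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: a 4-pixel prediction against a 6-pixel truth sharing 4 pixels
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
truth = np.zeros((4, 4)); truth[1:3, 1:4] = 1
print(dice_coefficient(pred, truth))  # 2*4 / (4+6) = 0.8
```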
Gastric cancer surgery
Because gastric cancer surgery is performed within a complex surgical anatomy, there are still few research reports on this subject. However, one important study has already been reported and is used for educational purposes. Kumazu et al. [18] reported an AI-supported system that displays loose connective tissue during robotic gastric cancer surgery, guiding the surgeon to an adequate dissection plane in real time (Figs. 2 and 3). A dissection maneuver that maintains an adequate plane is one of the most important points in preventing complications during gastric cancer surgery. Such a system may help avoid intraoperative trouble and reduce complications such as postoperative pancreatic fistula, not to mention its educational value for novice surgeons.
Fig. 2. Anatomical detection (intraoperative navigation) in robotic infrapyloric lymph node dissection. Connective tissue is highlighted in blue (B, D) and pancreatic tissue is highlighted in green (D), according to AI recognition (EUREKA system; Anaut, Tokyo, Japan). (A, C) Reference images, (B, D) AI-enhanced images. All the pictures are the authors’ own work using their own operation video.
AI = artificial intelligence.
Fig. 3. Anatomical detection (intraoperative navigation) in robotic suprapancreatic lymph node dissection. Connective tissue is highlighted in blue (B, D), pancreatic tissue is highlighted in green (B), and the nerve sheath surrounding the hepatic artery is highlighted in green (D), according to AI recognition (EUREKA system; Anaut, Tokyo, Japan). (A, C) Reference images, (B, D) AI-enhanced images. All the pictures are the authors’ own work using their own operation video.
AI = artificial intelligence.
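For illustration, the following sketch blends a predicted tissue mask into a video frame as a translucent colored overlay, analogous in spirit to the highlighting shown in Figs. 2 and 3; it is not the EUREKA system's actual implementation, and the frame and mask are toy values.

```python
# Sketch: blend a predicted connective-tissue mask into the video frame
# as a translucent blue overlay (illustrative display logic only).
import numpy as np

def overlay_mask(frame: np.ndarray, mask: np.ndarray,
                 color=(0, 0, 255), alpha: float = 0.4) -> np.ndarray:
    """frame: HxWx3 uint8; mask: HxW bool. Returns the blended frame."""
    out = frame.copy()
    color_arr = np.array(color, dtype=np.float32)
    blended = (1 - alpha) * frame[mask].astype(np.float32) + alpha * color_arr
    out[mask] = blended.astype(np.uint8)
    return out

# Toy example: highlight the lower half of a uniform gray frame
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool); mask[240:, :] = True
highlighted = overlay_mask(frame, mask)
```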
SURGICAL PHASE RECOGNITION
General
There have been relatively many reports of AI-based surgical phase recognition models created by DL using intraoperative snapshots (Fig. 4). Although these systems are not directly linked to clinical outcomes at this time, the technologies could become the basis for future AI-supported surgical developments. In recent years, many MIS procedures have been standardized and widely adopted, facilitating such studies even at a multicenter scale. Shinozuka et al. [19] developed a surgical phase recognition model for laparoscopic cholecystectomy and reported a high accuracy rate of 97%. Cheng et al. [20] developed a similar model using multicenter videos of laparoscopic cholecystectomy, which also showed a high accuracy rate of 91.05%. Golany et al. [21] classified laparoscopic cholecystectomies into levels according to their complexity and classified the surgical phases at each level. Although the accuracy of phase classification was inversely related to the complexity of the procedure, they succeeded in identifying the surgical phase 81% of the time, even for complex laparoscopic cholecystectomies (complexity level 5). Kitaguchi et al. [22] reported a surgical phase recognition model for laparoscopic sigmoidectomy with an accuracy rate of 91.9%. They also developed a model from 300 laparoscopic colorectal resections performed at 19 centers across Japan [23]. Whereas the previous single-center study had an accuracy of 91.9%, accuracy decreased in the multicenter setting owing to the diversity of surgical approaches: the accuracy of AI classification of actions (e.g., dissection, exposure) was 83.2%, and the accuracy of surgical instrument recognition was 51.2%. They also reported a model for transanal total mesorectal excision with an accuracy rate of 93.2% [24]. Sasaki et al. [25] reported a phase recognition model for laparoscopic liver resection with an accuracy rate of 94.7%. Takeuchi et al. [26] reported a model for robotic esophagectomy with an accuracy rate of 84%. They also reported accuracy rates in laparoscopic inguinal hernia repair of 88.81% and 85.82% for unilateral and bilateral repairs, respectively [27]; in unilateral hernia cases, the durations of peritoneal incision and hernia dissection detected by AI were significantly shorter for experts than for residents. Ward et al. [28] reported an AI phase recognition model for peroral endoscopic myotomy with an accuracy of 87.6%. Hashimoto et al. [29] reported a model for laparoscopic sleeve gastrectomy for obesity with an accuracy rate of 82% ± 4%. Furthermore, Eckhoff et al. [30] reported that a surgical phase recognition model for laparoscopic sleeve gastrectomy was applicable, by transfer learning, to the laparoscopic part of Ivor Lewis esophagectomy, suggesting that such models may be transferable to other procedures. Finally, Fujinaga et al. [31] reported that an AI model combining landmark detection and phase classification in laparoscopic cholecystectomy shows potential to prevent bile duct injury.
Fig. 4. Surgical phase of laparoscopic distal gastrectomy. The surgical process is classified into 8 categories, which can be recognized by artificial intelligence through deep learning.
LN = lymph node.
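As a minimal sketch of frame-wise phase recognition, the following code classifies sampled video frames with a CNN backbone and applies simple majority-vote smoothing. The eight phase labels are hypothetical stand-ins for the categories of Fig. 4, and the untrained backbone is illustrative only, not any cited model.

```python
# Sketch: frame-wise surgical phase classification with a CNN backbone,
# followed by simple temporal (majority-vote) smoothing.
import torch
from torchvision.models import resnet50

PHASES = ["port placement", "LN dissection (greater curvature)",
          "LN dissection (infrapyloric)", "duodenal transection",
          "LN dissection (suprapancreatic)", "LN dissection (lesser curvature)",
          "gastric transection", "reconstruction"]  # hypothetical 8-phase labels

backbone = resnet50(weights=None, num_classes=len(PHASES))  # fc resized to 8 classes
backbone.eval()

@torch.no_grad()
def predict_phases(frames: torch.Tensor, window: int = 5) -> list[str]:
    """frames: (T, 3, H, W) in [0, 1]. Majority-vote smoothing over `window`."""
    logits = backbone(frames)   # (T, 8)
    raw = logits.argmax(dim=1)  # per-frame phase index
    smoothed = []
    for t in range(len(raw)):
        lo, hi = max(0, t - window // 2), min(len(raw), t + window // 2 + 1)
        smoothed.append(int(raw[lo:hi].mode().values))  # most frequent neighbor
    return [PHASES[i] for i in smoothed]

# Example: 10 dummy frames sampled at 1 fps (untrained, so output is arbitrary)
print(predict_phases(torch.rand(10, 3, 224, 224)))
```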
Gastric cancer surgery
Because gastric cancer surgery involves so many steps, researchers have often hesitated to undertake such studies. However, Takeuchi et al. [32] developed a surgical phase recognition model for robotic distal gastrectomy at a single-center scale. Using a test validation model, they reported that this system is useful for intraoperative prediction of surgical difficulty based on how much time is spent in the early phases of the procedure. Such a system may also be useful when reviewing surgical videos after surgery and may have educational value. Furthermore, it could enable efficient operating room management by monitoring surgical progress intraoperatively, since the total duration of surgery can be predicted from the progress made in the early phases.
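As a naive illustration of such duration prediction (not the method of [32]), the following sketch scales historical phase durations by the pace observed in the completed early phases; all reference values are invented.

```python
# Sketch: a naive intraoperative estimate of total operative time from
# early-phase progress. Reference durations are invented toy values.
REFERENCE_PHASE_MINUTES = {  # hypothetical historical averages per phase
    "phase_1": 20.0, "phase_2": 35.0, "phase_3": 50.0, "phase_4": 45.0,
}

def estimate_total_minutes(completed: dict[str, float]) -> float:
    """Scale the historical total by how fast the completed phases went."""
    ref_total = sum(REFERENCE_PHASE_MINUTES.values())
    ref_done = sum(REFERENCE_PHASE_MINUTES[p] for p in completed)
    actual_done = sum(completed.values())
    pace = actual_done / ref_done  # >1: slower than average, <1: faster
    return actual_done + pace * (ref_total - ref_done)

# Example: the first two phases took 25 and 45 min (slower than average),
# so the remaining 95 reference minutes are scaled up by pace 70/55 ≈ 1.27
print(estimate_total_minutes({"phase_1": 25.0, "phase_2": 45.0}))  # ≈ 190.9
```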
SURGICAL SKILL ASSESSMENT
General
Another possibility for the use of AI in surgery is the objective, score-based evaluation of surgeons' skills using DL. Several fundamental studies have been conducted on this issue as well. First, AI models to assess endoscopic or robotic suturing and knot-tying skills in a dry box were reported from Germany [33] and the United States [34]. Igaki et al. [35] applied videos of laparoscopic colorectal surgery submitted for the JSES surgical skill qualification examination to an AI model and reported that the AI's confidence in recognizing a standardized surgical field correlated with the referees' scores, with a Spearman's rank correlation coefficient of 0.81. Subsequently, they reported that an AI model classifying JSES examination scores into three categories (mean −2 standard deviations, mean ±1 standard deviation, and mean +2 standard deviations) achieved an accuracy of 75.0% [36]. They also focused on skill assessment of the purse-string suture (in transanal total mesorectal excision) and reported that AI skill scores were significantly correlated with suture time and surgeon experience [37]. In addition, a study relating surgical skill to the recognition of bleeding in the surgical field reported that the number of blood pixels recognized by an AI model was significantly lower in the group with higher tissue-handling skill [38]. A study using dry-box training reported that novice, intermediate, and expert surgeons in robotic surgery could be discriminated with 92.5%, 95.4%, and 91.3% accuracy in suturing, needle-passing, and knot-tying tasks, respectively [39].
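As a toy example of the agreement analysis reported in [35], the following sketch computes Spearman's rank correlation between AI-derived skill scores and referee scores; the paired values are invented.

```python
# Sketch: agreement between AI-derived skill scores and referee scores via
# Spearman's rank correlation, the statistic reported in [35]. Toy data.
from scipy.stats import spearmanr

ai_scores      = [0.62, 0.71, 0.55, 0.90, 0.78, 0.48, 0.83]  # hypothetical
referee_scores = [61,   70,   58,   92,   89,   50,   85]    # hypothetical

rho, p_value = spearmanr(ai_scores, referee_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # rho ≈ 0.96 here
```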
Gastric cancer surgery
At this time, no studies have been conducted on AI systems for evaluating skills in gastric cancer surgery. However, as mentioned above, once a surgical phase model for gastric cancer is developed, it is highly likely that systems for skill assessment will be studied accordingly. Once these systems are developed and objective skill evaluation becomes possible, they will contribute broadly to the dissemination of safe gastric cancer surgery.
LIMITATION OF THE CURRENT AI-RESEARCH RELATED TO SURGERY
Thus, although there has been much AI-related research on surgery, there are some limitations at this time, and we should not overestimate its usefulness. First, most studies have been conducted using data from a single institution or a limited region, and the generalizability of their AI techniques is not clear. Second, most AI studies on surgery are at the basic research stage, and none have clearly demonstrated usefulness in general clinical practice. Third, navigation surgery using AI technology is not currently intended to achieve above-standard (expert-level) surgery, but rather to avoid simple human errors and to educate novice surgeons. While understanding these limitations, it is desirable to develop, without overconfidence, practical systems that will truly benefit clinical practice.
CONCLUSIONS
Although there are various limitations at this time, attempts to introduce AI technology into the surgical field will continue. At present, navigation surgery using anatomical guidance is probably the most realistic application. It would be significant if these technologies were incorporated into the field of gastric cancer surgery in the future to reduce human error, reduce the burden on the surgeon, and increase the benefits to the patient.
Footnotes
Conflict of Interest: No potential conflict of interest relevant to this article was reported.
- Conceptualization: K.T.
- Data curation: K.T., K.M.
- Methodology: K.T.
- Project administration: K.T.
- Visualization: K.T.
- Writing - original draft: K.T.
- Writing - review & editing: K.T., K.M.
References
- 1.Kitaguchi D, Takeshita N, Hasegawa H, Ito M. Artificial intelligence-based computer vision in surgery: recent advances and future perspectives. Ann Gastroenterol Surg. 2022;6:29–36. doi: 10.1002/ags3.12513. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Kitaguchi D, Watanabe Y, Madani A, Hashimoto DA, Meireles OR, Takeshita N, et al. Artificial intelligence for computer vision in surgery: a call for developing reporting guidelines. Ann Surg. 2022;275:e609–e611. doi: 10.1097/SLA.0000000000005319. [DOI] [PubMed] [Google Scholar]
- 3.Igaki T, Takenaka S, Watanabe Y, Kojima S, Nakajima K, Takabe Y, et al. Universal meta-competencies of operative performances: a literature review and qualitative synthesis. Surg Endosc. 2023;37:835–845. doi: 10.1007/s00464-022-09573-4. [DOI] [PubMed] [Google Scholar]
- 4.Shinohara H. Surgery utilizing artificial intelligence technology: why we should not rule it out. Surg Today. 2022;2022:3. doi: 10.1007/s00595-022-02601-9. [DOI] [PubMed] [Google Scholar]
- 5.Lalys F, Jannin P. Surgical process modelling: a review. Int J CARS. 2014;9:495–511. doi: 10.1007/s11548-013-0940-5. [DOI] [PubMed] [Google Scholar]
- 6.Pernek I, Ferscha A. A survey of context recognition in surgery. Med Biol Eng Comput. 2017;55:1719–1734. doi: 10.1007/s11517-017-1670-6. [DOI] [PubMed] [Google Scholar]
- 7.Loftus TJ, Tighe PJ, Filiberto AC, Efron PA, Brakenridge SC, Mohr AM, et al. Artificial intelligence and surgical decision-making. JAMA Surg. 2020;155:148–158. doi: 10.1001/jamasurg.2019.4917. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Bamba Y, Ogawa S, Itabashi M, Kameoka S, Okamoto T, Yamamoto M. Automated recognition of objects and types of forceps in surgical images using deep learning. Sci Rep. 2021;11:22571. doi: 10.1038/s41598-021-01911-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Kitaguchi D, Fujino T, Takeshita N, Hasegawa H, Mori K, Ito M. Limited generalizability of single deep neural network for surgical instrument segmentation in different surgical environments. Sci Rep. 2022;12:12575. doi: 10.1038/s41598-022-16923-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Madad Zadeh S, Francois T, Calvet L, Chauvet P, Canis M, Bartoli A, et al. SurgAI: deep learning for computerized laparoscopic image understanding in gynaecology. Surg Endosc. 2020;34:5377–5383. doi: 10.1007/s00464-019-07330-8. [DOI] [PubMed] [Google Scholar]
- 11.Yamazaki Y, Kanaji S, Matsuda T, Oshikiri T, Nakamura T, Suzuki S, et al. Automated surgical instrument detection from laparoscopic gastrectomy video images using an open source convolutional neural network platform. J Am Coll Surg. 2020;230:725–732.e1. doi: 10.1016/j.jamcollsurg.2020.01.037. [DOI] [PubMed] [Google Scholar]
- 12.Yamazaki Y, Kanaji S, Kudo T, Takiguchi G, Urakawa N, Hasegawa H, et al. Quantitative comparison of surgical device usage in laparoscopic gastrectomy between surgeons’ skill levels: an automated analysis using a neural network. J Gastrointest Surg. 2022;26:1006–1014. doi: 10.1007/s11605-021-05161-4. [DOI] [PubMed] [Google Scholar]
- 13.Kitaguchi D, Takeshita N, Matsuzaki H, Igaki T, Hasegawa H, Kojima S, et al. Real-time vascular anatomical image navigation for laparoscopic surgery: experimental study. Surg Endosc. 2022;36:6105–6112. doi: 10.1007/s00464-022-09384-7. [DOI] [PubMed] [Google Scholar]
- 14.Igaki T, Kitaguchi D, Kojima S, Hasegawa H, Takeshita N, Mori K, et al. Artificial intelligence-based total mesorectal excision plane navigation in laparoscopic colorectal surgery. Dis Colon Rectum. 2022;65:e329–e333. doi: 10.1097/DCR.0000000000002393. [DOI] [PubMed] [Google Scholar]
- 15.Sato K, Fujita T, Matsuzaki H, Takeshita N, Fujiwara H, Mitsunaga S, et al. Real-time detection of the recurrent laryngeal nerve in thoracoscopic esophagectomy using artificial intelligence. Surg Endosc. 2022;36:5531–5539. doi: 10.1007/s00464-022-09268-w. [DOI] [PubMed] [Google Scholar]
- 16.Park SH, Park HM, Baek KR, Ahn HM, Lee IY, Son GM. Artificial intelligence based real-time microcirculation analysis system for laparoscopic colorectal surgery. World J Gastroenterol. 2020;26:6945–6962. doi: 10.3748/wjg.v26.i44.6945. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Barash Y, Klang E, Lux A, Konen E, Horesh N, Pery R, et al. Artificial intelligence for identification of focal lesions in intraoperative liver ultrasonography. Langenbecks Arch Surg. 2022;407:3553–3560. doi: 10.1007/s00423-022-02674-7. [DOI] [PubMed] [Google Scholar]
- 18.Kumazu Y, Kobayashi N, Kitamura N, Rayan E, Neculoiu P, Misumi T, et al. Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy. Sci Rep. 2021;11:21198. doi: 10.1038/s41598-021-00557-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Shinozuka K, Turuda S, Fujinaga A, Nakanuma H, Kawamura M, Matsunobu Y, et al. Artificial intelligence software available for medical devices: surgical phase recognition in laparoscopic cholecystectomy. Surg Endosc. 2022;36:7444–7452. doi: 10.1007/s00464-022-09160-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Cheng K, You J, Wu S, Chen Z, Zhou Z, Guan J, et al. Artificial intelligence-based automated laparoscopic cholecystectomy surgical phase recognition and analysis. Surg Endosc. 2022;36:3160–3168. doi: 10.1007/s00464-021-08619-3. [DOI] [PubMed] [Google Scholar]
- 21.Golany T, Aides A, Freedman D, Rabani N, Liu Y, Rivlin E, et al. Artificial intelligence for phase recognition in complex laparoscopic cholecystectomy. Surg Endosc. 2022;36:9215–9223. doi: 10.1007/s00464-022-09405-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Kitaguchi D, Takeshita N, Matsuzaki H, Takano H, Owada Y, Enomoto T, et al. Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surg Endosc. 2020;34:4924–4931. doi: 10.1007/s00464-019-07281-0. [DOI] [PubMed] [Google Scholar]
- 23.Kitaguchi D, Takeshita N, Matsuzaki H, Oda T, Watanabe M, Mori K, et al. Automated laparoscopic colorectal surgery workflow recognition using artificial intelligence: experimental research. Int J Surg. 2020;79:88–94. doi: 10.1016/j.ijsu.2020.05.015. [DOI] [PubMed] [Google Scholar]
- 24.Kitaguchi D, Takeshita N, Matsuzaki H, Hasegawa H, Igaki T, Oda T, et al. Deep learning-based automatic surgical step recognition in intraoperative videos for transanal total mesorectal excision. Surg Endosc. 2022;36:1143–1151. doi: 10.1007/s00464-021-08381-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Sasaki K, Ito M, Kobayashi S, Kitaguchi D, Matsuzaki H, Kudo M, et al. Automated surgical workflow identification by artificial intelligence in laparoscopic hepatectomy: experimental research. Int J Surg. 2022;105:106856. doi: 10.1016/j.ijsu.2022.106856. [DOI] [PubMed] [Google Scholar]
- 26.Takeuchi M, Kawakubo H, Saito K, Maeda Y, Matsuda S, Fukuda K, et al. Automated surgical-phase recognition for robot-assisted minimally invasive esophagectomy using artificial intelligence. Ann Surg Oncol. 2022;29:6847–6855. doi: 10.1245/s10434-022-11996-1. [DOI] [PubMed] [Google Scholar]
- 27.Takeuchi M, Collins T, Ndagijimana A, Kawakubo H, Kitagawa Y, Marescaux J, et al. Automatic surgical phase recognition in laparoscopic inguinal hernia repair with artificial intelligence. Hernia. 2022;26:1669–1678. doi: 10.1007/s10029-022-02621-x. [DOI] [PubMed] [Google Scholar]
- 28.Ward TM, Hashimoto DA, Ban Y, Rattner DW, Inoue H, Lillemoe KD, et al. Automated operative phase identification in peroral endoscopic myotomy. Surg Endosc. 2021;35:4008–4015. doi: 10.1007/s00464-020-07833-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Hashimoto DA, Rosman G, Witkowski ER, Stafford C, Navarette-Welton AJ, Rattner DW, et al. Computer vision analysis of intraoperative video: automated recognition of operative steps in laparoscopic sleeve gastrectomy. Ann Surg. 2019;270:414–421. doi: 10.1097/SLA.0000000000003460. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Eckhoff JA, Ban Y, Rosman G, Müller DT, Hashimoto DA, Witkowski E, et al. TEsoNet: knowledge transfer in surgical phase recognition from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor-Lewis esophagectomy. Surg Endosc. 2023;37:4040–4053. doi: 10.1007/s00464-023-09971-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Fujinaga A, Endo Y, Etoh T, Kawamura M, Nakanuma H, Kawasaki T, et al. Development of a cross-artificial intelligence system for identifying intraoperative anatomical landmarks and surgical phases during laparoscopic cholecystectomy. Surg Endosc. 2023;37:6118–6128. doi: 10.1007/s00464-023-10097-8. [DOI] [PubMed] [Google Scholar]
- 32.Takeuchi M, Kawakubo H, Tsuji T, Maeda Y, Matsuda S, Fukuda K, et al. Evaluation of surgical complexity by automated surgical process recognition in robotic distal gastrectomy using artificial intelligence. Surg Endosc. 2023;37:4517–4524. doi: 10.1007/s00464-023-09924-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Kowalewski KF, Garrow CR, Schmidt MW, Benner L, Müller-Stich BP, Nickel F. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc. 2019;33:3732–3740. doi: 10.1007/s00464-019-06667-4. [DOI] [PubMed] [Google Scholar]
- 34.Fard MJ, Ameri S, Darin Ellis R, Chinnam RB, Pandya AK, Klein MD. Automated robot-assisted surgical skill evaluation: predictive analytics approach. Int J Med Robot. 2018;14:e1850. doi: 10.1002/rcs.1850. [DOI] [PubMed] [Google Scholar]
- 35.Igaki T, Kitaguchi D, Matsuzaki H, Nakajima K, Kojima S, Hasegawa H, et al. Automatic surgical skill assessment system based on concordance of standardized surgical field development using artificial intelligence. JAMA Surg. 2023 doi: 10.1001/jamasurg.2023.1131. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Kitaguchi D, Takeshita N, Matsuzaki H, Igaki T, Hasegawa H, Ito M. Development and validation of a 3-dimensional convolutional neural network for automatic surgical skill assessment based on spatiotemporal video analysis. JAMA Netw Open. 2021;4:e2120786. doi: 10.1001/jamanetworkopen.2021.20786. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Kitaguchi D, Teramura K, Matsuzaki H, Hasegawa H, Takeshita N, Ito M. Automatic purse-string suture skill assessment in transanal total mesorectal excision using deep learning-based video analysis. BJS Open. 2023;7:zrac176. doi: 10.1093/bjsopen/zrac176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Sasaki S, Kitaguchi D, Takenaka S, Nakajima K, Sasaki K, Ogane T, et al. Machine learning-based automatic evaluation of tissue handling skills in laparoscopic colorectal surgery: a retrospective experimental study. Ann Surg. 2023;278:e250–e255. doi: 10.1097/SLA.0000000000005731. [DOI] [PubMed] [Google Scholar]
- 39.Wang Z, Majewicz Fey A. Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int J CARS. 2018;13:1959–1970. doi: 10.1007/s11548-018-1860-1. [DOI] [PubMed] [Google Scholar]