Plastic and Reconstructive Surgery Global Open
2022 Dec 2;10(12):e4608. doi: 10.1097/GOX.0000000000004608

A Systematic Review of Artificial Intelligence Applications in Plastic Surgery: Looking to the Future

Daisy L Spoer*, Julianne M Kiene*, Paige K Dekker, Samuel S Huffman*, Kevin G Kim§, Andrew I Abadeer, Kenneth L Fan†¶
PMCID: PMC9722565  PMID: 36479133

Background:

Artificial intelligence (AI) is presently employed in several medical specialties, particularly those that rely on large quantities of standardized data. The integration of AI in surgical subspecialties is under preclinical investigation but is yet to be widely implemented. Plastic surgeons collect standardized data in various settings and could benefit from AI. This systematic review investigates the current clinical applications of AI in plastic and reconstructive surgery.

Methods:

A comprehensive literature search of the Medline, EMBASE, Cochrane, and PubMed databases was conducted for AI studies with multiple search terms. Articles that progressed beyond the title and abstract screening were then subcategorized based on the plastic surgery subspecialty and AI application.

Results:

The systematic search yielded a total of 1820 articles. Forty-four studies met inclusion criteria warranting further analysis. Subcategorization of articles by plastic surgery subspecialties revealed that most studies fell into aesthetic and breast surgery (27%), craniofacial surgery (23%), or microsurgery (14%). Analysis of the research study phase of included articles indicated that the current research is primarily in phase 0 (discovery and invention; 43.2%), phase 1 (technical performance and safety; 27.3%), or phase 2 (efficacy, quality improvement, and algorithm performance in a medical setting; 27.3%). Only one study demonstrated translation to clinical practice.

Conclusions:

The potential of AI to optimize clinical efficiency is being investigated in every subfield of plastic surgery, but much of the research to date remains in the preclinical status. Future implementation of AI into everyday clinical practice will require collaborative efforts.


Takeaways

Question: What are the applications and limitations of artificial intelligence (AI) in plastic surgery?

Findings: In plastic surgery, AI displays high accuracy in preclinical research. It is most often used to aid in visual diagnosis in aesthetic, breast, and craniofacial surgery.

Meaning: Despite widespread investigations of AI in plastic surgery, the research remains largely pre-clinical; translating AI to the bedside will likely require further standardization of data collection and interinstitutional collaboration.

INTRODUCTION

Artificial intelligence (AI) is unified by the idea of a system that displays intelligent behavior. AI’s robust automated computing power can reduce diagnostic errors, conserve resources, and increase efficiency.1–3 The most successful clinical applications of AI are seen in medical specialties that inherently collect standardized data, such as radiology, pathology, ophthalmology, and dermatology.1 Conversely, the decreased emphasis on standardized data collection in surgery may be limiting the corresponding development of clinical AI.4 Plastic surgery is uniquely reliant on visual diagnosis, prediction, and assessment of aesthetic outcomes. Such AI tasks are achievable and could enhance efficiency and accuracy in plastic surgery. However, the standardization of data and integration of AI in plastic surgery are not well characterized.

In this systematic review, we aim to investigate the current applications of AI in plastic surgery within eight subspecialties: aesthetic and breast surgery, general plastic and reconstructive surgery, craniofacial surgery, burn surgery, microsurgery, oral and maxillofacial surgery, wound care, and hand surgery. We further distinguish the AI implemented in each study by subfield (Table 1) and study phase to characterize current applications and future directions for integrating AI into clinical practice.2,4,5–18

Table 1.

Definitions and Examples of AI Subfields

Subfield Description Examples
ML Machines can recognize patterns in data and make predictions without explicit programming5 Supervised ML: machine uses training data (eg, EKG tracings) to make output predictions (eg, normal sinus rhythm), which the machine compares to a known correct output (eg, myocardial infarction) to modify its algorithm accordingly.5,6,7 Supervised ML must be programmed to recognize appropriate features of a given input and be fed immense amounts of data points with known comparable output values to build an algorithm that is both generalizable and accurate.1,8 Unsupervised ML: makes inferences on unlabeled data to discover groups within the data (clustering), propose rules that describe large portions of the data (association), or generate novel data points independently (generative modeling).5,6,9,10 For example, unsupervised ML has been used to identify patients at risk for dementia based on population surveys by clustering survey respondents into likelihood groups.11
NLP Infers meaning and sentiment from unstructured human language data, such as video, audio files, or text files. NLP can be useful for patient visit documentation and large-scale EMR analyses. For example, NLP has been used to identify words and phrases in operative reports that may predict postoperative complications.12
CV Machines analyze images and videos with the goal of mimicking human visual perception and reasoning. Involves image analysis using geometric and physical statistical modeling and data-based ML approaches.
ANN Modeled after biological nervous systems: computational units or “neurons” in hidden intermediary layer(s) form connections between each other as well as the input and output cell layers.13 DNNs consist of several layers of processing, which can independently extract features from complex inputs (images/speech/video) and arrive at conclusions without the need for explicit programming.14 Some examples of DNNs include CNNs and GANs.2 CNNs perform independent perceptual tasks (eg, interpreting medical scans, pathology slides, skin lesions, facial features) and predict probabilities of classification (eg, presence or absence of disease) beyond the scope of what computer vision was previously capable of.15 CNNs have shown validated clinical efficacy in the detection of diabetic retinopathy, wrist fractures, histologic breast metastases, and small colonic polyps.2 GANs are composed of two networks which synergistically generate data samples.16 While the generator uses the data set (eg, real photos of malignant melanoma) to produce novel samples of data (eg, fake photographs of malignant melanoma), the discriminator trains the generator by distinguishing data as real or fake. This mechanism ultimately yields generated data which are indistinguishable from the original set.11,6,17 GANs have been used to de-noise/reconstruct and synthesize medical images for patient privacy, cross-modality imaging (MR↔CT, histopathology color normalization, MR→PET, real↔synthetic), and training sets for CNNs or clinicians.22,17,18

ANNs, artificial neural networks; CNNs, convolutional neural network; CV, computer vision; DNNs, deep neural networks; GANs, generative adversarial networks; ML, machine learning; NLP, natural language processing.
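The supervised-learning loop described in Table 1 (fit a model on labeled training data, then predict labels for new inputs) can be sketched minimally as follows. This is an illustrative toy, not a model from any included study: the two-dimensional points and the "disease"/"no disease" labels are invented, and a simple nearest-centroid classifier stands in for the far more complex algorithms the reviewed articles used.

```python
# Toy illustration of supervised ML as described in Table 1: a model is
# fit on labeled training examples, then classifies new inputs. All data
# below are synthetic and purely illustrative.

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) for each class label."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = tuple(sum(c) / len(pts) for c in zip(*pts))
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Training data: two well-separated synthetic clusters
X_train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y_train = ["no disease", "no disease", "disease", "disease"]

model = fit_centroids(X_train, y_train)
print(predict(model, (0.05, 0.1)))  # near the first cluster -> "no disease"
print(predict(model, (0.95, 1.0)))  # near the second cluster -> "disease"
```

In the reviewed studies, the "machine compares to a known correct output" step corresponds to evaluating these predictions against a ground-truth label, as discussed under Data Extraction and Analysis below.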

MATERIALS AND METHODS

Search Criteria

A systematic database search was performed on April 11, 2020, to identify all scientific articles covering applications of AI in plastic and reconstructive surgery. The search strategy was designed according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Fig. 1).19 Keywords and Medical Subject Heading (MeSH) terms regarding AI and plastic surgery were combined with Boolean operators for efficiency. The PRISMA flow diagram was generated using Visio.

Fig. 1.

Fig. 1.

PRISMA flow diagram.

Medline (1946–present), Embase (1947–present), Cochrane Databases, and PubMed electronic databases were searched using the refined search strategy. Table 2 outlines the complete search strategy conducted in Medline; the remaining database searches had the same structure with slight changes accounting for specific keywords recognized by each database. (See figure, Supplemental Digital Content 1, which displays a search strategy performed in Embase (A), Cochrane Registers (B), and PubMed (C). All searches conducted on April 11, 2020, http://links.lww.com/PRSGO/C210.) Identified articles were preliminarily filtered using RefWorks ProQuest to remove duplicates and records missing keywords/tags.20 (See figure, Supplemental Digital Content 2, which displays a filtering application used on RefWorks ProQuest April 2020, http://links.lww.com/PRSGO/C211.) The remaining articles were downloaded into Microsoft Excel for the title and abstract screening.
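The preliminary filtering step above (removing duplicates and records missing keywords/tags) was performed in RefWorks ProQuest, but the logic can be approximated programmatically. The sketch below is a hypothetical reconstruction: the record fields ("title", "keywords") and the sample records are assumptions for illustration only.

```python
# Hypothetical sketch of the preliminary filter described above:
# drop duplicate records and records with no keywords/tags. The
# field names and sample records are invented for illustration;
# the review performed this step in RefWorks ProQuest.
records = [
    {"title": "AI in rhinoplasty", "keywords": ["AI", "rhinoplasty"]},
    {"title": "AI in rhinoplasty", "keywords": ["AI", "rhinoplasty"]},  # duplicate
    {"title": "CNN flap monitoring", "keywords": []},                   # no tags
    {"title": "ML burn depth", "keywords": ["machine learning"]},
]

seen, screened = set(), []
for rec in records:
    key = rec["title"].strip().lower()   # normalize titles for matching
    if key in seen or not rec["keywords"]:
        continue                          # skip duplicates and untagged records
    seen.add(key)
    screened.append(rec)

print(len(screened))  # 2 records remain for title/abstract screening
```

In the actual review, this step reduced 1820 identified records by 547 duplicates and 610 records missing keywords or tags before title and abstract screening.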

Table 2.

Example Search Strategy Performed in Medline

Number Searches
1 Plastic surgery.mp. or exp Surgery, Plastic/
2 Reconstructive surgical procedures.mp. or exp Reconstructive Surgical Procedures/
3 exp Esthetics/ or Esthetics.mp.
4 exp Microsurgery/ or microsurgery.mp.
5 1 or 2 or 3 or 4
6 artificial intelligence.mp. or exp Artificial Intelligence/
7 Algorithms.mp. or exp Algorithms/
8 Neural Networks Computer.mp. or exp “Neural Networks (Computer)”/
9 artificial neural network.mp.
10 Computer Simulation.mp. or exp Computer Simulation/
11 Machine Learning.mp. or exp Machine Learning/
12 Deep learning.mp. or exp Deep Learning/
13 Pattern Recognition, Automated.mp. or exp Pattern Recognition, Automated/
14 Three-Dimensional Imaging.mp. or exp Imaging, Three-Dimensional/
15 Automation.mp. or exp Automation/
16 Electronic Data Processing.mp. or exp Electronic Data Processing/
17 Information processing.mp. or exp Electronic Data Processing/
18 Computer-Aided Design.mp. or exp Computer-Aided Design/
19 Computer Security.mp. or exp Computer Security/
20 Computer Simulation.mp. or exp Computer Simulation/
21 Image Processing, Computer-Assisted.mp. or exp Image Processing, Computer-Assisted/
22 Signal Processing, Computer-Assisted.mp. or exp Signal Processing, Computer-Assisted/
23 Facial Recognition.mp. or exp Facial Recognition/
24 Biomedical Technology.mp. or exp Biomedical Technology/
25 Computer assisted diagnosis.mp. or exp Diagnosis, Computer-Assisted/
26 Robotics.mp. or exp Robotics/
27 Electronic Data Processing.mp. or exp Electronic Data Processing/
28 Information processing.mp. or exp Electronic Data Processing/
29 Big data.mp. or exp Big Data/
30 Radiographic Image Interpretation, Computer Assisted.mp. or exp Radiographic Image Interpretation, Computer-Assisted/
31 Computer assisted diagnosis.mp. or exp Diagnosis, Computer-Assisted/
32 Biometric Identification.mp. or exp Biometric Identification/
33 biometry.mp. or exp Biometry/
34 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16 or 17 or 18 or 19 or 20 or 21 or 22 or 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 or 32 or 33
35 5 and 34
36 6 and 35
37 Limit 36 to English language

Searches conducted on April 11, 2020.
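The Medline strategy in Table 2 follows a standard structure: subject-matter terms (lines 1–4) are combined with OR into one block (line 5), AI-related terms (lines 6–33) into another (line 34), and the two blocks are then intersected with AND (line 35). A minimal sketch of assembling such a Boolean query string, using a small illustrative subset of the terms:

```python
# Sketch of the Boolean structure used in Table 2: (surgery terms ORed)
# AND (AI terms ORed). Only a subset of the actual terms is shown.
surgery_terms = ["plastic surgery", "reconstructive surgical procedures",
                 "esthetics", "microsurgery"]
ai_terms = ["artificial intelligence", "machine learning",
            "deep learning", "neural networks"]

def or_block(terms):
    """Join quoted terms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_block(surgery_terms) + " AND " + or_block(ai_terms)
print(query)
```

Note that the real strategy also uses database-specific syntax (`.mp.` free-text fields and `exp` MeSH explosion in Ovid Medline), which this plain-string sketch omits.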

Eligibility Criteria

Database searches were limited to articles written in English. Two blinded reviewers independently reviewed titles and abstracts (D.S. and J.K.). Exclusion criteria are detailed in Figure 1. Discrepancies were discussed and re-reviewed until complete agreement was reached. The remaining titles and abstracts were categorized into 24 medical and surgical specialties. Records assigned to the specialties of plastic and reconstructive surgery, nonbreast microsurgery, oral and maxillofacial surgery, and craniofacial surgery were then independently reviewed for final inclusion. Primary studies reporting clinically relevant AI applications to plastic surgery were included. Secondary studies were checked for unidentified primary studies. Systematic reviews, correspondences, and conference presentations were considered secondary literature and were replaced by their respective primary studies if applicable.

Papers were subcategorized into clinical applications as follows: (1) predictive analytics; (2) preoperative consultation; (3) streamlining clinic and procedures; (4) postoperative quality assessment; and (5) medical research.21 Clinical applications were determined by an in-depth qualitative synthesis of each article’s subspecialty-specific primary endpoints and interpretation of implications within a clinical setting.

Data Extraction and Analysis

The primary outcome for quantitative analysis was the AI model’s performance or prediction accuracy. The most probable prediction accuracies were extracted from the referenced models to compare the agreement between the predicted and actual outcomes across studies. Secondary outcomes included the study phase of the AI. Study phases were classified according to a clinical research framework for AI in medicine: phase 0: discovery and invention; phase 1: technical performance and safety; phase 2: efficacy, quality improvement, and algorithm performance in a medical setting; phase 3: clinical therapeutic efficacy; and phase 4: safety and effectiveness (Fig. 2).22 The limitations of AI performance were further categorized by phase-specific obstacles described in figure, Supplemental Digital Content 3, which displays characteristics of included articles on artificial intelligence in plastic surgery (http://links.lww.com/PRSGO/C212).

Fig. 2.

Fig. 2.

Study phase of AI in healthcare. Count of included papers for each study phase of AI in healthcare. Color shows details about subspecialty of plastic surgery. The marks are labeled by subspecialty of plastic surgery and count of included papers. Study phases were classified according to a clinical research framework of AI in medicine: “Phase 0: Discovery and invention,” “Phase 1: Technical performance and safety,” “Phase 2: Efficacy, quality improvement, and algorithm performance in a medical setting,” “Phase 3: Clinical therapeutic efficacy,” and “Phase 4: Safety and effectiveness.” AI in healthcare subtypes were replicated from Park et al. Of the 44 studies, 19 were phase 0, 12 were phase 1, 12 were phase 2, 1 was phase 3, and none were phase 4.

Data extraction parameters included article publication details (author name, year, journal, country published), clinically relevant information (subspecialty, application, and study aim), AI details (study type and phase of AI, limitation, AI subfield, input, and output data descriptions and dimensions, dataset source, number of data used for training/testing/validation), and performance details (validation method, performance outcome measures). Performance was quantified by standardized accuracies, with accuracy referring to the correctly predicted data points in relation to a “ground truth.” The definition of a ground truth depends on the evaluated task, though it is the reference standard for AI research. Articles were subcategorized according to the subspecialty of plastic and reconstructive surgery and the study phase of AI in health care. The subfields of AI are described further in Table 1.2,4,618 Analysis and visual representation of article characteristics, including subspecialty, study phase, AI subfield, and geographical location published, were conducted using Tableau Software and Prism.
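Accuracy, as the review uses the term, is the fraction of predicted data points that match the ground-truth reference standard. A minimal sketch of that computation, with invented labels purely for illustration:

```python
# Accuracy as defined above: correctly predicted data points relative
# to a "ground truth" reference standard. Labels here are invented.
def accuracy(predicted, ground_truth):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

pred  = ["disease", "disease", "no disease", "disease"]
truth = ["disease", "no disease", "no disease", "disease"]
print(accuracy(pred, truth))  # 3 of 4 correct -> 0.75
```

As the text notes, what constitutes the ground truth varies by task (eg, a histologic diagnosis, an actual wound-healing outcome), which is why accuracies were standardized before comparison across studies.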

Quality Appraisal

To assess the quality of the included studies, the authors applied the methodological index for non-randomized studies (MINORS) adapted by Ren et al to appraise diagnostic AI research.23 Definitions of the MINORS measures used are provided in Figure 3.5,23,24

Fig. 3.

Fig. 3.

Quality appraisal of AI in plastic surgery. Methodological quality summary for included studies: disclosures, stated aim, prospective, data source, comparison, ground truth, accuracy, AUC (color) broken down by author, year. Representation of quality for nonrandomized controlled trials modified from Ren et al. 1 (white): not reported; 2 (green): reported and adequate; 0 (red): reported but inadequate. Comparisons were considered “equivalent” if the comparison group was similar to the study group as measured by criteria other than the study endpoints and if there were no clear confounding variables that could influence the comparison. The study was considered to have an “adequate control group” if it was compared to a gold standard diagnostic test or therapeutic intervention recognized as the optimal intervention according to the available published data. The ground truth was considered contemporary if the control and study groups were managed during the same time and there was no historical comparison. Performance accuracy: the highest reported probability as reported in the body of the included article.

Statistical Analysis

Descriptive statistics were used to evaluate trends and categorize data. Post-hoc statistical analysis was performed using Prism. Qualitative variables were translated into categorically coded numbers. Quantitative measurements were standardized to be unitless and are reported as discrete counts (data points and limitations) or percentages (accuracy). Accuracies were extracted as available.

Ordinary one-way analysis of variance was used to evaluate differences between mean accuracies when stratified by subspecialty of plastic surgery, AI subfield, and study phase. A nonparametric Spearman r correlation matrix was used to assess correlations between study phases, the number of data used for training/testing/validating the algorithm, number of limitations in each phase, the number of total limitations, and reported accuracy. Statistical significance was determined by P values less than 0.05.
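The two post-hoc analyses described above (a one-way ANOVA across group means and a Spearman rank correlation) can be sketched with SciPy. All numbers below are invented for illustration and are not data from the review; the grouping variable and values are assumptions.

```python
# Sketch of the post-hoc analyses described above, on invented data:
# one-way ANOVA comparing mean accuracies across three hypothetical
# groups, and a Spearman rank correlation between test-set size and
# accuracy. None of these numbers come from the review.
from scipy import stats

# Hypothetical reported accuracies (%) grouped by an article feature
group_a = [88, 92, 95, 90]
group_b = [85, 91, 89, 93]
group_c = [94, 96, 90, 92]
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

# Hypothetical number of test data points vs. reported accuracy (%)
n_test = [50, 120, 300, 800, 2000]
acc    = [78, 84, 88, 91, 95]
rho, p_rho = stats.spearmanr(n_test, acc)

print(round(rho, 2))       # perfectly monotone data -> rho = 1.0
significant = p_anova < 0.05  # the P < 0.05 threshold used in the review
```

Spearman's rho operates on ranks, which suits the review's mix of ordinal study phases, limitation counts, and widely varying sample sizes better than a linear (Pearson) correlation would.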

RESULTS

Study Selection

The PRISMA flow diagram (Fig. 1) demonstrates the article retrieval and screening mentioned above. Database searching identified 1820 records, of which 547 duplicates and 610 records missing keywords or tags were removed. (See figure, Supplemental Digital Content 2, http://links.lww.com/PRSGO/C211.)20 Studies were excluded (n = 603) during the title and abstract screening if they did not mention AI or plastic surgery, were out of scope, were a referencing study or an additional duplicate, or if the full text was not available (Fig. 1). Of the 60 full-text articles screened, 44 met the criteria and were included in the systematic review. Performance metrics were heterogeneous, although 41 studies reported the accuracy of the AI model. (See figure, Supplemental Digital Content 4, which displays characteristics of included AI models on artificial intelligence in plastic surgery, http://links.lww.com/PRSGO/C213.)

Study Characteristics

The number of manuscripts published on AI in plastic surgery has increased over time. (See figure, Supplemental Digital Content 5, which displays plastic surgery subspecialties in included papers, http://links.lww.com/PRSGO/C214.) Most authors published studies from the United States (n = 15, 34%)25–36 followed by Japan (n = 4, 9%)37–40 and the United Kingdom (n = 3, 7%).41–43 (See figure, Supplemental Digital Content 6, which displays a map of included articles, http://links.lww.com/PRSGO/C215.) (See figure, Supplemental Digital Content 7, which displays institutions researching AI in the United States, http://links.lww.com/PRSGO/C216.) The remaining 22 articles were published among authors from 16 other countries.

The 44 studies were published in a total of 30 journals. Articles most often appeared in the Institute of Electrical and Electronics Engineers (n = 5, 11%),33,44,45,46 Plastic and Reconstructive Surgery (n = 4, 9%),17,28,32,33,48,49 The Journal of Craniofacial Surgery (n = 3, 7%),30,37,49 and Scientific Reports (n = 2, 4.5%).43,50 Approximately two-thirds (n = 29, 66%) of the articles appeared in journals with a clinical focus, with the remaining (n = 15, 34%) appearing in journals with a more technical scope.

AI was primarily used to aid in diagnosis and surgical planning (n = 9)25,26,29,30,44,51–54 and to aid in the prediction (n = 12)29,31,35,37,40,41,55–59 or objective evaluation of outcomes (n = 10)28,27,47,50,56,60–63 within all subfields of plastic and reconstructive surgery. (See figure, Supplemental Digital Content 5, http://links.lww.com/PRSGO/C214.) AI was frequently used to predict indications for surgical reconstruction,17,24,25,30,31,33,43,57,62,64 the likelihood of clinical and technical outcomes (blood loss,65 swelling,50 response to antibiotics,40 wound healing,35,57,66 surgical site infection,62 flap failure,55 and overall survival64), anatomical landmarks for surgical planning (skeletal profiles,30,31,37 midfacial plane,29 and perforators52), and measurements of qualitative postoperative success (improvements in aesthetics,28,29,49,53,56,60 form,61,63 and function32,47,54,67). The most popular subspecialty was aesthetic and breast surgery (n = 12, 27%),25–28,47,49,52,55,56,60,61,68 followed by craniofacial surgery (n = 10, 23%),29–32,37,41,51,54,67,68 nonbreast microsurgery (n = 6, 14%),39,42,44,45,47,62 burn surgery (n = 5, 11%),33,40,57,58,64 general plastic surgery (n = 4, 9%),17,34,48,69 oral and maxillofacial surgery (n = 3, 7%),43,50,65 wound care (n = 2, 5%),35,59 and hand surgery (n = 2, 5%).36,46 AI models were most often in study phases 0 (n = 19)29–31,33,37–39,41–46,57,61–64,68 and 1 (n = 12, 27%); only one article (2%) investigated clinical therapeutic efficacy (phase 3) (Fig. 2).27 AI models were frequently burdened by phase 0 limitations related to data, statistical performance, and workflow (n = 26) or phase 1 limitations related to technical performance and safety (n = 28). However, many studies were limited by multiple factors from various limitation phases (mean limitations per study = 3.11, n = 44). (See table, Supplemental Digital Content 4, http://links.lww.com/PRSGO/C213.)

AI Model Characteristics

AI model metrics (input and output variables, data sources, number of training data, and comparisons) are outlined in figure, Supplemental Digital Content 4 (http://links.lww.com/PRSGO/C213). The most frequently used algorithms were supervised machine learning (n = 11, 25%)29–31,33,34,47,53,54,58,59,65 and artificial neural networks (n = 11, 25%).36,40,42,43,50,57,60–64 Other algorithms used were convolutional neural networks (n = 8, 19%),25,27,28,37,38,39,49,56 unsupervised machine learning (n = 4, 9%),41,45,55,68 natural language processing (n = 4, 9%),34,48,67,68 generative adversarial networks (n = 2, 5%),17,26 computer vision (n = 2, 5%),32,52 and combinations of models (combo; n = 2, 5%).43,44 Input features were typically composed of raw and preprocessed variables, such as subject characteristics (age, lapse time, comorbidities, vital signs, laboratory values, anatomical and wound measurements, tissue reflectance spectrum), clinical images (facial photography, CT images, angiography, photoplethysmography, dermatoscopy, 3D cephalograms), surgical factors (surgical approach, intraoperative interactions with equipment), and synthetic or experimentally derived metrics (external muscle stimulation pulse widths, frequently asked questions). Input data were typically acquired from institutional electronic health records, publicly available databases, or experimentally generated synthetic data. Output values were closely related to the study aim, and most (n = 30, 69%) were one-dimensional measurements of categorically predicted outcomes (eg, “disease” versus “no disease,” mL blood lost, estimated age). The quantity and standardization of data used for training (nTrain), testing (nTest), and validation (nValidate) of the AI varied widely, with nTrain ranging from 10 to 70,950 data points (eg, photographs, key points, CT voxels, patient characteristics).

AI Performance in Plastic Surgery

Accuracy varied between studies, with the highest reported accuracies ranging between 50% and 100%. There were no significant differences in mean model accuracies used across subspecialties (P = 0.1887), AI subfields (P = 0.1836), or study phases (P = 0.3705) (Fig. 4). However, accuracy was associated with the number of data used for testing (nTest, P = 0.009) and the number of phase 0 limitations (P = 0.030). (See figure, Supplemental Digital Content 8, which displays a correlation of AI study characteristics, http://links.lww.com/PRSGO/C217.) The ultimate study phase of AI was correlated with the nTest (P = 0.017) as well as the number of phase 0 and phase 2 limitations (P < 0.00001).

Fig. 4.

Fig. 4.

Graphical display of ordinary one-way ANOVA of the performance accuracies by article feature: (A) plastic and reconstructive surgery subspecialty, (B) AI subfield, and (C) study phase. Reported accuracies of AI models did not differ significantly by (A) subspecialty (P = 0.189), (B) AI subfield (P = 0.184), or (C) study phase (P = 0.371). SML: supervised machine learning, CV: computer vision, NLP: natural language processing, USML: unsupervised machine learning, ANN: artificial neural network, CNN: convolutional neural network, Combo: combination of AI subfields, GAN: generative adversarial network. Study phase 0: discovery and innovation; phase 1: technical performance and safety; phase 2: efficacy, quality improvement, and algorithm performance in a medical setting; and phase 3: clinical therapeutic efficacy. Points on the graphs represent median accuracy and bars represent minimum and maximum accuracy.

Quality Assessment

The quality of the studies included in this review was satisfactory (Fig. 3). All but eight studies (n = 36, 82%) provided disclosures. A study aim was clearly stated in all papers, and all but two articles67,68 reported the data source used (n = 42, 95%). Most papers reported equivalent comparison groups (n = 35, 76%), though fewer studies compared AI to an adequate control group (ie, a gold standard diagnostic test or therapeutic intervention) (n = 28, 64%) or a contemporary, nonhistorical ground truth (n = 29, 66%). Most papers (n = 37, 85%) reported the accuracy of the AI model used, although three of these reports were inadequate and potentially biased.29,31,63

Furthermore, only three studies used prospective data,48,55,58 and five reported an area under the curve (AUC).25,35,40,58,62 None of the papers were deficient in all nine quality criteria examined. The median number of missed categories was 3 ± 1. The most frequently missing criterion was the use of prospective data.

DISCUSSION

Summary of Findings

The median prediction accuracy of the studied AI models was 92%. The absolute improvement in accuracy was not calculated because the ground truths reported in 43% (n = 29) of studies were heterogeneous and often without an established means of measurement or prediction accuracy for comparison (identity, actual age, actual outcome of wound healing). Our results show that AI models can alleviate cognitive tasks in plastic surgery with high accuracy in predicting diagnosis,27,31,35,37,41,43,58 predicting outcomes,40,48,57,59,64 streamlining preoperative surgical planning (anatomical segmentation),25,26,29,30,44,51–54 and evaluating postoperative outcomes.27,28,32,47,49,50,56,60–63 Plastic surgery–applicable AI models can perform cognitive tasks safely and effectively in experimental settings (phases 0–2: n = 43, 97%). Still, dataset limitations may confound the reportedly high accuracies and preclude subsequent clinical evaluation. The fields within plastic surgery that have most successfully integrated AI (aesthetic and breast surgery, and craniofacial surgery) inherently collect large amounts of standardized data (eg, clinical and radiologic images) and utilize AI to perform either inherently subjective (eg, postoperative evaluation of age, beauty, femininity) or time-consuming tasks (eg, craniofacial segmentation).

Article Characteristics

There has been widespread growth in the use of AI in surgery.70 However, the potential utility of AI in plastic surgery remains understudied compared with other surgical subspecialties, including vascular surgery and orthopedics, which have recently published extensive systematic reviews and meta-analyses.23,71,72 Previously published systematic reviews on AI in plastic surgery identified only 14–24 studies on this topic.73,74 Our study builds on this existing literature by providing a comprehensive overview of the published literature regarding the clinical applications of AI and machine learning in plastic surgery while performing qualitative and quantitative statistical analyses of article, study, and model characteristics.

The research interest in AI in plastic surgery has grown significantly, with a substantial increase in publications between 2002 and 2021. (See figure, Supplemental Digital Content 5, http://links.lww.com/PRSGO/C214.) Most of the papers included in this study were published in journals with a clinical scope. However, only six articles were published in journals specific to plastic surgery.27,28,32,34,41,48 These trends reinforce the notion that a lack of centralized exposure may limit surgeon awareness of AI in plastic surgery and suggest a possible publication bias toward journals with higher impact factors (IFs). For example, the most frequent publisher, the Institute of Electrical and Electronics Engineers (IF = 10.961), has a higher IF than any of the referenced plastic surgery journals (IF = 1.539–4.73).75 However, since 70% of the included articles were phases 0 and 1, involving discovery, invention, and technical performance of AI, publication of these articles in technical journals may be most appropriate.

Role of AI in Plastic Surgery

Most of the identified literature focused on applications of AI in aesthetic and breast surgery,25–28,47,49,52,53,55,56,60,61 followed by craniofacial surgery.29–32,37,41,51,54,59,67 The emphasis of research on AI in aesthetic and breast surgery coincides with the relative number of publications in the general literature subfield. Studies in aesthetics and breast surgery primarily employed patient images (n = 10, 83%)25–28,49,52,53,56,60,61 and predicted classifications (n = 7, 59%).25,27,28,49,53,55,56 These study designs had a median phase of 2 and generally avoided limitations related to data quality (n = 12, 100%)25,27,28,49,53,55,56,61 and extensive preprocessing (n = 10, 83%).27,28,49,52,53,55,56,61,69 Standardization of photography is particularly emphasized in aesthetic plastic surgery, which could understandably generate more reliable datasets without limitations on image quality. The 2D photograph analysis may be simple enough to eliminate the need for preprocessing and complex analysis.18

The study designs in craniofacial surgery were diverse but most often used radiological (2D and 3D) and clinical photography as input data for diagnosis (n = 5, 50%)29,30,40,53,58 and anatomical segmentation (n = 3, 30%).28,36,37 These studies had a median phase of 0, and 90% faced barriers related to data (quality, representativeness, completeness) or statistical performance. The application of AI for anatomical segmentation, particularly skeletal landmarking, is gaining traction and optimism within the fields of orthopedics and neurosurgery.23,76,77 Despite the computational complexity involved, 3D input values (CT voxels) offer three dimensions of features that can be incorporated individually for the robust training of AI models.78 This can generate large datasets of standardized values, reinforcing the promise of using AI for radiological anatomical analysis, as exemplified by the high accuracy of the models using craniofacial volumetric data for this purpose (96%–100%).

Limitations of AI

The same bottlenecks that limit the use of AI in healthcare similarly limit the clinical application of plastic surgery-specific algorithms.24 However, using AI for autonomous diagnosis, management, and interventional action is inherently dangerous.79 AI in surgery poses threats to safety and efficacy that necessitate meticulous design, rigorous performance assessment, contextual ethical principles, and human rights prioritization.69 Clinicians interested in AI must appreciate the importance of using a stepwise framework to evaluate the safety and efficacy of these new technologies, such as the FDA’s phased development of medical drugs and devices.24 The production of accurate and unbiased AI requires an enormous, standardized dataset and rigorous testing before it should be tested in a clinical realm. In this methodical evaluation, subsequent phases are designed to prevent potential risks associated with the preliminary use of AI. This partly explains the apparent stagnation of these efforts among surgeons, who often approach AI research from an unfamiliar vantage point.

Overall, AI in plastic surgery was limited by incomplete data (n = 12, 27%),30,31,37–41,43,46,55,57,68 a lack of human comparison (n = 11, 25%),27,33,36,44,46,55–57,67,68 pre- or postprocessing requirements (n = 13, 30%),25,30,31,33,41,47,49,51,54,59,62,64,69 poor generalizability (n = 12, 27%),27,34,35,40,44,47–49,58,61,67,69 and insufficient validation to justify more extensive trials (n = 12, 27%).25,27,28,32,35,50,55,56,59,60,62,65 Dataset limitations remain the most fundamental obstacle to reaching phases 3 and 4, which correspond to clinically functional AI in plastic surgery. Safe and effective AI requires clinical studies of algorithms, which in turn require training on more samples than model parameters. Creating well-trained algorithms from precise and robust data may be a tall order given plastic surgery’s vast diagnostic and operative scope. Nonetheless, the speed of advancing research in the field is promising, especially in the age of high-definition video, which contains an abundance of actionable data points.36,80,81 For example, it is predicted that just one minute of high-definition surgical video contains 25 times more data than can be gleaned from a high-resolution CT image.5

The generalizability of AI to different patient populations will be limited if data on those populations are less readily available because of overt or subconscious exclusion from clinical trials and patient registries.5,78 Additionally, proper data categorization is paramount, as mislabeled variables may lead to misguided outputs.5,78 The automatic interpretation of data by AI raises concern because of its “black box” design.5 Because algorithms do not explain their interpretations, human oversight is needed to decipher how patterns within the data were discovered. Appropriately using AI in clinical practice will require significant collaboration among clinicians, engineers, and hospital-system support.82–84

Several ethical implications exist surrounding AI.78 In aesthetic plastic surgery, for example, AI may use algorithms to “objectively” determine beauty. However, developing this algorithm requires recasting a subjective attribute into an objective parameter, raising significant ethical concerns about generalizing specific cultural and societal norms.78

Finally, the lack of clinical studies on AI raises the question of safety in AI-assisted surgery. A recent review highlights the potential of intelligent robotic systems in an assisting surgical role85; however, the authors emphasize that although cognitive robotics display superior dexterity, the machines lack an understanding of surgical context and fail to adapt to complex workflows. Incorporating surgical workflow analysis and semantic knowledge via deep learning could support the potential of AI-assisted surgery.84

Study Strengths and Limitations

Our search strategy included an extensive and highly sensitive systematic search of three databases, yielding many studies for initial screening. It employed appropriate terms and synonyms related to AI and plastic surgery as both subject headings and free-text keywords.86,87 We minimized the effect of publication bias by searching preprint servers and hand-searching the reference lists of included articles. Interrater reliability and agreement between the authors were high throughout.

There are some limitations to this study. Overall, the quantitative analysis of this review was limited by the available metrics and the quality of the included papers. Only five articles in this review reported area-under-the-curve values, which are more reliable measures for comparing accuracy between studies; this may bias the results of the quantitative analysis presented. Additionally, the heterogeneity of the included articles precluded a feasible meta-analysis. Future studies on the progression of AI in plastic surgery should therefore investigate new research within narrower subsets of the field that may be amenable to meta-analysis.
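For context, the area under the ROC curve is threshold independent: it equals the probability that a model scores a randomly chosen positive case above a randomly chosen negative one, which is why it supports fairer accuracy comparisons across studies than a fixed-threshold metric. A minimal sketch using hypothetical scores (not data from any included study):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative one."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores for four patients (1 = event occurred).
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

Unlike accuracy at a chosen cutoff, this value is unchanged by any monotone rescaling of a model’s scores, so two studies reporting AUC can be compared without knowing their operating thresholds.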

CONCLUSIONS

AI has long promised advances in clinical efficacy and efficiency. Yet, despite broad investigation of AI in every subfield of plastic surgery, much of the literature remains preclinical. Important considerations in the publication and dissemination of research on AI in plastic surgery, consistent with other surgical subspecialties, and limitations within AI models remain to be addressed.45 Collaborative, interdisciplinary efforts will be necessary to translate current work into the meaningful implementation of AI in clinical practice.

Supplementary Material

gox-10-e4608-s001.pdf (125.1KB, pdf)
gox-10-e4608-s002.pdf (74.8KB, pdf)
gox-10-e4608-s003.pdf (168KB, pdf)
gox-10-e4608-s004.pdf (181.3KB, pdf)
gox-10-e4608-s005.pdf (133.4KB, pdf)
gox-10-e4608-s006.pdf (611.9KB, pdf)
gox-10-e4608-s007.pdf (762.6KB, pdf)
gox-10-e4608-s008.pdf (42.3KB, pdf)

Footnotes

Disclosure: The authors have no financial interest to declare in relation to the content of this article.

Related Digital Media are available in the full-text version of the article on www.PRSGlobalOpen.com.

REFERENCES

  • 1.Chandawarkar A, Chartier C, Kanevsky J, et al. A practical approach to artificial intelligence in plastic surgery. Aesthet Surg J Open Forum. 2020;2:1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Gibson JAG, Dobbs TD, Kouzaris L, et al. Making the most of big data in plastic surgery: improving outcomes, protecting patients, informing service providers. Ann Plast Surg. 2021;86:351–358. [DOI] [PubMed] [Google Scholar]
  • 3.Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44–56. [DOI] [PubMed] [Google Scholar]
  • 4.Einstein A, Podolsky B, Rosen N. Can quantum-mechanical description of physical reality be considered complete? Phys Rev. 1935;47:777–780. [Google Scholar]
  • 5.Hashimoto DA, Rosman G, Rus D, et al. Artificial intelligence in surgery: promises and perils. Ann Surg. 2018;268:70–76. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Brownlee J. How to choose a feature selection method for machine learning. Machine Learning Mastery. 2019 Dec. Available at https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/. Accessed November 22, 2022. [Google Scholar]
  • 7.Deo RC. Machine learning in medicine. Circulation. 2015;132:1920–1930. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86:2278–2324. [Google Scholar]
  • 9.Murphy KP. Machine Learning. Vol 51. The MIT Press; 2012. [Google Scholar]
  • 10.Dayan P. Unsupervised learning. MIT AIML J. 1999;6:1–6. [Google Scholar]
  • 11.De Langavant LC, Bayen E, Yaffe K. Unsupervised machine learning to identify high likelihood of dementia in population based surveys: development and validation study. J Med Internet Res. 2018;20:1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Soguero-Ruiz C, Hindberg K, Rojo-Alvarez JL, et al. Support vector feature selection for early detection of anastomosis leakage from bag-of-words in electronic health records. IEEE J Biomed Health Inf. 2016;20:1404–1415. [DOI] [PubMed] [Google Scholar]
  • 13.Gropper M, Fleisher L, Wierner-Kronish J, et al. Miller’s Anesthesia. Online: Elsevier; 2019. [Google Scholar]
  • 14.Indolia S, Goswami AK, Mishra SP, et al. Conceptual understanding of convolutional neural network- a deep learning approach. Procedia Comput Sci. 2018;132:679–688. [Google Scholar]
  • 15.Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2:719–731. [DOI] [PubMed] [Google Scholar]
  • 16.Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal. 2019;58:101552. [DOI] [PubMed] [Google Scholar]
  • 17.Baur C, Albarqouni S, Navab N. Generating highly realistic images of skin lesions with GANs. OR 20 context-aware operating theaters, computer assisted robotic endoscopy, clinical image-based procedures, and skin image analysis. Springer; 2018:260–267. [Google Scholar]
  • 18.Nie D, Trullo R, Lian J, et al. Medical image synthesis with context-aware generative adversarial networks. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2017. 2017;1:417–425. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2:719–731. [DOI] [PubMed] [Google Scholar]
  • 21.Liang X, Yang X, Yin S, et al. Artificial intelligence in plastic surgery: applications and challenges. Aesthetic Plast Surg. 2021;45:784–790. [DOI] [PubMed] [Google Scholar]
  • 22.Dobbs TD, Cundy O, Samarendra H, et al. A systematic review of the role of robotics in plastic and reconstructive surgery-from inception to the future. Front Surg. 2017;4:66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Ren M, Yi PH. Artificial intelligence in orthopedic implant model classification: a systematic review. Skeletal Radiol. 2022;51:407–416. [DOI] [PubMed] [Google Scholar]
  • 24.O’Neill AC, Yang D, Roy M, et al. Development and evaluation of a machine learning prediction model for flap failure in microvascular breast reconstruction. Ann Surg Oncol. 2020;27:3466–3475. [DOI] [PubMed] [Google Scholar]
  • 25.Geras KJ, Wolfson S, Shen Y, et al. High-resolution breast cancer screening with multi-view deep convolutional neural networks. arXiv preprint arXiv:170307047. 2017. [Google Scholar]
  • 26.Guan S, Loew M. Breast cancer detection using synthetic mammograms from generative adversarial networks in convolutional neural networks. J Med Imaging. 2019;6:031411. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Dorfman R, Chang I, Saadat S, et al. Making the subjective objective: machine learning and rhinoplasty. Aesthet Surg J. 2020;40:493–498. [DOI] [PubMed] [Google Scholar]
  • 28.Chen K, Lu SM, Cheng R, et al. Facial recognition neural networks confirm success of facial feminization surgery. Plast Reconstr Surg. 2020;145:203–209. [DOI] [PubMed] [Google Scholar]
  • 29.Wu J, Heike C, Birgfeld C, et al. Measuring symmetry in children with unrepaired cleft lip: Defining a standard for the three-dimensional midfacial reference plane. Cleft Palate Craniofac J. 2016;53:695–704. [DOI] [PubMed] [Google Scholar]
  • 30.Bhalodia R, Dvoracek LA, Ayyash AM, et al. Quantifying the severity of metopic craniosynostosis: a pilot study application of machine learning in craniofacial surgery. J Craniofac Surg. 2020;31:697. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Mendoza CS, Safdar N, Okada K, et al. Personalized assessment of craniosynostosis via statistical shape modeling. Med Image Anal. 2014;18:635–646. [DOI] [PubMed] [Google Scholar]
  • 32.Dusseldorp JR, Guarin DL, van Veen MM, et al. In the eye of the beholder: Changes in perceived emotion expression after smile reanimation. Plast Reconstr Surg. 2019;144:457–471. [DOI] [PubMed] [Google Scholar]
  • 33.Heredia-Juesas J, Thatcher JE, Yang Lu, et al. Non-invasive optical imaging techniques for burn-injured tissue detection for debridement surgery. Annu Int Conf IEEE Eng Med Biol Soc. 2016:2893–2896. [DOI] [PubMed] [Google Scholar]
  • 34.Boczar D, Sisti A, Oliver JD, et al. Artificial intelligent virtual assistant for plastic surgery patient’s frequently asked questions: a pilot study. Ann Plast Surg. 2020;84:e16–e21. [DOI] [PubMed] [Google Scholar]
  • 35.Jung K, Covington S, Sen CK, et al. Rapid identification of slow healing wounds. Wound Repair Regen. 2016;24:181–188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Luján JL, Crago PE. Computer-based test-bed for clinical assessment of hand/wrist feed-forward neuroprosthetic controllers using artificial neural networks. Med Biol Eng Comput. 2004;42:754–761. [DOI] [PubMed] [Google Scholar]
  • 37.Nishimoto S, Sotsuka Y, Kawai K, et al. Personal computer-based cephalometric landmark detection with deep learning, using cephalograms on the internet. J Craniofac Surg. 2019;30:91–95. [DOI] [PubMed] [Google Scholar]
  • 38.Ma Q, Kobayashi E, Fan B, et al. Automatic 3D landmarking model using patch‐based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot. 2020;16:e2093. [DOI] [PubMed] [Google Scholar]
  • 39.Nakazawa A, Harada K, Mitsuishi M, et al. Real-time surgical needle detection using region-based convolutional neural networks. Int J Comput Assist Radiol Surg. 2020;15:41–47. [DOI] [PubMed] [Google Scholar]
  • 40.Yamamura S, Kawada K, Takehira R, et al. Prediction of aminoglycoside response against methicillin-resistant Staphylococcus aureus infection in burn patients by artificial neural network modeling. Biomed Pharmacother. 2008;62:53–48. [DOI] [PubMed] [Google Scholar]
  • 41.Ferry Q, Steinberg J, Webber C, et al. Diagnostically relevant facial gestalt information from ordinary photos. elife. 2014;3:e02020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Power M, Thompson AJ, Anastasova S, et al. A monolithic force‐sensitive 3D microgripper fabricated on the tip of an optical fiber using 2‐photon polymerization. Small. 2018;14:1703964. [DOI] [PubMed] [Google Scholar]
  • 43.Knoops PGM, Papaioannou A, Borghi A, et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci Rep. 2019;9:13597. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Fichera L, Pardo D, Mattos LS. Supervisory system for robot assisted laser phonomicrosurgery. Annu Int Conf IEEE Eng Med Biol Soc. 2013;2013:4839–4842. [DOI] [PubMed] [Google Scholar]
  • 45.Tatinati S, Veluvolu KC, Ang WT. Multistep prediction of physiological tremor based on machine learning for robotics assisted microsurgery. IEEE Trans Cybern. 2014;45:328–339. [DOI] [PubMed] [Google Scholar]
  • 46.Hincapie JG, Kirsch RF. Feasibility of EMG-based neural network controller for an upper extremity neuroprosthesis. IEEE Trans Neural Syst Rehabil Eng. 2009;17:80–90. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Kiranantawat K, Sitpahul N, Taeprasartsit P, et al. The first Smartphone application for microsurgery monitoring: SilpaRamanitor. Plast Reconstr Surg. 2014;134:130–139. [DOI] [PubMed] [Google Scholar]
  • 48.Levites HA, Thomas AB, Levites JB, et al. The use of emotional artificial intelligence in plastic surgery. Plast Reconstr Surg. 2019;144:499–504. [DOI] [PubMed] [Google Scholar]
  • 49.Borsting E, Desimone R, Ascha M, et al. Applied deep learning in plastic surgery: classifying rhinoplasty with a mobile app. J Craniofac Surg. 2020;31:102–106. [DOI] [PubMed] [Google Scholar]
  • 50.Zhang W, Li J, Li ZB, et al. Predicting postoperative facial swelling following impacted mandibular third molars extraction by using artificial neural networks evaluation. Sci Rep. 2018;8:12281. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Ma Q, Kobayashi E, Fan B, et al. Automatic 3D landmarking model using patch-based deep neural networks for CT image of oral and maxillofacial surgery. Int J Med Robot. 2020;16:e2093. [DOI] [PubMed] [Google Scholar]
  • 52.Mavioso C, Araújo RJ, Oliveira HP, et al. Automatic detection of perforators for microsurgical reconstruction. Breast. 2020;50:19–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Gunes H, Piccardi M. Assessing facial beauty through proportion analysis by image processing and supervised learning. Int J Hum Comput. 2006;64:1184–1199. [Google Scholar]
  • 54.Maier A, Hönig F, Bocklet T, et al. Automatic detection of articulation disorders in children with cleft lip and palate. J Acoust Soc Am. 2009;126:2589–2602. [DOI] [PubMed] [Google Scholar]
  • 55.O’Neill AC, Yang D, Roy M, et al. Development and evaluation of a machine learning prediction model for flap failure in microvascular breast reconstruction. Ann Surg Oncol. 2020;27:3466–3475. [DOI] [PubMed] [Google Scholar]
  • 56.Patcas R, Bernini DAJ, Volokitin A, et al. Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. Int J Oral Maxillofac Surg. 2019;48:77–83. [DOI] [PubMed] [Google Scholar]
  • 57.Yeong EK, Hsiao TC, Chiang HK, et al. Prediction of burn healing time using artificial neural networks and reflectance spectrometer. Burns. 2005;31:415–420. [DOI] [PubMed] [Google Scholar]
  • 58.Martínez-Jiménez MA, Ramirez-GarciaLuna JL, Kolosovas-Machuca ES, et al. Development and validation of an algorithm to predict the treatment modality of burn wounds using thermographic scans: Prospective cohort study. PLoS One. 2018;13:e0206477. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Lavrač N, Škrlj B, Robnik-Šikonja M. Propositionalization and embeddings: two sides of the same coin. Mach Learn. 2020;109:1465–1507. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Silva W, Castro E, Cardoso MJ, et al. , eds. Deep keypoint detection for the aesthetic evaluation of breast cancer surgery outcomes. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); 2019: IEEE. [Google Scholar]
  • 61.Sable AH, Talbar SN. Adaptive GLOH with PSO-trained NN for the recognition of plastic surgery faces and their types. BAMS. 15: 20180033. [Google Scholar]
  • 62.Kuo P-J, Wu S-C, Chien P-C, et al. Artificial neural network approach to predict surgical site infection after free-flap reconstruction in patients receiving surgery for head and neck cancer. Oncotarget. 2018;9:13768. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Xi W, Vista F, Kim DW, et al. Assessing the deformity of cleft lip nose based on neural network. Int J Precis. 2010;11:473–482. [Google Scholar]
  • 64.Estahbanati HK, Bouduhi N. Role of artificial neural networks in prediction of survival of burn patients-a new approach. Burns. 2002;28:579–586. [DOI] [PubMed] [Google Scholar]
  • 65.Stehrer R, Hingsammer L, Staudigl C, et al. Machine learning based prediction of perioperative blood loss in orthognathic surgery. J Craniomaxillofac Surg. 2019;47:1676–1681. [DOI] [PubMed] [Google Scholar]
  • 66.Robnik-Šikonja M, Cukjati D, Kononenko I. Comprehensible evaluation of prognostic factors and prediction of wound healing. In: Artificial Intelligence in Medicine. Philadelphia: Elsevier;2003:25–38. [DOI] [PubMed] [Google Scholar]
  • 67.Sari E, Ucar C, Türk O, et al. Treatment of a patient with cleft lip and palate using an internal distraction device. Cleft Palate Craniofac J. 2008;45:552–560. [DOI] [PubMed] [Google Scholar]
  • 68.Xie K, Yang J, Zhu YM. Fast collision detection based on nose augmentation virtual surgery. Comput Methods Programs Biomed. 2007;8:1–7. [DOI] [PubMed] [Google Scholar]
  • 69.Sarker A, Mollá D, Paris C. Automatic evidence quality prediction to support evidence-based decision making. Artif Intell Med. 2015;64:89–103. [DOI] [PubMed] [Google Scholar]
  • 70.Gumbs AA, Frigerio I, Spolverato G, et al. Artificial intelligence surgery: how do we get to autonomous actions in surgery? Sensors (Basel). 2021;21(16). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Li B, Feridooni T, Cuen-Ojeda C, et al. Machine learning in vascular surgery: a systematic review and critical appraisal. NPJ Digit Med. 2022;5:7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Javidan AP, Li A, Lee MH, et al. A systematic review and bibliometric analysis of applications of artificial intelligence and machine learning in vascular surgery. Ann Vasc Surg. 2022;85:395–405. [DOI] [PubMed] [Google Scholar]
  • 73.Jarvis T, Thornburg D, Rebecca AM, et al. Artificial intelligence in plastic surgery: current applications, future directions, and ethical implications. Plast Reconstr Surg Glob Open. 2020;8:e3200. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Eldaly AS, Avila FR, Torres-Guzman RA, et al. Simulation and artificial intelligence in rhinoplasty: a systematic review. Aesthetic Plast Surg. 2022;46:2368–2377. [DOI] [PubMed] [Google Scholar]
  • 75.IEEE. The World’s Largest Technical Professional Organization Dedicated to Advancing Technology for the Benefit of Humanity 2021. Available at https://www.ieee.org/about/at-a-glance.html#:~:text=IEEE%20is%20the%20world’s%20largest,and%20professional%20and%20educational%20activities.
  • 76.Senders JT, Arnaout O, Karhade A, et al. Natural and artificial intelligence in neurosurgery: a systematic review. Clin Neurosurg. 2018;83:181–192. [DOI] [PubMed] [Google Scholar]
  • 77.Langerhuizen DWG, Janssen SJ, Mallee WH, et al. What are the applications and limitations of artificial intelligence for fracture detection and classification in orthopaedic trauma imaging? A systematic review. Clin Orthop Relat Res. 2019;477:2482–2491. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Hashimoto DA, Rosman G, Rus D, et al. Surgical video in the age of big data. Ann Surg. 2018;268:e47–e48. [DOI] [PubMed] [Google Scholar]
  • 79.WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance 2021. Available at http://apps.who.int/bookorders. Accessed October 20, 2021.
  • 80.Bonrath EM, Gordon LE, Grantcharov TP. Characterising ‘near miss’ events in complex laparoscopic surgery through video analysis. BMJ Qual Saf. 2015;24:516–521. [DOI] [PubMed] [Google Scholar]
  • 81.Chandawarkar R, Nadkarni P. Safe clinical photography: best practice guidelines for risk management and mitigation. Arch Plastic Surg. 2021;48:295–304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Kerr RS. Surgery in the 2020s: implications of advancing technology for patients and the workforce. Future Healthc J. 2020;7:46–49. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Marwaha JS, Landman AB, Brat GA, et al. Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation. npj Digital Med. 2022;5:1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Rahimi B, Nadri H, Afshar HL, et al. A systematic review of the technology acceptance model in health informatics. Appl Clin Inform. 2018;9:604–634. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Bodenstedt S, Wagner M, Müller-Stich BP, et al. Artificial intelligence-assisted surgery: potential and challenges. Visc Med. 2020;36:450–455. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Errors in search strategies used in systematic reviews and their effects on information retrieval. J Med Libr Assoc. 2019;107:210–221. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87.Whiting P, Savović J, Higgins JPT, et al. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–234. [DOI] [PMC free article] [PubMed] [Google Scholar]
