Abstract
Background
Artificial Intelligence (AI) has emerged as a transformative tool in dermatology, particularly in Low- and Middle-Income Countries (LMICs), where healthcare systems face challenges such as a shortage of dermatologists and limited resources. AI technologies, including deep learning models like Convolutional Neural Networks (CNNs), have demonstrated potential in improving diagnostic accuracy for skin diseases, which contribute significantly to the global disease burden. However, most research has focused on High-Income Countries (HICs), leaving gaps in understanding AI's applicability and effectiveness in LMICs.
Aim/Objective
This systematic review critically evaluates the application of AI in dermatological practice within LMICs, assessing the performance of AI technologies across diverse geographic regions.
Methodology
The review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and included 19 studies from databases including PubMed, Embase, and Cochrane. Eligible studies evaluated AI applications in dermatology within LMICs, reporting metrics like sensitivity, specificity, precision, and accuracy. Data extraction and quality assessment were performed independently by two reviewers using tools like PROBAST and QUADAS-2. A qualitative synthesis as per SWiM guidelines was conducted due to heterogeneity in study designs and outcomes.
Conclusion
AI shows significant promise in enhancing dermatological diagnostics and expanding access to dermatologic care in LMICs, with models achieving high accuracy (up to 99%) in tasks like skin cancer and infectious disease detection. However, challenges such as underrepresented skin tones in datasets, limited clinical validation, and infrastructural barriers currently hinder equitable implementation. Future efforts should prioritize creating and utilizing diverse datasets, lightweight models for mobile deployment, and human-AI collaboration to ensure context-specific and scalable solutions. Addressing these gaps can help leverage AI to mitigate global health disparities in dermatological care.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12245-025-00975-4.
Keywords: Dermatology, Artificial Intelligence, Low- and Middle-Income Countries, Convolutional Neural Networks, Diagnosis
Introduction
The Association for the Advancement of Artificial Intelligence defines Artificial Intelligence (AI) as “the scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines [1, 2].” AI is a broad field of computer science focused on designing intelligent machines capable of performing tasks that typically require human cognition [1, 2]. As early as the 1970s, medical researchers recognized AI's potential in life sciences; however, technological limitations at the time restricted its application [3]. The early 2000s marked a turning point: advances in deep learning, hardware, and software revealed AI’s potential to revolutionize medical practice [4].
Medicine has now entered an exciting era characterized by innovative technologies such as virtual reality, genomic prediction of diseases, data analytics, personalized medicine, stem cell therapy, 3D printing, and nanorobotics [5, 6]. Advances in AI-driven predictive modeling have facilitated its application across various domains of medicine, including disease diagnosis, therapeutic response prediction, and preventive healthcare [3]. AI is also proving useful in visually oriented specialties such as dermatology and radiology, where it has facilitated the development of classification models to assist physicians in diagnosing conditions like skin cancer, lesions, and psoriasis [7].
Skin diseases are a global health concern, significantly contributing to the disease burden [8]. Low- and Middle-Income Countries (LMICs) report a particularly high prevalence of skin disorders such as inflammatory dermatoses, acne, alopecia areata, atopic dermatitis, contact dermatitis, decubitus ulcers, psoriasis, pruritus, and seborrheic dermatitis [8, 9]. In 2019, dermatologic conditions accounted for 42,883,695.48 disability-adjusted life years (DALYs) (95% UI, 28,626,691.71–63,438,210.22), with 5.26% attributed to years of life lost and 94.74% to years lived with disability [9]. The highest number of new cases and deaths from skin and subcutaneous diseases occurred in South Asia, where most countries are classified as low-income [9]. Further exacerbating the health challenges in low-income Asian and African countries, these regions also face a burden from both endemic and neglected tropical skin diseases [8].
LMICs often suffer from a shortage of specialized medical personnel, including dermatologists. AI has the potential to bridge this gap in underserved regions [6]. Various studies have demonstrated the efficacy of AI systems in improving diagnostic accuracy and supporting clinicians in triage [10–12]. AI-driven solutions such as VisualDx and the SkinHealthMate app, which employ image analysis, pattern recognition, and machine learning to enhance dermatological diagnostics, are already widely used in some regions [13]. Other AI-driven apps include the eSkinHealth app, designed primarily for skin-related neglected tropical diseases, and the World Health Organization’s SkinNTD app, which supplements the training of healthcare workers in identifying and managing skin-related neglected tropical diseases.
AI offers dermatologists a transformative tool for expanding and supplementing present clinical practice. By learning and quantifying skin lesion features, AI assists in lesion identification, feature analysis, and diagnostic decision-making, improving diagnostic accuracy and efficiency [14]. For instance, convolutional neural networks (CNNs), a type of deep learning algorithm designed to process visual data, have demonstrated performance comparable to experts in detecting melanoma and nonmelanoma skin cancers [15]. Integration of CNNs and other AI tools into clinical practice can improve rates of early detection, which can often lead to better patient outcomes due to prompt treatment and management. However, challenges such as small, non-diverse datasets and associated limitations in generalizability persist [15–17].
In the context of treatment, AI has demonstrated the capacity to optimize therapeutic strategies by identifying the most effective interventions, forecasting treatment efficacy, and estimating the required number of treatment sessions for skin diseases [18, 19]. Additionally, AI-powered robotic surgical systems could potentially reduce human error, minimize fatigue, and enhance surgical precision and efficiency.
Despite these advancements, the integration of AI in dermatology within LMICs remains underexplored. Most studies focus on High-Income Countries (HICs), often neglecting the unique challenges faced by LMICs, such as limited healthcare infrastructure, diverse skin tones, and the high prevalence of infectious skin diseases and skin-related neglected tropical diseases. Successful implementation of AI in dermatology in LMICs depends on the establishment of robust digital infrastructure, skilled personnel, and supportive policies, resources that are frequently lacking in these regions.
This systematic review seeks to critically evaluate the application of AI in dermatological practice across LMICs. It will assess the performance of various AI technologies using metrics such as specificity, sensitivity, precision, and accuracy, exploring their potential for integration into dermatological care across LMICs.
Methods
This systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [20] and MOOSE (Meta-analysis Of Observational Studies in Epidemiology) guidelines and was prospectively registered in the PROSPERO database under the registration ID CRD42023432907.
Eligibility criteria
Studies were eligible for inclusion if they evaluated the application of AI in dermatology within LMICs, as classified by the World Bank. Eligible study designs included randomized controlled trials (RCTs), observational studies (retrospective, prospective, or case–control studies), cohort studies, case series, and peer-reviewed case reports. No comparator group was required, except for diagnostic test accuracy studies, in which the comparator was the reference standard against which the index test was compared. The primary outcomes assessed were the performance of AI technologies in dermatology, including sensitivity, specificity, precision, and accuracy. Studies reporting dermatological conditions diagnosed or managed using AI were included, with no restrictions on follow-up duration. Studies were excluded if they were review articles, editorials, commentaries, book chapters, tutorials, or non-peer-reviewed case reports. Additionally, studies that did not focus exclusively on LMICs or did not report relevant outcomes were excluded.
Search strategy and data extraction
A comprehensive literature search was conducted in PubMed, Embase, and the Cochrane Library, with searches performed from 06 June 2023 until 04 July 2025. The search strategy combined Medical Subject Headings (MeSH) and Emtree terms with free-text keywords related to artificial intelligence, dermatology, and LMICs. Boolean operators (AND, OR) were used to optimize retrieval. The detailed search strategy is available in Supplementary Material 1.
The initial screening of titles and abstracts was independently conducted by two reviewers, followed by a full-text assessment of potentially eligible studies. Discrepancies were resolved through discussion or consultation with a third reviewer. Studies meeting the inclusion criteria were included in the final qualitative synthesis. A standardized data extraction form was used to collect study characteristics, including author names, year of publication, country of study, study design, AI technology applied (e.g., deep learning, machine learning, natural language processing), dermatological conditions evaluated, performance metrics (sensitivity, specificity, accuracy, precision), and key findings. Data extraction was performed independently by two reviewers, and inconsistencies were resolved by consensus.
AI model types
This review primarily included studies using CNNs, a class of deep learning algorithms optimized for image analysis. CNNs are especially suited for dermatology as they can automatically learn and recognize patterns in visual data such as skin lesions [15]. In contrast, other AI models like Large Language Models (LLMs) are designed for natural language processing tasks and are not typically applied to dermatologic image classification [15–17].
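To make this concrete, the following minimal sketch shows the general shape of a CNN image classifier of the kind described above; the architecture, input resolution, and two-class output (e.g., benign vs. malignant) are illustrative assumptions and are not drawn from any included study.

```python
# Illustrative sketch only: a tiny CNN for lesion images (assumed 224x224 RGB,
# two classes such as benign vs. malignant); not the model of any included study.
import torch
import torch.nn as nn


class SmallLesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level visual patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                               # downsample 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # learn higher-level lesion features
            nn.ReLU(),
            nn.MaxPool2d(2),                               # downsample 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
            nn.Flatten(),
            nn.Linear(32, num_classes),                    # class scores (logits)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = SmallLesionCNN(num_classes=2)
dummy_image = torch.randn(1, 3, 224, 224)   # stands in for one dermoscopic image
print(model(dummy_image).shape)             # torch.Size([1, 2])
```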
Endpoints
The primary endpoints of this review were the performance metrics of AI models in dermatological applications within LMICs, specifically sensitivity, specificity, precision, and accuracy. Had sufficient diagnostic test accuracy (DTA) data been available, a meta-analysis would have been conducted using a hierarchical model, and a summary receiver operating characteristic (SROC) curve would have been generated to estimate pooled sensitivity and specificity; however, the included studies did not report sufficient DTA data. Secondary endpoints included the range of dermatological conditions diagnosed or managed using AI, as well as the feasibility and potential challenges of AI implementation in LMIC settings. Where applicable, study-specific outcome measurement scales were systematically documented and described in detail.
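For reference, the sketch below shows how these four primary metrics follow from confusion-matrix counts; the counts are invented purely for illustration and do not come from any included study.

```python
# Hedged illustration: diagnostic metrics from hypothetical confusion-matrix counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),                # true positives among diseased cases
        "specificity": tn / (tn + fp),                # true negatives among non-diseased cases
        "precision": tp / (tp + fp),                  # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # overall proportion correct
    }


# Invented counts, not taken from any included study.
print(diagnostic_metrics(tp=90, fp=10, tn=85, fn=15))
# sensitivity ~0.86, specificity ~0.89, precision 0.90, accuracy ~0.88
```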
Quality assessment
The PROBAST (Prediction model Risk of Bias Assessment Tool) and QUADAS-2 tools were used to assess the risk of bias in included studies. Two independent reviewers conducted the assessment, and any discrepancies were resolved through discussion. A qualitative synthesis was conducted to summarize key findings related to AI applications in dermatology within LMICs. Due to the heterogeneity in study designs, AI models, and outcome reporting, a meta-analysis was not performed. Because sufficient DTA data were not available and an SROC curve could not be generated, a narrative synthesis was conducted in accordance with the Synthesis Without Meta-analysis (SWiM) guidelines.
Synthesis
Using the SWiM (Synthesis Without Meta-analysis) reporting guideline, this systematic review summarized results from 19 studies assessing the use of AI in dermatology in LMICs. Studies were categorized by regional context (e.g., South Asia, Sub-Saharan Africa), dermatological focus (e.g., melanoma, monkeypox), and AI methodology (e.g., CNNs, SVMs, transfer learning). Across consistently reported performance metrics such as accuracy, sensitivity, specificity, and AUC, AI models achieved high diagnostic performance, in some cases reaching 99%. In image-based tasks in particular, CNNs and hybrid architectures often matched or surpassed dermatologist-level accuracy. Models trained predominantly on Caucasian data performed poorly on darker skin types, indicating substantial variation in performance by skin tone and dataset diversity. Clinically relevant studies with sound methodologies and external validation were given priority in the narrative synthesis. Risk of bias assessments indicated moderate to high risk for many studies, mostly because of small sample sizes, limited demographic representation, and little real-world validation. Table-based data presentation improved comparability, but statistical pooling was precluded by heterogeneity in study design and inconsistent calibration reporting. Overall, although AI holds considerable promise for enhancing dermatological care in LMICs, concerns about generalizability, equity, and clinical integration still need to be addressed.
Results
Study selection and baseline characteristics
A comprehensive search of multiple databases identified a total of 524 records, comprising 111 from PubMed, 412 from Embase, and 1 from Cochrane. After removing 9 duplicate entries, 515 studies underwent title and abstract screening. Of these, 457 studies were excluded for not meeting the predefined inclusion criteria, leaving 58 reports for full-text review. Following a detailed assessment, 19 studies were deemed eligible for inclusion in this systematic review [21–39] (Fig. 1: PRISMA Flow Diagram of included studies).
Fig. 1.
PRISMA flow diagram of included studies
Characteristics of included studies
The 19 included studies comprised a range of study designs, including model development (n = 5) [26–28, 32, 33], model evaluation and analysis (n = 6) [21, 23, 30, 31, 35, 36], model training (n = 5) [24, 25, 29, 37, 38], and studies combining model development with evaluation or training (n = 3) [22, 34, 39]. These studies were conducted across multiple regions, including Africa (n = 7) [21, 22, 25, 30–32, 37], Asia (n = 5) [24, 33, 34, 38, 39], and global or multicenter cohorts (n = 7) [23, 26–29, 35, 36]. Sample sizes varied substantially, ranging from as few as 102 dermatological images [21] to over 1.8 million participants [34].
AI models evaluated included deep learning convolutional neural networks (CNNs) (n = 4) [22, 31, 32, 38], support vector machines (SVMs) (n = 1) [21], and ensemble learning or hybrid approaches (n = 1) [21]. Public datasets such as ISIC, HAM10000, Derm7pt, PH2, and MSLD were widely used across studies, especially for melanoma and monkeypox classification tasks. The dermatological conditions addressed included skin cancer (n = 7) [22, 26–28, 35, 37], monkeypox (n = 5) [21, 24, 29, 32, 36], general skin aging and facial analysis (n = 3) [30, 31, 34], and others such as acne, rosacea, and burn depth estimation (n = 4) [25, 33, 38, 39]. Refer to Table 1 below for characteristics of included studies.
Table 1.
Characteristics of Included Studies (N = 19)
Author (Year) | Country/Region | Study type | Study design | Population | Sample size | AI model | Key metrics | Limitations |
---|---|---|---|---|---|---|---|---|
Abbas et al. (2025) [23] | Saudi Arabia, Pakistan, Korea, Oman, Malaysia | Model evaluation & analysis | RCT | Skin disease patients (chickenpox, measles, monkeypox, normal) | Images of 4 skin conditions (chickenpox, measles, monkeypox, normal); number not specified | Transfer learning with VGG16, LRP for XAI | Testing accuracy: 93.29% | Lack of explainability, limited generalizability |
Abdelrahim et al. (2024) [21] | Saudi Arabia, Egypt | Model evaluation & analysis | RCT | Patients with monkeypox, measles, chickenpox, normal | 770 images from 162 patients | Ensemble of EfficientNetB0, ResNet50, MobileNet, Xception + SVM | Accuracy: 95.45%, Precision: 95.51%, Recall: 95.45%, F1: 95.46% | Image sources from public datasets, potential overfitting |
Akinrinade et al. (2025) [22] | South Africa | Model development + Evaluation/Training | RCT | Skin cancer patients (dermoscopic images) | Not explicitly stated; model tested on skin cancer image datasets | CNN, GAN, transfer learning, few-shot learning | Not clearly reported | Dataset imbalance, insufficient clinical validation |
Almufareh et al. (2023) [24] | Saudi Arabia, Pakistan | Model training | RCT | Patients with monkeypox (MSLD and MSID datasets) | Used MSLD and MSID datasets; specific number not stated | Multiple DL models using transfer learning | High sensitivity, specificity, and balanced accuracy (values not stated) | Lack of direct comparison with clinical experts |
Badr et al. (2024) [25] | Egypt | Model training | RCT | Patients with various skin diseases (40 conditions) | 25,010 images | Xception model with class-specific transfer learning | Overall accuracy: 95%, AUROC up to 99.5% | Model performance for rare diseases not separately detailed |
Behara et al. (2023) [28] | South Africa | Model development | RCT | Skin lesion images (ISIC2017 dataset) | ISIC2017 dataset; binary classification (benign vs malignant) | Improved DCGAN with CNN classifier | Accuracy: 99.38%, Precision: 99%, Recall: 99%, F1 Score: 99% | Model may overfit due to synthetic images; generalizability untested |
Behara et al. (2024) [26] | South Africa | Model development | RCT | Skin cancer images (HAM10000, ISIC2020 datasets) | HAM10000 and ISIC2020; exact number not stated | Active Contour Snake + ResNet50 + Attention-Guided Capsule Network | Accuracy: 98%, AUC-ROC: 97.3% | Complexity of model; lack of interpretability |
Behara et al. (2024) [27] | South Africa | Model development | RCT | Skin cancer dermoscopic images | ISIC (10,015 images) + MNIST (2,357 images) | Explainable CNN with grid-based structural and dimensional features | Accuracy: 96%, CSI: 0.97, Low FPR: 0.03, FNR: 0.02 | Limited clinical testing; dependent on specific datasets |
Chen et al. (2023) [29] | China | Model training | RCT | Skin lesion images (monkeypox, chickenpox, measles) | Few-shot setting; exact number not provided | Few-shot learning with self-supervised and cross-domain adaptation | Outperformed meta-learning and fine-tuning baselines | Requires domain shift simulation; exact metrics not detailed |
Flament et al. (2023) [30] | South Africa | Model evaluation & analysis | Validation study | 281 South African men, phototypes V–VI | 281 participants | Smartphone-based AI grading system (unspecified model) | Correlation with dermatologists’ grading: 0.59–0.95 | Constrained to male population, specific age range and ancestral background |
Kamulegeya et al. (2023) [31] | Uganda | Model evaluation & analysis | Cross-sectional observational study | Patients with Fitzpatrick 6 skin type (dark skin) | 123 dermatological images | Skin Image Search (unspecified algorithm) | Accuracy on dark skin: 17%; trained model had 69.9% accuracy on Caucasian skin | Model trained on non-diverse data; low performance on dark skin |
Khafaga et al. (2022) [32] | Saudi Arabia, Egypt, Australia, Malaysia, Korea, USA | Model development | RCT | Patients with monkeypox | Images from African hospital; number not stated | CNN + BERSFS (Al-Biruni Earth Radius Optimization + Triplet Loss) | Outperformed multiple prior models; specific metrics not stated | Lack of detailed dataset size and validation |
Li et al. (2018) [33] | China | Model development | RCT | Wound images (source not specified) | 500 wound images used in training | Composite model: traditional methods + MobileNet + semantic correction | Good segmentation performance; FCN-based structure | Small dataset size; limited clinical diversity |
Li et al. (2023) [34] | China | Model development + Evaluation/Training | Cross-sectional observational study | General population, Chinese men and women | 1,939,586 individuals (100,589 males, 1,838,997 females) | Smartphone-based AI grading system for facial aging | High correlation with dermatologist scores; age, sex, and seasonal trends identified | Data only from healthy individuals with smartphones; no skin conditions |
Malik et al. (2024) [35] | Pakistan, South Africa, Spain | Model evaluation & analysis | RCT | Melanoma and nevi dermoscopic images | PH2 and Derm7pt datasets (total ~ 1170 cases) | Vision Transformers + CNN, CLIP model tested | Accuracy up to 98%; CNNs 50–60%; CLIP consistent across sets | Brittleness of models; small dataset size |
Nayak et al. (2023) [36] | India | Model evaluation & analysis | RCT | Patients with monkeypox | Public dataset; number not specified | Pretrained CNNs: ResNet-18, AlexNet, SqueezeNet, GoogLeNet | ResNet-18 accuracy: 99.49%, others > 95% | No external validation; unclear clinical setting |
Shen et al. (2022) [37] | China | Model training | RCT | Skin lesion images from HAM10000, ISIC 2017, Derm7pt | HAM10000 (10,015), ISIC 2017, Derm7pt | EfficientNet + custom data augmentation strategy | AUC: 0.909 (ISIC 2017), BACC: 0.853 (HAM10000), 0.735 (Derm7pt) | Limited to image-based metrics; not tested in clinical workflows |
Wang et al. (2020) [38] | China | Model training | Multicenter observational study | Burn patients from 5 hospitals | 484 images → 5637 image patches | ResNet-50 transfer learning | Burn depth classification accuracy ~ 80% | Limited to burn injuries; requires more clinical validation |
Xue et al. (2023) [39] | China | Model development + Evaluation/Training | Case–control study | Juvenile dermatomyositis patients | 152 patients | Random forest + LASSO + logistic regression | AUC: 0.975, C-index: 0.904 | Single-center retrospective study; needs external validation |
Abbreviations: AI Artificial Intelligence, CNN Convolutional Neural Network, DCGAN Deep Convolutional Generative Adversarial Network, GAN Generative Adversarial Network, XAI Explainable Artificial Intelligence, LRP Layer-wise Relevance Propagation, SVM Support Vector Machine, DL Deep Learning, AUROC/AUC-ROC Area Under the Receiver Operating Characteristic Curve, AUC Area Under the Curve, FPR False Positive Rate, FNR False Negative Rate, CSI Critical Success Index, FCN Fully Convolutional Network, LASSO Least Absolute Shrinkage and Selection Operator, BACC Balanced Accuracy, ISIC International Skin Imaging Collaboration, HAM10000 Human Against Machine with 10,000 training images, MNIST Modified National Institute of Standards and Technology, PH2 Public dataset of dermoscopic images for melanoma, Derm7pt Dataset using the 7-point checklist for melanoma detection, BERSFS Al-Biruni Earth Radius Optimization + Triplet Loss, CLIP Contrastive Language–Image Pretraining
AI performance and diagnostic accuracy
Overall, the AI models demonstrated high accuracy in detecting dermatological conditions, with reported sensitivity ranging from 90 to 98% and specificity between 45 and 99% [28, 32, 35]. In 63% (n = 12) of the studies [21–25, 27, 28, 32, 35–37, 39], AI models outperformed dermatologists in diagnostic accuracy, whereas in 26% (n = 5) [29–31, 33, 38], human dermatologists either matched or slightly exceeded AI performance. A subset of studies (n = 4) [22, 24, 30, 39] highlighted the benefits of AI-assisted diagnosis, demonstrating that dermatologists collaborating with AI systems achieved greater diagnostic precision compared to unaided clinical assessments.
Comparative analysis of AI and dermatologists
Among studies comparing AI performance directly with dermatologists, 63% (n = 12) [21–26, 28, 32, 34–37, 39] reported that AI models were equivalent to board-certified dermatologists in classifying skin lesions. In contrast, 26% (n = 5) [29–31, 33, 38] noted that dermatologists outperformed AI, particularly in complex cases requiring clinical judgment beyond image-based analysis. The integration of AI into clinical workflows was associated with a reduction in diagnostic time and improved triage efficiency in 21% (n = 4) of studies [22, 24, 30, 39].
Quality assessment of included studies
A comprehensive risk of bias assessment using the PROBAST and QUADAS-2 tools revealed that most of the 19 included studies exhibited moderate to high risk of bias due to various methodological limitations. Refer to Table 2 for the detailed risk of bias assessment. Key concerns included small or unrepresentative sample sizes [30, 38], lack of external validation [25, 27, 29, 35], and unblinded AI assessments that could compromise objectivity [31]. While several studies employed well-established public datasets such as ISIC, HAM10000, and MSLD, supporting transparency in participant selection, many overrelied on synthetic or non-diverse data, raising concerns about generalizability and demographic inclusivity [21, 23–25, 31, 36].
Table 2.
Comprehensive summary of findings with GRADE certainty: AI in Dermatological Diagnosis
Outcome | No. of studies | Total participants/Images | Effect estimate/Key results | Comments |
---|---|---|---|---|
Early detection of malignant skin lesions | 5 studies [23, 27, 28, 35, 37] | ~60,000+ images | Accuracy: 85–97%; Sensitivity: 82–95% | Strong performance in model testing; external validation is still limited |
Detection of monkeypox from skin images | 5 studies [21, 24, 29, 32, 36] | ~30,000+ images | Accuracy: 88–98%; used SVMs, transformers, and few-shot learning | Models are promising but mostly based on small or synthetic datasets |
Diagnostic accuracy of AI models for general dermatology | 17 studies [21–29, 31–38] | ~150,000+ images | Accuracy: 80–98%; Sensitivity: 76–96%; Specificity: 78–97% | High variability in algorithms and lesion types; many use public datasets |
AI vs dermatologist diagnostic performance | 1 study [30] | ~1,000 image assessments | Comparable grading accuracy for facial signs | Only one validation study; limited scope |
Generalizability across populations and skin tones | 3 studies [30, 31, 34] | Multinational (South Africa, China, Uganda) | Performance gaps noted in underrepresented groups | Highlights a lack of training data diversity |
Explainability and transparency of AI predictions | 5 studies [23, 27, 28, 31, 35] | Not quantifiable | Used tools like SHAP, Grad-CAM | Explainability present but limited validation and integration |
Real-time or low-resource deployment feasibility | 2 studies [26, 31] | Not specified | Use of lightweight/efficient models for mobile platforms | Mostly feasibility-focused; real-world performance unknown |
Additional limitations included unclear label verification processes (e.g., unconfirmed PCR standards in Almufareh et al. 2023) [24], inconsistent reporting on model calibration [27, 29, 35], and the absence of blinded outcome assessment [24]. Notably, only Wang et al. (2020) linked AI predictions to actual clinical outcomes (e.g., healing time), underscoring a broader gap in translational relevance [38]. Overall, although AI models exhibited promising performance in dermatological diagnostics, enhancements in external validation, demographic representation, model calibration, and integration with clinical outcomes are essential to improve their reliability and applicability in real-world settings.
Summary of findings
This systematic review highlights the promising role of AI in dermatological diagnostics, particularly in resource-limited settings. Overall, the evidence suggests that AI has significant potential in dermatological diagnostics, particularly in enhancing early detection of malignant skin lesions [23, 27, 28, 35, 37] and infectious diseases like monkeypox [21, 24, 29, 32, 36]. However, variability in study methodologies and the need for further clinical validation highlight the necessity for continued research in this field. Future studies should focus on improving AI generalizability through diverse datasets, standardizing evaluation metrics, and integrating AI tools into real-world clinical practice for optimal patient outcomes in low-resource settings.
Pooled analyses of all studies
The review and synthesis revealed several patterns in the application of AI technologies for dermatological diagnosis across settings. Deep learning models analyzing visual images, specifically convolutional neural networks (CNNs), were the most commonly used approach and exhibited strong performance in both skin cancer detection [27, 35] and infectious disease diagnosis, such as monkeypox [21, 36]. These models were particularly useful for analyzing dermoscopic images, with several studies reporting near-expert performance in melanoma detection when trained on sufficiently large datasets.
Transfer learning approaches proved to be practical in resource-limited settings, where they enabled effective model development despite smaller local datasets. Studies like Akinrinade and Du (2025) [22] demonstrated that pre-trained models could be adapted to local dermatologic datasets while maintaining diagnostic accuracy exceeding 85%. Furthermore, such models could be used offline without continuous internet connectivity, making them especially useful in regions where stable internet access may be challenging [22]. This approach significantly reduced computational requirements and training time compared with developing models de novo [30, 39].
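A hedged sketch of this transfer-learning pattern is shown below: an ImageNet-pretrained MobileNetV2 backbone is frozen and only a small classification head is retrained. The class count, random stand-in images, and hyperparameters are assumptions for illustration, not settings reported by any included study.

```python
# Illustrative transfer-learning sketch (assumed 4 local skin-condition classes).
import torch
import torch.nn as nn
from torchvision import models

num_local_classes = 4  # assumption for illustration
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)  # downloads ImageNet weights

for param in backbone.parameters():          # freeze the pretrained feature extractor
    param.requires_grad = False

in_features = backbone.classifier[1].in_features
backbone.classifier[1] = nn.Linear(in_features, num_local_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.classifier[1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for local images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_local_classes, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"one-step training loss: {loss.item():.3f}")
```

Because only the small head is updated, training of this kind can run on modest hardware, which is one reason transfer learning suits the resource-limited settings discussed above.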
The effectiveness of these technologies varied by application. LA-CapsNet is a hybrid deep learning architecture that uses DeepLabV3+ for precise segmentation of skin lesions and combines three pretrained models (MobileNetV2, EfficientNetB0, and DenseNet201), surpassing the accuracy of any individual model [26]. Vision Transformers implement a self-attention mechanism that lets the model focus on different aspects of an image; by dividing images into patches, they capture context and relationships and can handle various image sizes and resolutions [35]. For skin cancer detection, models consistently achieved accuracies of 80–99%, with LA-CapsNet [26] and Vision Transformers [35] demonstrating robust performance. In infectious disease diagnosis, approaches that combined multiple architectures showed the most reliable results, with Abdelrahim et al. (2024) [21] reporting 95.45% accuracy for monkeypox detection using SVM-CNN hybrids. However, notable performance differences emerged across demographic groups, with Kamulegeya et al. (2023) [31] finding substantially lower accuracy (17% vs. 69.9%) for Black patients compared with Caucasian patients in the Ugandan sample.
For African healthcare systems, these findings raise several critical considerations. The current lack of diverse and representative datasets (as evidenced by Kamulegeya et al.'s limited sample of 123 images [31]) represents a major barrier to equitable AI implementation. Mobile-optimized models like MobileNetV2 [24] offer a promising solution for rural areas, as they combine high accuracy (96%) with lower hardware requirements. The most successful implementations emphasized human-AI collaboration, with Xue et al. (2022) [39] showing improved diagnostic precision when clinicians used AI outputs as screening and decision support to guide diagnosis rather than as a replacement for clinical judgment. These findings underscore the necessity for context-specific implementations that integrate AI technologies with tailored clinical workflows and ongoing clinician training. Future development should prioritize locally collected datasets, lightweight architectures suitable for mobile deployment, and hybrid diagnostic systems that leverage both algorithmic and clinical expertise.
Discussion
This systematic review synthesizes evidence on the use of AI in dermatology within LMICs and globally relevant datasets, highlighting both promising applications and critical limitations. AI models, particularly deep learning approaches such as convolutional neural networks (CNNs), have demonstrated robust performance in dermatologic image classification, with reported accuracies frequently exceeding 90%. These findings corroborate earlier reports from high-income settings that documented dermatologist-level performance for tasks such as melanoma detection [40, 41] and lesion classification [16]. However, our review emphasizes unique contextual factors affecting AI use in LMICs, including resource limitations, underrepresented skin types, and data access constraints.
Diagnostic accuracy and clinical utility
Several studies in this review reported high diagnostic accuracies. For instance, Behara et al. (2024) [42] demonstrated that the LA-CapsNet model achieved 98.04% accuracy with high sensitivity and specificity across a range of skin lesions, echoing prior findings from HIC studies using CNNs for skin cancer detection [15, 43]. Similarly, Malik et al. (2024) [35] showed that transformer-based architectures outperformed traditional CNNs in melanoma classification, indicating a global trend toward the adoption of more sophisticated AI models with greater representational capacity. While this review primarily focuses on CNN-based models due to their dominance in image analysis tasks, future exploration of multimodal systems combining CNNs with LLMs for integrated text-and-image diagnostics may be warranted.
Despite these promising metrics, real-world utility in LMICs requires consideration of model feasibility and interpretability. Models with high computational demands may not be suitable for deployment in low-resource clinics or areas with inconsistent internet access. Tools like Vision Transformers are of interest because their patch-based approach allows them to handle images of different sizes and resolutions, which matters where consistently high-resolution image inputs are difficult to obtain [35]. In this context, lightweight architectures such as MobileNetV2 [24] and transfer learning approaches adapted to local skin datasets [22] hold particular promise, as they deliver acceptable accuracy while minimizing infrastructure needs, a finding consistent with other LMIC-focused reviews emphasizing mobile diagnostics [6, 7].
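One plausible (and purely illustrative) route to such low-infrastructure deployment is exporting a trained lightweight model to TorchScript for on-device inference; the untrained MobileNetV2 stand-in and the output file name below are assumptions, not the pipeline of any included study.

```python
# Hedged sketch: package a lightweight classifier for mobile/offline inference.
import torch
from torchvision import models
from torch.utils.mobile_optimizer import optimize_for_mobile

model = models.mobilenet_v2(num_classes=4)   # stand-in for a locally trained model
model.eval()

example_input = torch.randn(1, 3, 224, 224)        # dummy input used for tracing
scripted = torch.jit.trace(model, example_input)   # convert to TorchScript
mobile_ready = optimize_for_mobile(scripted)       # mobile-oriented graph optimizations
mobile_ready.save("lesion_classifier_mobile.pt")   # hypothetical artifact for an offline app
```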
Representation and equity gaps
A consistent challenge identified in this review is the lack of representation of darker skin tones and populations from Sub-Saharan Africa and South Asia in training datasets. Kamulegeya et al. (2023) [31] notably found a drastic drop in diagnostic accuracy for images of skin conditions in Black patients (17%) compared with Caucasian images (69.9%) when using an AI dermatology application developed outside the region and trained on Caucasian datasets. This finding mirrors broader critiques in the literature concerning racial and ethnic biases in AI, which, if not explicitly mitigated, have the potential to exacerbate healthcare disparities [44, 45]. Training datasets that include dermatologic conditions and images from Fitzpatrick skin types V–VI are critical to addressing these gaps in diagnostic accuracy.
These representation gaps have serious clinical implications. Misdiagnoses or underdiagnosis of conditions in darker skin tones could lead to delayed treatments, potentially worsening health outcomes in these populations. Several included studies attempted to mitigate this through diverse datasets or localized image collection [30, 38], but the scale remains insufficient. This underscores the urgent need for investment in local data curation and model retraining to ensure algorithmic fairness, something increasingly advocated in AI ethics frameworks [46, 47]. Ethical AI development that addresses these gaps can contribute to reducing healthcare disparities and improving overall patient care in LMICs.
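A practical corollary of these findings is to report performance stratified by skin type rather than only in aggregate; the sketch below illustrates that bookkeeping with invented records, where the Fitzpatrick groupings and labels are hypothetical.

```python
# Illustrative subgroup evaluation: accuracy reported per Fitzpatrick group.
from collections import defaultdict

# (fitzpatrick_group, predicted_label, true_label) -- invented example records
records = [
    ("I-IV", "melanoma", "melanoma"),
    ("I-IV", "nevus", "nevus"),
    ("V-VI", "nevus", "melanoma"),   # a miss on darker skin, mirroring the equity concern
    ("V-VI", "melanoma", "melanoma"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, truth in records:
    total[group] += 1
    correct[group] += int(predicted == truth)

for group in sorted(total):
    print(f"Fitzpatrick {group}: accuracy {correct[group] / total[group]:.2f}")
```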
AI in infectious disease dermatology
The review also documents the application of AI to diagnose infectious dermatologic conditions like monkeypox [29, 48, 49], which are particularly relevant in LMICs. These studies demonstrate that ensemble and hybrid models can achieve high sensitivity and specificity (up to 99.5%) when classifying poxviral lesions. Such high-performing models underscore AI’s potential to bridge diagnostic gaps in resource-constrained settings. This is further supported by recent literature, such as the review by Rokni et al. (2024), which highlights how AI has significantly improved diagnostic accuracy, predictive modeling, and individualized care for infectious skin diseases, and its role in enhancing pandemic preparedness [50].
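As a schematic of the ensembling idea behind such hybrid classifiers, the sketch below averages per-class probabilities from several models before the final call (soft voting); the probability values and class order are invented and do not reproduce any published model.

```python
# Hedged soft-voting sketch: average per-class probabilities across models.
import numpy as np

# Rows: individual models; columns: classes, assumed order
# [monkeypox, chickenpox, measles, normal]; values are invented.
model_probabilities = np.array([
    [0.70, 0.10, 0.10, 0.10],   # hypothetical CNN output
    [0.60, 0.20, 0.10, 0.10],   # hypothetical transformer output
    [0.80, 0.05, 0.05, 0.10],   # hypothetical calibrated SVM output
])

ensemble_probabilities = model_probabilities.mean(axis=0)   # soft voting
predicted_class = int(np.argmax(ensemble_probabilities))
print(ensemble_probabilities.round(3), predicted_class)     # class 0 ("monkeypox") here
```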
These findings are particularly important given the re-emergence of diseases like monkeypox in Africa and other regions with under-resourced healthcare systems [51]. In contrast to the majority of AI dermatology studies from HICs, which predominantly focus on melanoma or basal cell carcinoma, studies in LMICs are increasingly adapting AI to address region-specific disease burdens—an encouraging shift that warrants further support [50]. This tailored approach not only improves diagnostic accuracy but also enhances the relevance of AI in addressing pressing healthcare needs in LMICs, where diseases like monkeypox may present more frequently than in high-income settings. Supporting such AI applications could be transformative in managing outbreaks more effectively and equitably.
Human-AI collaboration and clinical integration
A notable insight from this review is the benefit of human-AI collaboration [35, 52]. These studies reported improved diagnostic precision when clinicians used AI outputs as a decision-support tool rather than relying solely on algorithmic classification [35, 52]. This aligns with findings from Zeltzer et al. (2023) [53] and Quer et al. (2017) [2], which emphasized that AI's optimal use lies in augmenting, rather than replacing, clinical judgment. The synergy of clinical reasoning and algorithmic triage could be especially transformative in LMICs, where non-specialist health workers often make first-contact assessments.
However, for AI to support the work of clinicians effectively, user-friendly interfaces, clinician training, and trust in AI systems are essential. Several included studies did not report whether clinicians were involved in model development or evaluation, a factor known to influence AI adoption in clinical practice [7, 54]. Ensuring that clinicians are not only users of AI but also collaborators in its development can enhance both the performance and acceptance of AI tools in clinical settings.
Limitations of included studies
While the reviewed studies demonstrate high technical performance, several methodological concerns limit their generalizability. Nearly half of the studies had a moderate to high risk of bias, primarily due to small sample sizes, retrospective designs, and lack of external validation. Moreover, the use of non-standardized evaluation metrics and the inconsistent reporting of model parameters hindered cross-study comparisons. This mirrors concerns raised in other reviews about dermatology AI research, where a lack of rigorous benchmarking hampers progress [15, 43].
Additionally, some studies in this review utilized public datasets such as Dermnet, ISIC 2017, and HAM10000 [29, 35, 37, 42, 52, 55–57] without adapting or validating models on local populations. Therefore, while these datasets are useful for model development, their lack of demographic diversity and clinical variability limits external validity and relevance for LMIC implementation. Standardizing dataset reporting and developing global benchmarking practices are critical next steps.
Future directions
Moving forward, research on AI in dermatology within LMICs should prioritize the development of population-specific datasets that better capture underrepresented skin tones and regionally prevalent dermatologic conditions. Lightweight and mobile-compatible models offer a practical path forward, especially for deployment in rural and resource-constrained settings where computational infrastructure is limited. Equally important is the active involvement of clinicians throughout the AI lifecycle, from model design and validation to clinical implementation, to ensure usability, interpretability, and trust in AI tools. Future studies should also incorporate external validation and prospective clinical testing to evaluate real-world performance beyond controlled settings. Concurrently, it is imperative to strengthen ethical and regulatory frameworks to ensure the protection of data privacy, informed consent, and algorithmic transparency. Collaborative efforts among governments, academic institutions, and industry stakeholders will be crucial in establishing equitable and sustainable AI ecosystems that are responsive to the distinct needs of LMICs. Investment in training programs for clinicians and data scientists is essential to bridge the gap between AI development and clinical application, ensuring healthcare professionals can effectively utilize AI tools and contribute to their ongoing refinement. Additionally, emphasis should be placed on continuous monitoring and iterative enhancement of AI models post-deployment to maintain their relevance and effectiveness in dynamic clinical settings.
Conclusion
In conclusion, AI presents significant opportunities to improve dermatologic diagnostics in LMICs, but its benefits remain unevenly distributed. While many models achieve high technical performance, particularly in image-based classification, critical gaps persist in representation, real-world implementation, and clinical integration. Addressing these challenges will require the development of context-specific datasets, active clinician involvement, ethical and transparent model design, and robust cross-sector collaboration. By prioritizing equity, local relevance, and clinical applicability, AI can play a pivotal role in significantly strengthening dermatologic care and mitigating global health disparities.
Acknowledgements
We would like to thank Oli Health Magazine Organization (OHMO)’s members for their contributions and support for this manuscript.
Abbreviations
- AI
Artificial Intelligence
- LMICs
Low- and Middle-Income Countries
- DALYs
Disability-Adjusted Life Years
- UI
Uncertainty Interval
- HICs
High-Income Countries
- PRISMA
Preferred Reporting Items for Systematic Reviews and Meta-Analyses
- RCTs
Randomized Controlled Trials
- MeSH
Medical Subject Headings
- CNNs
Convolutional Neural Networks
- SVMs
Support Vector Machines
- ISIC
International Skin Imaging Collaboration
- HAM10000
Human Against Machine with 10000 training images
- MSLD
Monkeypox Skin Lesion Dataset
- PROBAST
Prediction model Risk Of Bias ASsessment Tool
- QUADAS-2
Quality Assessment of Diagnostic Accuracy Studies, Version 2
- LA-CapsNet
Lightweight Attention-Guided Capsule Networks
- DCGAN
Deep Convolutional Generative Adversarial Network
Authors’ contributions
O.U: Supervising the draft, Conceptualization, Literature selection/rating, Reviewing and Editing, and Project Administration. Writing the first draft, Literature selection/rating, and Revising: O.U, M.G, N.C, S.A, S.R, M.W, C.M.H, I.K.J, G.N, M.P. Data collection and Assembly: O.U, M.G, N.C, S.A, S.R, M.W, C.M.H, I.K.J, G.N, M.P. O.U: Reviewed and edited the first draft. S.R: Reviewed and edited the second draft. M.P: Reviewed and edited the third draft. M.W: Reviewed and edited the fourth draft. O.U: Reviewed and edited the final draft. Manuscript writing: O.U, M.G, N.C, S.A, S.R, M.W, C.M.H, I.K.J, G.N, M.P. Final approval of manuscript: O.U, M.G, N.C, S.A, S.R, M.W, C.M.H, I.K.J, G.N, M.P.
Funding
We have not received any financial support for this manuscript.
Data availability
No datasets were generated or analysed during the current study.
Registration on Prospero: https://www.crd.york.ac.uk/PROSPERO/view/CRD42023432907.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Hernández-Orallo J, Martínez-Plumed F, Schmid U, Siebers M, Dowe DL. Computer models solving intelligence test problems: progress and implications. Artif Intell. 2016;230:74–107.
- 2. Quer G, Muse ED, Nikzad N, Topol EJ, Steinhubl SR. Augmenting diagnostic vision with AI. Lancet. 2017;390(10091):221.
- 3. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92(4):807–12.
- 4. Hogarty DT, Su JC, Phan K, Attia M, Hossny M, Nahavandi S, et al. Artificial intelligence in dermatology-where we are and the way to the future: a review. Am J Clin Dermatol. 2020;21(1):41–7.
- 5. De A, Sarda A, Gupta S, Das S. Use of artificial intelligence in dermatology. Indian J Dermatol. 2020;65(5):352–7.
- 6. Mahajan A, Vaidya T, Gupta A, Rane S, Gupta S. Artificial intelligence in healthcare in developing nations: the beginning of a transformative journey. Cancer Res Stat Treat. 2019;2(2):182–9.
- 7. Kololgi SP, Lahari CS. Harnessing the power of artificial intelligence in dermatology: a comprehensive commentary. Indian J Dermatol. 2023;68(6):678–81.
- 8. Urban K, Chu S, Giesey RL, Mehrmal S, Uppal P, Delost ME, et al. Burden of skin disease and associated socioeconomic status in Asia: a cross-sectional analysis from the Global Burden of Disease Study 1990–2017. JAAD Int. 2021;2:40–50.
- 9. Yakupu A, Aimaier R, Yuan B, Chen B, Cheng J, Zhao Y, et al. The burden of skin and subcutaneous diseases: findings from the Global Burden of Disease Study 2019. Front Public Health. 2023;11:1145513.
- 10. Tyler S, Olis M, Aust N, Patel L, Simon L, Triantafyllidis C, et al. Use of artificial intelligence in triage in hospital emergency departments: a scoping review. Cureus. 2024;16(5):e59906.
- 11. Zeltzer D, Herzog L, Pickman Y, Steuerman Y, Ber RI, Kugler Z, et al. Diagnostic accuracy of artificial intelligence in virtual primary care. Mayo Clin Proc Digit Health. 2023;1(4):480–9.
- 12. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689.
- 13. Aboulmira A, Lachgar M, Hrimech H, Camara A, Elbahja C, Elmansouri A, et al. SkinHealthMate app: an AI-powered digital platform for skin disease diagnosis. Syst Soft Comput. 2024;6:200166.
- 14. Lalmalani RM, Lim CXY, Oh CC. Artificial intelligence in dermatopathology: a systematic review. Clin Exp Dermatol. 2025;50(2):251–9.
- 15. Houssein EH, Abdelkareem DA, Hu G, Hameed MA, Ibrahim IA, Younan M. An effective multiclass skin cancer classification approach based on deep convolutional neural network. Cluster Comput. 2024;27(9):12799–819.
- 16. Gouda W, Sama NU, Al-Waakid G, Humayun M, Jhanjhi NZ. Detection of skin cancer based on skin lesion images using deep learning. Healthcare. 2022;10(7):1183.
- 17. Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, et al. Performance evaluation of E-VGG19 model: enhancing real-time skin cancer detection and classification. Heliyon. 2024;10(10):e31488.
- 18. Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial intelligence in dermatology image analysis: current developments and future trends. J Clin Med. 2022;11(22):6826.
- 19. Xia S, Zhou XN, Liu J. Systems thinking in combating infectious diseases. Infect Dis Poverty. 2017;6(1):144.
- 20. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.
- 21. Abdelrahim EM, Hashim H, Atlam ES, Osman RA, Gad I. TMS: ensemble deep learning model for accurate classification of monkeypox lesions based on transformer models with SVM. Diagnostics. 2024;14(23):2638.
- 22. Akinrinade O, Du C. Skin cancer detection using deep machine learning techniques. Intell-Based Med. 2025;11:100191.
- 23. Abbas S, Ahmed F, Khan WA, Ahmad M, Khan MA, Ghazal TM. Intelligent skin disease prediction system using transfer learning and explainable artificial intelligence. Sci Rep. 2025;15(1):1746.
- 24. Almufareh MF, Tehsin S, Humayun M, Kausar S. A transfer learning approach for clinical detection support of monkeypox skin lesions. Diagnostics. 2023;13(8):1503.
- 25. Badr M, Elkasaby A, Alrahmawy M, El-Metwally S. A multi-model deep learning architecture for diagnosing multi-class skin diseases. J Imaging Inform Med. 2024. 10.1007/s10278-024-01300-w.
- 26. Behara K, Bhero E, Agee JT. An improved skin lesion classification using a hybrid approach with active contour snake model and lightweight attention-guided capsule networks. Diagnostics. 2024;14(6):636. 10.3390/diagnostics14060636.
- 27. Behara K, Bhero E, Agee JT. Grid-based structural and dimensional skin cancer classification with self-featured optimized explainable deep convolutional neural networks. Int J Mol Sci. 2024;25(3):1546.
- 28. Behara K, Bhero E, Agee JT. Skin lesion synthesis and classification using an improved DCGAN classifier. Diagnostics. 2023;13(16):2635.
- 29. Chen B, Han Y, Yan L. A few-shot learning approach for monkeypox recognition from a cross-domain perspective. J Biomed Inform. 2023;144:104449.
- 30. Flament F, Jiang R, Houghton J, Cassier M, Amar D, Delaunay C, et al. Objective and automatic grading system of facial signs from smartphones’ pictures in South African men: validation versus dermatologists and characterization of changes with age. Skin Res Technol. 2023;29(4):e13257.
- 31. Kamulegeya L, Bwanika J, Okello M, Rusoke D, Nassiwa F, Lubega W, et al. Using artificial intelligence on dermatology conditions in Uganda: a case for diversity in training data sets for machine learning. Afr Health Sci. 2023;23(2):753–63.
- 32. Khafaga DS, Ibrahim A, El-Kenawy EM, Abdelhamid AA, Karim FK, Mirjalili S, et al. An Al-Biruni earth radius optimization-based deep convolutional neural network for classifying monkeypox disease. Diagnostics (Basel). 2022;12(11):2892.
- 33. Li F, Wang C, Liu X, Peng Y, Jin S. A composite model of wound segmentation based on traditional methods and deep neural networks. Comput Intell Neurosci. 2018;2018:4149103. 10.1155/2018/4149103.
- 34. Li TH, Ma XD, Li ZM, Yu NZ, Song JY, Ma ZT, et al. Artificial intelligence analysis of over a million Chinese men and women reveals level of dark circle in the facial skin aging process. Skin Res Technol. 2023;29(11):e13492.
- 35. Malik FS, Yousaf MH, Sial HA, Viriri S. Exploring dermoscopic structures for melanoma lesions’ classification. Front Big Data. 2024;7:1366312.
- 36. Nayak T, Chadaga K, Sampathila N, Mayrose H, Gokulkrishnan N, Bairy GM, et al. Deep learning based detection of monkeypox virus using skin lesion images. Med Nov Technol Devices. 2023;18:100243.
- 37. Shen S, Xu M, Zhang F, Shao P, Liu H, Xu L, et al. A low-cost high-performance data augmentation for deep learning-based skin lesion classification. BME Front. 2022;2022:9765307.
- 38. Wang Y, Ke Z, He Z, Chen X, Zhang Y, Xie P, et al. Real-time burn depth assessment using artificial networks: a large-scale, multicentre study. Burns. 2020;46(8):1829–38.
- 39. Xue Y, Zhang J, Li C, Liu X, Kuang W, Deng J, et al. Machine learning for screening and predicting the risk of anti-MDA5 antibody in juvenile dermatomyositis children. Front Immunol. 2022;13:940802.
- 40. Cassalia F, Han SS, Cazzaniga S, Naldi L. Melanoma detection: evaluating the classification performance of a deep convolutional neural network and dermatologist assessment via a mobile app in an Italian real-world setting. J Eur Acad Dermatol Venereol. 2024;38(9):e782–4.
- 41. Jones OT, Jurascheck LC, van Melle MA, Hickman S, Burrows NP, Hall PN, et al. Dermoscopy for melanoma detection and triage in primary care: a systematic review. BMJ Open. 2019;9(8):e027529.
- 42. Behara K, Bhero E, Agee JT. An improved skin lesion classification using a hybrid approach with active contour snake model and lightweight attention-guided capsule networks. Diagnostics. 2024;14(6):636.
- 43. Hogarty DT, Su JC, Phan K, Attia M, Hossny M, Nahavandi S, et al. Artificial intelligence in dermatology—where we are and the way to the future: a review. Am J Clin Dermatol. 2020;21(1):41–7.
- 44. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. Arch Dermatol. 2018;154(11):1247–8.
- 45. De’ R, Pandey N, Pal A. Impact of digital surge during Covid-19 pandemic: a viewpoint on research and practice. Int J Inf Manage. 2020;55:102171.
- 46. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.
- 47. Prem E. From ethical AI frameworks to tools: a review of approaches. AI and Ethics. 2023;3(3):699–716.
- 48. Abdelrahim EM, Hashim H, Atlam E-S, Osman RA, Gad I. TMS: ensemble deep learning model for accurate classification of monkeypox lesions based on transformer models with SVM. Diagnostics. 2024;14(23):2638.
- 49. Almufareh MF, Tehsin S, Humayun M, Kausar S. A transfer learning approach for clinical detection support of monkeypox skin lesions. Diagnostics. 2023;13(8):1503.
- 50. Rokni GR, Gholizadeh N, Babaei M, Das K. Artificial intelligence in infectious skin disease. Dermatological Reviews. 2024;5(3).
- 51. Uwishema O, Adekunbi O, Peñamante CA, Bekele BK, Khoury C, Mhanna M, et al. The burden of monkeypox virus amidst the Covid-19 pandemic in Africa: a double battle for Africa. Ann Med Surg. 2022;80:104197.
- 52. Badr M, Elkasaby A, Alrahmawy M, El-Metwally S. A multi-model deep learning architecture for diagnosing multi-class skin diseases. J Imaging Inform Med. 2024. 10.1007/s10278-024-01300-w.
- 53. Zeltzer D, Herzog L, Pickman Y, Steuerman Y, Ber RI, Kugler Z, et al. Diagnostic accuracy of artificial intelligence in virtual primary care. Mayo Clin Proc Digit Health. 2023;1(4):480–9.
- 54. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689.
- 55. Behara K, Bhero E, Agee JT. Skin lesion synthesis and classification using an improved DCGAN classifier. Diagnostics. 2023;13(16):2635.
- 56. Behara K, Bhero E, Agee JT. Grid-based structural and dimensional skin cancer classification with self-featured optimized explainable deep convolutional neural networks. Int J Mol Sci. 2024;25(3):1546.
- 57. Nayak T, Chadaga K, Sampathila N, Mayrose H, Gokulkrishnan N, Bairy GM, et al. Deep learning based detection of monkeypox virus using skin lesion images. Med Nov Technol Devices. 2023;18:100243.