Abstract
Background:
The number needed to biopsy (NNB) ratio for melanoma diagnosis is calculated by dividing the total number of biopsies by the number of biopsied melanomas. It is the inverse of positive predictive value (PPV), which is calculated by dividing the number of biopsied melanomas by the total number of biopsies. NNB is increasingly used as a metric to compare the diagnostic accuracy of health care practitioners (HCPs).
Objective:
To investigate the association of NNB with the standard statistical measures of sensitivity and specificity.
Methods:
We extracted published diagnostic accuracy data from 5 cross-sectional skin cancer reader studies [median (min-max) readers/study was 29 (8–511)]. As NNB is a ratio, we converted it to PPV.
Results:
Four studies showed no association and one showed a negative association between PPV and sensitivity. All five studies showed a positive association between PPV and specificity.
Limitations:
Reader study data.
Conclusion:
An individual HCP with a lower NNB is likely to have a higher specificity than one with a higher NNB, assuming they practice under similar conditions; no conclusions can be made about their relative sensitivity. We advocate for additional research to define quality metrics for melanoma detection and caution when interpreting NNB.
Keywords: Number needed to biopsy, NNB, diagnostic accuracy, melanoma, melanoma sensitivity, melanoma specificity, melanoma screening, melanoma positive predictive value
Capsule Summary:
We found that the number needed to biopsy (NNB) ratio is associated with specificity but not sensitivity in melanoma diagnosis.
An individual HCP with a lower NNB is likely to have a higher specificity than one with a higher NNB, but no conclusions can be made about their relative sensitivity.
The health and economic burden of skin cancer in the United States is high and rapidly increasing. From 2007–2011, approximately 5 million adults were treated for skin cancer annually, with average treatment costs of $8.1 billion each year.1 In order to combat rising costs, there is a growing interest in using skin cancer diagnostic accuracy as a value-based performance measure.2 The most commonly used metric to assess the diagnostic accuracy of health care practitioners (HCPs) in skin cancer recognition, particularly for cutaneous melanoma, is the number needed to biopsy (NNB) ratio. The NNB ratio is calculated by dividing the total number of biopsies by the number of biopsied melanomas [NNB = (true positives + false positives) / true positives].
The NNB ratio is closely related to positive predictive value (PPV), which represents the same underlying data as a proportion and is a more formally recognized statistical measure. PPV represents the proportion of biopsied lesions that are, in fact, melanomas and is calculated by dividing the number of biopsied melanomas by the total number of biopsies [PPV = true positives / (true positives + false positives)]; it is also the mathematical inverse of NNB. At least three different melanoma-specific NNB metrics have been reported in the literature: (a) including all biopsied tumors, (b) including only biopsied melanocytic tumors after pathologic review, and (c) including only those biopsied lesions submitted with clinical concern for melanoma. The NNB ratio has also been reported for non-melanoma skin cancer (i.e., dividing the total number of biopsies by the number of biopsied non-melanoma skin cancers) as well as for all skin cancer (i.e., dividing the total number of biopsies by the number of biopsied skin malignancies).3, 4
It has been stated that “the NNB [ratio]…is an indicator of diagnostic sensitivity [for melanoma]…”5, “a lower NNB [ratio] implies a higher skill level at discerning suspicious lesions on examination”2, and that NNB ratio is “one of the most useful metrics for measuring accuracy in melanoma detection…”6. Based on these assumptions, the NNB ratio has been used to directly compare skin cancer diagnostic accuracy across provider types.3, 6, 7 It has also been used to justify the consideration of restricting the performance of biopsies for skin cancer to individual clinicians with lower NNB ratios in order to reduce healthcare costs.2 However, there is limited evidence directly associating lower individual provider NNB ratios with better melanoma diagnostic accuracy, improved patient outcomes, or lower healthcare costs. In fact, the relationship between the NNB ratio and standard metrics of diagnostic accuracy, such as sensitivity [i.e., the true positive rate, which is defined as the proportion of melanomas that undergo skin biopsy, or true positives / (true positives + false negatives)] and specificity [i.e., the true negative rate, which is defined as the proportion of benign lesions that do not undergo skin biopsy, or true negatives / (true negatives + false positives)], is not well studied.
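The relationships among these four quantities can be illustrated with a short worked example (the counts below are purely hypothetical and are not drawn from any of the cited studies):

```python
# Illustrative confusion-matrix counts for one hypothetical clinician
# (not data from any cited study).
tp = 20   # melanomas that were biopsied (true positives)
fp = 80   # benign lesions that were biopsied (false positives)
fn = 5    # melanomas that were not biopsied (false negatives)
tn = 895  # benign lesions that were not biopsied (true negatives)

ppv = tp / (tp + fp)          # proportion of biopsied lesions that are melanoma
nnb = (tp + fp) / tp          # biopsies performed per melanoma found; NNB = 1 / PPV
sensitivity = tp / (tp + fn)  # proportion of melanomas that were biopsied
specificity = tn / (tn + fp)  # proportion of benign lesions not biopsied

print(ppv, nnb, sensitivity, specificity)  # 0.2, 5.0, 0.8, ~0.918
```

Because NNB and PPV are reciprocals, ranking clinicians by NNB is equivalent to ranking them by PPV in reverse; neither quantity, on its own, says anything about false negatives and hence about sensitivity.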
Methods:
Our primary objective was to investigate the association of the NNB ratio with sensitivity and specificity. We used existing data from three previously published skin cancer reader studies (Tables 1–2)8–10. Each individual study underwent requisite Institutional Review Board approval. In the 2018 Marchetti et al study8, eight dermatologists examined 100 dermoscopy images (50 nevi/lentigines, 50 melanomas); in the 2019 Marchetti et al study9, eight dermatologists and nine dermatology residents evaluated 150 dermoscopy images [50 nevi, 50 melanomas, 50 seborrheic keratoses (SKs)]; in the Tschandl et al study10, 511 readers examined randomized batches of 30 dermoscopy images from a larger set of 1511 images that included nevi, melanomas, vascular lesions, dermatofibromas, benign keratinocytic lesions, intraepithelial carcinomas, and basal cell carcinomas. In each individual study, human readers and participants provided written consent to allow analysis of their ratings.
Table 1.
Association between sensitivity and PPV by study.
| Author (year) | Number of Readers | Average Sensitivity (SD) | Average Specificity (SD) | Average Malignancy Prevalence | Positive Predictive Value | Observed Prevalence: Slope Coefficient* (95% CI) | p-value | 5% Prevalence: Slope Coefficient** (95% CI) | p-value | 1% Prevalence: Slope Coefficient** (95% CI) | p-value | 0.1% Prevalence: Slope Coefficient** (95% CI) | p-value |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Monheit (2011) | 39 Dermatologists | 0.78 (0.13) | 0.43 (0.18) | 0.50 | 0.58 | −0.49 (−1.26, 0.27) | 0.20 | −1.59 (−4.31, 1.13) | 0.24 | −7.20 (−19.70, 5.28) | 0.25 | −70.53 (−193.0, 51.99) | 0.25 |
| Ferris (2015) | 12 Dermatologists, 10 Residents†, 8 Physician Assistants | 0.71 (0.15) | 0.58 (0.18) | 0.38 | 0.53 | −1.31 (−2.01, −0.60) | 0.001 | −3.88 (−5.93, −1.82) | 0.001 | −17.12 (−26.20, −8.05) | 0.001 | −166.36 (−254.56, −78.18) | 0.001 |
| Marchetti (2018) | 8 Dermatologists | 0.82 (0.11) | 0.60 (0.13) | 0.51 | 0.69 | −0.10 (−1.84, 1.64) | 0.90 | −1.13 (−7.10, 4.83) | 0.70 | −4.99 (−31.59, 21.60) | 0.66 | −48.54 (−307.68, 210.60) | 0.66 |
| Marchetti (2019) | 8 Dermatologists, 9 Residents | 0.65 (0.17) | 0.75 (0.09) | 0.33 | 0.57 | 0.03 (−1.49, 1.54) | 0.97 | 0.02 (−3.66, 3.69) | 0.99 | 0.03 (−15.53, 15.59) | 0.99 | 0.19 (−149.50, 149.88) | 0.99 |
| Tschandl (2019) | 283 Dermatologists, 118 Residents, 83 General Practitioners, 27 Not Specified | 0.76 (0.13) | 0.78 (0.11) | 0.36 | 0.67 | 0.10 (0.01, 0.19) | 0.04 | −0.001 (−0.09, 0.09) | 0.98 | 0.24 (−0.19, 0.67) | 0.33 | −7.20 (−19.70, 5.28) | 0.25 |

* Slope coefficient for the association between sensitivity and positive predictive value at the malignancy prevalence observed in the publication
** Slope coefficient for the association between sensitivity and positive predictive value at an adjusted malignancy prevalence
† Unable to extract data from 1 reader
CI = confidence interval; SD = standard deviation
Table 2.
Association between specificity and PPV by study.
| Author (year) | Number of Readers | Average Sensitivity (SD) | Average Specificity (SD) | Average Malignancy Prevalence | Positive Predictive Value | Observed Prevalence: Slope Coefficient* (95% CI) | p-value | 5% Prevalence: Slope Coefficient** (95% CI) | p-value | 1% Prevalence: Slope Coefficient** (95% CI) | p-value | 0.1% Prevalence: Slope Coefficient** (95% CI) | p-value |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Monheit (2011) | 39 Dermatologists | 0.78 (0.13) | 0.43 (0.18) | 0.50 | 0.58 | 2.67 (2.06, 3.29) | <0.001 | 9.18 (6.86, 11.51) | <0.001 | 41.94 (31.18, 52.70) | <0.001 | 410.92 (305.13, 516.71) | <0.001 |
| Ferris (2015) | 12 Dermatologists, 10 Residents†, 8 Physician Assistants | 0.71 (0.15) | 0.58 (0.18) | 0.38 | 0.53 | 2.31 (1.84, 2.78) | <0.001 | 6.56 (5.05, 8.08) | <0.001 | 28.75 (21.91, 35.60) | <0.001 | 278.8 (211.95, 345.71) | <0.001 |
| Marchetti (2018) | 8 Dermatologists | 0.82 (0.11) | 0.60 (0.13) | 0.51 | 0.69 | 1.65 (0.43, 2.88) | 0.016 | 6.42 (3.25, 9.59) | 0.003 | 28.52 (14.12, 42.91) | 0.003 | 277.54 (136.71, 418.36) | 0.003 |
| Marchetti (2019) | 8 Dermatologists, 9 Residents | 0.65 (0.17) | 0.75 (0.09) | 0.33 | 0.57 | 1.11 (0.56, 1.67) | 0.001 | 2.64 (1.26, 4.01) | 0.001 | 11.11 (5.23, 16.99) | 0.001 | 106.72 (50.04, 163.41) | 0.001 |
| Tschandl (2019) | 283 Dermatologists, 118 Residents, 83 General Practitioners, 27 Not Specified | 0.76 (0.13) | 0.78 (0.11) | 0.36 | 0.67 | 0.64 (0.59, 0.69) | <0.001 | 1.02 (0.96, 1.08) | <0.001 | 3.09 (2.86, 3.31) | <0.001 | 26.40 (24.37, 28.42) | <0.001 |

* Slope coefficient for the association between specificity and positive predictive value at the malignancy prevalence observed in the publication
** Slope coefficient for the association between specificity and positive predictive value at an adjusted malignancy prevalence
† Unable to extract data from 1 reader
CI = confidence interval; SD = standard deviation
We also extracted data from two published contemporary melanoma reader studies11, 12 using validated graph digitizer software to estimate the original (x,y)-data from the image of a scanned graph.13 These studies were similar in their objective and methodology to our previous studies in that the performance of human readers was being compared against computer algorithms. Monheit et al examined 39 dermatologists evaluating 50 clinical and dermoscopy images (25 melanomas, 25 nevi).11 Ferris et al examined 30 readers evaluating 65 dermoscopy images [25 melanomas, 40 benign lesions (nevi, lentigines, SKs)]12; we successfully extracted data for 29 of these 30 readers. To determine the validity of our extracted data, we averaged the estimated individual reader sensitivity and specificity values and compared those to the published data in these studies; in both studies our estimated data matched the published average reader values within one decimal place (data not shown).
We plotted sensitivity and specificity versus PPV for 604 readers from the 5 cohorts and used separate least squares regressions to evaluate relationships at the published malignancy prevalence for each individual study. Due to the heterogeneity across studies with regards to types of images (clinical vs. dermoscopy), dataset composition by diagnoses, and setting, we did not aim to aggregate results using a summary least squares regression line. We defined sensitivity as the proportion of malignant lesions that were rated by a reader as malignant [i.e., sensitivity = true positives / (true positives + false negatives)]. We defined specificity as the proportion of benign lesions that were rated by a reader as benign [i.e., specificity = true negatives / (true negatives + false positives)]. We used PPV instead of NNB for all analyses as PPV is a standard statistical metric and ranges from 0–1. We defined PPV as the proportion of lesions rated by a reader as malignant that were truly malignant [i.e., PPV = true positives / (true positives + false positives)].
Since the malignancy prevalence varied between studies (range 33%–51%) and was higher than observed in clinical practice, we also examined the relationships between PPV and both sensitivity and specificity in theoretical samples of 1000 lesions, applying the sensitivity and specificity characteristics from the 5 cohorts at constrained malignancy prevalences of 5%, 1%, and 0.1%. In other words, we took the sensitivity and specificity of each individual reader within each study and applied these values to theoretical samples of 1000 lesions with a malignancy prevalence of 5%, 1%, and 0.1%. Separate least squares regressions were estimated for each cohort/prevalence scenario, along with 95% confidence intervals. The slope coefficient of the least squares regression line reflects the change in Y (sensitivity or specificity) for every unit increase in X (PPV). Horizontal slopes (i.e., slopes close to zero) indicate little or no association between variables, whereas larger positive or negative values indicate direct or inverse relationships, respectively. We used a pre-specified alpha value of 0.05. Analyses were performed using Stata v.14.2 (Stata Corporation, College Station, TX).
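The prevalence adjustment described above can be sketched in a few lines (a minimal illustration of the calculation, not the authors' Stata code; the reader characteristics shown are hypothetical):

```python
def adjusted_ppv(sensitivity, specificity, prevalence, n=1000):
    """Apply a reader's sensitivity and specificity to a theoretical
    sample of n lesions at a given malignancy prevalence; return the PPV."""
    malignant = n * prevalence
    benign = n * (1 - prevalence)
    tp = sensitivity * malignant     # melanomas correctly called malignant
    fp = (1 - specificity) * benign  # benign lesions called malignant
    return tp / (tp + fp)

# Example: a hypothetical reader with 80% sensitivity and 60% specificity.
# PPV falls from ~0.095 at 5% prevalence to ~0.002 at 0.1% prevalence.
for prev in (0.05, 0.01, 0.001):
    print(prev, round(adjusted_ppv(0.80, 0.60, prev), 4))
```

This makes explicit why PPV (and hence NNB) depends on disease prevalence even when a reader's intrinsic accuracy is held fixed.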
Results:
Across the five studies, the median (min-max) number of readers per study was 29 (8–511); of the 604 readers included, 350 were dermatologists, 137 were residents, 83 were general practitioners, 27 were not specified, and 8 were physician assistants. Two studies included readers exclusively from the United States and three included readers from multiple countries. The median (min-max) number of lesions per study was 100 (50–1511). In four studies the malignant lesions were composed solely of melanomas; in one study they comprised melanomas, basal cell carcinomas, and intraepithelial carcinomas. The median (min-max) malignancy prevalence across studies was 38% (33–51%). The median (min-max) average reader sensitivity and specificity across studies were 76% (65–82%) and 60% (43–78%), respectively.
At the published malignancy prevalence of each study, the slope coefficients for sensitivity versus PPV were −0.49 (p=0.20), −1.31 (p=0.001), −0.10 (p=0.90), 0.03 (p=0.97), and 0.10 (p=0.04) (Table 1 and Figure 1); thus, three studies showed no association between sensitivity and PPV, one showed a positive association, and one showed a negative association. Using a constrained malignancy prevalence of 5%, 1%, and 0.1%, four studies showed no association and one showed a negative association (Table 1).
Figure 1.

Scatterplots of sensitivity by positive predictive value. Scatterplot of sensitivity by positive predictive value for all 5 cohorts (Monheit et al 2011, Ferris et al 2015, Marchetti et al 2018, Marchetti et al 2019, Tschandl et al 2019). Each circle represents the performance of an individual reader. Stratified least squares regression lines are added to plot the individual associations for each dataset at the published malignancy prevalence. Dotted lines correspond to the association between sensitivity and positive predictive value, for the constituent datasets, if the prevalence of malignancy decreases to 5%.
Regarding specificity versus PPV, at the published malignancy prevalence of each study, the slope coefficients were 2.67 (p<0.001), 2.31 (p<0.001), 1.65 (p=0.016), 1.11 (p=0.001), and 0.64 (p<0.001) (Table 2 and Figure 2); thus, all five studies showed a positive association between PPV and specificity. Similarly, using a constrained malignancy prevalence of 5%, 1%, and 0.1%, all five studies showed a positive association between PPV and specificity (Table 2).
Figure 2.

Scatterplots of specificity by positive predictive value. Scatterplot of specificity by positive predictive value for all 5 cohorts (Monheit et al 2011, Ferris et al 2015, Marchetti et al 2018, Marchetti et al 2019, Tschandl et al 2019). Each circle represents the performance of an individual reader. Stratified least squares regression lines are added to plot the individual associations for each dataset at the published malignancy prevalence. Dotted lines correspond to the association between specificity and positive predictive value, for the constituent datasets, if the prevalence of malignancy decreases to 5%.
Discussion:
Herein, we analyzed the relationships between HCPs’ PPV and both sensitivity and specificity within five studies that differed in reader composition, dataset size, lesion diagnoses, setting, and image types. Despite notable study heterogeneity, we found that as the NNB ratio decreased among readers, there was a strong and statistically significant increase in specificity in all five studies, both at the published malignancy prevalence and at lower malignancy prevalence values that are more similar to clinical practice. Regarding sensitivity, we observed that for most of the studies, as the NNB ratio decreased among readers, reader sensitivity did not increase or decrease; in other words, there was no association between the NNB ratio and sensitivity. However, in one study we found that as the NNB ratio decreased, sensitivity decreased among readers; this relationship remained significant at a lower malignancy prevalence of 5%, 1% and 0.1%. As a result, further research is needed to investigate the relationship between NNB ratio and sensitivity.
Through the lens of population health, our results suggest that the NNB ratio may be a useful metric for comparing the melanoma diagnostic accuracy of large groups of HCPs who practice under similar conditions and examine patients with comparable characteristics and prevalence of melanoma. Under these explicit assumptions, our data suggest that groups with a lower average NNB ratio may have superior melanoma diagnostic accuracy compared with groups with a higher average NNB ratio, a difference attributable to greater specificity with likely equivalent sensitivity.
Our data also suggest that the NNB ratio can be a problematic metric for comparing the diagnostic accuracy of individual HCPs for melanoma. Although an individual HCP with a lower NNB is likely to have a higher specificity than an individual with a higher NNB (assuming they evaluate similar patient populations, particularly with regard to melanoma prevalence), one cannot infer their relative sensitivity in melanoma diagnosis. Our data show that HCPs with similar NNB ratios exhibit wide variation in sensitivity for skin cancer detection. Thus, health care policies that restrict the performance of skin biopsies based on NNB ratios may have untoward consequences by excluding HCPs with high sensitivity for melanoma diagnosis. Scrutiny of HCPs with outlier NNB ratios (i.e., very low or high) may be aided by additional metrics, such as the ratio of in situ to invasive melanoma, median Breslow thickness of diagnosed invasive melanomas, or overall biopsy utilization per patient encounter.14
More concerningly, the NNB ratio is used to compare the performance of individual HCPs who practice in diverse settings and environments. Such comparisons are invalid and should be viewed with caution. The NNB ratio reflects a complex interplay of disease prevalence, diagnostic accuracy, and the applied threshold for diagnostic sensitivity used in clinical practice.15, 16 The inherent trade-off between sensitivity and specificity dictates that 2 HCPs with identical accuracy but different thresholds of sensitivity will have discordant NNB ratios. Furthermore, HCPs that examine populations that differ in disease prevalence will have discordant NNBs, as increased disease prevalence directly leads to higher PPV.
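The threshold effect described above can be made concrete with a hypothetical comparison of two HCPs examining the same population; the sensitivity/specificity pairs below are illustrative only:

```python
def nnb(sensitivity, specificity, prevalence, n=1000):
    """Number needed to biopsy for a theoretical sample of n lesions."""
    tp = sensitivity * prevalence * n              # melanomas biopsied
    fp = (1 - specificity) * (1 - prevalence) * n  # benign lesions biopsied
    return (tp + fp) / tp

# HCP A biopsies liberally: higher sensitivity, lower specificity.
print(round(nnb(0.95, 0.60, 0.05), 1))  # 9.0
# HCP B biopsies sparingly: lower sensitivity, higher specificity.
print(round(nnb(0.70, 0.90, 0.05), 1))  # 3.7
# HCP B has the "better" (lower) NNB yet misses far more melanomas.
```

In this sketch, the HCP rewarded by an NNB-based metric is the one who detects 70% rather than 95% of melanomas, illustrating why NNB alone cannot adjudicate diagnostic skill.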
If NNB ratios become value-based performance metrics of individual HCPs, what may be the possible effects of pressure or incentives to lower melanoma NNB values? Given the trade-off between sensitivity and specificity, an individual provider may increase their specificity at the expense of sensitivity to lower their NNB. Although this would reduce unnecessary biopsies of benign lesions, it might also lead to missed melanomas. Two studies have artificially modeled the potential economic trade-offs associated with the yield of biopsy rates and concluded that relatively higher NNBs (170–562) could hypothetically lower healthcare costs due to earlier diagnoses of melanoma; both studies are limited by the underlying assumptions in their models, particularly that high NNB ratios are associated with high sensitivity for melanoma.17, 18 More likely, improved training or adoption of diagnostic aids (e.g., dermoscopy, total body photography (TBP), reflectance confocal microscopy, molecular adhesive tape tests, and/or artificial/augmented intelligence) that fundamentally improve an individual HCP’s sensitivity and specificity could be alternative strategies to improve outcomes or lower costs, if proven through well-designed clinical studies. In fact, specialized surveillance centers using dermoscopy, TBP, and sequential digital dermoscopic imaging may be more cost-effective in the management of individuals at high risk of melanoma due to detection of melanoma at earlier stages and lower overall excision rates (i.e., they are associated with high sensitivity, high specificity, and low NNB ratios).19
A limitation of our study is that the data originates from reader studies performed in artificial settings, which may not accurately reflect a clinical setting and do not generally incorporate factors beyond morphology that aid in diagnosis (i.e., lesion symptoms, patient history). This is notable for (a) specificity, which is >99% in clinical practice as hundreds of lesions are evaluated for every lesion biopsied, and (b) dataset composition, which is enriched for malignancy in reader studies. To potentially address some of these limitations, we explored what would happen to the results at lower malignancy prevalence values that correspond to clinically relevant NNB ratios (i.e., 5–20). However, measuring skin cancer diagnostic accuracy, particularly sensitivity, in a cross-sectional clinical study remains problematic as false-negative assessments are not characterized. Reader studies are accepted as proxy estimations of diagnostic accuracy, including by the U.S. Food and Drug Administration.20 Reader studies also limit biases inherent to comparisons of HCPs in clinical studies in which they do not evaluate identical patients and lesions.
As secondary prevention of melanoma via early detection remains an important strategy to reduce mortality in high-risk individuals, validated quality metrics for skin cancer detection are important to identify. Our data suggest that NNB is a limited surrogate for melanoma sensitivity, which is an important diagnostic accuracy metric. However, the most important outcome from melanoma screening is not measures of diagnostic accuracy but the demonstration of reduced melanoma-specific mortality. Given the heterogeneity in the biologic behavior of cancer, improved identification of indolent forms of melanoma (e.g., melanoma in situ or so-called “slow-growing melanoma”21, 22) and detection of borderline tumors in populations with low melanoma mortality may not translate into an appreciable reduction in deaths. For example, high sensitivity for the early diagnosis of atypical intraepidermal melanocytic proliferation or melanoma in situ occurring on sun-damaged skin in older individuals may lead to more harm than good if it is associated with low specificity and innumerable biopsies of solar lentigines, junctional nevi, and lichen planus-like keratoses. We advocate for additional research to define quality metrics for skin cancer detection, and greater caution when interpreting NNB, particularly for individual HCPs.
Funding source:
This research is funded in part by a grant from the National Cancer Institute / National Institutes of Health (P30-CA008748).
Footnotes
Conflict of interest: The authors have no conflicts of interest to declare.
References:
- 1. Guy GP Jr, Machlin SR, Ekwueme DU, Yabroff KR. Prevalence and costs of skin cancer treatment in the U.S., 2002–2006 and 2007–2011. Am J Prev Med 2015;48:183–7.
- 2. Shahwan KT, Kimball AB. Should We Leave the Skin Biopsies to the Dermatologists? JAMA Dermatol 2016;152:371–2.
- 3. Nault A, Zhang C, Kim K, Saha S, Bennett DD, Xu YG. Biopsy Use in Skin Cancer Diagnosis: Comparing Dermatology Physicians and Advanced Practice Professionals. JAMA Dermatol 2015;151:899–902.
- 4. Privalle A, Havighurst T, Kim K, Bennett DD, Xu YG. Number of skin biopsies needed per malignancy: Comparing the use of skin biopsies among dermatologists and nondermatologist clinicians. J Am Acad Dermatol 2019.
- 5. Nelson KC, Swetter SM, Saboda K, Chen SC, Curiel-Lewandrowski C. Evaluation of the Number-Needed-to-Biopsy Metric for the Diagnosis of Cutaneous Melanoma: A Systematic Review and Meta-analysis. JAMA Dermatol 2019.
- 6. Argenziano G, Cerroni L, Zalaudek I, Staibano S, Hofmann-Wellenhof R, Arpaia N et al. Accuracy in melanoma detection: a 10-year multicenter survey. J Am Acad Dermatol 2012;67:54–9.
- 7. Anderson AM, Matsumoto M, Saul MI, Secrest AM, Ferris LK. Accuracy of Skin Cancer Diagnosis by Physician Assistants Compared With Dermatologists in a Large Health Care System. JAMA Dermatol 2018;154:569–73.
- 8. Marchetti MA, Codella NCF, Dusza SW, Gutman DA, Helba B, Kalloo A et al. Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images. J Am Acad Dermatol 2018;78:270–7.e1.
- 9. Marchetti MA, Liopyris K, Dusza SW, Codella NCF, Gutman DA, Helba B et al. Computer Algorithms Show Potential for Improving Dermatologists' Accuracy to Diagnose Cutaneous Melanoma: Results of ISIC 2017. J Am Acad Dermatol 2019.
- 10. Tschandl P, Codella N, Akay BN, Argenziano G, Braun RP, Cabo H et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol 2019;20:938–47.
- 11. Monheit G, Cognetta AB, Ferris L, Rabinovitz H, Gross K, Martini M et al. The performance of MelaFind: a prospective multicenter study. Arch Dermatol 2011;147:188–94.
- 12. Ferris LK, Harkes JA, Gilbert B, Winger DG, Golubets K, Akilov O et al. Computer-aided classification of melanocytic lesions using dermoscopic images. J Am Acad Dermatol 2015;73:769–76.
- 13. Flower A, McKenna JW, Upreti G. Validity and Reliability of GraphClick and DataThief III for Data Extraction. Behav Modif 2016;40:396–413.
- 14. Hamid RN, McGregor SP, Siegel DM, Feldman SR. Assessment of Provider Utilization Through Skin Biopsy Rates. Dermatol Surg 2019;45:1035–41.
- 15. Marchetti MA, Dusza SW, Halpern AC. A Closer Inspection of the Number Needed to Biopsy. JAMA Dermatol 2016;152:952–3.
- 16. Marghoob AA, Marchetti MA, Dusza SW. Performance of Dermatology Physician Assistants. JAMA Dermatol 2018;154:1229.
- 17. Goldsmith SM. Cost analysis suggests overemphasis on biopsy rate for melanoma diagnosis. J Am Acad Dermatol 2013;68:517–9.
- 18. Aires DJ, Wick J, Shaath TS, Rajpara AN, Patel V, Badawi AH et al. Economic Costs Avoided by Diagnosing Melanoma Six Months Earlier Justify >100 Benign Biopsies. J Drugs Dermatol 2016;15:527–32.
- 19. Watts CG, Cust AE, Menzies SW, Mann GJ, Morton RL. Cost-Effectiveness of Skin Surveillance Through a Specialized Clinic for Patients at High Risk of Melanoma. J Clin Oncol 2017;35:63–71.
- 20. U.S. Food and Drug Administration. PMA P090012: FDA Summary of Safety and Effectiveness Data. Available at: https://www.accessdata.fda.gov/cdrh_docs/pdf9/P090012b.pdf. Accessed December 3, 2019.
- 21. Argenziano G, Kittler H, Ferrara G, Rubegni P, Malvehy J, Puig S et al. Slow-growing melanoma: a dermoscopy follow-up study. Br J Dermatol 2010;162:267–73.
- 22. Terushkin V, Dusza SW, Scope A, Argenziano G, Bahadoran P, Cowell L et al. Changes observed in slow-growing melanomas during long-term dermoscopic monitoring. Br J Dermatol 2012;166:1213–20.
