. 2025 Jun 27;64(10):1870–1878. doi: 10.1111/ijd.17918

Cumulative Sum Analysis‐Integrated E‐Learning for Differentiation Between Basal Cell Carcinoma and Non‐Basal Cell Carcinoma on Optical Coherence Tomography: An Observational Cohort Study

Tom Wolswijk 1,2,, Patricia Joan Nelemans 3, Fieke Adan 1,2, Frank van Leersum 1,2, Daniel Kreiter 4, Thomas Adams 5, Sem van Dorsten 5, Klara Mosterd 1,2
PMCID: PMC12418905  PMID: 40579737

ABSTRACT

Background

The clinical implementation of optical coherence tomography (OCT) for diagnosing clinically equivocal lesions suspicious for basal cell carcinoma (BCC) is limited by an OCT assessor shortage. Cumulative sum (CUSUM) analysis enables monitoring of diagnostic performance during training. The objective was to evaluate whether CUSUM‐integrated e‐learning is suitable for training healthcare professionals in achieving and maintaining an acceptable error rate for differentiating BCC from non‐BCC lesions on OCT. Furthermore, we explored the diagnostic accuracy of high‐confidence BCC diagnoses by the newly trained assessors.

Methods

A CUSUM‐integrated e‐learning was developed. Trainee performance was monitored by CUSUM analysis. The number of OCT scans required to achieve and maintain a predefined acceptable error rate (percentage of incorrect diagnoses) was evaluated. Successfully trained assessors were asked to discriminate BCC from non‐BCC on 100 OCT scans for a subsequent diagnostic accuracy study. Histopathology served as the reference standard.

Results

Seventeen trainees successfully completed the training. Adequate performance was achieved and maintained after assessing a median of 385 scans (interquartile range [IQR]: 314–429). The pooled area under the curve (AUC) as measure for the ability to differentiate BCC from non‐BCC lesions was 0.852 (95% confidence interval [CI]: 0.833–0.870). Pooled specificity and sensitivity for a high‐confidence diagnosis were 95.4% (95% CI: 93.2–96.9) and 31.1% (95% CI: 24.2–39.0), respectively.

Conclusions

CUSUM‐integrated e‐learning was successfully applied to train healthcare professionals to differentiate BCC from non‐BCC on OCT. Trainees achieved a diagnostic accuracy suitable to start using OCT in clinical practice. This method can be applied to overcome the OCT assessor shortage and for training other medical skills.

Trial Registration

Clinicaltrials.gov: NCT05634421

Keywords: basal cell carcinoma, cumulative sum analysis, e‐learning, optical coherence tomography, training


Abbreviations

AUC, area under the curve; BCC, basal cell carcinoma; CI, confidence interval; CUSUM, cumulative sum; DOR, diagnostic odds ratio; iBCC, infiltrative basal cell carcinoma; IQR, interquartile range; MUMC+, Maastricht University Medical Center+; nBCC, nodular basal cell carcinoma; OCT, optical coherence tomography; ROC, receiver operating characteristic; sBCC, superficial basal cell carcinoma

1. Introduction

The incidence of basal cell carcinoma (BCC), the most common form of cancer, is increasing [1]. Recent studies demonstrated that imaging techniques such as optical coherence tomography (OCT) provide a safe and noninvasive alternative to invasive diagnostic procedures such as punch biopsy [2].

OCT is based on light interferometry and produces grayscale cross‐sectional images that visualize the skin and its adnexal structures to a depth of up to 1.5 mm (Supplement S1) [3]. OCT assessment has additional diagnostic value in lesions suspected of BCC in which the treating physician cannot establish a diagnosis clinically and biopsy is indicated [4]. Moreover, OCT has proven to be a safe and cost‐effective method to diagnose BCC based on predefined characteristics [5, 6]. A recent randomized non‐inferiority clinical trial concluded that OCT‐guided diagnosis and treatment of lesions suspected of BCC is non‐inferior to a biopsy‐guided strategy [2]. The idea is that OCT enables identification and classification of BCC without the need for biopsy. These BCCs can then be treated directly with either excision or noninvasive treatment, depending on the BCC subtype. To prevent incorrect treatment, it is important that the proportion of histopathologic non‐BCCs misclassified as BCC is kept low; thus, specificity to detect non‐BCC must be high. If a high‐confidence BCC diagnosis cannot be established, a biopsy is still needed for a definitive diagnosis. In the aforementioned trial, an experienced OCT assessor achieved a sensitivity of 85.5% for detecting BCC combined with a specificity of 94.6% for detecting non‐BCC lesions [2], making a strong case for clinical implementation of OCT.

Currently, implementation is still limited by an OCT assessor shortage, and a standardized training method is lacking. OCT experts are scarce and unable to train multiple trainees simultaneously. Therefore, the question arises whether an e‐learning program, enabling remote training, is effective. E‐learning has several advantages over face‐to‐face learning. First, it enables trainees to pace their own learning [7]. Second, content can be easily revisited at any desired time, enabling continuous learning [7]. Third, the internet facilitates widespread distribution of the e‐learning content to users worldwide [7]. Finally, e‐learning modules can evaluate results from OCT assessors during practice to determine whether increased knowledge or skills have been achieved [7]. Therefore, an e‐learning program with integrated skills assessment can be valuable in overcoming the shortage of OCT assessors and for ensuring acceptable diagnostic performance.

The performance of OCT trainees can be monitored using cumulative sum (CUSUM) analysis, which was first described by Page in 1954 [8]. CUSUM analysis is a sequential probability ratio test that has been used to monitor interventional and diagnostic skills in the medical field [9, 10, 11, 12]. A previous study from our group illustrated how CUSUM analysis can be used to visualize a learning curve of trainees in differentiating BCC from non‐BCC on OCT [12]. However, in this previous study, CUSUM analysis was not used for monitoring during training but was performed on data from OCT assessors who had already finished training.

The objective of this study was to evaluate an online training program for OCT assessment with integrated continuous monitoring of diagnostic performance by CUSUM analysis, aimed at training healthcare professionals to achieve and maintain an acceptable error rate in differentiating BCC from non‐BCC on OCT.

2. Materials and Methods

Five categories of healthcare professionals were recruited to participate in this study: dermatologists, residents, research physicians, nurses, and medical students. Healthcare professionals with previous experience in OCT were excluded from participation.

2.1. CUSUM‐Integrated E‐Learning

The software for the CUSUM‐integrated e‐learning was developed at the Department of Dermatology, Maastricht University Medical Center+ (MUMC+), Maastricht, the Netherlands, in collaboration with GoCode, Maastricht, the Netherlands. Scans were obtained from a registry containing OCT scans of histopathologically confirmed lesions clinically suspicious for keratinocyte cancer or (pre)malignancy, for which there was an indication for biopsy because the lesion could not be diagnosed clinically. The scans were made with a VivoSight multi‐beam swept‐source frequency‐domain OCT scanner (Michelson Diagnostics; lateral resolution < 7.5 μm, axial resolution < 5 μm; depth of focus 1.0 mm; scan area 6 × 6 mm). The histopathologic diagnosis served as the reference standard and was determined on a 3 mm punch biopsy of the lesion by experienced dermatopathologists blinded to the OCT scans and OCT test results. The trainees were blinded to histopathology and were provided with clinical photographs of the lesions taken by a medical photographer (Nikon D750).

The theoretical content of the e‐learning program was developed by three OCT experts (TW, FA, and KM), and the educational design was approved by an educational expert (SH). The theoretical module comprised 26 chapters educating trainees on systematic OCT assessment, BCC characteristics as described by Hussain et al. [5], and common pitfalls in diagnosing BCC on OCT [13]. Trainees could freely navigate through chapters and revisit content at any time. The digital environment is a progressive web application built using Laravel and Vue.js frameworks.

After the theoretical module, hands‐on training for OCT assessment was offered in the same digital environment. A total of 600 clinical cases were included, each containing an OCT scan with 120 cross‐sectional images and a clinical photograph of the scanned lesion. The first 50 cases were rather easy to recognize as BCC or non‐BCC lesions due to the more obvious presence or absence of characteristic BCC features. Trainees were asked to express their suspicion for BCC on a 5‐point confidence scale (Table 1). A confidence score ≥ 2 was considered a positive OCT test result for BCC presence, whereas a confidence score < 2 was considered a negative OCT test result for BCC presence. After each assessment, trainees received feedback on the histopathologic diagnosis.

TABLE 1.

5‐point confidence scale for suspicion of basal cell carcinoma on optical coherence tomography assessment.

Confidence score Definition
4 Certain of BCC presence and BCC subtype
3 Certain of BCC presence, uncertain of BCC subtype
2 High suspicion for BCC
1 Low suspicion for BCC
0 No suspicion for BCC

Abbreviation: BCC, basal cell carcinoma.

2.2. Monitoring of Performance

CUSUM analysis was integrated in the e‐learning software. After each attempt of a trainee, a sum score is calculated. The CUSUM score is plotted against the index number of the attempt, and the plot visualizes changes in performance over time. A continuing descending curve indicates that the failure rate is lower than the target value, implying an acceptable diagnostic error rate, whereas an ascending curve indicates the opposite [10]. The method can be used to test whether a trainee can achieve and maintain acceptable performance [10], and to assess the number of attempts that are needed to achieve competence. Trainees could see their CUSUM‐graph throughout the training to evaluate their performance (Figure 1). Correct diagnoses (true positive and true negative results) were considered successes, whereas wrong diagnoses (false positive and false negative results) were considered failures.

FIGURE 1.


An example of a cumulative sum (CUSUM) learning curve as seen by a trainee who successfully completed the training. The Y‐axis represents the CUSUM score; the X‐axis displays the index number of attempts. Horizontal lines represent the acceptable and unacceptable boundaries. Each success subtracts a value (S) from the CUSUM score, whereas each failure adds a value (1 − S). The rising trend over the first three quarters of the graph indicates that the failure rate was higher than the target value, causing the curve to traverse unacceptable boundaries and reset the 0‐line. Around attempt 225, the CUSUM score starts a declining trend, signaling that the failure rate is lower than the target value. This declining trend persists, indicating the trainee's ability to maintain an acceptable error rate over time.

In this study, the diagnostic error rate was defined as the sum of false negative and false positive OCT results as a proportion of the total number of cases. A diagnostic error rate of 16% was considered acceptable based on the failure rate of two OCT experts with 23 and 8 years of experience, respectively, who evaluated 100 cases from our registry [12]. The unacceptable error rate was set at 25%, representing the error rate of clinical examination of the same cases [12]. The number of OCT scans and the time needed to achieve and maintain acceptable performance were recorded.

2.3. Diagnostic Accuracy After Training

Trainees who successfully completed the CUSUM‐integrated e‐learning were considered novice assessors and were asked to participate in a diagnostic accuracy study to differentiate BCC from non‐BCC on OCT. Assessors were presented with 100 OCT scans and clinical photographs of histopathologically confirmed lesions (50 BCC, 50 non‐BCC). Included were lesions clinically suspicious for BCC, wherein the treating physician could not establish a definitive diagnosis and biopsy was necessary. Excluded were clinically evident BCC and cases previously used in the e‐learning.

Assessors were blinded to histopathology and expressed their diagnostic certainty for BCC presence on a 5‐point confidence scale (Table 1). If applicable, they reported the suspected BCC subtype. A high‐confidence diagnosis (confidence score 4) of BCC was considered a positive test result because, in these cases, biopsy could theoretically be omitted, whereas lower certainty (confidence score ≤ 3) was considered a negative test result because, in clinical practice, these lesions would undergo biopsy for a definitive diagnosis. For BCC subtyping, a distinction was made between superficial BCC (sBCC; topical treatment optional) and nodular/infiltrative BCC (nBCC/iBCC; excision required). An OCT expert independently evaluated the same dataset to serve as a benchmark for comparison. Outcome measures were diagnostic parameters for the ability to discriminate between BCC and non‐BCC lesions and between superficial and other BCC subtypes.

2.4. Statistical Analysis

During training, a CUSUM score was calculated after each evaluated OCT scan. For each failure, a score (1‐S) is added, and for each success, a score (S) is subtracted (Supplement S1). Threshold boundaries are used to conclude whether a trainee achieved acceptable or unacceptable performance. These boundaries depend on the setting of the acceptable (p0) and unacceptable error rate (p1) and the type I (α) and type II (β) error, which were conventionally set at 0.1 (Supplement S1).

When the running CUSUM score exceeds the lower boundary (h1), it can be concluded that performance over a sequence of assessments is acceptable. When the score exceeds the upper boundary (h0), it signals unacceptable performance. Traversing an unacceptable boundary resets the 0‐line, thereby initiating a new series of assessments and adaptively replacing the lower (black) and upper (red) boundary (Figure 1). Only after two acceptable boundaries (green) were traversed consecutively, without an unacceptable boundary being traversed in between, was it concluded that acceptable performance could be maintained; such trainees were considered novice assessors.
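The scoring and boundary scheme above can be sketched in a few lines. The formulas below follow a common SPRT‐based CUSUM parameterization (in the spirit of Williams et al. [10]); the exact constants and the reset behavior implemented in the authors' software are assumptions here, and the acceptable/unacceptable rates and α = β = 0.1 are taken from the text.

```python
import math

def cusum_params(p0=0.16, p1=0.25, alpha=0.10, beta=0.10):
    # p0 = acceptable error rate, p1 = unacceptable error rate (from the text).
    # Boundary formulas are a common SPRT parameterization (an assumption).
    P = math.log(p1 / p0)
    Q = math.log((1 - p0) / (1 - p1))
    s = Q / (P + Q)                     # amount subtracted per success
    a = math.log((1 - beta) / alpha)
    b = math.log((1 - alpha) / beta)
    h1 = -a / (P + Q)                   # lower ("acceptable") boundary
    h0 = b / (P + Q)                    # upper ("unacceptable") boundary
    return s, h0, h1

def cusum_curve(outcomes, **kw):
    """outcomes: iterable of bools, True = correct diagnosis (success)."""
    s, h0, h1 = cusum_params(**kw)
    score, path = 0.0, []
    for correct in outcomes:
        score += -s if correct else (1.0 - s)
        if score >= h0:     # unacceptable boundary traversed:
            score = 0.0     # simplified reset of the 0-line (new series)
        path.append(score)
    return path, h0, h1
```

With the study's settings, s ≈ 0.20, so an error rate below 16% produces the descending curve described in the text, while sustained failures drive the score toward h0 and trigger a reset.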

Overall diagnostic accuracy in the subsequent study was expressed as the area under the curve (AUC) with a 95% confidence interval (CI). Diagnostic accuracy of a high‐confidence diagnosis was expressed as sensitivity, specificity, and diagnostic odds ratio (DOR) with a 95% CI. For BCC subtyping, sensitivity was defined as the proportion of patients with an nBCC/iBCC correctly identified on OCT, whereas specificity was defined as the proportion of patients with an sBCC correctly identified on OCT. The most aggressive subtype on OCT and histopathology was considered the predicted and true subtype, respectively.

Diagnostic accuracy parameters of individual healthcare professionals were pooled using methods that account for the clustering of observations within observers. Adjusted variances of the pooled sensitivity and specificity were estimated using the sandwich estimator, and the pooled DOR was estimated using mixed logistic regression.
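As a concrete check, the pooled point estimates in Table 2 follow directly from the pooled 2×2 counts; a minimal sketch (confidence intervals, which require the cluster‐adjusted variance estimators described above, are omitted):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and diagnostic odds ratio from 2x2 counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Pooled counts over all 16 novice assessors (Table 2, "All trainees" row):
# 249/800 high-confidence true positives, 763/800 true negatives.
sens, spec, dor = diagnostic_metrics(tp=249, fn=800 - 249, tn=763, fp=800 - 763)
# sens = 0.311..., spec = 0.954 (rounded), dor = 9.32 (rounded)
```

These values reproduce the reported pooled sensitivity of 31.1%, specificity of 95.4%, and DOR of 9.32.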

3. Results

Forty trainees participated in the CUSUM‐integrated e‐learning program between November 2022 and September 2023. Of the 40 participants, 17 successfully completed the training by October 2023 and became novice OCT assessors. Achieving and maintaining adequate performance required a median of 385 scans (interquartile range [IQR]: 314–429), which took a median of 11 h of training (IQR: 7–21) (Table 2). Two participants assessed all 600 cases, but did not achieve and maintain acceptable performance. Participant characteristics are shown in Table 3. The lesions on the 600 OCT scans used in the e‐learning program represented a broad variety of 50 histopathologic diagnoses (Supplement S2).

TABLE 2.

Duration of training and diagnostic accuracy of a high‐confidence basal cell carcinoma diagnosis achieved by 16 trained novice optical coherence tomography assessors.

Assessor | Hours, median (IQR) | No. scans needed, median (IQR) | AUC | 95% CI | Sensitivity (% (x/n)) | 95% CI | Specificity (% (x/n)) | 95% CI | DOR | 95% CI
AUC reflects overall diagnostic performance; sensitivity, specificity, and DOR refer to a high‐confidence BCC diagnosis on OCT.
Dermatologist 1 11 347 0.860 0.784–0.936 44.0 (22/50) 34.8–49.3 92.0 (46/50) 82.8–97.3 9.036 2.570–34.814
Dermatologist 2 a 9 n/a n/a n/a n/a n/a n/a n/a n/a n/a
Dermatologist 3 a 18 n/a n/a n/a n/a n/a n/a n/a n/a n/a
All dermatologists 11 (9–18) 347 0.860 0.784–0.936 44.0 (22/50) 34.8–49.3 92.0 (46/50) 82.8–97.3 9.036 2.570–34.814
Nurse 1 44 395 0.878 0.818–0.939 18.0 (9/50) 11.2–19.9 98.0 (49/50) 91.2–99.9 10.765 1.294–236.179
Nurse 2 21 593 0.896 0.833–0.959 44.0 (22/50) 35.9–45.9 98.0 (49/50) 89.9–99.9 38.500 5.012–808.216
All nurses 33 (21–44) 494 (395–593) 0.888 0.842–0.933 31.0 (31/100) 22.9–40.4 98.0 (98/100) 86.9–99.7 22.010 5.098–95.017
Resident 1 3 108 0.846 0.771–0.922 44.0 (22/50) 34.8–49.3 92.0 (46/50) 82.2–97.3 9.036 2.570–34.814
Resident 2 9 385 0.835 0.755–0.915 16.0 (8/50) 9.4–17.9 98.0 (49/50) 91.4–99.9 9.333 1.102–207.171
Resident 3 10 401 0.839 0.760–0.918 26.0 (13/50) 18.1–29.3 96.0 (48/50) 88.1–99.3 8.432 1.645–57.882
Resident 4 b 4 282 n/a n/a n/a n/a n/a n/a n/a n/a
All residents 7 (3–10) 334 (152–397) 0.834 0.788–0.880 28.7 (43/150) 21.6–36.9 95.3 (143/150) 90.8–97.7 8.210 3.554–18.963
Research physician 1 2 103 0.935 0.891–0.978 20.0 (10/50) 13.4–20.0 100 (50/50) 93.4–100 Inf 2.216‐Inf
Research physician 2 10 539 0.863 0.791–0.935 42.0 (21/50) 33.5–45.3 96.0 (48/50) 87.5–99.3 17.379 3.522–116.087
Research physician 3 18 506 0.861 0.786–0.937 42.0 (21/50) 32.6–48.0 90.0 (45/50) 80.6–96.0 6.517 2.015–22.403
Research physician 4 12 385 0.904 0.845–0.963 22.0 (11/50) 14.8–23.9 98.0 (49/50) 90.8–99.9 13.821 1.709–298.770
All research physicians 11 (4–17) 446 (174–531) 0.888 0.856–0.921 31.5 (63/200) 23.5–40.8 96.0 (192/200) 91.2–98.2 11.037 5.122–23.779
Medical student 1 10 390 0.893 0.836–0.945 14.0 (7/50) 8.1–14.0 100 (50/50) 94.1–100 Inf 1.385‐Inf
Medical student 2 11 387 0.827 0.748–0.905 8.0 (4/50) 3.1–9.9 98.0 (49/50) 93.1–99.9 4.261 0.421–103.968
Medical student 3 26 346 0.844 0.768–0.919 44.0 (22/50) 35.1–48.4 94.0 (47/50) 85.9–97.4 12.310 3.084–57.162
Medical student 4 12 374 0.732 0.640–0.823 20.0 (10/50) 12.6–23.3 96.0 (48/50) 88.6–99.3 6.000 1.129–42.278
Medical student 5 5 103 0.820 0.738–0.902 32.0 (16/50) 23.2–37.3 92.0 (46/50) 83.2–97.3 5.412 1.505–21.223
Medical student 6 20 456 0.867 0.796–0.938 62.0 (31/50) 52.2–68.6 88.0 (44/50) 78.2–94.6 11.965 3.908–38.487
All medical students 12 (9–22) 381 (285–407) 0.827 0.795–0.859 30.0 (90/300) 24.4–36.3 94.7 (283/300) 90.0–97.2 7.607 4.341–13.329
All trainees 11 (7–19) 385 (314–429) 0.852 0.833–0.870 31.1 (249/800) 25.6–37.2 95.4 (763/800) 92.2–97.3 9.319 6.487–13.388
Expert assessor n/a n/a 0.948 0.901–0.995 74.0 (37/50) 66.7–74.0 100 (50/50) 92.7–100 Inf 25.35‐inf

Abbreviations: 95% CI, 95% confidence interval; AUC, area under the curve; DOR, diagnostic odds ratio; Inf, infinite; IQR, interquartile range; n/a, not applicable.

a

Did not participate in diagnostic accuracy study because an acceptable error rate was not achieved and maintained during training.

b

Did not participate in the diagnostic accuracy study but achieved and maintained an acceptable error rate during training.

TABLE 3.

Characteristics of e‐learning participants.

All participants (n = 40)
Age median (IQR) 28 (24–35)
Category n (%)
Dermatologist 8 (20)
Resident/clinical researcher 18 (45)
Nurse 3 (8)
Medical student 11 (28)
Years of dermatologic experience median (IQR) 2 (0–6)
Mohs experience n (%)
Yes 8 (20)
No 32 (80)
Novice OCT assessors (n = 17)
All (n = 17)
Age median (IQR) 26
Years of dermatologic experience median (IQR) 1 (0–4)
Mohs experience n (%)
Yes 1 (6)
No 16 (94)
Dermatologist (n = 1)
Age 35
Years of dermatologic experience 10
Mohs experience n (%)
Yes 0 (0)
No 1 (100)
Nurse (n = 2)
Age median 33
Years of dermatologic experience mean (range) 9 (3–14)
Resident (n = 4)
Age median (IQR) 30 (26–31)
Years of dermatologic experience median (IQR) 3 (1–6)
Mohs experience n (%)
Yes 1 (25)
No 3 (75)
Research physician (n = 4)
Age median (IQR) 25 (24–27)
Years of dermatologic experience mean (range) 1 (0–2)
Medical student (n = 6)
Age median (IQR) 23 (23–25)
Years of dermatologic experience 0 (0–0)

Abbreviations: IQR, interquartile range; OCT, optical coherence tomography.

Sixteen novice OCT assessors participated in the diagnostic accuracy study. The lesions on the 100 OCT scans in the diagnostic accuracy study represented 18 histopathologic diagnoses (Supplement S3).

Individual assessors achieved AUCs ranging from 0.732 to 0.935. The pooled AUC for all assessors was 0.852 (95% CI: 0.833–0.870) (Table 2).

Table 2 shows the diagnostic parameters for a high‐confidence BCC diagnosis on OCT. The pooled DOR, which represents the ability to differentiate BCC from non‐BCC, was 9.32 (95% CI: 6.49–13.39). The pooled specificity was 95.4% (95% CI: 92.2%–97.3%), indicating that novice assessors are well able to identify histopathologic non‐BCC lesions. The pooled sensitivity was 31.1% (95% CI: 25.6%–37.2%), indicating that in approximately 3 out of 10 BCC patients, BCC was detected on OCT with high confidence. When the cut‐off point for a positive test result was changed from 4 to ≥ 3, sensitivity increased from 31.1% to 58.8%, whereas specificity decreased only slightly, from 95.4% to 91.5%. Table 2 also provides the diagnostic parameters of the expert assessor for comparison with the results of the novice assessors. The expert's sensitivity for BCC detection of 74.0% (95% CI: 66.7–74.0) was higher than the pooled sensitivity of 31.1% achieved by novice assessors, at a comparable specificity. This indicates that an expert assessor may be able to omit more biopsies when assessing OCT scans.
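The cut‐off trade‐off can be illustrated with the pooled counts. The counts at cut‐off 4 come from Table 2; the counts at cut‐off ≥ 3 (470/800 and 732/800) are back‐calculated here from the reported 58.8% and 91.5% and are therefore an assumption:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Cut-off 4 (high-confidence diagnoses only), pooled over all novices:
s4, sp4 = sens_spec(tp=249, fn=551, tn=763, fp=37)   # ~31.1% / ~95.4%

# Cut-off >= 3 (certain of BCC, possibly uncertain of subtype);
# counts back-calculated from the reported percentages (assumption):
s3, sp3 = sens_spec(tp=470, fn=330, tn=732, fp=68)   # ~58.8% / ~91.5%
```

Lowering the cut‐off roughly doubles sensitivity at only a small cost in specificity, which is the trade‐off discussed in the text.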

Diagnostic parameters for BCC subtyping are shown in Table 4. The pooled DOR was 43.56 (95% CI: 19.02–99.73), indicating good ability to discriminate between BCC subtypes. The pooled sensitivity to detect nBCC/iBCC was 87.5% (95% CI: 72.8%–94.8%). The pooled specificity to detect sBCC was 86.2% (95% CI: 62.3%–95.9%). Misclassification of histopathologic nBCC/iBCC as sBCC on OCT should be prevented because this may result in undertreatment, especially if iBCC is treated topically. In this study, nBCC/iBCC was misclassified as sBCC on OCT 22 times. Only one of these misclassifications involved a histopathologic iBCC; the remaining 21 involved histopathologic nBCC.

TABLE 4.

Diagnostic accuracy for basal cell carcinoma subtyping achieved by 16 trained novice optical coherence tomography assessors.

Assessor | Sensitivity for nBCC/iBCC (% (x/n)) | 95% CI | Specificity for sBCC (% (x/n)) | 95% CI | DOR | 95% CI
Dermatologist 1 94.1 (16/17) 80.9–99.5 80.0 (4/5) 35.1–98.2 64.000 2.288–10488.930
Nurse 1 83.3 (5/6) 57.2–98.5 66.7 (2/3) 14.4–97.0 10.000 0.226–2139.092
Nurse 2 94.1 (16/17) 82.3–99.7 60.0 (3/5) 19.7–78.9 24.000 1.134–1149.725
All nurses 91.3 (21/23) 70.1–97.9 62.5 (5/8) 27.3–88.1 17.499 2.281–134.278
Resident 1 85.0 (17/20) 85.0–92.7 0 (0/2) 0–77.0 0 0–42.376
Resident 2 85.7 (6/7) 72.2–85.7 100 (1/1) 5.6–100 Inf 0.155‐Inf
Resident 3 88.9 (8/9) 65.5–88.9 100 (4/4) 47.4–100 Inf 1.714‐Inf
All residents 86.1 (31/36) 66.6–95.1 71.4 (5/7) 27.3–94.3 15.499 2.336–102.839
Research physician 1 75.0 (3/4) 28.0–75.0 100 (6/6) 68.7–100 Inf 0.851‐Inf
Research physician 2 71.4 (10/14) 52.7–78.2 85.7 (6/7) 48.4–99.2 15.000 1.045–465.757
Research physician 3 88.9 (16/18) 78.0–88.9 100 (3/3) 34.6–100 Inf 1.874‐Inf
Research physician 4 66.7 (6/9) 49.3–66.7 100 (2/2) 21.7–100 Inf 0.270‐Inf
All research physicians 77.8 (35/45) 55.9–90.6 94.4 (17/18) 63.8–99.4 59.475 7.030–503.162
Medical student 1 100 (5/5) 69.6–100 100 (2/2) 24.0–100 Inf 0.722‐Inf
Medical student 2 100 (2/2) 30.1–100 100 (2/2) 30.1–100 Inf 0.185‐Inf
Medical student 3 84.6 (11/13) 64.7–84.6 100 (9/9) 71.2–100 Inf 4.531‐Inf
Medical student 4 100 (7/7) 73.7–100 100 (3/3) 38.7–100 Inf 1.772‐Inf
Medical student 5 100 (13/13) 85.6–100 100 (3/3) 37.6–100 Inf 3.591‐Inf
Medical student 6 87.0 (20/23) 74.9–93.8 75.0 (6/8) 40.5–94.8 20.000 2.033–276.476
All medical students 92.1 (58/63) 78.9–97.3 92.6 (25/27) 72.0–98.4 144.875 26.331–797.095
All trainees 87.5 (161/184) 72.8–94.8 86.2 (56/65) 62.3–95.9 43.556 19.022–99.730
Expert assessor 80.8 (21/26) 68.8–84.4 90.9 (10/11) 62.7–99.5 42.00 3.72–1117.17

Abbreviations: 95% CI, 95% confidence interval; AUC, area under the curve; BCC, basal cell carcinoma; DOR, diagnostic odds ratio; Inf, infinite; IQR, interquartile range; nBCC/iBCC, nodular/infiltrative basal cell carcinoma; sBCC, superficial basal cell carcinoma.

4. Discussion

This study demonstrates that CUSUM‐integrated e‐learning is suitable for the simultaneous, remote training of aspiring OCT assessors to achieve and maintain an acceptable error rate for differentiating BCC from non‐BCC on OCT. A median of 385 OCT scans and 11 h of training were required to achieve and maintain acceptable performance. In a subsequent diagnostic study, the participating novice assessors achieved good overall diagnostic accuracy, as indicated by a pooled AUC of 0.852. Pooled specificity of a high‐confidence diagnosis for non‐BCC detection was 95.4%, ensuring that most non‐BCC lesions are correctly referred for biopsy, that misclassification as BCC is rare, and that patient safety is safeguarded. However, novice assessors were able to establish a high‐confidence BCC diagnosis and differentiate between BCC subtypes in only one‐third of BCC patients with clinically equivocal lesions.

The rather low sensitivity indicates that novice assessors still lack diagnostic confidence, making them hesitant to classify BCC lesions with high certainty. When the cut‐off point for a positive OCT test result was changed from score 4 to score ≥ 3, where assessors are still certain of BCC presence but uncertain of its subtype, sensitivity increased from 31.1% to 58.8% with only a slight decrease in the corresponding specificity, from 95.4% to 91.5%. Hence, completion of the e‐learning should be viewed as a starting point for novice assessors to safely begin working with OCT. To gain more diagnostic confidence and improve subtyping, additional practice is still required, preferably supervised by an expert assessor [14].

The reported diagnostic parameters should be placed in context because they were achieved in an artificial setting. A limitation of the e‐learning program is that evaluation of scans in conjunction with photographs of the lesion does not fully reflect real‐world practice, where OCT scans are combined with a patient history, dermoscopy, and clinical examination to establish a diagnosis. From this perspective, the e‐learning program provides a good starting point for novice assessors to expand their experience in clinical practice.

A 2021 international consensus statement [15] suggests dermatologists should preferably assess OCT scans. However, our study shows comparable diagnostic performance by non‐dermatologists. Outsourcing OCT assessment to non‐dermatologists may offer cost savings and alleviate the workload of dermatologists.

Of the 40 participants enrolled, only 17 successfully completed the e‐learning. Ikenwilo & Skåtum reported lack of time, insufficient clinical cover, and remoteness from educational centers as common barriers to healthcare professional development. Although the latter two are mitigated because e‐learning enables self‐paced, remote learning, the training duration remained substantial. Motivation for this e‐learning may increase once accreditation is facilitated and OCT becomes widely available and implemented in routine practice.

This study has some limitations. The skill assessment is limited by a small sample size, with 16 novice assessors from five professional categories, and only three of them were dermatologists. To our knowledge, no prior studies have evaluated OCT training in larger groups. Although CUSUM allows assessors to keep practicing after an acceptable error rate is achieved and maintained, no assumptions can be made regarding long‐term skill retention in the absence of continued practice.

To our knowledge, this is the first e‐learning framework with integrated CUSUM‐analysis, effective for training healthcare professionals in differentiating BCC from non‐BCC on OCT. However, achieving and maintaining acceptable performance does not guarantee future acceptable performance. Learning is a life‐long commitment in medicine and is characterized by continuous competency development and assessment. E‐learning with integrated CUSUM‐analysis is a unique approach enabling trainees to continue training after completion, thus facilitating continuous competency monitoring. This dermatologic study paves the way for its adoption in training for other diagnostic and interventional skills among various medical specialties.

In conclusion, CUSUM‐integrated e‐learning is effective for training healthcare professionals in differentiating BCC from non‐BCC on OCT. Upon successful completion, specificity to detect non‐BCC lesions of novice OCT assessors was 95.4%, thus ensuring patient safety. In 31.1% of BCC patients with clinically equivocal lesions, novice OCT assessors can detect and subtype BCC with high confidence. Moreover, BCC could be detected with high confidence, albeit without certainty of its subtype, in 58.8% of BCC lesions. CUSUM‐integrated e‐learning may therefore play a pivotal role in overcoming the OCT assessor shortage and provide a starting point for novice OCT assessors to use OCT in clinical practice.

Disclosure

IRB approval status: approved (METC: 2022‐3253).

Conflicts of Interest

The authors declare no conflicts of interest.

Supporting information

Supplementary Material

IJD-64-1870-s001.zip (505.9KB, zip)

Acknowledgments

The authors would like to thank Steven Hornstra (SH), educational specialist at MUMC+, for his involvement in the educational design of our e‐learning module and Sander van Kuijk, clinical epidemiologist at KEMTA MUMC+, for his assistance with the statistical analyses. Declaration of generative AI and AI‐assisted technologies in the writing process: During the preparation of this work the author(s) used DeepL Write and ChatGPT in order to optimize the readability of the text. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

Funding: The authors received no specific funding for this work.

Data Availability Statement

The data underlying this article will be shared upon reasonable request to the corresponding or senior author.

References

  • 1. Peris K., Fargnoli M. C., Garbe C., et al., “Diagnosis and Treatment of Basal Cell Carcinoma: European Consensus–Based Interdisciplinary Guidelines,” European Journal of Cancer 118 (2019): 10–34. [DOI] [PubMed] [Google Scholar]
  • 2. Adan F., Nelemans P. J., Essers B. A. B., et al., “Optical Coherence Tomography Versus Punch Biopsy for Diagnosis of Basal Cell Carcinoma: A Multicentre, Randomised, Non‐Inferiority Trial,” Lancet Oncology 23 (2022): 1087–1096. [DOI] [PubMed] [Google Scholar]
  • 3. Welzel J., “Optical Coherence Tomography in Dermatology: A Review,” Skin Research and Technology Review article 7, no. 1 (2001): 1–9. [DOI] [PubMed] [Google Scholar]
  • 4. Sinx K. A., van Loo E., Tonk E. H., et al., “Optical Coherence Tomography for Noninvasive Diagnosis and Subtyping of Basal Cell Carcinoma: A Prospective Cohort Study,” Journal of Investigative Dermatology 140, no. 10 (2020): 1962–1967. [DOI] [PubMed] [Google Scholar]
  • 5. Hussain A. A., Themstrup L., and Jemec G. B. E., “Optical Coherence Tomography in the Diagnosis of Basal Cell Carcinoma,” Archives of Dermatological Research 307, no. 1 (2015): 1–10. [DOI] [PubMed] [Google Scholar]
  • 6. Adan F., Mosterd K., Kelleners‐Smeets N. W., and Nelemans P. J., “Diagnostic Value of Optical Coherence Tomography Image Features for Diagnosis of Basal Cell Carcinoma,” Acta Dermato‐Venereologica 101, no. 11 (2021): 421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Ruiz J. G., Mintzer M. J., and Leipzig R. M., “The Impact of e‐Learning in Medical Education,” Academic Medicine 81, no. 3 (2006): 207–212. [DOI] [PubMed] [Google Scholar]
  • 8. Page E. S., “Continuous Inspection Schemes,” Biometrika 41, no. 1/2 (1954): 100–115. [Google Scholar]
  • 9. Bolsin S. and Colson M., “The Use of the Cusum Technique in the Assessment of Trainee Competence in New Procedures,” International Journal for Quality in Health Care 12, no. 5 (2000): 433–438. [DOI] [PubMed] [Google Scholar]
  • 10. Williams S. M., Parry B. R., and Schlup M., “Quality Control: An Application of the Cusum,” BMJ (Clinical Research Ed.) 304, no. 6838 (1992): 1359. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Eltoum I., Chhieng D., Jhala D., et al., “Cumulative Sum Procedure in Evaluation of EUS‐Guided FNA Cytology: The Learning Curve and Diagnostic Performance Beyond Sensitivity and Specificity,” Cytopathology 18, no. 3 (2007): 143–150. [DOI] [PubMed] [Google Scholar]
  • 12. van Loo E., Sinx K. A., Welzel J., et al., “Cumulative Sum Analysis for the Learning Curve of Optical Coherence Tomography Assisted Diagnosis of Basal Cell Carcinoma,” Acta Dermato‐Venereologica 100, no. 12 (2020): adv00343. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Wolswijk T., Nelemans P. J., Adan F., Abdul Hamid M., and Mosterd K., “Pitfalls for Differentiating Basal Cell Carcinoma From Non‐Basal Cell Carcinoma on Optical Coherence Tomography: A Clinical Series,” Journal of Dermatology 51 (2023): 40–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Wolswijk T., Adan F., Nelemans P. J., and Mosterd K., “A Cohort Study on Detection and Subtyping of Basal Cell Carcinoma With Optical Coherence Tomography: The Additional Value of Distant Diagnosis by an Expert,” Journal of the American Academy of Dermatology 84, no. 4 (2022): 871–872. [DOI] [PubMed] [Google Scholar]
  • 15. Fuchs C. S. K., Ortner V. K., Mogensen M., et al., “2021 International Consensus Statement on Optical Coherence Tomography for Basal Cell Carcinoma: Image Characteristics, Terminology and Educational Needs,” Journal of the European Academy of Dermatology and Venereology 36, no. 6 (2022): 772–778. [DOI] [PubMed] [Google Scholar]



Articles from International Journal of Dermatology are provided here courtesy of Wiley
