Abstract
Background
Early accurate detection of all skin cancer types is essential to guide appropriate management and to improve morbidity and survival. Melanoma and squamous cell carcinoma (SCC) are high‐risk skin cancers which have the potential to metastasise and ultimately lead to death, whereas basal cell carcinoma (BCC) is usually localised with potential to infiltrate and damage surrounding tissue. Anxiety around missing early curable cases needs to be balanced against inappropriate referral and unnecessary excision of benign lesions. Teledermatology provides a way for generalist clinicians to access the opinion of a specialist dermatologist for skin lesions that they consider to be suspicious without referring the patients through the normal referral pathway. Teledermatology consultations can be 'store‐and‐forward' with electronic digital images of a lesion sent to a dermatologist for review at a later time, or can be live and interactive consultations using videoconferencing to connect the patient, referrer and dermatologist in real time.
Objectives
To determine the diagnostic accuracy of teledermatology for the detection of any skin cancer (melanoma, BCC or cutaneous squamous cell carcinoma (cSCC)) in adults, and to compare its accuracy with that of in‐person diagnosis.
Search methods
We undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials, MEDLINE, Embase, CINAHL, CPCI, Zetoc, Science Citation Index, US National Institutes of Health Ongoing Trials Register, NIHR Clinical Research Network Portfolio Database and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles.
Selection criteria
Studies evaluating skin cancer diagnosis for teledermatology alone, or in comparison with face‐to‐face diagnosis by a specialist clinician, compared with a reference standard of histological confirmation or clinical follow‐up and expert opinion. We also included studies evaluating the referral accuracy of teledermatology compared with a reference standard of face‐to‐face diagnosis by a specialist clinician.
Data collection and analysis
Two review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS‐2). We contacted authors of included studies where information related to the target condition of any skin cancer was missing. Data permitting, we estimated summary sensitivities and specificities using the bivariate hierarchical model. Due to the scarcity of data, we undertook no covariate investigations for this review. For illustrative purposes, we plotted estimates of sensitivity and specificity on coupled forest plots for each diagnostic threshold and target condition under consideration.
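For reference, the bivariate hierarchical model is typically specified along the following lines (a minimal sketch of the standard parameterisation, with logit‐transformed study‐level sensitivities and specificities assumed to follow a bivariate normal distribution; the exact specification used in the analysis may differ in detail):

TP_i \sim \mathrm{Binomial}(n_{1i},\, se_i), \qquad TN_i \sim \mathrm{Binomial}(n_{2i},\, sp_i)

\begin{pmatrix} \mathrm{logit}(se_i) \\ \mathrm{logit}(sp_i) \end{pmatrix} \sim N\!\left( \begin{pmatrix} \mu_{se} \\ \mu_{sp} \end{pmatrix},\; \begin{pmatrix} \sigma^2_{se} & \sigma_{se,sp} \\ \sigma_{se,sp} & \sigma^2_{sp} \end{pmatrix} \right)

where TP_i and TN_i are the true‐positive and true‐negative counts in study i, n_{1i} and n_{2i} are the numbers of malignant and benign lesions, and summary sensitivity and specificity are obtained as logit^{-1}(\mu_{se}) and logit^{-1}(\mu_{sp}).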
Main results
The review included 22 studies: 16 studies reporting diagnostic accuracy data for 4057 lesions and 879 malignant cases, and six studies reporting referral accuracy data for 1449 lesions and 270 'positive' cases as determined by the reference standard face‐to‐face decision. Methodological quality was variable, with poor reporting hindering assessment. The overall risk of bias was high or unclear for participant selection, reference standard, and participant flow and timing in at least half of all studies; the majority were at low risk of bias for the index test. The applicability of study findings was of high or unclear concern for most studies in all domains assessed, due to the recruitment of participants from secondary care settings or specialist clinics rather than from primary or community‐based settings in which teledermatology is more likely to be used, and due to the acquisition of lesion images by dermatologists or in specialist imaging units rather than by primary care clinicians.
Seven studies provided data for the primary target condition of any skin cancer (1588 lesions and 638 malignancies). For the correct diagnosis of lesions as malignant using photographic images, summary sensitivity was 94.9% (95% confidence interval (CI) 90.1% to 97.4%) and summary specificity was 84.3% (95% CI 48.5% to 96.8%) (from four studies). Individual study estimates using dermoscopic images or a combination of photographic and dermoscopic images generally suggested similarly high sensitivities with highly variable specificities. Limited comparative data suggested similar diagnostic accuracy between teledermatology assessment and in‐person diagnosis by a dermatologist; however, data were too scarce to draw firm conclusions. For the detection of invasive melanoma or atypical intraepidermal melanocytic variants, both sensitivities and specificities were more variable. Sensitivities ranged from 59% (95% CI 42% to 74%) to 100% (95% CI 48% to 100%) and specificities from 30% (95% CI 22% to 40%) to 100% (95% CI 93% to 100%), with reported diagnostic thresholds including the correct diagnosis of melanoma, classification of lesions as 'atypical' or 'typical', and the decision to refer or to excise a lesion.
Referral accuracy data comparing teledermatology against a face‐to‐face reference standard suggested good agreement for lesions considered to require some positive action by face‐to‐face assessment (sensitivities of over 90%). For lesions considered of less concern when assessed face‐to‐face (e.g. lesions not recommended for excision or referral), agreement was more variable, with teledermatology specificities ranging from 57% (95% CI 39% to 73%) to 100% (95% CI 86% to 100%), suggesting that remote assessment is more likely to recommend excision, referral or follow‐up compared to in‐person decisions.
Authors' conclusions
Studies were generally small and heterogeneous and methodological quality was difficult to judge due to poor reporting. Bearing in mind concerns regarding the applicability of study participants and of lesion image acquisition in specialist settings, our results suggest that teledermatology can correctly identify the majority of malignant lesions. Using a more widely defined threshold to identify 'possibly' malignant cases or lesions that should be considered for excision is likely to appropriately triage those lesions requiring face‐to‐face assessment by a specialist. Despite the increasing use of teledermatology on an international level, the evidence base to support its ability to accurately diagnose lesions and to triage lesions from primary to secondary care is lacking and further prospective and pragmatic evaluation is needed.
Keywords: Adult; Humans; Carcinoma, Basal Cell; Carcinoma, Basal Cell/diagnostic imaging; Carcinoma, Squamous Cell; Carcinoma, Squamous Cell/diagnostic imaging; Data Accuracy; Dermatology; Dermatology/methods; Diagnostic Errors; Diagnostic Errors/statistics & numerical data; Melanoma; Melanoma/diagnostic imaging; Photography; Physical Examination; Physical Examination/methods; Sensitivity and Specificity; Skin Neoplasms; Skin Neoplasms/diagnostic imaging; Telemedicine; Telemedicine/methods
Plain language summary
What is the diagnostic accuracy of teledermatology for the diagnosis of skin cancer in adults?
Why is improving the diagnosis of skin cancer important?
There are different types of skin cancer. Melanoma is one of the most dangerous forms, and it is important to identify it early so that it can be removed. If it is not recognised when first brought to the attention of doctors (also known as a false‐negative test result), treatment can be delayed, resulting in the melanoma spreading to other organs in the body and possibly causing early death. Cutaneous squamous cell carcinoma (cSCC) and basal cell carcinoma (BCC) are usually localised skin cancers, although cSCC can spread to other parts of the body and BCC can cause disfigurement if not recognised early. Calling something a skin cancer when it is not really a skin cancer (a false‐positive result) may result in unnecessary surgery and other investigations that can cause stress and worry to the patient. Making the correct diagnosis is important. Mistaking one skin cancer for another can lead to the wrong treatment being used or lead to a delay in effective treatment.
What is the aim of the review?
The aim of this Cochrane Review was to find out whether teledermatology is accurate enough to identify which people with skin lesions need to be referred to see a specialist dermatologist (a doctor concerned with disease of the skin) and who can be safely reassured that their lesion (damage or change of the skin) is not malignant. We included 22 studies to answer this question.
What was studied in the review?
Teledermatology means sending pictures of skin lesions or rashes to a specialist for advice on diagnosis or management. It is a way for primary care doctors (general practitioners (GPs)) to get an opinion from a specialist dermatologist without having to refer patients through the normal referral pathway. Teledermatology can involve sending photographs or magnified images of a skin lesion taken with a special camera (dermatoscope) to a skin specialist to look at or it might involve immediate discussion about a skin lesion between a GP and a skin specialist using videoconferencing.
What are the main results of the review?
The review included 22 studies: 16 studies comparing teledermatology diagnoses to the final lesion diagnoses (diagnostic accuracy) for 4057 lesions and 879 malignant cases, and six studies comparing teledermatology decisions to the decisions that would be made with the patient present (referral accuracy) for 1449 lesions and 270 'positive' cases.
The studies were very different from each other in terms of the types of people with suspicious skin cancer lesions included and the type of teledermatology used. A single reliable estimate of the accuracy of teledermatology could not be made. For the correct diagnosis of a lesion as a skin cancer, data suggested that less than 7% of malignant skin lesions were missed by teledermatology. Study results were too variable to tell us how many people would be referred unnecessarily for a specialist dermatology appointment following a teledermatology consultation. Without access to teledermatology services, however, most of the lesions included in these studies would likely be referred to a dermatologist.
How reliable are the results of the studies of this review?
In the included studies, the final diagnosis of skin cancer was made by lesion biopsy (taking a small sample of the lesion so it could be examined under a microscope), and the absence of skin cancer was confirmed by biopsy or by follow‐up over time to make sure the skin lesion remained negative for melanoma. This is likely to have been a reliable method for deciding whether people really had skin cancer. In a few studies, a diagnosis of no skin cancer was made by a skin specialist rather than by biopsy. This is less likely to have been a reliable method for deciding whether people really had skin cancer*. Poor reporting of what was done in the study made it difficult for us to say how reliable the study results are. The selection of some patients from specialist clinics instead of primary care and the different ways of doing teledermatology were common problems.
Who do the results of this review apply to?
Studies were conducted in Europe (64%), North America (18%), South America (9%) and Oceania (9%). The average age of people who were studied was 52 years; however, several studies included at least some people under the age of 16 years. The percentage of people with skin cancer ranged between 2% and 88%, with an average of 30%, which is much higher than would be observed in a primary care setting in the UK.
What are the implications of this review?
Teledermatology is likely to be a good way of helping GPs to decide which skin lesions need to be seen by a skin specialist. Our review suggests that using magnified images, in addition to photographs of the lesion, improves accuracy. More research is needed to establish the best way of providing teledermatology services.
How up‐to‐date is this review?
The review authors searched for and used studies published up to August 2016.
*In these studies, biopsy, clinical follow‐up or specialist clinician diagnosis were the reference comparisons.
Summary of findings
Summary of findings table
Question: What is the diagnostic accuracy of teledermatology for the detection of skin cancer in adults?
Population: Adults with lesions suspicious for skin cancer
Index test: TD using photographic or dermoscopic (or both) images
Comparator test: Face‐to‐face diagnosis using visual inspection or dermoscopy (or both)
Target condition: Any skin cancer, including invasive melanoma and atypical intraepidermal melanocytic variants, BCC and cSCC
Reference standard: Histology with or without long‐term follow‐up (diagnostic accuracy); expert face‐to‐face diagnosis (for referral accuracy)
Action: If accurate, positive results ensure the malignant lesions are not missed but are appropriately referred for specialist assessment or are treated appropriately in a non‐referred setting, and those with negative results can be safely reassured and discharged.
Quantity of evidence | Number of studies | Total lesions | Total cases
Diagnostic accuracy | 16 | 4057 | 879
Referral accuracy | 6 | 1449 | 270
Limitations
Risk of bias: Low risk for participant selection in 7 studies; high risk (5) from case‐control design (2) or inappropriate exclusion criteria (4). Low risk for teledermatology assessments (22). Low risk for comparison with face‐to‐face diagnosis (2/5); unclear (3). Low risk for reference standard (10/22); high risk from use of expert diagnosis alone – referral accuracy (6) or inadequate reference standard (6). High risk for participant flow (17) due to differential verification (5), and exclusions following recruitment (14); timing of tests not mentioned in 14 studies.
Applicability of evidence to question: High concern (14/22) for applicability of participants due to recruitment from secondary care or specialist clinics (12) or inclusion of multiple lesions per participant (6). High concern for applicability of teledermatology assessments (12/22) due to images acquired by dermatologists in secondary care settings or in medical imaging units rather than images acquired in primary care. Low concern for reference standard (6/22); unclear concern due to lack of information concerning the expertise of the histopathologist (13) or expert face‐to‐face diagnosis (3).
Findings:
7 studies reported diagnostic accuracy data for the primary target condition of any skin cancer; 9 studies for the detection of invasive melanoma or atypical intraepidermal melanocytic variants; 2 studies for invasive melanoma alone; and 4 studies for BCC alone. 6 studies reported only referral accuracy data (teledermatology decisions versus face‐to‐face decisions). The findings presented are based on results for the primary target condition of any skin cancer and for the detection of invasive melanoma or atypical intraepidermal melanocytic variants.
Diagnostic accuracy data | Number of datasets | Total lesions | Total malignant
Test: TD using photographic images for any skin cancer | 4 | 717 | 452
3 studies reported a cross‐tabulation of lesion final diagnoses against the diagnosis on teledermatology such that data could be extracted for the detection of any malignancy, regardless of any misclassification of 1 skin cancer for another, e.g. a BCC diagnosed as a melanoma or vice versa; and 1 study presented data for the detection of 'malignant' versus benign cases with no breakdown of individual lesion diagnoses given. Summary sensitivity was 94.9% (95% CI 90.1% to 97.4%) and summary specificity 84.3% (95% CI 48.5% to 96.8%). In the 2 studies providing a direct comparison between TD assessment and in‐person diagnosis by a dermatologist, the data suggested similar accuracy between approaches; however, data were too scarce to draw firm conclusions.
Test: TD using clinical and dermoscopic images for any skin cancer | 3 | 928 | 215
Sensitivities were 100% in all 3 studies. Specificities ranged from 25% (95% CI 5% to 57%) to 92% (95% CI 74% to 99%). Studies used varying thresholds to decide test positivity and included highly selected populations. No statistical pooling was undertaken.
Test: TD using photographic images for invasive melanoma or atypical intraepidermal melanocytic variants | 4 | 1834 | 106
Sensitivities ranged from 59% (95% CI 42% to 74%) to 100% (95% CI 48% to 100%) and specificities from 30% (95% CI 22% to 40%) to 100% (95% CI 93% to 100%). Diagnostic thresholds were the correct diagnosis of melanoma (3) or classification as 'atypical' or 'typical'. Populations also varied, some including only atypical or higher‐risk pigmented lesions and excluding equivocal lesions, and others including both pigmented and non‐pigmented lesions in participants who were either self‐referred or deemed to require lesion excision. The number of melanomas missed ranged from 0 to 17.
Test: TD using photographic and dermoscopic images for invasive melanoma or atypical intraepidermal melanocytic variants | 4 | 664 | 93
Summary estimates were 85.4% (95% CI 68.3% to 94.1%) for sensitivity and 91.6% (95% CI 81.1% to 96.5%) for specificity. Sensitivities were lower for the correct diagnosis of melanoma (71% to 81%) compared to the decision to refer or to excise a lesion (96% to 100%). The number of melanomas missed ranged from 0 to 7.
All data (referral accuracy) | 6 | 1449 | 270
TD diagnoses were reported based on photographic images alone (4), photographic and dermoscopic images (1) and using live‐link TD (1). Diagnostic decisions on TD varied, including the diagnosis of malignancy, the decision to excise a lesion, the decision to refer versus not refer, or to excise or follow up at a later date. For store‐and‐forward TD, sensitivities were generally above 90%, indicating good agreement between remote image‐based decisions and the face‐to‐face reference standard for lesions considered to require some positive action by face‐to‐face assessment. Specificities were more variable, ranging from 57% (95% CI 39% to 73%) to 100% (95% CI 86% to 100%), suggesting that remote assessment is more likely to recommend excision, referral or follow‐up for lesions considered of less concern when assessed face‐to‐face.
BCC: basal cell carcinoma; CI: confidence interval; cSCC: cutaneous squamous cell carcinoma; TD: teledermatology.
Background
This review is one of a series of Cochrane Diagnostic Test Accuracy (DTA) reviews on the diagnosis and staging of melanoma and keratinocyte skin cancers conducted for the National Institute for Health Research (NIHR) Cochrane Systematic Reviews Programme. For the purposes of these reviews, diagnostic accuracy is assessed by the sensitivity and specificity of a test. Appendix 1 shows the content and structure of the programme, and Appendix 2 provides a glossary of terms used.
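In brief, sensitivity and specificity are derived from the standard 2×2 cross‐tabulation of the index test result against the reference standard:

\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN + FP}

where TP, FP, FN and TN denote the numbers of true‐positive, false‐positive, false‐negative and true‐negative test results, respectively.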
Target condition being diagnosed
There are three main forms of skin cancer. Melanoma has the highest skin cancer mortality (Cancer Research UK 2017); however, the most common skin cancers in Caucasian populations are those arising from keratinocytes: basal cell carcinoma (BCC) and cutaneous squamous cell carcinoma (cSCC) (Gordon 2013; Madan 2010). In 2003, the World Health Organization estimated that between two and three million 'non‐melanoma' skin cancers (of which BCC is estimated to account for around 80% and cSCC around 16% of cases) and 132,000 melanoma skin cancers occur globally each year (WHO 2003).
In this DTA review, there are three target conditions of interest: melanoma, BCC and cSCC.
Melanoma
Melanoma arises from uncontrolled proliferation of melanocytes – the epidermal cells that produce pigment or melanin. Cutaneous melanoma refers to any skin lesion with malignant melanocytes present in the dermis, primarily including superficial spreading, nodular, acral lentiginous and lentigo maligna melanoma variants (see Figure 1). Melanoma in situ refers to abnormal melanocytes that are contained within the epidermis and have not yet invaded the dermis, but are at risk of progression to melanoma if left untreated. Lentigo maligna, a subtype of melanoma in situ in chronically sun‐damaged skin, denotes another form of proliferation of abnormal melanocytes. Melanoma in situ and lentigo maligna are both atypical intraepidermal melanocytic variants. All forms of melanoma in situ can progress to invasive melanoma if growth breaches the dermo‐epidermal junction during a vertical growth phase; however, malignant transformation is both lower and slower for lentigo maligna than for melanoma in situ (Kasprzak 2015). Melanoma is one of the most dangerous forms of skin cancer, with the potential to metastasise to other parts of the body via the lymphatic system and bloodstream. It accounts for only a small percentage of skin cancer cases but is responsible for up to 75% of skin cancer deaths (Boring 1994; Cancer Research UK 2017).
The incidence of melanoma rose to over 200,000 newly diagnosed cases worldwide in 2012 (Erdmann 2013; Ferlay 2015), with an estimated 55,000 deaths (Ferlay 2015). The highest incidence was observed in Australia with 13,134 new cases of melanoma of the skin in 2014 (ACIM 2017), and in New Zealand with 2341 registered cases in 2010 (HPA and MelNet NZ 2014). In the USA, the predicted incidence in 2014 was 73,870 per annum and the predicted number of deaths was 9940 (Siegel 2015). The highest rates in Europe are in north‐western Europe and the Scandinavian countries, with the highest incidence reported in Switzerland of 25.8 per 100,000 people in 2012. Rates in England have tripled from 4.6 and 6.0 per 100,000 in men and women respectively, in 1990, to 18.6 and 19.6 per 100,000 in 2012 (EUCAN 2012). Indeed, in the UK, melanoma has one of the fastest rising incidence rates of any cancer, with the biggest projected increase in incidence between 2007 and 2030 (Mistry 2011). In the decade leading up to 2013, age‐standardised incidence increased by 46%, with 14,500 new cases in 2013 and 2459 deaths in 2014 (Cancer Research UK 2017). Rates are higher in women than in men; however, the rate of incidence in men is increasing faster than in women (Arnold 2014). The rising incidence in melanoma is thought to be primarily related to an increase in recreational sun exposure and tanning bed use and an increasingly ageing population with higher lifetime recreational ultraviolet (UV) exposure, in conjunction with possible earlier detection (Belbasis 2016; Linos 2009). Belbasis 2016 provides a detailed review of putative risk factors, including eye and hair colour, skin type and density of freckles, history of melanoma, sunburn and presence of particular lesion types.
A database in the USA of over 40,000 patients from 1998 onwards, which assisted the development of the 8th American Joint Committee on Cancer (AJCC) Staging System, indicated a five‐year survival of 97% to 99% for stage I melanoma, which dropped to 32% to 93% in stage III disease depending on tumour thickness, the presence of ulceration and number of involved nodes (Gershenwald 2017). While these are substantial increases relative to survival in 1975 (Cho 2014), increasing incidence between 1975 and 2010 means that mortality rates have remained static during the same period. This observation, coupled with increasing incidence of localised disease, suggests that improved survival rates may be due to earlier detection and heightened vigilance (Cho 2014). New targeted therapies for advanced (stage IV) melanoma (e.g. BRAF inhibitors) have improved survival, and immunotherapies are evolving such that long‐term survival is being documented (Pasquali 2018; Rozeman 2017). No new data regarding the survival prospects for patients with stage IV disease were analysed for the AJCC 8 staging guidelines due to lack of contemporary data (Gershenwald 2017).
Basal cell carcinoma
BCC can arise from multiple stem cell populations, including from the bulge and interfollicular epidermis (Grachtchouk 2011). BCC growth is usually localised, but it can infiltrate and damage surrounding tissue, sometimes causing considerable destruction and disfigurement, particularly when located on the face (Figure 1). The four main subtypes of BCC are superficial, nodular, morphoeic or infiltrative, and pigmented. They typically present as slow‐growing asymptomatic papules, plaques or nodules which may bleed or form ulcers that do not heal (Firnhaber 2012). People with a BCC often present to healthcare professionals with a non‐healing lesion rather than specific symptoms such as pain. Many lesions are diagnosed incidentally (Gordon 2013).
BCC most commonly occurs on sun‐exposed areas of the head and neck (McCormack 1997), and is more common in men and in people over the age of 40. Different authors have attributed a rising incidence of BCC in younger people to increased recreational sun exposure (Bath‐Hextall 2007a; Gordon 2013; Musah 2013). Other risk factors include Fitzpatrick skin types I and II (Fitzpatrick 1975; Lear 1997; Maia 1995); previous skin cancer history; immunosuppression; arsenic exposure; and genetic predisposition, such as in basal cell naevus (Gorlin's) syndrome (Gorlin 2004; Zak‐Prelich 2004). Annual incidence is rising worldwide; Europe has experienced a mean increase of 5.5% per year since the late 1970s and the USA 2% per year, while estimates for the UK show incidence appears to be increasing more steeply at a rate of an additional 6 per 100,000 people per year (Lomas 2012). The rising incidence has been explained by an ageing population; changes in the distribution of known risk factors, particularly UV radiation; and improved detection due to increased awareness among both practitioners and the general population (Verkouteren 2017). Hoorens 2016 points to evidence for a gradual increase in the size of BCCs over time, with delays in diagnosis ranging from 19 to 25 months.
According to National Institute for Health and Care Excellence (NICE) guidance (NICE 2010), low‐risk BCCs are nodular lesions occurring in people older than 24 years who are not immunosuppressed and do not have Gorlin syndrome. Furthermore, they should be located below the clavicle; should be small (diameter of less than 1 cm) with well‐defined margins; not recurrent following incomplete excision; and not in awkward or highly visible locations (NICE 2010). Superficial BCCs are also typically low risk and may be amenable to medical treatments such as photodynamic therapy (PDT) or topical chemotherapy (Kelleners‐Smeets 2017). Assigning BCCs as low or high risk influences the management options (Batra 2002; Randle 1996).
Advanced locally destructive BCC can be found on 'high‐risk' anatomical areas such as the eyebrow, eyelid, nose, ear and temple (these are at higher risk of invisible spread and therefore are more at risk of being incompletely excised (Baxter 2012; Lear 2014)), and they can arise from long‐standing untreated lesions or from a recurrence of aggressive BCC after primary treatment (Lear 2012). Very rarely, BCC metastasises to regional and distant sites, resulting in death, especially in cases of large neglected lesions in people who are immunosuppressed or those with Gorlin syndrome (McCusker 2014). Rates of metastasis are reported at 0.0028% to 0.55% (Lo 1991), with very poor survival rates. It is recognised that basosquamous carcinoma (more like a high‐risk SCC in behaviour and not considered a true BCC) is likely to have accounted for many cases of apparent metastases of BCC; hence the spuriously high reported incidence of up to 0.55% in some studies, which is not seen in clinical practice (Garcia 2009).
Squamous cell carcinoma of the skin
Primary cSCC arises from the keratinocytes of the outermost layer (epidermis) of the skin. People with cSCC often present with an ulcer or firm (indurated) papule, plaque or nodule (Firnhaber 2012; Griffin 2016), often with an adherent crust and poorly defined margins (Madan 2010). This type of carcinoma can arise in the absence of a precursor lesion or it can develop from pre‐existing actinic keratosis or Bowen's disease (considered by some clinicians to be squamous cell carcinoma in situ); the estimated annual risk of progression being less than 1% to 20% for newly arising lesions (Alam 2001), and 5% for pre‐existing lesions (Kao 1986). It remains locally invasive for a variable length of time, but it has the potential to spread to the regional lymph nodes or via the bloodstream to distant sites, especially in immunosuppressed individuals (Lansbury 2010). High‐risk lesions are those arising on the lip or ear, recurrent cSCC, lesions arising on non‐exposed sites, scars or chronic ulcers, tumours more than 20 mm in diameter and with depth of invasion more than 4 mm, and poor differentiation on pathological examination (Motley 2009). Perineural invasion of nerves at least 0.1 mm in diameter is a further documented risk factor for high‐risk cSCC (Carter 2013).
Chronic ultraviolet light exposure through recreation or occupation is strongly linked to cSCC occurrence (Alam 2001). It is particularly common in people with fair skin and in less common genetic disorders of pigmentation, such as albinism, xeroderma pigmentosum and recessive dystrophic epidermolysis bullosa (Alam 2001). Other recognised risk factors include immunosuppression; chronic wounds; arsenic or radiation exposure; certain drug treatments, such as voriconazole and BRAF mutation inhibitors; and previous skin cancer history (Baldursson 1993; Chowdri 1996; Dabski 1986; Fasching 1989; Lister 1997; Maloney 1996; O'Gorman 2014). In solid organ transplant recipients, cSCC is the most common form of skin cancer; the risk of developing cSCC has been estimated at 65 to 253 times that of the general population (Hartevelt 1990; Jensen 1999; Lansbury 2010). Overall, local and metastatic recurrence of cSCC at five years is estimated at 8% and 5% respectively (Rowe 1992). The five‐year survival rate of metastatic cSCC of the head and neck is around 60% (Moeckelmann 2018).
Treatment
For primary melanoma, the mainstay of definitive treatment is wide local surgical excision of the lesion, to remove both the tumour and any malignant cells that might have spread into the surrounding skin (Garbe 2016; Marsden 2010; NICE 2015a; SIGN 2017; Sladden 2009). Recommended lateral surgical margins vary according to tumour thickness (Garbe 2016), and to stage of disease at presentation (NICE 2015a).
Treatment options for BCC and cSCC include surgery, other destructive techniques such as cryotherapy or electrodesiccation, and topical chemotherapy. A Cochrane Review of 27 randomised controlled trials (RCTs) of interventions for BCC found very little good‐quality evidence for any of the interventions used (Bath‐Hextall 2007b). Complete surgical excision of primary BCC has a reported five‐year recurrence rate of less than 2% (Griffiths 2005; Walker 2006), leading to significantly fewer recurrences than treatment with radiotherapy (Bath‐Hextall 2007b). When standard excision biopsy with 4 mm peripheral surgical margins achieves apparently clear histopathological margins (on serial vertical sections), the reported five‐year recurrence rate is around 4% (Drucker 2017). Mohs micrographic surgery, whereby surgeons microscopically examine horizontal sections of the tumour peri‐operatively, undertaking re‐excision until the margins are tumour‐free, is an option for high‐risk lesions on the face where standard wider excision margins might lead to incomplete excision or considerable functional impairment (Bath‐Hextall 2007b; Lansbury 2010; Motley 2009; Stratigos 2015). Bath‐Hextall 2007b found one trial comparing Mohs micrographic surgery with a 3 mm surgical margin excision in BCC (Smeets 2004); the update of this study showed non‐significantly lower recurrence at 10 years with Mohs micrographic surgery (4.4% compared to 12.2% after surgical excision; P = 0.10) (van Loo 2014).
The main treatments for high‐risk BCC are standard surgical excision, Mohs micrographic surgery or radiotherapy. For low‐risk or superficial subtypes of BCC, or for people with small or multiple (or both) BCCs at low‐risk sites (Marsden 2010), destructive techniques other than excisional surgery may be used (e.g. electrodesiccation and curettage or cryotherapy (Alam 2001; Bath‐Hextall 2007b)). Alternatively, non‐surgical ('non‐destructive') treatments may be considered (Bath‐Hextall 2007b; Drew 2017; Kim 2014), including topical chemotherapy such as imiquimod (Williams 2017), 5‐fluorouracil (Arits 2013), ingenol mebutate (Nart 2015), and photodynamic therapy (PDT) (Roozeboom 2016). Non‐surgical treatments are most frequently used for superficial forms of BCC, with one head‐to‐head trial suggesting topical imiquimod is superior to PDT and 5‐fluorouracil (Jansen 2018). Although non‐surgical approaches are increasingly used, they do not allow histological confirmation of tumour clearance, and their efficacy is dependent on accurate characterisation of the histological subtype and depth of tumour. The 2007 Cochrane review of BCC interventions found limited evidence from very small RCTs for these approaches (Bath‐Hextall 2007b), which have only partially been addressed by subsequent studies (Bath‐Hextall 2014; Kim 2014; Roozeboom 2012). Most BCC trials have compared interventions within the same treatment class, and few have compared medical versus surgical treatments (Kim 2014).
Vismodegib, a first‐in‐class Hedgehog signalling pathway inhibitor, is now available for the treatment of metastatic or locally advanced BCC based on the pivotal ERIVANCE BCC study (Sekulic 2017). It is licensed for use in these patients where surgery or radiotherapy is inappropriate, e.g. for treating locally advanced periocular and orbital BCCs with orbital salvage in patients who would otherwise have required exenteration (Wong 2017). However, NICE has recommended against the use of vismodegib on the grounds of cost‐effectiveness and uncertainty in the evidence (NICE 2017).
A systematic review of interventions for primary cSCC found only one RCT eligible for inclusion (Lansbury 2010). Current practice therefore relies on evidence from observational studies, as reviewed in Lansbury 2013, for example. Surgical excision with predetermined margins is usually the first‐line treatment (Motley 2009; Stratigos 2015). Estimates of recurrence after Mohs micrographic surgery, surgical excision or radiotherapy, which are likely to have been evaluated in higher‐risk populations, have shown pooled recurrence rates of 3%, 5.4% and 6.4%, respectively, with overlapping confidence intervals (CI); the review authors advised caution when comparing results across treatments (Lansbury 2013).
Index test(s)
Teledermatology is a term used to describe the delivery of dermatological care through information and communication technology (Bashshur 2015). It uses imaging modalities to provide specialist dermatology services either to other healthcare professionals (such as general practitioners (GPs)), or to patients directly (Ndegwa 2010). It is considered a valuable tool in the diagnosis and management of skin disease, because of the visual nature of skin lesions and rashes (Warshaw 2011). Teledermatology allows an increased information flow between primary care physicians and dermatologists, which could lead to a reduction in waiting times and limit unnecessary referrals (Bashshur 2015; Ndegwa 2010; Warshaw 2011). In rural areas, where access to speciality services can have significant and potentially off‐putting travel and time implications for the patient, teledermatology has the potential to widen access to specialist opinion.
Teledermatology consultations can be conducted in two main ways, store‐and‐forward or 'asynchronous,' and live interactive or 'synchronous' (Ndegwa 2010). With the store‐and‐forward approach, clinicians and patients are separated by both time and space, as electronic digital images are taken and then transmitted to a dermatologist for review at a later unspecified time (Warshaw 2011). The pictures can be digital photographic (or 'macroscopic') images, or can be magnified dermoscopic images taken using a dermatoscope. Images are often accompanied by a summary of the patient history and demographic information as part of a consultation package (Ndegwa 2010). Furthermore, recent developments in smartphone technology have also introduced a new platform for transferring lesion images from one setting to another (Chuchu 2018). The store‐and‐forward approach is advantageous as it requires less sophisticated technology and lower‐cost equipment (Warshaw 2011); however, it does not allow the specialist to take a direct history, request additional views or communicate in detail the purpose of management to the patient or referrer (Ndegwa 2010).
Live interactive teledermatology uses videoconferencing and image transmission to connect the patient, referrer and dermatologist in real time (Ndegwa 2010). The dermatologist and patient can interact verbally in a similar manner to a traditional clinic‐based encounter, but more extensive telecommunications infrastructure and time are needed (Ndegwa 2010).
Clinical pathway
The diagnosis of melanoma can take place in primary, secondary and tertiary care settings by both generalist and specialist healthcare providers. In the UK, people with concerns about a new or changing skin lesion will usually present first to their GP or, less commonly, directly to a specialist in secondary care, which could include a dermatologist, plastic surgeon, other specialist surgeon (such as an ear, nose and throat specialist or maxillofacial surgeon), or ophthalmologist (Figure 2). Current UK guidelines recommend that all suspicious pigmented lesions presenting in primary care should be assessed by taking a clinical history and visual inspection guided by the revised seven‐point checklist (MacKie 1990). Clinicians should refer those with suspected melanoma or cSCC for appropriate specialist assessment within two weeks (Chao 2014; Marsden 2010; NICE 2015a). Evidence is emerging, however, to suggest that excision of melanoma by GPs is not associated with increased risk compared with outcomes in secondary care (Murchie 2017). In the UK, low‐risk BCCs are usually recommended for routine referral, with urgent referral for those in whom a delay could have a significant impact on outcomes, for example due to large lesion size or critical site (NICE 2015b). Appropriately qualified generalist care providers in the UK increasingly undertake the management of low‐risk BCC, such as by excision of low‐risk lesions (NICE 2010). Similar guidance is in place in Australia (CCAAC Network 2008).
Teledermatology consultations can aid more appropriate triage of lesions, providing reassurance for benign lesions and referral via urgent or non‐urgent routes to secondary care (e.g. for suspected BCC). The distinction between setting and examiner qualifications and experience is important, as specialist clinicians might work in primary care settings (e.g. in the UK, GPs with a special interest in dermatology and skin surgery who have undergone appropriate training), and generalists might practice in secondary care settings (e.g. GPs working alongside dermatologists in secondary care, or plastic surgeons who do not specialise in skin cancer). The level of skill and experience in skin cancer diagnosis varies for both generalist and specialist care providers and impacts on test accuracy.
For referred lesions, the specialist clinician will use history‐taking, visual inspection of the lesion (in comparison with other lesions on the skin), usually in conjunction with dermoscopic examination, and palpation of the lesion and associated regional nodal basins to inform a clinical decision. If melanoma is suspected, then urgent 2 mm excision biopsy is recommended (Lederman 1985; Lees 1991); for cSCC, predetermined surgical margin excision or a diagnostic biopsy may be considered. BCC and premalignant lesions potentially eligible for non‐surgical treatment may undergo a diagnostic biopsy before initiation of therapy. Equivocal melanocytic lesions for which a definitive clinical diagnosis cannot be reached may undergo surveillance to identify any lesion changes that would indicate excision biopsy, or reassurance and discharge for lesions that remain stable over a period of time.
Prior test(s)
Although smartphone applications and community‐ or high street pharmacy‐based teledermatology services (e.g. the Boots 'Mole Scanning Service' www.boots.com/health‐pharmacy‐advice/skin‐services/mole‐scanning‐service) can increasingly be accessed directly by people who have concerns about a skin lesion (Kjome 2017), the diagnosis of skin cancer is still based on history taking and clinical examination by a suitably qualified clinician. In the UK, this is typically undertaken at two decision points – first in primary care where the GP makes a decision to refer or not to refer, and then a second time by a dermatologist or other secondary care clinician where a decision is made to biopsy or excise or not.
Visual inspection of the skin is undertaken iteratively, using both implicit pattern recognition (non‐analytical reasoning) and more explicit 'rules' based on conscious analytical reasoning (Norman 2009), the balance of which will vary according to experience and familiarity with the diagnostic question. Various attempts have been made to formalise the "mental rules" involved in analytical pattern recognition for melanoma (Friedman 1985; Grob 1998; MacKie 1985; MacKie 1990; Sober 1979; Thomas 1998); however, visual inspection for keratinocyte skin cancers relies primarily on pattern recognition. Accuracy has been shown to vary according to the expertise of the clinician. Primary care physicians have been reported to miss over 50% of BCCs (Offidani 2002) and to misdiagnose around 33% of BCCs (Gerbert 2000). In contrast, one Australian study found that trained dermatologists were able to detect 98% of BCCs, but with a specificity of only 45% (Green 1988).
A range of technologies has emerged to aid diagnosis and reduce the number of diagnostic biopsies or inappropriate surgical procedures. Dermoscopy using a hand‐held microscope has become the tool most widely used by clinicians to improve the diagnostic accuracy of pigmented lesion assessment, in particular for melanoma (Dinnes 2018a); it is less well established for the diagnosis of BCC or cSCC. Dermoscopy (also referred to as dermatoscopy or epiluminescence microscopy) uses a hand‐held microscope and incident light (with or without oil immersion) to reveal subsurface images of the skin at increased magnification of ×10 to ×100 (Kittler 2001). Used alongside clinical examination, dermoscopy has been shown in some studies to increase the sensitivity of clinical diagnosis of melanoma from around 60% to as much as 90% (Bono 2006; Carli 2002; Kittler 1999; Stanganelli 2000), with much smaller effects in others (Benelli 1999; Bono 2002). The accuracy of dermoscopy depends on the experience of the examiner (Kittler 2001), with accuracy when used by untrained or less‐experienced examiners potentially no better than clinical inspection alone (Binder 1997; Kittler 2002).
The diagnostic accuracy, and comparative accuracy, of visual inspection and dermoscopy have been evaluated in a further three reviews in this series (Dinnes 2018a; Dinnes 2018b; Dinnes 2018c).
Role of index test(s)
The use of teledermatology by primary care or by other generalist clinicians has the potential to ensure that people with suspicious lesions are appropriately referred for examination by a specialist clinician, and people with non‐suspicious lesions are appropriately reassured and managed in primary care. If an accurate triage is made, the proportion of people who are referred unnecessarily will be minimised and lesions requiring urgent referral and treatment correctly identified. By creating an environment where there is facilitated access to more specialist services, selective dermatology referral could ultimately reduce costs while enabling a faster, more reliable and more efficient service (Piccolo 2002). Increased information flow between primary care physicians and dermatologists also has the potential effect of increasing knowledge and reducing isolated decision‐making (Bashshur 2015).
When diagnosing potentially life‐threatening conditions such as melanoma, the consequences of falsely reassuring a person that they do not have skin cancer can be fatal, as the delay to diagnosis means that the window for successful early treatment may be missed. To minimise these false‐negative diagnoses, a good diagnostic test for melanoma demonstrates high sensitivity and high negative predictive value (i.e. very few of those with a negative test result will actually have a melanoma). Giving false‐positive test results (meaning the test has poor specificity and a high false‐positive rate), resulting in the removal of lesions that are benign, is arguably less of an error than missing a potentially fatal melanoma, but does have implications for patient welfare and costs. False‐positive diagnoses cause unnecessary scarring from the biopsy or excision procedure, increase patient anxiety while the definitive histology results are awaited, and increase healthcare costs as the number of lesions needed to remove to yield one melanoma diagnosis increases.
Delay in diagnosis of a BCC as a result of a false‐negative result is not as serious as for melanoma because BCCs are usually slow growing and unlikely to metastasise. However, delayed diagnosis can result in a larger and more complex excision with consequent greater morbidity. Very sensitive diagnostic tests for BCC may come at the cost of lower specificity, leading to a higher false‐positive rate and an enormous burden of skin surgery, such that a balance between sensitivity and specificity is needed. The situation for cSCC is more similar to melanoma in that the consequences of falsely reassuring a person that they do not have skin cancer can be serious and potentially fatal. Thus, a good diagnostic test for cSCC should demonstrate high sensitivity and a correspondingly high negative predictive value. In summary, a test that can reduce false‐positive clinical diagnoses without missing true cases of disease has patient and resource benefits. False‐positive clinical diagnoses cause unnecessary morbidity from the biopsy, and could lead to the initiation of inappropriate therapies and increase patient anxiety.
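These relationships can be made explicit. For a test with sensitivity Se and specificity Sp applied in a population in which a proportion p of tested lesions are malignant, the predictive values are:

\mathrm{NPV} = \frac{Sp\,(1 - p)}{Sp\,(1 - p) + (1 - Se)\,p}, \qquad \mathrm{PPV} = \frac{Se\,p}{Se\,p + (1 - Sp)\,(1 - p)}

and the number of lesions needed to excise or refer to identify one true case among test positives is approximately 1/PPV. As a purely illustrative calculation (the figures are hypothetical rather than estimates from this review), a test with 95% sensitivity and 84% specificity used where 5% of tested lesions are malignant would give an NPV of around 99.7% but a PPV of only around 24%, i.e. roughly four excisions or referrals per malignancy identified.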
Alternative test(s)
Teledermatology provides an alternative means for primary care clinicians (and therefore patients) to access specialist opinion compared to the standard referral process from primary to secondary care. Members of the general public who are concerned about a skin lesion can also seek advice via smartphone applications or from services provided within community or 'high street' pharmacies; however, these are not considered direct alternatives to teledermatology services.
Several other tests that may have a role in the diagnosis of skin cancer have been reviewed as part of our series of systematic reviews, including visual inspection and dermoscopy (Dinnes 2018a; Dinnes 2018b; Dinnes 2018c), smartphone applications (Chuchu 2018), reflectance confocal microscopy (Dinnes 2018d; Dinnes 2018e), optical coherence tomography (Ferrante di Ruffano 2018a), computer‐assisted diagnosis techniques applied to various types of images including those generated by dermoscopy, diffuse reflectance spectrophotometry and electrical impedance spectroscopy (Ferrante di Ruffano 2018b), and high‐frequency ultrasound (Dinnes 2018f). Evidence permitting, the accuracy of available tests will be compared in an overview review, exploiting within‐study comparisons of tests and allowing the analysis and comparison of commonly used diagnostic strategies where tests may be used singly or in combination.
Rationale
Our series of reviews of diagnostic tests used to assist clinical diagnosis of melanoma aimed to identify the most accurate approaches to diagnosis and provide clinical and policy decision‐makers with the highest possible standard of evidence on which to base decisions. With increasing rates of skin cancer and the push towards the use of dermoscopy and other high‐resolution image analysis in primary care, the anxiety around missing early cases needs to be balanced against the risk of over‐referral, to avoid sending too many people with benign lesions for a specialist opinion. It is questionable whether all skin cancers detected by sophisticated techniques, even in specialist settings, help to reduce morbidity and mortality, or whether newer technologies run the risk of increasing false‐positive diagnoses. It is also possible that the use of some technologies (e.g. widespread use of dermoscopy in primary care with no training) could actually result in harm by missing melanomas if they are used as replacement technologies for traditional history‐taking and clinical examination of the entire skin. Many branches of medicine have noted the danger of such 'gizmo idolatry' among doctors (Leff 2008).
Although teledermatology is increasingly used, the accuracy of different approaches to providing teledermatology services (e.g. store‐and‐forward versus live‐link modalities, and use of clinical versus dermoscopic images) has yet to be fully established. A review by Warshaw 2011 suggested that both store‐and‐forward and live‐link teledermatology had acceptable diagnostic accuracy and concordance when compared with clinical face‐to‐face diagnosis; however, clinic‐based dermatology had superior diagnostic accuracy (i.e. in comparison to store‐and‐forward teledermatology consultations). As with any technology requiring significant investment, a full understanding of the benefits including patient acceptability and cost‐effectiveness in comparison to usual practice should be obtained before such an approach can be recommended; establishing the accuracy of diagnosis and referral accuracy is one of the key components. Given the rapidly changing evidence base in skin cancer diagnosis, there is a need for an up‐to‐date analysis of the accuracy of teledermatology for skin cancer diagnosis.
This review followed a generic protocol which covered the full series of Cochrane DTA reviews for the diagnosis of melanoma (Dinnes 2015a); aspects of this review which relate to the diagnosis of BCC and cSCC follow the generic protocol that was written to cover the reviews in the series for the diagnosis of keratinocyte skin cancers (Dinnes 2015b). The 'Background' and 'Methods' sections of this review therefore use some text that was originally published in the protocols (Dinnes 2015a; Dinnes 2015b), and text that overlaps some of our other reviews (Chuchu 2018; Dinnes 2018a; Dinnes 2018b).
Objectives
To determine the diagnostic accuracy of teledermatology for the detection of any skin cancer (melanoma, BCC or cSCC) in adults, and to compare its accuracy with that of in‐person diagnosis.
Accuracy was estimated separately according to the type of teledermatology images used:
photographic images;
dermoscopic images;
photographic and dermoscopic images.
Secondary objectives
To determine the diagnostic accuracy of teledermatology for the detection of invasive melanoma or atypical intraepidermal melanocytic variants in adults, and to compare its accuracy with that of in‐person diagnosis.
To determine the diagnostic accuracy of teledermatology for the detection of invasive melanoma only, in adults, and to compare its accuracy with that of in‐person diagnosis.
To determine the diagnostic accuracy of teledermatology for the detection of BCC in adults, and to compare its accuracy with that of in‐person diagnosis.
To determine the diagnostic accuracy of teledermatology for the detection of cSCC in adults, and to compare its accuracy with that of in‐person diagnosis.
To determine the referral accuracy of teledermatology (i.e. to compare diagnostic decision making based on teledermatology images with that of an in‐person consultation).
Investigation of sources of heterogeneity
We set out to address a range of potential sources of heterogeneity for investigation across our series of reviews, as outlined in our generic protocols (Dinnes 2015a; Dinnes 2015b), and described in Appendix 4; however, our ability to investigate these was necessarily limited by the available data on each individual test reviewed.
Methods
Criteria for considering studies for this review
Types of studies
We included test accuracy studies that allow comparison of the result of the index test with that of a reference standard, including the following:
studies where all participants receive a single index test and a reference standard;
studies where all participants receive more than one index test and a reference standard;
studies where participants are allocated (by any method) to receive different index tests or combinations of index tests and all receive a reference standard (between‐person comparative (BPC) studies);
studies that recruit series of participants unselected by true disease status;
diagnostic case‐control studies that separately recruit diseased and non‐diseased groups (see Rutjes 2005); however, we did not include studies that compared results for malignant lesions to those for healthy skin (i.e. with no lesion present);
both prospective and retrospective studies; and
studies where previously acquired clinical or dermoscopic images were retrieved and prospectively interpreted for study purposes.
We excluded studies from which we could not extract 2×2 contingency data, and studies that included fewer than five cases of melanoma, BCC or cSCC or fewer than five benign lesions. For studies of referral accuracy where a lesion's final diagnosis was not reported, we required at least five 'positive' cases as identified by the expert diagnosis reference standard. The size threshold of five was arbitrary; however, such small studies are unlikely to add precision to estimates of accuracy.
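To illustrate why such small studies add little precision, the following minimal sketch (using hypothetical counts rather than data or code from this review) computes sensitivity and specificity with exact Clopper‐Pearson confidence intervals from a 2×2 table; with only five malignant cases, even perfect detection leaves a 95% confidence interval for sensitivity of roughly 48% to 100%:

from statsmodels.stats.proportion import proportion_confint

def accuracy_from_2x2(tp, fp, fn, tn, alpha=0.05):
    """Point estimates and exact (Clopper-Pearson) confidence intervals
    for sensitivity and specificity from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    se_ci = proportion_confint(tp, tp + fn, alpha=alpha, method="beta")  # exact CI
    sp_ci = proportion_confint(tn, tn + fp, alpha=alpha, method="beta")  # exact CI
    return sensitivity, se_ci, specificity, sp_ci

# Hypothetical study with only 5 malignant lesions, all correctly identified:
# sensitivity is 1.0, but its exact 95% CI spans approximately 0.48 to 1.00.
print(accuracy_from_2x2(tp=5, fp=10, fn=0, tn=40))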
Studies available only as conference abstracts were excluded; however, attempts were made to identify full papers for potentially relevant conference abstracts (Searching other resources).
Participants
We included studies in adults with lesions suspicious for skin cancer or adults at high risk of developing skin cancer. We excluded studies that recruited only participants with malignant or benign final diagnoses.
We excluded studies conducted in children, or which clearly reported inclusion of more than 50% of participants aged 16 and under.
Index tests
We included studies evaluating teledermatology alone, or teledermatology in comparison with face‐to‐face diagnosis.
The following index tests were eligible for inclusion:
store‐and‐forward teledermatology;
real‐time 'live link' teledermatology.
Data for face‐to‐face clinical diagnosis against a histological reference standard were also included where reported, to allow a direct comparison with teledermatology to be made.
Although primary care clinicians can in practice be specialists in skin cancer, we considered primary care physicians as generalist practitioners and dermatologists as specialists. Within each group, we extracted any reporting of special interest or accreditation in skin cancer.
Target conditions
We defined the primary target condition as the detection of any skin cancer, primarily cutaneous melanoma, BCC or cSCC.
We considered four additional definitions of the target condition in secondary analyses, namely, the detection of:
invasive cutaneous melanoma alone;
invasive cutaneous melanoma or atypical intraepidermal melanocytic variants (including melanoma in situ, lentigo maligna);
BCC;
cSCC.
We also considered referral accuracy, comparing decision making based on teledermatology with the face‐to‐face decisions for the same lesions. These decisions could have included the decision to excise a lesion, follow up a lesion or refer a lesion for face‐to‐face assessment.
Reference standards
Teledermatology can be assessed in terms of diagnostic accuracy in comparison to the final lesion diagnosis and referral accuracy in comparison to a face‐to‐face expert management decision.
To establish diagnostic accuracy, the ideal reference standard was histopathological diagnosis in all eligible lesions. A qualified pathologist or dermatopathologist should have performed the histopathology. Ideally, reporting should have been standardised, detailing a minimum dataset that includes the histopathological features of melanoma needed to determine stage according to the AJCC Staging System (e.g. Slater 2014). We did not apply this minimum dataset requirement as a necessary inclusion criterion, but extracted any pertinent information.
Partial verification (applying the reference test only to a subset of those undergoing the index test) was of concern given that lesion excision or biopsy were unlikely to be carried out for all benign‐appearing lesions within a representative population sample. Therefore, to reflect what happens in reality, we accepted clinical follow‐up of benign‐appearing lesions as an eligible reference standard, while recognising the risk of differential verification bias (as misclassification rates of histopathology and follow‐up will differ). Additional eligible reference standards included cancer registry follow‐up and 'expert opinion' with no histology or clinical follow‐up.
All of the above were considered eligible reference standards for establishing lesion final diagnoses (diagnostic accuracy) with the following caveats:
all study participants with a final diagnosis of the target disorder must have had a histological diagnosis, either subsequent to the application of the index test or after a period of clinical follow‐up, and
at least 50% of all participants with benign lesions must have had either a histological diagnosis or clinical follow‐up to confirm benignity.
To establish referral accuracy of teledermatology (i.e. the ability of the remote observer to approximate an in‐person diagnosis), the action recommended by the remote observer was compared with an in‐person 'expert opinion' reference standard (i.e. the diagnosis or management recommendation of an appropriately qualified clinician made face‐to‐face with the study participant).
Search methods for identification of studies
Electronic searches
The Information Specialist (SB) carried out a comprehensive search for published and unpublished studies. A single large literature search was conducted to cover all topics in the programme grant (see Appendix 1 for a summary of reviews included in the programme grant). This allowed for the screening of search results for potentially relevant papers for all reviews at the same time. We formulated a search combining disease-related terms with terms related to the test names, using both text words and subject headings. The search strategy was designed to capture studies evaluating tests for the diagnosis or staging of skin cancer. As the majority of records were related to the searches for tests for staging of disease, a filter using terms related to cancer staging and to accuracy indices was applied to the staging test search, to try to eliminate irrelevant studies, for example, those using imaging tests to assess treatment effectiveness. We screened a sample of 300 records that would have been missed by applying this filter and adjusted the filter to include potentially relevant studies. When piloted on MEDLINE, inclusion of the filter for the staging tests reduced the overall number of records by around 6000. The final search strategy, incorporating the filter, was subsequently applied to all bibliographic databases as listed below (Appendix 5). The final search result was cross-checked against the list of studies included in five systematic reviews; our search identified all but one of the studies, and this study was not indexed on MEDLINE. The Information Specialist devised the search strategy, with input from the Information Specialist from Cochrane Skin. No additional limits were used.
We searched the following bibliographic databases to 29 August 2016 for relevant published studies:
MEDLINE via Ovid (from 1946);
MEDLINE In‐Process & Other Non‐Indexed Citations via Ovid;
Embase via Ovid (from 1980).
We searched the following bibliographic databases to 30 August 2016 for relevant published studies:
the Cochrane Central Register of Controlled Trials (CENTRAL) 2016, Issue 7, in the Cochrane Library;
the Cochrane Database of Systematic Reviews (CDSR) 2016, Issue 8, in the Cochrane Library;
Cochrane Database of Abstracts of Reviews of Effects (DARE) 2015, Issue 2;
CRD HTA (Health Technology Assessment) database 2016, Issue 3; and
CINAHL (Cumulative Index to Nursing and Allied Health Literature via EBSCO from 1960).
We searched the following databases for relevant unpublished studies using a strategy based on the MEDLINE search:
CPCI (Conference Proceedings Citation Index), via Web of Science™ (from 1990; searched 28 August 2016); and
SCI Science Citation Index Expanded™ via Web of Science™ (from 1900, using the 'Proceedings and Meetings Abstracts' Limit function; searched 29 August 2016).
We searched the following trials registers using the search terms 'melanoma', 'squamous cell', 'basal cell' and 'skin cancer' combined with 'diagnosis':
Zetoc (from 1993; searched 28 August 2016).
The US National Institutes of Health Ongoing Trials Register (www.clinicaltrials.gov); searched 29 August 2016.
NIHR Clinical Research Network Portfolio Database (www.nihr.ac.uk/research‐and‐impact/nihr‐clinical‐research‐network‐portfolio/); searched 29 August 2016.
The World Health Organization International Clinical Trials Registry Platform (apps.who.int/trialsearch/); searched 29 August 2016.
We aimed to identify all relevant studies regardless of language or publication status (published, unpublished, in press, or in progress). We applied no date limits.
Searching other resources
We screened relevant systematic reviews identified by the searches for their included primary studies, and included any missed by our searches. We checked the reference lists of all included papers, and subject experts within the author team reviewed the final list of included studies. There was no electronic citation searching.
Data collection and analysis
Selection of studies
At least one review author (JDi or NC) screened titles and abstracts, and discussed and resolved any queries by consensus. A pilot screen of 539 MEDLINE references showed good agreement (89% with a kappa of 0.77) between screeners. Primary test accuracy studies and test accuracy reviews (for scanning of reference lists) of any test used to investigate suspected melanoma, BCC or cSCC were included at initial screening. Both a clinical reviewer (from one of a team of 12 clinician reviewers) and a methodologist reviewer (JDi or NC) independently applied inclusion criteria to all full‐text articles (Appendix 6), and resolved disagreements by consensus or by a third party (JDe, CD, HW and RM). We contacted authors of eligible studies when insufficient data were presented to allow for the construction of 2×2 contingency tables.
Data extraction and management
One clinical (as detailed above) and one methodological reviewer (JDi, NC or LFR) independently extracted data concerning details of the study design, participants, index test(s) or test combinations, and criteria for index test positivity, reference standards and data required to complete a 2×2 diagnostic contingency table for each index test using a piloted data extraction form. Data were extracted at all available index test thresholds. We resolved disagreements by consensus or by a third party (JDe, CD, HW, and RM). We entered data into Review Manager 5 (Review Manager 2014).
We contacted authors of included studies where information related to the target condition was missing (in particular to allow the differentiation of invasive cancers from 'in situ' variants) or where the diagnostic threshold was unclear. We contacted authors of conference abstracts published from 2013 to 2015 to ask whether full data were available. If there was no full paper, we marked conference abstracts as 'pending' and will revisit them in a future review update.
Dealing with multiple publications and companion papers
Where we identified multiple reports of a primary study, we maximised yield of information by collating all available data. Where there were inconsistencies in reporting or overlapping study populations, we contacted study authors for clarification in the first instance. If this contact with authors was unsuccessful, we used the most complete and up‐to‐date data source where possible.
Assessment of methodological quality
We assessed risk of bias and applicability of included studies using the QUADAS‐2 checklist (Whiting 2011), tailored to the review topic (see Appendix 7). We piloted the modified QUADAS‐2 tool on five included full‐text articles. One clinical reviewer (as detailed above) and one methodological reviewer (JDi, NC or LFR) independently assessed quality for the remaining studies; we resolved disagreements by consensus or by a third party (JDe, CD, HW and RM).
Statistical analysis and data synthesis
Our unit of analysis was the lesion rather than the participant. This is because, firstly, initial treatment in skin cancer is directed at the lesion rather than given systemically (thus it is important to be able to correctly identify cancerous lesions for each person) and, secondly, the lesion is the most common unit in which the primary studies reported data. Although there is a theoretical possibility of correlation of test errors when the same people contribute data for multiple lesions, most studies included very few people with multiple lesions and any potential impact on findings was likely to be very small, particularly in comparison with other concerns regarding risk of bias and applicability. For each analysis, we included only one dataset per study to avoid multiple counting of lesions.
For the diagnosis of melanoma, any BCCs or invasive cSCCs in the 'disease-negative' group that were identified as melanomas by teledermatology were considered as true-negative test results rather than as false positives, on the basis that excision of such lesions would be a positive outcome for the participants concerned. However, for the diagnosis of BCC, we considered any melanomas or cSCCs that were mistaken for BCCs as false-positive results. This decision was taken on the basis that the clinical management of a lesion considered to be a BCC might be quite different to that for a melanoma or cSCC and could potentially lead to a negative outcome for the participants concerned, for example if a treatment other than excision was initiated.
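To make the tallying of 2×2 tables under this rule concrete, the following minimal Python sketch (illustrative only; the lesion representation and category labels are hypothetical and not taken from the review) counts true positives, false negatives, false positives and true negatives for the melanoma target condition, treating a BCC or invasive cSCC labelled as melanoma by teledermatology as a true negative.

```python
from collections import Counter

def melanoma_two_by_two(lesions):
    """Tally a 2x2 table for the melanoma target condition.

    `lesions` is an iterable of (final_diagnosis, teledermatology_diagnosis)
    pairs, e.g. ("bcc", "melanoma"). Rule applied: a BCC or invasive cSCC in
    the disease-negative group that is called melanoma by teledermatology is
    counted as a true negative (excision would still benefit the patient),
    not as a false positive.
    """
    counts = Counter(TP=0, FN=0, FP=0, TN=0)
    for final_dx, td_dx in lesions:
        has_melanoma = final_dx == "melanoma"
        called_melanoma = td_dx == "melanoma"
        if has_melanoma:
            counts["TP" if called_melanoma else "FN"] += 1
        elif called_melanoma and final_dx in ("bcc", "cscc"):
            counts["TN"] += 1  # other skin cancer labelled melanoma: reclassified
        elif called_melanoma:
            counts["FP"] += 1  # benign lesion labelled melanoma
        else:
            counts["TN"] += 1
    return counts
```

For the BCC target condition, by contrast, no such reclassification would apply: a melanoma or cSCC called BCC stays a false positive, as described above.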
For preliminary investigations of the data, we plotted estimates of sensitivity and specificity on coupled forest plots and in receiver operating characteristic (ROC) space for each index test, target condition and reference standard combination. When meta-analysis was possible and there were at least four studies, we used a bivariate model to obtain summary estimates of sensitivity and specificity (Chu 2006; Reitsma 2005). When there were fewer than four studies and little or no heterogeneity in ROC space, we pooled sensitivity and specificity using fixed-effect logistic regression (Takwoingi 2017). We included data for face-to-face diagnosis only if reported in comparison to teledermatology diagnosis. A comparative meta-analysis of the accuracy of teledermatology versus face-to-face diagnosis using these direct (head-to-head) comparisons was not possible because there were too few studies. However, we tabulated results from the studies and estimated differences in sensitivity and specificity. Since these comparative studies did not report the cross-classified results of the two index tests in participants with and without a particular form of skin cancer, we were unable to compute CIs for the differences using methods that accounted for the paired nature of the data. Therefore, we assumed independence between the sensitivities and between the specificities of the two tests, and calculated 95% CIs for the differences using the Newcombe-Wilson method without continuity correction (Newcombe 1998). We performed analyses using Stata version 15 (Stata 2017).
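For illustration, a minimal Python sketch of this Newcombe-Wilson calculation (not the review's Stata code; it assumes independent samples, as described above, and uses only the standard library) is given below.

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    """Wilson score interval for a single proportion (no continuity correction)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def newcombe_diff_ci(x1, n1, x2, n2, conf=0.95):
    """Newcombe's hybrid Wilson score CI for p1 - p2 from independent samples."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, conf)
    l2, u2 = wilson_ci(x2, n2, conf)
    diff = p1 - p2
    lower = diff - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = diff + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return diff, lower, upper

# Example: specificities of 118/127 (teledermatology) versus 124/131 (face-to-face),
# as shown for Jolliffe 2001a in Table 4. Returns approximately
# (-0.0174, -0.0818, 0.0449), i.e. -1.74 (-8.18 to 4.49) percentage points.
print(newcombe_diff_ci(118, 127, 124, 131))
```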
Investigations of heterogeneity
We examined heterogeneity by visually inspecting forest plots of sensitivity and specificity, and summary ROC (SROC) plots. We were unable to perform meta‐regression to investigate potential sources of heterogeneity due to insufficient numbers of studies.
Sensitivity analyses
There were too few data to perform sensitivity analyses.
Assessment of reporting bias
Because of uncertainty about the determinants of publication bias for diagnostic accuracy studies and the inadequacy of tests for detecting funnel plot asymmetry (Deeks 2005), we did not perform tests to detect publication bias.
Results
Results of the search
The search identified 34,517 unique references, which we screened for inclusion on title and abstract. Of these, we reviewed 1051 full-text papers for eligibility for any one of the suite of reviews of tests to assist in the diagnosis of melanoma or keratinocyte skin cancer; 203 publications were included in at least one review in our series and 848 publications were excluded (see Figure 3; PRISMA flow diagram of search and eligibility results).
Of the 125 studies tagged as potentially eligible for this review of teledermatology, we included 22 publications. Exclusions from the review were primarily due to: lack of test accuracy data to complete a 2×2 contingency table (31 studies); ineligible populations (38 studies); ineligible target conditions (19 studies); not being accuracy studies (23 studies); inadequate sample size, that is fewer than five cases of skin cancer, fewer than five benign lesions or, for referral accuracy, fewer than five 'positive' cases identified by the expert diagnosis reference standard (seven studies); or ineligible reference standards (11 studies). Of the 31 studies for which a 2×2 table could not be constructed, 11 reported only agreement between observers or between teledermatology and the reference standard, and 14 had at least one piece of missing data. A list of the 103 studies with reasons for exclusion is provided in the Characteristics of excluded studies table, with a list of all studies excluded from the full series of reviews available as a separate pdf (please contact skin.cochrane.org for a copy).
We contacted the corresponding authors of 12 studies and asked them to supply further information to allow study inclusion or to clarify diagnostic thresholds or target condition definition. We received responses from four authors, allowing inclusion of all four studies in this review (Borve 2015; Mahendran 2005; Warshaw 2010b; Wolf 2013). One of the four authors was unable to provide all of the information requested, such that the data presented in the paper could only be partially included in this review (Warshaw 2010b).
The 22 included study publications reported on 22 cohorts of lesions and provided 96 datasets (individual 2×2 contingency tables). Sixteen studies (73%) including 4057 lesions and 879 malignant cases reported data for the diagnostic accuracy of teledermatology (Arzberger 2016; Borve 2015; Bowns 2006; Congalton 2015; Coras 2003; Ferrara 2004; Grimaldi 2009; Jolliffe 2001a; Kroemer 2011; Massone 2014; Moreno Ramirez 2005; Piccolo 2000; Piccolo 2004; Silveira 2014; Warshaw 2010b; Wolf 2013), five of which also reported data for the diagnostic accuracy of expert face-to-face clinical diagnosis (Coras 2003; Jolliffe 2001a; Kroemer 2011; Piccolo 2000; Warshaw 2010b). Six studies (27%) including 1449 lesions reported data for the referral accuracy of teledermatology (i.e. teledermatology diagnosis or management action as the index test versus expert face-to-face diagnosis or management action as the reference standard) (Jolliffe 2001b; Mahendran 2005; Manahan 2015; Oliveira 2002; Phillips 1998; Shapiro 2004); these studies included 270 'positive' cases as determined by the reference standard face-to-face decision. A cross-tabulation of studies by reported comparisons, target conditions and types of image used is provided in Table 2 and summary study details are presented in Appendix 8.
1. Cross‐tabulation of included studies against target condition assessed, by type of images used for teledermatology.
Study | Any skin cancer | Melanoma^b | BCC | cSCC | FTF dx^a vs histology | TD vs expert FTF (action)
--- | --- | --- | --- | --- | --- | ---
Studies of diagnostic accuracy – TD vs histology | | | | | |
Arzberger 2016 | Photo/Derm (excise or not) | — | — | — | — | —
Borve 2015 | Photo/Derm (dx as malignant/possibly malignant) | — | — | — | — | —
Bowns 2006 | — | Photo/Derm (dx as MM/MiS) | Photo/Derm (dx as BCC) | — | — | —
Congalton 2015 | — | Photo/Derm (excise or not) | — | — | — | —
Coras 2003 | — | Photo/Derm^c (dx as MM/MiS) | — | — | dx as MM | —
Ferrara 2004 | — | Derm only (dx as MM/MiS) | — | — | — | —
Grimaldi 2009 | — | Photo/Derm^c (excise or not) | — | — | GP dx as suspicious for malignancy (data not included) | —
Jolliffe 2001a | Photo only^c (dx as any SC) | < 5 MM | Photo only^c (dx as BCC) | — | dx as any SC; dx as BCC | —
Kroemer 2011 | Photo only^c; Derm only^c (dx as malignant) | Photo only^c; Derm only^c (dx as MM/MiS) | Photo only^c; Derm only^c (dx as BCC) | Photo only^c; Derm only^c (dx as cSCC) | dx as malignant; dx as MM/MiS; dx as BCC; dx as cSCC | —
Massone 2014 | Photo/Derm (dx as malignant) | — | — | — | — | —
Moreno Ramirez 2005 | Photo only (dx as malignant) | Photo only (dx as MM/MiS) | Photo only (dx as BCC) | — | — | —
Piccolo 2000 | — | Photo/Derm^c (dx as MM) | — | — | dx as MM | —
Piccolo 2004 | — | Derm only (dx as MM) | — | — | — | —
Silveira 2014 | Photo only (dx as malignant) | — | — | — | — | —
Warshaw 2010b | — | Photo only^c (dx as MM/MiS) | — | — | dx of MM/MiS | —
Wolf 2013 | — | Photo only (dx as atypical) | — | — | — | —
Studies of referral accuracy – TD vs expert FTF decision | | | | | |
Jolliffe 2001b | — | — | — | — | — | Clin only (refer or not)
Mahendran 2005 | — | — | — | — | — | Clin only (excise or not)
Manahan 2015 | — | — | — | — | — | Clin/Derm (see FTF)
Oliveira 2002 | — | — | — | — | — | Clin only (dx as malignant)
Phillips 1998 | — | — | — | — | — | Live link (dx as any SC; definitely/probably malignant; excise or not)
Shapiro 2004 | — | — | — | — | — | Clin only (excise or not)

BCC: basal cell carcinoma; Clin: clinical; cSCC: cutaneous squamous cell carcinoma; Derm: dermoscopic images; dx: diagnosis; FTF: face-to-face; GP: general practitioner; MiS: melanoma in situ (atypical intraepidermal melanocytic variants); MM: invasive melanoma; Photo: photographic images; SC: skin cancer; TD: teledermatology.
^a Face-to-face diagnosis by an expert/dermatologist unless otherwise stated.
^b All include melanoma in situ as disease positive apart from Piccolo 2000 and Piccolo 2004, which reported detection of invasive melanoma only.
^c Included a direct comparison with expert face-to-face diagnosis versus histology.
Studies were primarily prospective case series (18 studies; 82%), with two retrospective case series (9%) (Moreno Ramirez 2005; Piccolo 2004) and two case-control studies (9%) (Ferrara 2004; Wolf 2013); three of these retrospectively selected previously acquired images for prospective evaluation in the study (Ferrara 2004; Piccolo 2004; Wolf 2013). Studies were conducted in Europe (14 studies; 64%), including five studies from Austria (Arzberger 2016; Kroemer 2011; Massone 2014; Piccolo 2000; Piccolo 2004) and four from the UK (Bowns 2006; Jolliffe 2001a; Jolliffe 2001b; Mahendran 2005); North America (four studies; 18%) (Phillips 1998; Shapiro 2004; Warshaw 2010b; Wolf 2013); South America (two studies; 9%) (Oliveira 2002; Silveira 2014); or Australia (Manahan 2015) or New Zealand (Congalton 2015) (two studies; 9%). Eight (36.4%) studies included only pigmented (Coras 2003; Grimaldi 2009; Jolliffe 2001a; Moreno Ramirez 2005; Piccolo 2000; Wolf 2013) or melanocytic (Ferrara 2004; Piccolo 2004) lesions (with Piccolo 2004 restricted to acral lesions only); the remainder included any suspicious lesion.
Ten studies were based in primary care or community-based settings. Seven studies acquired lesion images in primary care (Borve 2015; Grimaldi 2009; Mahendran 2005; Massone 2014; Moreno Ramirez 2005; Oliveira 2002; Shapiro 2004), two studies were in a community skin cancer screening outreach programme with image acquisition for 'remote' assessment by specialists in a secondary care setting (Silveira 2014) or using live transmission video-conferencing (Phillips 1998), and one study recruited participants at high risk of melanoma who took images of their own lesions that they 'did not like the look of' using a smartphone (Manahan 2015). There were no studies conducted in high street pharmacy-type settings. Five of the 10 studies reported the diagnostic accuracy of the teledermatology image-based assessment (Borve 2015; Grimaldi 2009; Massone 2014; Moreno Ramirez 2005; Silveira 2014), and five examined referral accuracy, comparing the teledermatology assessment against specialist in-person assessment (Mahendran 2005; Manahan 2015; Oliveira 2002; Phillips 1998; Shapiro 2004). Of these studies, nine were prospective in design and one was retrospective (Moreno Ramirez 2005).
Twelve studies acquired lesion images at a secondary care dermatology or pigmented lesion clinic (Arzberger 2016; Congalton 2015; Ferrara 2004; Jolliffe 2001a; Jolliffe 2001b; Kroemer 2011; Piccolo 2000; Piccolo 2004; Warshaw 2010b; Wolf 2013), from a private dermatology practice (Coras 2003), or in a medical imaging unit (Bowns 2006), at the time of the patient consultation, and a second dermatologist made remote image-based diagnoses. Of these studies, nine were prospective in design, two studies retrieved routinely collected lesion images for prospective 'teledermatology' examination (Piccolo 2004; Wolf 2013), and one did not clearly report how lesion images were acquired (Ferrara 2004). Six studies reported data only for the diagnostic accuracy of specialist image-based assessment (Arzberger 2016; Bowns 2006; Congalton 2015; Ferrara 2004; Piccolo 2004; Wolf 2013), five compared the diagnostic accuracy of image-based diagnosis to that of a dermatologist's face-to-face diagnosis (Coras 2003; Jolliffe 2001a; Kroemer 2011; Piccolo 2000; Warshaw 2010b), and one examined referral accuracy, comparing specialist image-based assessment against specialist in-person assessment (Jolliffe 2001b).
Ten evaluations reported data using photographic images for teledermatology consultations (Jolliffe 2001a; Jolliffe 2001b; Kroemer 2011; Mahendran 2005; Moreno Ramirez 2005; Oliveira 2002; Shapiro 2004; Silveira 2014; Warshaw 2010b; Wolf 2013), nine for a combination of clinical and dermoscopic images (Arzberger 2016; Borve 2015; Bowns 2006; Congalton 2015; Coras 2003; Grimaldi 2009; Manahan 2015; Massone 2014; Piccolo 2000), and three reported data for diagnosis using dermoscopic images only (Ferrara 2004; Kroemer 2011; Piccolo 2004). The final study reported referral accuracy data for live‐link teledermatology using three cameras: a full‐body camera, a lens for close‐up views and a magnifying lens to allow magnified examination and examination with polarised light (Phillips 1998). The Warshaw 2010b paper also reported accuracy data for diagnosis using a combination of photographic and dermoscopic images; however, we were unable to obtain underlying 2×2 data to allow the inclusion of this aspect of the study.
Images for store and forward teledermatology were obtained using a mobile phone camera alone (Kroemer 2011) or coupled with a dermatoscope (Borve 2015; Kroemer 2011; Manahan 2015); using still images from a video camera (Jolliffe 2001a; Jolliffe 2001b); using a combination of film and digital images (Ferrara 2004); or using a digital camera (13 studies) to acquire photographs (Mahendran 2005; Moreno Ramirez 2005; Oliveira 2002; Shapiro 2004; Silveira 2014; Warshaw 2010b; Wolf 2013), dermoscopic images (Ferrara 2004; Piccolo 2004), or both (Arzberger 2016; Congalton 2015; Coras 2003; Grimaldi 2009; Massone 2014; Piccolo 2000). Bowns 2006 acquired photographic and dermoscopic images in a medical photography unit (equipment not described). See the Characteristics of included studies table for details of the digital cameras and mobile phones used. Eighteen (82%) studies provided observers with additional clinical information along with the digital image of the lesion, three did not clearly report whether additional clinical information was provided (Arzberger 2016; Grimaldi 2009; Manahan 2015), and one did not provide any further participant‐related information (Wolf 2013). The remote observers were dermatologists in 19 (86%) studies, oncologists in one, and a mixture of dermatologists and other healthcare professionals (e.g. oncologists, plastic surgeons) in two (9%).
In the five studies providing a direct comparison of teledermatology with in-person evaluation, the dermatologist based the in-person diagnosis on visual inspection of the lesion, with dermoscopy used for some lesions (proportions not reported) (Coras 2003; Jolliffe 2001a; Kroemer 2011; Warshaw 2010b) or for all lesions (Piccolo 2000). One study reported using pattern analysis (Coras 2003); three studies did not specify any algorithm used to aid diagnosis. The direct comparison of teledermatology with in-person evaluation by a GP provided in Grimaldi 2009 was not included in this review, as it is not directly relevant to teledermatology in this context.
For the 16 studies reporting diagnostic accuracy, 11 (69%) used histology alone as the reference standard (Arzberger 2016; Congalton 2015; Coras 2003; Ferrara 2004; Jolliffe 2001a; Kroemer 2011; Piccolo 2000; Piccolo 2004; Silveira 2014; Warshaw 2010b; Wolf 2013), three (19%) used both histology and expert opinion for some benign lesions (Borve 2015; Bowns 2006; Massone 2014), and two (12%) used histology and follow-up of clinically benign-appearing lesions (Grimaldi 2009; Moreno Ramirez 2005). The median number of study participants was 77 (interquartile range (IQR) 4p to 182) (reported in 10 studies), and the median number of lesions was 116 (IQR 45 to 240). The median prevalence of skin cancer was 30% (IQR 21% to 45%) and the median percentage of men was 46% (IQR 35% to 47%) (reported in nine studies). Where reported (eight studies), the median age was 52 years (IQR 43 to 65). Four of the 16 diagnostic accuracy studies did not report the age range of included participants (Bowns 2006; Coras 2003; Grimaldi 2009; Wolf 2013), four restricted inclusion to adults only (Borve 2015; Massone 2014; Silveira 2014; Warshaw 2010b), and eight included participants under the age of 16 years (Arzberger 2016; Congalton 2015; Ferrara 2004; Jolliffe 2001a; Kroemer 2011; Moreno Ramirez 2005; Piccolo 2000; Piccolo 2004); however, data were not presented in a way that allowed children to be excluded from the analyses.
For the six studies reporting referral accuracy, the face-to-face reference standard diagnosis was made either by the same dermatologist who undertook the teledermatology assessment, within one to two weeks (Mahendran 2005; Oliveira 2002) or with a gap of several months between assessments (Jolliffe 2001b); by a dermatology trainee under the supervision of the teledermatologist (Manahan 2015); or by a different dermatologist (Phillips 1998; Shapiro 2004). The median number of study participants was 50 (IQR 49 to 72) (reported in all six studies), and the median number of lesions was 107 (IQR 94 to 253). The reported decisions of the face-to-face expert included diagnosis of malignancy (Oliveira 2002; Phillips 1998), decision to excise (Mahendran 2005; Phillips 1998; Shapiro 2004), or decision to see face-to-face (Jolliffe 2001b; Manahan 2015); the median overall percentage of 'positive' expert diagnoses was 15% (IQR 10% to 41%). One study reported a median participant age of 46.7 years (Phillips 1998); two studies reported age ranges of 8 to 94 years (Jolliffe 2001b) and 50 to 64 years (Manahan 2015); and four did not report the age range (Mahendran 2005; Oliveira 2002; Phillips 1998; Shapiro 2004). The percentage of men ranged from 15.7% (Phillips 1998) to 49% (Manahan 2015) (reported in three studies).
Methodological quality of included studies
The overall methodological quality of included studies is summarised in Figure 4 and Figure 5. At least half of studies were at high or unclear risk of bias for the participant selection, reference standard, and flow and timing domains, while the majority were at low risk for the index test. The applicability of study findings was of high or unclear concern for the majority of studies in all domains assessed.
For participant selection, seven (32%) studies were at low risk of bias (Borve 2015; Bowns 2006; Grimaldi 2009; Jolliffe 2001b; Mahendran 2005; Oliveira 2002; Warshaw 2010b), and five (23%) studies were at high risk (Congalton 2015; Ferrara 2004; Massone 2014; Moreno Ramirez 2005; Wolf 2013). Four studies applied inappropriate participant exclusions (such as excluding poor‐quality images, or difficult to diagnose lesions) (Congalton 2015; Massone 2014; Moreno Ramirez 2005; Wolf 2013); and two studies used a case‐control type study design with separate selection of malignant cases and lesions with benign diagnosis (Ferrara 2004; Wolf 2013). Fourteen studies were at high concern for applicability of participants due to recruitment from secondary care or specialist clinics rather than from the primary setting in which teledermatology is more likely to be used (12/22; Arzberger 2016; Bowns 2006; Congalton 2015; Coras 2003; Jolliffe 2001a; Jolliffe 2001b; Kroemer 2011; Moreno Ramirez 2005; Piccolo 2000; Piccolo 2004; Warshaw 2010b; Wolf 2013), and/or due to inclusion of multiple lesions per participant (6/22; Congalton 2015; Grimaldi 2009; Jolliffe 2001b; Kroemer 2011; Manahan 2015; Warshaw 2010b). Two studies were at low concern for participant applicability (Oliveira 2002; Phillips 1998).
For the index test, all of the teledermatology assessments were at low risk of bias apart from Phillips 1998, in which it was unclear whether the teledermatology diagnoses were made blinded to the decision of the face-to-face clinician (reference standard). One study was judged of low concern for applicability of teledermatology (Massone 2014), and 12 (45%) studies were of high concern (Arzberger 2016; Bowns 2006; Congalton 2015; Coras 2003; Ferrara 2004; Jolliffe 2001a; Jolliffe 2001b; Kroemer 2011; Piccolo 2000; Piccolo 2004; Warshaw 2010b; Wolf 2013), due to images being acquired by dermatologists based in secondary care settings or from pigmented lesion clinic databases rather than in primary care. For the five studies reporting direct comparisons of teledermatology with face-to-face expert clinical diagnoses, two (40%) were at low risk of bias (Coras 2003; Warshaw 2010b) and three (60%) were judged unclear, as the thresholds used were not clearly prespecified (Jolliffe 2001a; Kroemer 2011; Piccolo 2000). For the comparison between tests, there was no blinding between store-and-forward teledermatology and face-to-face clinical diagnosis in Jolliffe 2001a; the remaining four comparative studies did not describe any blinding (Coras 2003; Kroemer 2011; Piccolo 2000; Warshaw 2010b). One (20%) study was of low concern for applicability of the face-to-face diagnosis (Coras 2003), and four were of unclear concern for applicability, as the thresholds used for diagnosis were not reported clearly enough to allow replication of methods (Jolliffe 2001a; Kroemer 2011; Piccolo 2000; Warshaw 2010b).
For the reference standard domain, 10 (45%) studies were at low risk of bias (Arzberger 2016; Congalton 2015; Coras 2003; Ferrara 2004; Jolliffe 2001a; Piccolo 2000; Piccolo 2004; Silveira 2014; Warshaw 2010b; Wolf 2013), and 12 (55%) were at high risk either because they were referral accuracy studies using only expert face‐to‐face diagnosis as the reference standard (Jolliffe 2001b; Mahendran 2005; Manahan 2015; Oliveira 2002; Phillips 1998; Shapiro 2004), or because they were diagnostic accuracy studies that did not meet our criteria for an adequate reference standard (i.e. greater than 80% of lesions with histology and up to 20% with clinical follow‐up; see Appendix 7) (Borve 2015; Bowns 2006; Grimaldi 2009; Kroemer 2011; Massone 2014; Moreno Ramirez 2005). Only two studies clearly reported blinding of the reference standard diagnosis to the teledermatology assessment (Phillips 1998; Shapiro 2004), and two did not implement any blinding (Mahendran 2005; Oliveira 2002); these were all referral accuracy studies. Six studies were of low concern for the applicability of the reference standard, including three of the 16 diagnostic accuracy studies that clearly reported the level of experience of the histopathologist (Ferrara 2004; Piccolo 2004; Warshaw 2010b), and three of the six referral accuracy studies that reported the expertise of the face‐to‐face reference standard diagnosis (Jolliffe 2001b; Mahendran 2005; Shapiro 2004).
For flow and timing of participants, two referral accuracy studies were at low risk of bias (Jolliffe 2001b; Phillips 1998), and 17 (77%) were at high risk of bias; either because they did not use the same reference standard for all participants (Borve 2015; Bowns 2006; Grimaldi 2009; Massone 2014; Moreno Ramirez 2005), or they did not include all study participants in the final analysis (Arzberger 2016; Congalton 2015; Coras 2003; Jolliffe 2001a; Kroemer 2011; Mahendran 2005; Manahan 2015; Massone 2014; Moreno Ramirez 2005; Oliveira 2002; Shapiro 2004; Silveira 2014; Warshaw 2010b; Wolf 2013). Fourteen (64%) studies did not clearly report the interval between reference standard and index test (Arzberger 2016; Congalton 2015; Coras 2003; Ferrara 2004; Grimaldi 2009; Jolliffe 2001a; Kroemer 2011; Manahan 2015; Massone 2014; Moreno Ramirez 2005; Piccolo 2000; Piccolo 2004; Warshaw 2010b; Wolf 2013).
Findings
Study results are summarised below according to target condition, with results of meta-analyses in Table 3 and summary study details in Appendix 8. Lack of data, and between-study variation in populations, approaches to teledermatology and target conditions considered, limited the pooled analyses that could be undertaken.
2. Summary estimates of sensitivity and specificity for teledermatology.
Index test,^a target condition | Studies | Cases/lesions | Summary sensitivity (95% CI) % | Summary specificity (95% CI) %
--- | --- | --- | --- | ---
Teledermatology photographic image, any skin cancer | 4 | 452/717 | 94.9 (90.1 to 97.4) | 84.3 (48.5 to 96.8)
Teledermatology photographic/dermoscopic image, MM + MiS | 4 | 93/664 | 85.4 (68.3 to 94.1) | 91.6 (81.1 to 96.5)
Teledermatology photographic image, BCC | 3 | 62/301 | 93.5 (84.0 to 97.6) | 95.8 (92.4 to 97.7)

BCC: basal cell carcinoma; CI: confidence interval; MM + MiS: invasive melanoma and atypical intraepidermal melanocytic variants.
^a Reference standard was histology for all comparisons.
1. Target condition: detection of any skin cancer
Seven studies with 1588 lesions and 638 cases of skin cancer reported the diagnostic accuracy of teledermatology assessment for the detection of the primary target condition of any skin cancer. Forest plots of study data are provided in Figure 6 with results of meta‐analysis in Figure 7.
Four studies compared diagnosis based on photographic images to histology (Figure 6). In three studies, sensitivities for the correct diagnosis of malignancy ranged from 93% (95% CI 77% to 99%) to 100% (95% CI 75% to 100%) and specificities from 88% (95% CI 77% to 95%) to 96% (95% CI 82% to 100%) (Jolliffe 2001a; Kroemer 2011; Moreno Ramirez 2005). All three studies reported a cross-tabulation of lesion final diagnoses against the diagnosis on teledermatology, such that data could be extracted for the detection of any malignancy regardless of any misclassification of one skin cancer as another (e.g. a BCC diagnosed as a melanoma or vice versa). Similarly, sensitivity was high in Silveira 2014 (96%, 95% CI 94% to 98%), but specificity was an outlier at 25% (95% CI 14% to 39%); data here were presented for the detection of 'malignant' versus benign cases with no breakdown of individual lesion diagnoses given. The low specificity in Silveira 2014 was likely due to the recruitment of participants with lesions deemed to be highly clinically suspicious after visual inspection of the lesion during a community screening programme; the prevalence of malignancy was 87.5%, with only 52 benign cases in a sample of 416 lesions.
The pooled result across the four studies indicated a summary sensitivity of 94.9% (95% CI 90.1% to 97.4%) and summary specificity of 84.3% (95% CI 48.5% to 96.8%) based on 717 lesions and 452 cases of skin cancer (Table 3; Figure 7).
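As a purely illustrative calculation (not a formal summary-of-findings estimate from the review), applying these summary estimates to a hypothetical cohort of 1000 lesions at the 30% median prevalence of skin cancer reported above gives the following approximate numbers:

```python
# Illustrative only: consequences of the pooled estimates in a hypothetical
# cohort of 1000 lesions at 30% prevalence (median prevalence reported above).
n, prevalence = 1000, 0.30
sensitivity, specificity = 0.949, 0.843

malignant = n * prevalence                   # 300 malignant lesions
benign = n - malignant                       # 700 benign lesions
detected = malignant * sensitivity           # ~285 cancers correctly identified
missed = malignant - detected                # ~15 cancers missed
correctly_reassured = benign * specificity   # ~590 benign lesions correctly identified
false_alarms = benign - correctly_reassured  # ~110 benign lesions flagged as malignant
print(round(detected), round(missed), round(correctly_reassured), round(false_alarms))
```

Given the wide confidence interval around the summary specificity (48.5% to 96.8%), the number of benign lesions incorrectly flagged could plausibly range from roughly 22 to 360 per 1000 in such a cohort.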
One study also reported diagnosis based on dermoscopic images only versus histology (Kroemer 2011). Sensitivity was lower than for assessment using photographic images (85%, 95% CI 71% to 94%, with dermoscopic images versus 93%, 95% CI 82% to 99%, with photographic images), with similar specificities (Figure 6).
Three studies with 928 lesions and 215 cases of skin cancer compared image‐based diagnosis based on both clinical and dermoscopic images to histology (Figure 6). Each study used a slightly different threshold to decide test positivity: the correct classification of lesions as malignant versus benign (Massone 2014), lesions considered malignant or possibly malignant (Borve 2015), or lesion recommended for excision (Arzberger 2016). Sensitivities were 100% in all three studies. Specificities ranged from 25% (95% CI 5% to 57%) (Arzberger 2016) to 92% (95% CI 74% to 99%) (Massone 2014). Massone 2014 was a primary care‐based study for which accuracy data could only be estimated for a subgroup of 32 of the original 962 lesions. Arzberger 2016 included participants at particularly high risk for melanoma, while Borve 2015 included large proportions of seborrhoeic keratosis (19%) and actinic keratosis (9%) in the disease‐negative group. These factors may have made differentiating a malignant case from a benign one more challenging.
We undertook no statistical pooling for these studies due to variation in threshold and heterogeneity in specificities.
Two studies compared remote image‐based assessment with in‐person diagnosis by a dermatologist for the detection of any skin cancer using the same diagnostic thresholds as for the teledermatology decision (Jolliffe 2001a; Kroemer 2011). Jolliffe 2001a reported 100% (95% CI 75% to 100%) sensitivity for both assessments, while Kroemer 2011 reported 100% (95% CI 92% to 100%) sensitivity for in‐person diagnosis compared to 93% (95% CI 82% to 99%) for diagnosis using photographic images and 85% (95% CI 71% to 94%) for dermoscopic image‐based assessment (Table 4; Figure 6). Both studies reported only marginal differences in specificity between approaches.
3. Direct comparisons of teledermatology with face‐to‐face diagnosis of melanoma and other types of skin cancer.
Target condition | Teledermatology index test | Study | TD sensitivity % (TP/cases) | FTF sensitivity % (TP/cases) | Difference (95% CI) | TD specificity % (TN/non-cases) | FTF specificity % (TN/non-cases) | Difference (95% CI)
--- | --- | --- | --- | --- | --- | --- | --- | ---
Any skin cancer | Photographic image | Jolliffe 2001a | 100 (13/13) | 100 (13/13) | 0.00 (-2.28 to 2.28) | 92.9 (118/127) | 94.7 (124/131) | -1.74 (-8.18 to 4.49)
Any skin cancer | Photographic image | Kroemer 2011 | 93 (43/46) | 100 (46/46) | -0.06 (-0.17 to 0.02) | 88 (51/58) | 90 (52/58) | -0.02 (-0.14 to 0.10)
Any skin cancer | Dermoscopic image | Kroemer 2011 | 85 (39/46) | 100 (46/46) | -0.15 (-0.28 to -0.04) | 91 (53/58) | 90 (52/58) | 0.02 (-0.13 to 0.09)
Invasive melanoma or atypical intraepidermal melanocytic variants | Photographic + dermoscopic image | Coras 2003 | 81 (13/16) | 88 (14/16) | -0.06 (-0.32 to 0.20) | 93 (27/29) | 93 (27/29) | 0.00 (-0.16 to 0.16)
Invasive melanoma or atypical intraepidermal melanocytic variants | Dermoscopic image | Kroemer 2011 | 100 (5/5) | 100 (5/5) | 0.00 (-0.43 to 0.43) | 98 (97/99) | 99 (98/99) | -0.01 (-0.06 to 0.04)
Invasive melanoma or atypical intraepidermal melanocytic variants | Photographic image | Warshaw 2010b | 59 (24/41) | 73 (30/41) | -0.15 (-0.33 to 0.06) | 41 (604/1473) | 63 (930/1473) | -0.22 (-0.26 to -0.19)^a
Invasive melanoma | Photographic/dermoscopic image | Piccolo 2000 | 81.8 (9/11) | 72.7 (8/11) | 9.10 (-25.2 to 41.2) | 100 (32/32) | 96.9 (31/32) | 3.13 (-7.90 to 15.7)
Basal cell carcinoma | Photographic image | Jolliffe 2001a | 100 (9/9) | 100 (9/9) | 0.00 (-29.9 to 29.9) | 97.0 (127/131) | 97.8 (132/135) | -0.83 (-5.60 to 3.68)

CI: confidence interval; FTF: expert face-to-face diagnosis; TD: teledermatology; TN: true negatives; TP: true positives. Differences are shown as extracted: in percentage points for Jolliffe 2001a and Piccolo 2000, and as proportions for the remaining comparisons.
^a Denotes a statistically significant difference.
2. Target condition: invasive melanoma or atypical intraepidermal melanocytic variants
Nine studies with 2510 lesions and 206 melanomas reported the diagnostic accuracy of teledermatology assessment for the detection of invasive cutaneous melanoma or atypical intraepidermal melanocytic variants (Figure 8); only two of these also reported data for the detection of any skin cancer (Table 3).
In four studies with 1834 lesions and 106 melanomas comparing diagnosis based on photographic images to histology, sensitivities ranged from 59% (95% CI 42% to 74%) to 100% (95% CI 48% to 100%) and specificities from 30% (95% CI 22% to 40%) to 100% (95% CI 93% to 100%) (Kroemer 2011; Moreno Ramirez 2005; Warshaw 2010b; Wolf 2013). In three studies, the data extracted were for the correct diagnosis of melanoma, whereas Wolf 2013 classified lesions as 'atypical' or 'typical' rather than reporting the correct diagnosis of melanoma. This difference in the diagnostic decision recorded is likely to account for the low specificity observed in Wolf 2013.
The relatively low sensitivity and specificity observed in Warshaw 2010b are difficult to explain but may be related to differences in population characteristics between studies. Two studies restricted inclusion to pigmented lesions considered clinically atypical by at least one dermatologist (Wolf 2013) or meeting explicit criteria that might suggest a higher risk for melanoma (Moreno Ramirez 2005); Wolf 2013 also selected specific lesion types and excluded those with an equivocal diagnosis. Kroemer 2011 included participants with any lesion type who were either self-referred or referred by a local doctor to a general dermatology clinic. Warshaw 2010b also included any lesion type (pigmented or non-pigmented) from participants who required or requested removal of one or more skin lesions (denoted 'high risk' by the study authors) or from people who were referred by non-dermatology healthcare providers for specialist assessment (denoted as 'lower risk'); furthermore, lesions in 30 histopathological categories with fewer than 25 lesions each were excluded (171 lesions).
Two studies compared dermoscopic image-based diagnosis to histology (Ferrara 2004; Kroemer 2011). Ferrara 2004 reported a sensitivity of 71% (95% CI 29% to 96%) and a specificity of 60% (95% CI 15% to 95%) for the correct differentiation of seven melanomas from five benign lesions. Kroemer 2011 reported very similar sensitivity and specificity for the correct diagnosis of melanoma using only dermoscopic images to their result using non-magnified photographic images: sensitivities with both approaches were 100% (95% CI 48% to 100%), while specificities were 98% (95% CI 96% to 100%) using photographic images compared to 97% (95% CI 91% to 99%) for dermoscopic images. Data were not presented for diagnosis using both clinical and dermoscopic images; however, the study reported that "re-evaluation of both image types in (discordant cases) did not improve the diagnostic accuracy of teleconsultations" (Kroemer 2011).
Four studies with 664 lesions and 93 cases of melanoma or atypical intraepidermal melanocytic variants compared teledermatology based on both clinical and dermoscopic images to histology (Bowns 2006; Congalton 2015; Coras 2003; Grimaldi 2009). Summary estimates of sensitivity were 85.4% (95% CI 68.3% to 94.1%) and specificity 91.6% (95% CI 81.1% to 96.5%) (Table 3; Figure 9).
Across the 10 teledermatology datasets, the number of melanomas missed ranged from 0 (two datasets, both from Kroemer 2011) to 17 (Warshaw 2010b).
Three studies in this group compared teledermatology assessment of images to face-to-face diagnosis by a dermatologist (Coras 2003; Kroemer 2011; Warshaw 2010b) (Table 4). In Warshaw 2010b, the diagnostic accuracy of teledermatology diagnosis of melanoma using photographic images and patient history was considerably lower than that of an in-person dermatologist diagnosis (using visual inspection with or without dermoscopy, as determined by the individual clinician); data from the author showed that sensitivity was 15% lower for teledermatology assessment (95% CI -33% to 6%) and specificity was 22% lower (95% CI -26% to -19%) (Table 4). The accuracy of the expert face-to-face diagnosis was nevertheless relatively low, with sensitivity of 73% (95% CI 57% to 86%) and specificity of 63% (95% CI 61% to 66%) (Figure 8). The two studies comparing teledermatology diagnosis using both macro and dermoscopic images demonstrated only marginal differences between the two approaches (Coras 2003; Kroemer 2011) (Table 4).
3. Target condition: invasive cutaneous melanoma
Piccolo 2000, with 43 lesions selected for their 'diagnostic difficulty' (11 cases of melanoma), reported the diagnostic accuracy of teledermatology assessment for the detection of invasive melanoma, and Piccolo 2004 reported the differentiation of six acral melanomas from 71 benign acral lesions (Figure 10). In Piccolo 2000, sensitivity for store-and-forward teledermatology assessment was 82% (95% CI 48% to 98%) and specificity was 100% (95% CI 89% to 100%), compared with in-person dermatologist assessment of the same lesions, for which sensitivity was 73% (95% CI 39% to 94%) and specificity 97% (95% CI 84% to 100%) (one invasive melanoma missed in the face-to-face encounter was identified using teledermatology). Similar teledermatology accuracy (based on a consensus of six out of 11 observers) was obtained for acral lesions in Piccolo 2004 using only dermoscopic images: observed sensitivity was 83% (95% CI 36% to 100%) and specificity was 96% (95% CI 88% to 99%).
4. Target condition: BCC
Four studies reported the diagnostic accuracy of teledermatology assessment for the detection of BCC (Figure 11; Figure 12).
Three evaluations of photographic images for 301 lesions with 62 cases of BCC (Jolliffe 2001a; Kroemer 2011; Moreno Ramirez 2005), produced summary estimates of sensitivity of 93.5% (95% CI 84.0% to 97.6%) and specificity 95.8% (95% CI 92.4% to 97.7%) (Table 3; Figure 12). Four BCCs were missed in two studies (Kroemer 2011; Moreno Ramirez 2005).
One study reported lower sensitivity using dermoscopic images (80%, 95% CI 61% to 92%) compared to photographic images (90%, 95% CI 73% to 98%), due to an additional two BCCs being mistaken for actinic keratosis (Kroemer 2011). A further study reporting data only for teledermatology using clinical and dermoscopic images also reported low sensitivity for BCC of 66% (95% CI 46% to 82%) (Bowns 2006). Both studies reported specificities of 93% and over (Figure 11).
Two studies provided a comparison with expert face‐to‐face assessment (Table 4). Kroemer 2011 reported higher sensitivity and specificity in the face‐to‐face assessments and Jolliffe 2001a reported almost identical sensitivity and specificity estimates from the two approaches.
5. Target condition: cutaneous squamous cell carcinoma
Kroemer 2011 reported accuracy for the detection of cSCC in 104 lesions with 10 cases of cSCC. Sensitivity was 90% (95% CI 55% to 100%) (one cSCC missed) both for diagnosis based on photographic images and for the face-to-face assessments, compared with 60% (95% CI 26% to 88%) (four cSCCs missed) for remote assessment based on dermoscopic images (Figure 13). Specificities were over 98% for all three approaches to diagnosis.
6. Referral accuracy
Six studies gave information on diagnostic decision making by teledermatology consultants compared to expert face‐to‐face decisions (as the reference standard) (Figure 14).
Four studies reported data for store-and-forward teledermatology using photographic images: for the diagnosis of malignancy (Oliveira 2002), for the decision to excise a lesion (Mahendran 2005; Shapiro 2004), for the decision to refer versus not refer (Jolliffe 2001b), or for the decision to excise or follow up at a later date (Mahendran 2005). Jolliffe 2001b reported data both for teledermatology by a dermatologist and by a dermatology registrar. It was not possible to pool results across these studies due to heterogeneity in the teledermatology decisions assessed.
Two studies found perfect or almost perfect agreement between teledermatology and face‐to‐face consultation (sensitivities 100% and specificities 98% to 100%; Oliveira 2002; Shapiro 2004), while Mahendran 2005 also reported 100% sensitivity both for the decision to excise a lesion and the decision to excise or follow‐up (Figure 14). In Jolliffe 2001b, the sensitivity of teledermatology by the dermatologist was 69% (95% CI 61% to 77%), with 44 lesions recommended for face‐to‐face consultation 'missed' by the remote observer, compared to 92% for the registrar's teledermatology assessment (12 lesions 'missed').
Specificities were more variable: Mahendran 2005 reported 69% (95% CI 55% to 81%) for the decision to excise a lesion and 57% (95% CI 39% to 73%) for the decision to excise or follow up a lesion, while Jolliffe 2001b reported 85% (95% CI 82% to 87%) for the dermatologist and 67% (95% CI 63% to 70%) for the dermatology registrar for the decision to refer a lesion. The number of lesions recommended for some action by the teledermatologist that were not recommended for action by the face-to-face expert in these studies ranged from 16 (Mahendran 2005) to 217 (Jolliffe 2001b).
Manahan 2015 reported the sensitivity and specificity of the teledermatologist's decision to recommend a lesion for face‐to‐face consultation based on macro and dermoscopic images compared to the same recommendation by the face‐to‐face dermatologist (301 lesions; 35 recommended for face‐to‐face consultation): the resulting sensitivity was 91% (95% CI 77% to 98%) with three lesions 'missed' and specificity was 89% (95% CI 85% to 93%) with 28 lesions recommended for a face‐to‐face visit that were not selected by the in‐person dermatologist.
Finally, Phillips 1998 reported data for the accuracy of live-link teledermatology using videoconferencing compared to a dermatologist's face-to-face decision (as the reference standard) for 107 lesions. Data were reported at three different thresholds: the correct diagnosis of a skin cancer (melanoma, BCC or cSCC), the classification of a lesion as definitely or probably malignant, and the decision to biopsy a lesion. Sensitivities were 67% (95% CI 22% to 96%) for correct diagnosis, 60% (95% CI 15% to 95%) for definitely or probably malignant, and 82% (95% CI 48% to 98%) for the decision to biopsy a lesion. Specificities were 96% (95% CI 90% to 99%) both for the correct diagnosis of a skin cancer and for lesions classified as definitely or probably malignant, and 86% (95% CI 78% to 93%) for the decision to biopsy a lesion.
Investigations of heterogeneity
We were unable to undertake planned formal investigations of heterogeneity due to insufficient number of studies.
Discussion
Although in some countries teledermatology services may provide recommendations to allow skin cancer management (including biopsy or excision) in a primary care setting, in the UK teledermatology consultations for the most part ensure that people with potentially malignant skin lesions are appropriately referred from a generalist (usually primary care) setting for specialist assessment and treatment. Therefore, the primary objective of this review was to assess the accuracy of teledermatology for the detection of any skin cancer in adults, comparing its accuracy with that of an in‐person specialist diagnosis.
Summary of main results
We included 22 studies: 16 considering the diagnostic accuracy of image-based teledermatology (five in comparison to an in-person assessment), and six examining the referral accuracy of teledermatology assessment. Key results are presented in Table 1. The overall risk of bias was rated as high or unclear for participant selection, reference standard, and participant flow and timing in at least half of all studies; the majority were considered at low risk of bias for the index test. The applicability of study findings was of high or unclear concern for most studies in all domains assessed, due to the recruitment of study participants from secondary care settings or specialist clinics rather than from the primary or community-based settings in which teledermatology is more likely to be used, and due to the acquisition of lesion images by dermatologists or in specialist imaging units rather than by primary care clinicians.
Seven studies addressed our primary objective of the detection of any skin cancer. For the correct diagnosis of lesions as malignant, summary sensitivity from four studies using photographic images was 94.9% (95% CI 90.1% to 97.4%) and summary specificity 84.3% (95% CI 48.5% to 96.8%). Individual study estimates using dermoscopic images or a combination of photographic and dermoscopic images generally suggested similarly high sensitivities with highly variable specificities. Limited comparative data suggested similar diagnostic accuracy between teledermatology assessment and in‐person diagnosis by a dermatologist; however, data were too scarce to draw firm conclusions.
For the detection of invasive melanoma or atypical intraepidermal melanocytic variants, both sensitivities and specificities were very variable, with reported diagnostic thresholds including the correct diagnosis of melanoma, classification of lesions as 'atypical' or 'typical', and the decision to refer or to excise a lesion. For teledermatology using photographic images, sensitivities ranged from 59% (95% CI 42% to 74%) to 100% (95% CI 48% to 100%) and specificities from 30% (95% CI 22% to 40%) to 100% (95% CI 93% to 100%) in four studies. For teledermatology using both photographic and dermoscopic images summary estimates for another four studies were 85.4% (95% CI 68.3% to 94.1%) for sensitivity and 91.6% (95% CI 81.1% to 96.5%) for specificity. The number of melanomas missed ranged from 0 to 17.
Referral accuracy data comparing teledermatology against a face-to-face reference standard were based on a number of different diagnostic decisions, including the diagnosis of malignancy, the decision to excise a lesion, the decision to refer versus not refer, or the decision to excise or follow up at a later date. Agreement was generally good for lesions considered to require some positive action on face-to-face assessment (sensitivities of over 90%). For lesions considered of less concern when assessed face-to-face (e.g. those not recommended for excision or referral), agreement was more variable, with teledermatology specificities ranging from 57% (95% CI 39% to 73%) to 100% (95% CI 86% to 100%), suggesting that remote assessment is more likely to recommend excision, referral or follow-up compared to in-person decisions.
Across all studies, there were wide variations in sensitivity and specificity for all definitions of the target condition. Studies were generally small, with varying approaches to teledermatology assessment, including the use of clinical or dermoscopic images (or both); the use of mobile phone cameras, digital cameras or video images; and varying thresholds for deciding test positivity. The definition of the target condition also varied, such that data for the primary objective could only be extracted from seven of the 16 studies assessing diagnostic accuracy, with nine studies reporting data only for the detection of individual skin cancers. These factors somewhat limited our ability to pool results across studies and, further, to draw conclusions regarding the accuracy of teledermatology.
Overall, there were four key limitations of the studies.
The spectrum (or case mix) of different lesion types varied across studies, with a relatively high prevalence of malignant lesions.
Study participants were largely recruited from secondary care settings or from pigmented lesion clinic databases rather than from primary care or other settings with limited prior testing, and 50% of the studies assessing diagnostic accuracy relied on a histological reference standard (i.e. all included participants underwent lesion excision). In others, accuracy data could be extracted for less than 25% of lesions assessed at a virtual lesion clinic (Congalton 2015) or as part of a preventive medical screening programme (Massone 2014). Therefore, recruited participants were more likely to have lesions with a higher index of suspicion of malignancy compared to those for whom a GP might have considered a teledermatology assessment in practice, thereby limiting the generalisability of study results. These 'spectrum effects' are an increasingly recognised concept for medical tests, often leading to lower sensitivity and higher specificity when a test is applied in settings with limited prior testing compared to settings further down the referral pathway (Usher-Smith 2016). However, the direction of effect is not consistent across tests and diseases (Leeflang 2013), and the mechanisms in action are often complex and sometimes difficult to identify.
Study definitions of 'malignancy' varied and teledermatology results were often not provided according to lesion type.
In four included studies, the reported definition of malignancy did not accord with our protocol‐defined definition, with studies including melanoma metastases, Bowen's disease or 'in situ cSCC', actinic keratosis, or severely dysplastic naevi as 'malignant'. In some cases, data were reported in sufficient detail to allow reclassification of these lesions as disease negative for at least some of the reported thresholds for test positivity (Borve 2015; Massone 2014); for others, such reclassification was not possible and these studies could only be included in our analyses for the detection of individual skin cancers (melanoma or BCC) (Bowns 2006; Congalton 2015). Other studies had to be excluded from the review altogether due to varying definitions of 'malignant' (e.g. Borve 2013; Tandjung 2015). The lack of teledermatology results reported according to lesion type further limited our ability to comment on the implications of missed malignancies, the failure to pick up a melanoma or cSCC potentially carrying more severe consequences than a missed BCC.
The definition of a positive teledermatology result varied and was not always relevant to decision making in practice.
Of the 16 diagnostic accuracy studies, only three reported data for the decision to excise a lesion (one for any skin cancer and two for the detection of melanoma), and five reported data for teledermatologists' classification of lesions as malignant (or probably malignant). Nine of the 16 studies focused on teledermatologists' ability to correctly diagnose lesions as melanomas (eight studies) or as BCCs (four studies), which, although of interest, is not the primary factor driving teledermatology decisions in practice, where the key judgement in most circumstances is whether or not a lesion should be referred for a face‐to‐face consultation.
Insufficient comparisons were available for the diagnostic accuracy of teledermatology‐based diagnosis and diagnosis based on a face‐to‐face dermatology clinic visit.
Only five of the 16 studies assessing diagnostic accuracy included a comparison of teledermatology‐based diagnosis with in‐person diagnosis by a dermatologist: two for the detection of any skin cancer (Jolliffe 2001a; Kroemer 2011) and three for the detection of melanoma (Jolliffe 2001a; Kroemer 2011; Piccolo 2000). We were therefore unable to adequately assess whether a teledermatology diagnosis of malignancy accurately reflects a diagnosis made in person. The six studies of referral accuracy suggested that diagnosis of individual lesions using store‐and‐forward teledermatology could miss around 10% of lesions recommended for clinical action (e.g. surgical excision) at a face‐to‐face consultation (up to 31% in one study), and is also likely to recommend action for lesions considered of less or no concern when seen in person. In practice, a face‐to‐face consultation also allows a total body skin examination, which may lead to the detection of incidental skin cancers that would be missed by a teledermatology referral of only one or two lesions (Hanson 2016).
Our systematic review of dermoscopy as an addition to visual inspection of a lesion for the diagnosis of melanoma found in‐person dermoscopy (26 studies) to be substantially more accurate than diagnosis based on dermoscopic images (60 studies) (relative diagnostic odds ratio 4.6, 95% CI 2.4 to 9.0; P < 0.001) (Dinnes 2018a). Despite a number of contributing factors, including differences in study populations, different algorithms to assist test interpretation and differences in observer experience, we concluded that remote test interpretation cannot fully approximate a physical, face‐to‐face patient‐to‐clinician interaction. In particular, total body skin examination is likely to have a significant impact on the decision to excise a lesion suspected to be melanoma (Aldridge 2013; Argenziano 2012; Grob 1998). Only two of the 22 included teledermatology studies mentioned the use of total body photography (Arzberger 2016; Phillips 1998). It is also notable that, across the 60 image‐based evaluations in the review of dermoscopy, 30 (50%) were blinded to all other participant information and only 17 (28%) provided observers with a photographic image of the same lesion to assist test interpretation (Dinnes 2018a). It is conceivable that an image‐based assessment with full patient information, provided in the context of a proper teledermatology consultation, would provide a closer approximation of the diagnostic decision that would be made with the patient present.
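For reference, the diagnostic odds ratio (DOR) underlying the relative estimate quoted above combines sensitivity and specificity into a single summary of discrimination; the formulae below are standard definitions rather than calculations taken from the cited review.

\[
\mathrm{DOR} = \frac{\text{sensitivity}/(1-\text{sensitivity})}{(1-\text{specificity})/\text{specificity}} = \frac{TP \times TN}{FP \times FN},
\qquad
\text{relative DOR} = \frac{\mathrm{DOR}_{\text{in-person}}}{\mathrm{DOR}_{\text{image-based}}}
\]

On this scale, a relative DOR of 4.6 indicates that the diagnostic odds ratio for in‐person dermoscopy was estimated to be 4.6 times that for interpretation of dermoscopic images.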
Strengths and weaknesses of the review
The strengths of this review include an in‐depth and comprehensive electronic literature search, systematic review methods including double extraction of papers by both clinicians and methodologists, and contact with authors to allow study inclusion or to clarify data. A clear analysis structure was planned to allow test accuracy to be estimated according to varying definitions of the target condition, and a detailed and replicable assessment of methodological quality was undertaken.
The main concerns for the review were the lack of studies, small sample sizes, heterogeneity in teledermatology assessments, inadequate reporting of primary studies to allow quality to be fully judged and, importantly, the lack of clinical applicability of the findings due to participant recruitment from, and image acquisition in, referral settings.
In comparison to other available systematic (Ndegwa 2010; Warshaw 2011) and non‐systematic (Bashshur 2015; Whited 2006; Whited 2016) reviews, our review focuses on the triaging or diagnosis of skin cancer as opposed to the evaluation of teledermatology for any dermatological disorder. The two most recently published reviews either found evidence generally in support of teledermatology (Bashshur 2015) or suggested inferior accuracy compared to in‐person assessment of pigmented lesions (Whited 2016). In contrast, we were unable to identify sufficient evidence to establish the diagnostic accuracy of teledermatology in comparison to a face‐to‐face clinical assessment. Our reviews of the diagnostic accuracy of visual inspection of suspicious skin lesions for the detection of melanoma (Dinnes 2018b), and of dermoscopy in comparison to visual inspection of a suspicious skin lesion (Dinnes 2018a), suggest that image‐based assessment may not be equivalent to a face‐to‐face patient:clinician interaction. However, both reviews focused on the correct diagnosis of melanoma as opposed to any skin cancer, neither included studies that were specifically designed to evaluate teledermatology programmes, and neither examined any effect on accuracy of potential improvements in image quality over time.
Our a priori decision to exclude studies with fewer than five malignant cases could be construed as a weakness of the review; however, of the seven studies excluded on this basis, five were also excluded for other reasons, such as the inability to construct a 2×2 contingency table or ineligible study populations, and one used multiple images of only four lesions to examine the effect of the positioning of a lesion within an image on clinicians' ability to detect it (Chen 2002). The final study, which was excluded only on the basis of sample size, reported the effect of adding dermoscopic images to an existing teledermatology consultation system for 63 lesions fulfilling the criteria for teleconsultation (Moreno‐Ramirez 2006); all three malignant lesions were correctly identified both using photographic images alone and with the addition of dermoscopic images (100% sensitivity), while specificities were 65% with photographic images and 78% with photographic and dermoscopic images.
We were also unsuccessful in our attempts to contact the authors of eight of the 12 studies that could have been eligible for inclusion in the review, especially those studies that reported only agreement between observers or agreement with final lesion diagnoses rather than providing results in a 2×2 contingency table format. Furthermore, only partial data were provided to allow the inclusion of one study in the review (Warshaw 2010b); the review would have been considerably strengthened if data for teledermatology using dermoscopic images could have been included. Finally, the review was limited to the identification of skin cancer rather than assessing the potential additional benefits of teledermatology, such as the positive identification of benign lesions (e.g. actinic keratosis) (Janda 2015).
Ongoing technological advances are continually improving the quality, clarity and colour‐rendition of digital images taken with cameras and mobile phones. The ability to zoom in on new larger image files potentially provides even more detailed information compared to observation of a lesion with the naked eye alone. Such advances could be of particular help in the triage of lesions with no distinguishing features (e.g. for the identification of amelanotic melanomas), the additional magnification potentially giving subtle clues to aid diagnosis. Although we have documented the equipment used to obtain images in the included studies, we were unable to identify any clear effects from changes in technology.
Our review of the diagnostic accuracy of teledermatology was also unable to evaluate a number of other pertinent factors.
The archiving and auditable trail provided by a teledermatology consultation. With conventional face‐to‐face consultations, much of the interchange is verbal and unrecorded; with teleconsultation, however, every aspect of the referral and diagnostic opinion is recorded so that it can be reviewed and audited at a later date.
The possibility of 'crowd review' of lesion images. In‐person clinical assessment is often conducted by a single clinician, with consultation with other qualified clinicians reserved for more difficult lesions or to support more junior clinicians. The nature of teledermatology diagnosis lends itself to lesion review by multiple clinicians and to virtual multidisciplinary review of cases if necessary.
Changing the referral behaviour of primary care clinicians. The availability of a teledermatology service could result in a specialist opinion being sought for much earlier presentations of conditions than would normally be the case, which not only changes the spectrum of lesion types observed by teledermatologists, potentially affecting their accuracy, but could ultimately result in overdiagnosis and overtreatment. For example, some lesions identified and either excised or treated non‐surgically following a teleconsultation might have resolved spontaneously if monitored for longer in primary care or if referred for a standard face‐to‐face consultation. The effect on referral behaviour is likely to be exacerbated where GP trainees find it easier to seek a specialist opinion via a teledermatology service than to seek the opinion of a more experienced GP.
Possible over‐reliance by GPs, and reassurance of patients, based on a benign diagnosis from a teledermatology consultation. As with all clinical consultations, a diagnostic opinion from a teledermatology consultation is limited by the quality of the clinical information provided and by the circumstances at the time of consultation. If those circumstances change, for example if a pigmented lesion diagnosed as 'benign' evolves or changes its nature in some way, then that lesion should be reviewed. 'Safety netting' of such lesions is an important part of management and monitoring in primary care.
Applicability of findings to the review question
The data included in this review are unlikely to be generally applicable to the intended setting. Most studies recruited participants from secondary care or referral settings rather than from primary care settings, where patients are far less likely to have skin cancer and where potentially suspicious lesions are likely to be earlier in their development and evolution. Lesion images were often acquired in secondary care rather than being acquired in primary care and transmitted for a specialist opinion using teledermatology. Considerable heterogeneity in approaches to teledermatology was also observed, further limiting generalisability.
Authors' conclusions
Implications for practice
Studies were generally small and heterogeneous, and methodological quality was difficult to judge due to poor reporting. Bearing in mind concerns regarding the applicability of study participants and the acquisition of lesion images in specialist settings, our results suggest that teledermatology can correctly identify most malignant lesions. Using a more widely defined threshold to identify 'possibly' malignant cases or lesions that should be considered for excision is likely to appropriately triage those lesions requiring face‐to‐face assessment by a specialist.
Implications for research
Despite the increasing use of teledermatology at a national and international level, the evidence base to support its ability to accurately triage lesions from primary to secondary care is lacking, and further prospective and pragmatic evaluation is needed. Consecutive series of participants with suspicious skin lesions judged by general practitioners to require a specialist opinion should be recruited (i.e. excluding those clearly judged to be malignant or benign) and referred for store‐and‐forward teledermatology in comparison to routine referral to a dermatologist. The reason for referral (e.g. exclusion of melanoma to avoid an urgent or 'two‐week wait' referral, exclusion of cSCC or basal cell carcinoma, or exclusion of 'any skin cancer') should be clearly recorded. 'State‐of‐the‐art' digital photography should be used (potentially utilising mobile phone cameras) to allow the full benefit of current technology to be exploited and compared with smartphone applications, and systematic follow‐up of non‐excised lesions should be implemented to avoid over‐reliance on a histological reference standard. The level of training and experience of both the referring and specialist clinicians should be made explicit to allow the generalisability of results to be judged. Any future research study needs to be clear about the diagnostic pathway followed by study participants, and should conform to the updated Standards for Reporting of Diagnostic Accuracy (STARD) guideline (Bossuyt 2015).
What's new
Date | Event | Description |
---|---|---|
19 December 2018 | Amended | Affiliations, Disclaimer and Sources of support updated |
Acknowledgements
Members of the Cochrane Skin Cancer Diagnostic Test Accuracy Group include:
the full project team (Susan Bayliss, Naomi Chuchu, Clare Davenport, Jonathan Deeks, Jacqueline Dinnes, Lavinia Ferrante di Ruffano, Kathie Godfrey, Rubeta Matin, Colette O'Sullivan, Yemisi Takwoingi, Hywel Williams);
our 12 clinical reviewers (Rachel Abbott, Ben Aldridge, Oliver Bassett, Sue Ann Chan, Alana Durack, Monica Fawzy, Abha Gulati, Jacqui Moreau, Lopa Patel, Daniel Saleh, David Thompson, Kai Yuen Wong) and two methodologists (Lavinia Ferrante di Ruffano and Louise Johnston) who assisted with full‐text screening, data extraction and quality assessment across the entire suite of reviews of diagnosis and staging and skin cancer;
our expert advisors and co‐authors on the review (Fiona Walter and Richard Motley) and on the protocol for keratinocyte skin cancer (Fiona Bath‐Hextall); and
all members of our Advisory Group (Jonathan Bowling, Seau Tak Cheung, Colin Fleming, Matthew Gardiner, Abhilash Jain, Susan O'Connell, Pat Lawton, John Lear, Mariska Leeflang, Richard Motley, Paul Nathan, Julia Newton‐Bishop, Miranda Payne, Rachael Robinson, Simon Rodwell, Julia Schofield, Neil Shroff, Hamid Tehrani, Zoe Traill, Fiona Walter, Angela Webster).
The Cochrane Skin Group editorial base wishes to thank Michael Bigby, who was the Dermatology Editor for this review, and the clinical referee, David De Berker. We also wish to thank the Cochrane DTA editorial base and colleagues, as well as Anne Lawson, who copy‐edited this review.
Appendices
Appendix 1. Current content and structure of the Programme Grant
 | LIST OF REVIEWS | Number of studies |
Diagnosis of melanoma | ||
1 | Visual inspection | 49 |
2 | Dermoscopy +/‐ visual inspection | 104 |
3 | Teledermatology | 22 |
4 | Smartphone applications | 2 |
5a | Computer‐assisted diagnosis – dermoscopy‐based techniques | 42 |
5b | Computer‐assisted diagnosis – spectroscopy‐based techniques | Review amalgamated into 5a |
6 | Reflectance confocal microscopy | 18 |
7 | High‐frequency ultrasound | 5 |
Diagnosis of keratinocyte skin cancer (BCC and cSCC) | ||
8 | Visual inspection +/‐ Dermoscopy | 24 |
5c | Computer‐assisted diagnosis – dermoscopy‐based techniques | Review amalgamated into 5a |
5d | Computer‐assisted diagnosis – spectroscopy‐based techniques | Review amalgamated into 5a |
9 | Optical coherence tomography | 5 |
10 | Reflectance confocal microscopy | 10 |
11 | Exfoliative cytology | 9 |
Staging of melanoma | ||
12 | Imaging tests (ultrasound, CT, MRI, PET‐CT) | 38 |
13 | Sentinel lymph node biopsy | 160 |
Staging of cSCC | ||
 | Imaging tests review | Review dropped; only one study identified |
13 | Sentinel lymph node biopsy | Review amalgamated into 13 above (n = 15 studies) |
Appendix 2. Glossary of terms
Term | Definition |
Atypical intraepidermal melanocytic variant | Unusual area of darker pigmentation contained within the epidermis that may progress to an invasive melanoma; includes melanoma in situ and lentigo maligna |
Atypical naevi | Unusual looking but non‐cancerous mole or area of darker pigmentation of the skin |
BRAF V600 mutation | BRAF is a human gene that makes a protein called B‐Raf which is involved in the control of cell growth. BRAF mutations (damaged DNA) occur in around 40% of melanomas, which can then be treated with particular drugs. |
BRAF inhibitors | Therapeutic agents that inhibit the serine‐threonine protein kinase BRAF; used in the treatment of BRAF‐mutated metastatic melanoma.
Breslow thickness | A scale for measuring the thickness of melanomas by the pathologist using a microscope, measured in millimetres from the top layer of skin to the bottom of the tumour. |
Congenital naevi | A type of mole found on infants at birth |
Dermoscopy | A technique whereby a handheld microscope is used to allow a more detailed, magnified examination of the skin compared to examination by the naked eye alone.
False negative | A person who is truly positive for a disease but whom a diagnostic test classifies as disease‐free.
False positive | A person who is truly disease‐free but whom a diagnostic test classifies as having the disease.
Histopathology/histology | The study of tissue, usually obtained by biopsy or excision, under a microscope.
Incidence | The number of new cases of a disease in a given time period. |
Index test | A diagnostic test under evaluation in a primary study. |
Lentigo maligna | Unusual area of darker pigmentation contained within the epidermis which includes malignant cells but with no invasive growth. May progress to an invasive melanoma. |
Lymph node | Lymph nodes filter the lymphatic fluid (clear fluid containing white blood cells) that travels around the body to help fight disease; they are located throughout the body often in clusters (nodal basins). |
Melanocytic naevus | An area of skin with darker pigmentation (or melanocytes) also referred to as 'moles.' |
Meta‐analysis | A form of statistical analysis used to synthesise results from a collection of individual studies. |
Metastases/metastatic disease | Spread of cancer away from the primary site to somewhere else through the bloodstream or the lymphatic system. |
Micrometastases | Micrometastases are metastases so small that they can only be seen under a microscope. |
Mitotic rate | Microscopic evaluation of the number of cells actively dividing in a tumour
Morbidity | Detrimental effects on health |
Mortality | Either the condition of being subject to death; or the death rate, which reflects the number of deaths per unit of population in relation to any specific region, age group, disease, treatment or other classification, usually expressed as deaths per 100, 1000, 10,000 or 100,000 people. |
Multidisciplinary team | A team with members from different healthcare professions and specialties (e.g. urology, oncology, pathology, radiology and nursing). Cancer care in the National Health Service (NHS) uses this system to ensure that all relevant health professionals are engaged to discuss the best possible care for that patient. |
Prevalence | The proportion of a population found to have a condition. |
Prognostic factors/indicators | Specific characteristics of a cancer or the person who has it which might affect the patient's prognosis. |
Receiver operating characteristic (ROC) analysis | The analysis of an ROC plot of a test to select an optimal threshold for test positivity. |
Receiver operating characteristic (ROC) plot | A plot of the sensitivity and 1 minus the specificity of a test at the different possible thresholds for test positivity; represents the diagnostic capability of a test with a range of binary test results. |
Recurrence | When new cancer cells are detected following treatment. This can occur either at the site of the original tumour or at other sites in the body. |
Reference standard | A test or combination of tests used to establish the final or 'true' diagnosis of a patient in an evaluation of a diagnostic test. |
Reflectance confocal microscopy (RCM) | A microscopic technique using infrared light (either in a handheld device or a static unit) that can create images of the deeper layers of the skin. |
Sensitivity | In this context the term is used to mean the proportion of people with a disease who have that disease correctly identified by the study test. |
Specificity | The proportion of people without the disease of interest (in this case with benign skin lesions) who have that absence of disease correctly identified by the study test. |
Staging | Clinical description of the size and spread of a patient's tumour, fitting into internationally agreed categories. |
Subclinical (disease) | Disease that is usually asymptomatic and not easily observable, e.g. by clinical or physical examination. |
Systemic treatment | Treatment, usually given by mouth or by injection, that reaches and affects cancer cells throughout the body rather than targeting 1 specific area. |
Appendix 3. Content of algorithms used to assist melanoma diagnosis by visual inspection alone
ABCD (Friedman 1985; Rigel 1993; Pehamberger 1993); ABCDE (Carli 1994; Cristofolini 1994; Thomas 1998; Benelli 1999; Benelli 2001; Abbasi 2004); BCD (McGovern 1992)
A – asymmetry
B – irregular borders
C – colour
D – diameter ≥ 6 mm
E – evolution
Seven‐point checklist (MacKie 1985; MacKie 1990; Keefe 1990)
McGovern 1992 described 7 characteristics as: "increasing size, variegation, inflammation, irregular outline, greater than 1cm diameter, itch, bleeding." These are expanded on in MacKie 1990, who described the original (MacKie 1985) criteria, with the presence of ≥ 3 considered suggestive of melanoma.
Seven‐point checklist (revised) (MacKie 1990; Healsmith 1994)
MacKie 1990, MacKie 1991, and Healsmith 1994 describe the revised criteria as comprising major signs and minor signs: "a patient with a pigmented lesion with any one of the major signs should be considered for referral and that the presence of any of the minor signs should be a further stimulus to referral." (MacKie 1990)
Appendix 4. Proposed sources of heterogeneity
1. Population characteristics
General versus higher risk populations
Participant population: primary/secondary/specialist unit
Lesion suspicion: general suspicion/atypical/equivocal/NR
Lesion type: any pigmented; melanocytic
Inclusion of multiple lesions per participant
Ethnicity
2. Index test characteristics
Nature of, and definition of, criteria for test positivity
Observer experience with the index test
Approaches to lesion preparation (e.g. the use of oil or antiseptic gel for dermoscopy)
3. Reference standard characteristics
Reference standard used
Whether histology‐reporting meets pathology‐reporting guidelines
Use of excisional versus diagnostic biopsy
Whether two independent dermatopathologists reviewed histological diagnosis
4. Study quality
Consecutive or random sample of participants recruited
Index test interpreted blinded to the reference standard result
Index test interpreted blinded to the result of any other index test
Presence of partial or differential verification bias (whereby only a sample of those subject to the index test are verified by the reference test or by the same reference test with selection dependent on the index test result)
Use of an adequate reference standard
Overall risk of bias
Appendix 5. Final search strategies
Melanoma search strategies to August 2016
Database: Ovid MEDLINE(R) 1946 to August week 3 2016
Search strategy:
1 exp melanoma/
2 exp skin cancer/
3 exp basal cell carcinoma/
4 basalioma$1.ti,ab.
5 ((basal cell or skin) adj2 (cancer$1 or carcinoma$1 or mass or masses or tumour$1 or tumor$1 or neoplasm$1 or adenoma$1 or epithelioma$1 or lesion$1 or malignan$ or nodule$1)).ti,ab.
6 (pigmented adj2 (lesion$1 or mole$ or nevus or nevi or naevus or naevi or skin)).ti,ab.
7 (melanom$1 or nonmelanoma$1 or non‐melanoma$1 or melanocyt$ or non‐melanocyt$ or nonmelanocyt$ or keratinocyt$).ti,ab.
8 nmsc.ti,ab.
9 (squamous cell adj2 (cancer$1 or carcinoma$1 or mass or masses or tumor$1 or tumour$1 or neoplasm$1 or adenoma$1 or epithelioma$1 or epithelial or lesion$1 or malignan$ or nodule$1) adj2 (skin or epiderm$ or cutaneous)).ti,ab.
10 (BCC or CSCC or NMSC).ti,ab.
11 keratinocy$.ti,ab.
12 Keratinocytes/
13 or/1‐12
14 dermoscop$.ti,ab.
15 dermatoscop$.ti,ab.
16 photomicrograph$.ti,ab.
17 exp epiluminescence microscopy/
18 (epiluminescence adj2 microscop$).ti,ab.
19 (confocal adj2 microscop$).ti,ab.
20 (incident light adj2 microscop$).ti,ab.
21 (surface adj2 microscop$).ti,ab.
22 (visual adj (inspect$ or examin$)).ti,ab.
23 ((clinical or physical) adj examin$).ti,ab.
24 3 point.ti,ab.
25 three point.ti,ab.
26 pattern analys$.ti,ab.
27 ABCD$.ti,ab.
28 menzies.ti,ab.
29 7 point.ti,ab.
30 seven point.ti,ab.
31 (digital adj2 (dermoscop$ or dermatoscop$)).ti,ab.
32 artificial intelligence.ti,ab.
33 AI.ti,ab.
34 computer assisted.ti,ab.
35 computer aided.ti,ab.
36 neural network$.ti,ab.
37 exp diagnosis, computer‐assisted/
38 MoleMax.ti,ab.
39 image process$.ti,ab.
40 automatic classif$.ti,ab.
41 image analysis.ti,ab.
42 SIAscop$.ti,ab.
43 Aura.ti,ab.
44 (optical adj2 scan$).ti,ab.
45 MelaFind.ti,ab.
46 SIMSYS.ti,ab.
47 MoleMate.ti,ab.
48 SolarScan.ti,ab.
49 VivaScope.ti,ab.
50 (high adj3 ultraso$).ti,ab.
51 (canine adj2 detect$).ti,ab.
52 ((mobile or cell or cellular or smart) adj ((phone$1 adj2 app$1) or application$1)).ti,ab.
53 smartphone$.ti,ab.
54 (DermoScan or SkinVision or DermLink or SpotCheck).ti,ab.
55 Mole Detective.ti,ab.
56 Spot Check.ti,ab.
57 (mole$1 adj2 map$).ti,ab.
58 (total adj2 body).ti,ab.
59 exfoliative cytolog$.ti,ab.
60 digital analys$.ti,ab.
61 (image$1 adj3 software).ti,ab.
62 (teledermatolog$ or tele‐dermatolog$ or telederm or tele‐derm or teledermoscop$ or tele‐dermoscop$ or teledermatoscop$ or tele‐dermatoscop$).ti,ab.
63 (optical coherence adj (technolog$ or tomog$)).ti,ab.
64 (computer adj2 diagnos$).ti,ab.
65 exp sentinel lymph node biopsy/
66 (sentinel adj2 node).ti,ab.
67 nevisense.mp. or HFUS.ti,ab.
68 electrical impedance spectroscopy.ti,ab.
69 history taking.ti,ab.
70 patient history.ti,ab.
71 (naked eye adj (exam$ or assess$)).ti,ab.
72 (skin adj exam$).ti,ab.
73 physical examination/
74 ugly duckling.mp. or UD.ti,ab.
75 ((physician$ or clinical or physical) adj (exam$ or triage or recog$)).ti,ab.
76 ABCDE.mp. or VOC.ti,ab.
77 clinical accuracy.ti,ab.
78 Family Practice/ or Physicians, Family/ or clinical competence/
79 (confocal adj2 microscop$).ti,ab.
80 diagnostic algorithm$1.ti,ab.
81 checklist$.ti,ab.
82 virtual imag$1.ti,ab.
83 volatile organic compound$1.ti,ab.
84 dog$1.ti,ab.
85 gene expression analy$.ti,ab.
86 reflex transmission imag$.ti,ab.
87 thermal imaging.ti,ab.
88 elastography.ti,ab.
89 or/14‐88
90 (CT or PET).ti,ab.
91 PET‐CT.ti,ab.
92 (FDG or F18 or Fluorodeoxyglucose or radiopharmaceutical$).ti,ab.
93 exp Deoxyglucose/
94 deoxy‐glucose.ti,ab.
95 deoxyglucose.ti,ab.
96 CATSCAN.ti,ab.
97 exp Tomography, Emission‐Computed/
98 exp Tomography, X‐ray computed/
99 positron emission tomograph$.ti,ab.
100 exp magnetic resonance imaging/
101 (MRI or fMRI or NMRI or scintigraph$).ti,ab.
102 exp echography/
103 Doppler echography.ti,ab.
104 sonograph$.ti,ab.
105 ultraso$.ti,ab.
106 doppler.ti,ab.
107 magnetic resonance imag$.ti,ab.
108 or/90‐107
109 (stage$ or staging or metasta$ or recurrence or sensitivity or specificity or false negative$ or thickness$).ti,ab.
110 "Sensitivity and Specificity"/
111 exp cancer staging/
112 or/109‐111
113 108 and 112
114 89 or 113
115 13 and 114
Database: Ovid MEDLINE(R) In‐Process & Other Non‐Indexed Citations 29 August 2016
Search strategy:
1 basalioma$1.ti,ab.
2 ((basal cell or skin) adj2 (cancer$1 or carcinoma$1 or mass or masses or tumour$1 or tumor$1 or neoplasm$1 or adenoma$1 or epithelioma$1 or lesion$1 or malignan$ or nodule$1)).ti,ab.
3 (pigmented adj2 (lesion$1 or mole$ or nevus or nevi or naevus or naevi or skin)).ti,ab.
4 (melanom$1 or nonmelanoma$1 or non‐melanoma$1 or melanocyt$ or non‐melanocyt$ or nonmelanocyt$ or keratinocyt$).ti,ab.
5 nmsc.ti,ab.
6 (squamous cell adj2 (cancer$1 or carcinoma$1 or mass or masses or tumor$1 or tumour$1 or neoplasm$1 or adenoma$1 or epithelioma$1 or epithelial or lesion$1 or malignan$ or nodule$1) adj2 (skin or epiderm$ or cutaneous)).ti,ab.
7 (BCC or CSCC or NMSC).ti,ab.
8 keratinocy$.ti,ab.
9 or/1‐8
10 dermoscop$.ti,ab.
11 dermatoscop$.ti,ab.
12 photomicrograph$.ti,ab.
13 (epiluminescence adj2 microscop$).ti,ab.
14 (confocal adj2 microscop$).ti,ab.
15 (incident light adj2 microscop$).ti,ab.
16 (surface adj2 microscop$).ti,ab.
17 (visual adj (inspect$ or examin$)).ti,ab.
18 ((clinical or physical) adj examin$).ti,ab.
19 3 point.ti,ab.
20 three point.ti,ab.
21 pattern analys$.ti,ab.
22 ABCD$.ti,ab.
23 menzies.ti,ab.
24 7 point.ti,ab.
25 seven point.ti,ab.
26 (digital adj2 (dermoscop$ or dermatoscop$)).ti,ab.
27 artificial intelligence.ti,ab.
28 AI.ti,ab.
29 computer assisted.ti,ab.
30 computer aided.ti,ab.
31 neural network$.ti,ab.
32 MoleMax.ti,ab.
33 image process$.ti,ab.
34 automatic classif$.ti,ab.
35 image analysis.ti,ab.
36 SIAscop$.ti,ab.
37 Aura.ti,ab.
38 (optical adj2 scan$).ti,ab.
39 MelaFind.ti,ab.
40 SIMSYS.ti,ab.
41 MoleMate.ti,ab.
42 SolarScan.ti,ab.
43 VivaScope.ti,ab.
44 (high adj3 ultraso$).ti,ab.
45 (canine adj2 detect$).ti,ab.
46 ((mobile or cell or cellular or smart) adj ((phone$1 adj2 app$1) or application$1)).ti,ab.
47 smartphone$.ti,ab.
48 (DermoScan or SkinVision or DermLink or SpotCheck).ti,ab.
49 Mole Detective.ti,ab.
50 Spot Check.ti,ab.
51 (mole$1 adj2 map$).ti,ab.
52 (total adj2 body).ti,ab.
53 exfoliative cytolog$.ti,ab.
54 digital analys$.ti,ab.
55 (image$1 adj3 software).ti,ab.
56 (teledermatolog$ or tele‐dermatolog$ or telederm or tele‐derm or teledermoscop$ or tele‐dermoscop$ or teledermatoscop$ or tele‐dermatoscop$).ti,ab.
57 (optical coherence adj (technolog$ or tomog$)).ti,ab.
58 (computer adj2 diagnos$).ti,ab.
59 (sentinel adj2 node).ti,ab.
60 nevisense.mp. or HFUS.ti,ab.
61 electrical impedance spectroscopy.ti,ab.
62 history taking.ti,ab.
63 patient history.ti,ab.
64 (naked eye adj (exam$ or assess$)).ti,ab.
65 (skin adj exam$).ti,ab.
66 ugly duckling.mp. or UD.ti,ab.
67 ((physician$ or clinical or physical) adj (exam$ or triage or recog$)).ti,ab.
68 ABCDE.mp. or VOC.ti,ab.
69 clinical accuracy.ti,ab.
70 (Family adj (Practice or Physicians)).ti,ab.
71 (confocal adj2 microscop$).ti,ab.
72 clinical competence.ti,ab.
73 diagnostic algorithm$1.ti,ab.
74 checklist$.ti,ab.
75 virtual imag$1.ti,ab.
76 volatile organic compound$1.ti,ab.
77 dog$1.ti,ab.
78 gene expression analy$.ti,ab.
79 reflex transmission imag$.ti,ab.
80 thermal imaging.ti,ab.
81 elastography.ti,ab.
82 or/10‐81
83 (CT or PET).ti,ab.
84 PET‐CT.ti,ab.
85 (FDG or F18 or Fluorodeoxyglucose or radiopharmaceutical$).ti,ab.
86 deoxy‐glucose.ti,ab.
87 deoxyglucose.ti,ab.
88 CATSCAN.ti,ab.
89 positron emission tomograph$.ti,ab.
90 (MRI or fMRI or NMRI or scintigraph$).ti,ab.
91 Doppler echography.ti,ab.
92 sonograph$.ti,ab.
93 ultraso$.ti,ab.
94 doppler.ti,ab.
95 magnetic resonance imag$.ti,ab.
96 or/83‐95
97 (stage$ or staging or metasta$ or recurrence or sensitivity or specificity or false negative$ or thickness$).ti,ab.
98 96 and 97
99 82 or 98
100 9 and 99
Database: Embase 1974 to 29 August 2016
Search strategy:
1 *melanoma/
2 *skin cancer/
3 *basal cell carcinoma/
4 basalioma$.ti,ab.
5 ((basal cell or skin) adj2 (cancer$1 or carcinoma$1 or mass or masses or tumour$1 or tumor$1 or neoplasm$ or adenoma$ or epithelioma$ or lesion$ or malignan$ or nodule$)).ti,ab.
6 (pigmented adj2 (lesion$1 or mole$ or nevus or nevi or naevus or naevi or skin)).ti,ab.
7 (melanom$1 or nonmelanoma$1 or non‐melanoma$1 or melanocyt$ or non‐melanocyt$ or nonmelanocyt$ or keratinocyt$).ti,ab.
8 nmsc.ti,ab.
9 (squamous cell adj2 (cancer$1 or carcinoma$1 or mass or tumor$1 or tumour$1 or neoplasm$1 or adenoma$1 or epithelioma$1 or epithelial or lesion$1 or malignan$ or nodule$1) adj2 (skin or epiderm$ or cutaneous)).ti,ab.
10 (BCC or cscc).mp. or NMSC.ti,ab.
11 keratinocyte.ti,ab.
12 keratinocy$.ti,ab.
13 or/1‐12
14 dermoscop$.ti,ab.
15 dermatoscop$.ti,ab.
16 photomicrograph$.ti,ab.
17 *epiluminescence microscopy/
18 (epiluminescence adj2 microscop$).ti,ab.
19 (confocal adj2 microscop$).ti,ab.
20 (incident light adj2 microscop$).ti,ab.
21 (surface adj2 microscop$).ti,ab.
22 (visual adj (inspect$ or examin$)).ti,ab.
23 ((clinical or physical) adj examin$).ti,ab.
24 3 point.ti,ab.
25 three point.ti,ab.
26 pattern analys$.ti,ab.
27 ABCD$.ti,ab.
28 menzies.ti,ab.
29 7 point.ti,ab.
30 seven point.ti,ab.
31 (digital adj2 (dermoscop$ or dermatoscop$)).ti,ab.
32 artificial intelligence.ti,ab.
33 AI.ti,ab.
34 computer assisted.ti,ab.
35 computer aided.ti,ab.
36 neural network$.ti,ab.
37 MoleMax.ti,ab.
38 exp diagnosis, computer‐assisted/
39 image process$.ti,ab.
40 automatic classif$.ti,ab.
41 image analysis.ti,ab.
42 SIAscop$.ti,ab.
43 (optical adj2 scan$).ti,ab.
44 Aura.ti,ab.
45 MelaFind.ti,ab.
46 SIMSYS.ti,ab.
47 MoleMate.ti,ab.
48 SolarScan.ti,ab.
49 VivaScope.ti,ab.
50 confocal microscop$.ti,ab.
51 (high adj3 ultraso$).ti,ab.
52 (canine adj2 detect$).ti,ab.
53 ((mobile or cell$ or cellular or smart) adj ((phone$1 adj2 app$1) or application$1)).ti,ab.
54 smartphone$.ti,ab.
55 (DermoScan or SkinVision or DermLink or SpotCheck).ti,ab.
56 Spot Check.ti,ab.
57 Mole Detective.ti,ab.
58 (mole$1 adj2 map$).ti,ab.
59 (total adj2 body).ti,ab.
60 exfoliative cytolog$.ti,ab.
61 digital analys$.ti,ab.
62 (image$1 adj3 software).ti,ab.
63 (optical coherence adj (technolog$ or tomog$)).ti,ab.
64 (teledermatolog$ or tele‐dermatolog$ or telederm or tele‐derm or teledermoscop$ or tele‐dermoscop$ or teledermatoscop$).mp. or tele‐dermatoscop$.ti,ab.
65 (computer adj2 diagnos$).ti,ab.
66 *sentinel lymph node biopsy/
67 (sentinel adj2 node).ti,ab.
68 nevisense.ti,ab.
69 HFUS.ti,ab.
70 electrical impedance spectroscopy.ti,ab.
71 history taking.ti,ab.
72 patient history.ti,ab.
73 (naked eye adj (exam$ or assess$)).ti,ab.
74 (skin adj exam$).ti,ab.
75 *physical examination/
76 ugly duckling.ti,ab.
77 UD sign$.ti,ab.
78 ((physician$ or clinical or physical) adj (exam$ or recog$ or triage)).ti,ab.
79 ABCDE.ti,ab.
80 clinical accuracy.ti,ab.
81 *general practice/
82 (confocal adj2 microscop$).ti,ab.
83 clinical competence/
84 diagnostic algorithm$.ti,ab.
85 checklist$1.ti,ab.
86 virtual image$1.ti,ab.
87 volatile organic compound$1.ti,ab.
88 VOC.ti,ab.
89 dog$1.ti,ab.
90 gene expression analys$.ti,ab.
91 reflex transmission imaging.ti,ab.
92 thermal imaging.ti,ab.
93 elastography.ti,ab.
94 dog$1.ti,ab.
95 gene expression analys$.ti,ab.
96 reflex transmission imaging.ti,ab.
97 thermal imaging.ti,ab.
98 elastography.ti,ab.
99 or/14‐93
100 PET‐CT.ti,ab.
101 (CT or PET).ti,ab.
102 (FDG or F18 or Fluorodeoxyglucose or radiopharmaceutical$).ti,ab.
103 exp Deoxyglucose/
104 CATSCAN.ti,ab.
105 deoxyglucose.ti,ab.
106 deoxy‐glucose.ti,ab.
107 *positron emission tomography/
108 *computer assisted tomography/
109 positron emission tomograph$.ti,ab.
110 *nuclear magnetic resonance imaging/
111 (MRI or fMRI or NMRI or scintigraph$).ti,ab.
112 *echography/
113 Doppler.ti,ab.
114 sonograph$.ti,ab.
115 ultraso$.ti,ab.
116 magnetic resonance imag$.ti,ab.
117 or/100‐116
118 (stage$ or staging or metasta$ or recurrence or sensitivity or specificity or false negative$ or thickness$).ti,ab.
119 "Sensitivity and Specificity"/
120 *cancer staging/
121 or/118‐120
122 117 and 121
123 99 or 122
124 13 and 123
Database: Cochrane Library (Wiley), searched 30 August 2016: CDSR Issue 8 of 12, 2016; CENTRAL Issue 7 of 12, 2016; HTA Issue 3 of 4, July 2016; DARE Issue 3 of 4, 2015
Search strategy:
#1 melanoma* or nonmelanoma* or non‐melanoma* or melanocyt* or non‐melanocyt* or nonmelanocyt* or keratinocyte*
#2 MeSH descriptor: [Melanoma] explode all trees
#3 "skin cancer*"
#4 MeSH descriptor: [Skin Neoplasms] explode all trees
#5 skin near/2 (cancer* or carcinoma* or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*)
#6 nmsc
#7 "squamous cell" near/2 (cancer* or carcinoma* or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*) near/2 (skin or epiderm* or cutaneous)
#8 "basal cell" near/2 (cancer* or carcinoma* or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*)
#9 pigmented near/2 (lesion* or nevus or mole* or naevi or naevus or nevi or skin)
#10 #1 or #2 or #3 or #4 or #5 or #6 or #7 or #8 or #9
#11 dermoscop*
#12 dermatoscop*
#13 Photomicrograph*
#14 MeSH descriptor: [Dermoscopy] explode all trees
#15 confocal near/2 microscop*
#16 epiluminescence near/2 microscop*
#17 incident next light near/2 microscop*
#18 surface near/2 microscop*
#19 "visual inspect*"
#20 "visual exam*"
#21 (clinical or physical) next (exam*)
#22 "3 point"
#23 "three point"
#24 "pattern analys*"
#25 ABDC
#26 menzies
#27 "7 point"
#28 "seven point"
#29 digital near/2 (dermoscop* or dermatoscop*)
#30 "artificial intelligence"
#31 "AI"
#32 "computer assisted"
#33 "computer aided"
#34 AI
#35 "neural network*"
#36 MoleMax
#37 "computer diagnosis"
#38 "image process*"
#39 "automatic classif*"
#40 SIAscope
#41 "image analysis"
#42 "optical near/2 scan*"
#43 Aura
#44 MelaFind
#45 SIMSYS
#46 MoleMate
#47 SolarScan
#48 Vivascope
#49 "confocal microscopy"
#50 high near/3 ultraso*
#51 canine near/2 detect*
#52 Mole* near/2 map*
#53 total near/2 body
#54 mobile* or smart near/2 phone*
#55 cell next phone*
#56 smartphone*
#57 "mitotic index"
#58 DermoScan or SkinVision or DermLink or SpotCheck
#59 "Mole Detective"
#60 "Spot Check"
#61 mole* near/2 map*
#62 total near/2 body
#63 "exfoliative cytolog*"
#64 "digital analys*"
#65 image near/3 software
#66 teledermatolog* or tele‐dermatolog* or telederm or tele‐derm or teledermoscop* or tele‐dermoscop* or teledermatoscop* or tele‐dermatolog*
#67 "optical coherence" next (technolog* or tomog*)
#68 computer near/2 diagnos*
#69 sentinel near/2 node*
#70 #11 or #12 or #13 or #14 or #15 or #16 or #17 or #18 or #19 or #20 or #21 or #22 or #23 or #24 or #25 or #26 or #27 or #28 or #29 or #30 or #31 or #32 or #33 or #34 or #35 or #36 or #37 or #38 or #39 or #40 or #41 or #42 or #43 or #44 or #45 or #46 or #47 or #48 or #49 or #50 or #51 or #52 or #53 or #54 or #55 or #56 or #57 or #58 or #59 or #60 or #61 or #62 or #63 or #64 or #65 or #66 or #67 or #68 or #69
#71 ultraso*
#72 sonograph*
#73 MeSH descriptor: [Ultrasonography] explode all trees
#74 Doppler
#75 CT or PET or PET‐CT
#76 "CAT SCAN" or "CATSCAN"
#77 MeSH descriptor: [Positron‐Emission Tomography] explode all trees
#78 MeSH descriptor: [Tomography, X‐Ray Computed] explode all trees
#79 MRI
#80 MeSH descriptor: [Magnetic Resonance Imaging] explode all trees
#81 MRI or fMRI or NMRI or scintigraph*
#82 "magnetic resonance imag*"
#83 MeSH descriptor: [Deoxyglucose] explode all trees
#84 deoxyglucose or deoxy‐glucose
#85 "positron emission tomograph*"
#86 #71 or #72 or #73 or #74 or #75 or #76 or #77 or #78 or #79 or #80 or #81 or #82 or #83 or #84 or #85
#87 stage* or staging or metasta* or recurrence or sensitivity or specificity or "false negative*" or thickness*
#88 MeSH descriptor: [Neoplasm Staging] explode all trees
#89 #87 or #88
#90 #89 and #86
#91 #70 or #90
#92 #10 and #91
#93 BCC or CSCC or NMCS
#94 keratinocy*
#95 #93 or #94
#96 #10 or #95
#97 nevisense
#98 HFUS
#99 "electrical impedance spectroscopy"
#100 "history taking"
#101 "patient history"
#102 naked next eye near/1 (exam* or assess*)
#103 skin next exam*
#104 "ugly duckling" or (UD sign*)
#105 MeSH descriptor: [Physical Examination] explode all trees
#106 (physician* or clinical or physical) near/1 (exam* or recog* or triage*)
#107 ABCDE
#108 "clinical accuracy"
#109 MeSH descriptor: [General Practice] explode all trees
#110 confocal near microscop*
#111 "diagnostic algorithm*"
#112 MeSH descriptor: [Clinical Competence] explode all trees
#113 checklist*
#114 "virtual image*"
#115 "volatile organic compound*"
#116 dog or dogs
#117 VOC
#118 "gene expression analys*"
#119 "reflex transmission imaging"
#120 "thermal imaging"
#121 elastography
#122 #97 or #98 or #99 or #100 or #101 or #102 or #103 or #104 or #105 or #106 or #107 or #108 or #109 or #110 or #111 or #112 or #113 or #114 or #115 or #116 or #117 or #118 or #119 or #120 or #121
#123 #70 or #122
#124 #96 and #123
#125 #96 and #90
#126 #125 or #124
#127 #10 and #126
Database: CINAHL Plus (EBSCO) 1937 to 30 August 2016
Search strategy:
S1 (MH "Melanoma") OR (MH "Nevi and Melanomas+")
S2 (MH "Skin Neoplasms+")
S3 (MH "Carcinoma, Basal Cell+")
S4 basalioma*
S5 (basal cell) N2 (cancer* or carcinoma* or mass or masses or tumor* or tumour* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*)
S6 (pigmented) N2 (lesion* or mole* or nevus or nevi or naevus or naevi or skin)
S7 melanom* or nonmelanoma* or non‐melanoma* or melanocyt* or non‐melanocyt* or nonmelanocyt*
S8 nmsc
S9 TX BCC or cscc or NMSC
S10 (MH "Keratinocytes")
S11 keratinocyt*
S12 S1 OR S2 OR S3 OR S4 OR S5 OR S6 OR S7 OR S8 OR S9 OR S10 OR S11
S13 dermoscop* or dermatoscop* or photomicrograph* or (3 point) or (three point) or ABCD* or menzies or (7 point) or (seven point) or AI or Molemax or SIASCOP* or Aura or MelaFind or SIMSYS or MoleMate or SolarScan or smartphone* or DermoScan or SkinVision or DermLink or SpotCheck
S14 (epiluminescence or confocal or incident or surface) N2 (microscop*)
S15 visual N1 (inspect* or examin*)
S16 (clinical or physical) N1 (examin*)
S17 pattern analys*
S18 (digital) N2 (dermoscop* or dermatoscop*)
S19 (artificial intelligence)
S20 (computer) N2 (assisted or aided)
S21 (neural network*)
S22 (MH "Diagnosis, Computer Assisted+")
S23 (image process*)
S24 (automatic classif*)
S25 (image analysis)
S26 SIAScop*
S27 (optical) N2 (scan*)
S28 (high) N3 (ultraso*)
S29 elastography
S30 (mobile or cell or cellular or smart) N2 (phone*) N2 (app or application*)
S31 (mole*) N2 (map*)
S32 total N2 body
S33 exfoliative cytolog*
S34 digital analys*
S35 image N3 software
S36 teledermatolog* or tele‐dermatolog* or telederm or tele‐derm or teledermoscop* or tele‐dermoscop* or teledermatoscop* or tele‐dermatoscop* teledermatolog* or tele‐dermatolog* or telederm or tele‐derm or teledermoscop*
S37 (optical coherence) N1 (technolog* or tomog*)
S38 computer N2 diagnos*
S39 sentinel N2 node
S40 (MH "Sentinel Lymph Node Biopsy")
S41 nevisense or HFUS or checklist* or VOC or dog*
S42 electrical impedance spectroscopy
S43 history taking
S44 "Patient history"
S45 naked eye
S46 skin exam*
S47 physical exam*
S48 ugly duckling
S49 UD sign*
S50 (physician* or clinical or physical) N1 (exam*)
S51 clinical accuracy
S52 general practice
S53 (physician* or clinical or physical) N1 (recog* or triage)
S54 confocal microscop*
S55 clinical competence
S56 diagnostic algorithm*
S57 checklist*
S58 virtual image*
S59 volatile organic compound*
S60 gene expression analys*
S61 reflex transmission imag*
S62 thermal imaging
S63 S13 or S14 or S15 OR S16 OR S17 OR S18 OR S19 OR S20 OR S21 OR S22 OR S23 OR S24 OR S25 OR S26 OR S27 OR S28 OR S29 OR S30 OR S31 OR S32 OR S33 OR S34 OR S35 OR S36 OR S37 OR S38 OR S39 OR S40 OR S41 OR S42 OR S43 OR S44 OR S45 OR S46 OR S47 OR S48 OR S49 OR S50 OR S51 OR S52 OR S53 OR S54 OR S55 OR S56 OR S57 OR S58 OR S59 OR S60 OR S61 OR S62
S64 CT or PET
S65 PET‐CT
S66 FDG or F18 or Fluorodeoxyglucose or radiopharmaceutical*
S67 (MH "Deoxyglucose+")
S68 deoxy‐glucose or deoxyglucose
S69 CATSCAN
S70 CAT‐SCAN
S71 (MH "Deoxyglucose+")
S72 (MH "Tomography, Emission‐Computed+")
S73 (MH "Tomography, X‐Ray Computed")
S74 positron emission tomograph*
S75 (MH "Magnetic Resonance Imaging+")
S76 MRI or fMRI or NMRI or scintigraph*
S77 echography
S78 doppler
S79 sonograph*
S80 ultraso*
S81 magnetic resonance imag*
S82 S64 OR S65 OR S66 OR S67 OR S68 OR S69 OR S70 OR S71 OR S72 OR S73 OR S74 OR S75 OR S76 OR S77 OR S78 OR S79 OR S80 OR S81
S83 stage* or staging or metasta* or recurrence or sensitivity or specificity or (false negative*) or thickness
S84 (MH "Neoplasm Staging")
S85 S83 OR S84
S86 S82 AND S85
S87 S63 OR S86
S88 S12 AND S87
Database: Science Citation Index SCI Expanded (Web of Science) 1900 to 30 August 2016
Conference Proceedings Citation Index (Web of Science) 1900 to 1 September 2016
Search strategy:
#1 (melanom* or nonmelanom* or non‐melanoma* or melanocyt* or non‐melanocyt* or nonmelanocyt* or keratinocyt*)
#2 (basalioma*)
#3 ((skin) near/2 (cancer* or carcinoma or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*))
#4 ((basal) near/2 (cancer* or carcinoma* or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*))
#5 ((pigmented) near/2 (lesion* or mole* or nevus or nevi or naevus or naevi or skin))
#6 (nmsc or BCC or NMSC or keratinocy*)
#7 ((squamous cell (cancer* or carcinoma* or mass or masses or tumour* or tumor* or neoplasm* or adenoma* or epithelioma* or lesion* or malignan* or nodule*))
#8 (skin or epiderm* or cutaneous)
#9 #8 AND #7
#10 #9 OR #6 OR #5 OR #4 OR #3 OR #2 OR #1
#11 ((dermoscop* or dermatoscop* or photomicrograph* or epiluminescence or confocal or "incident light" or "surface microscop*" or "visual inspect*" or "physical exam*" or 3 point or three point or pattern analy* or ABCDE or menzies or 7 point or seven point or dermoscop* or dermatoscop* or AI or artificial or computer aided or computer assisted or neural network* or Molemax or image process* or automatic classif* or image analysis or siascope or optical scan* or Aura or melafind or simsys or molemate or solarscan or vivascope or confocal microscop* or high ultraso* or canine detect* or cellphone* or mobile* or phone* or smartphone or dermoscan or skinvision or dermlink or spotcheck or spot check or mole detective or mole map* or total body or exfoliative psychology or digital or image software or optical coherence or teledermatology or telederm* or teledermoscop* or teledermatoscop* or computer diagnos* or sentinel))
#12 ((nevisense or HFUS or impedance spectroscopy or history taking or patient history or naked eye or skin exam* or physical exam* or ugly duckling or UD sign* or physician* exam* or physical exam* or ABCDE or clinical accuracy or general practice or confocal microscop* or clinical competence or diagnostic algorithm* or checklist* or virtual image* or volatile organic or VOC or dog* or gene expression or reflex transmission or thermal imag* or elastography))
#13 #11 or #12
#14 ((PET or CT or FDG or deoxyglucose or deoxy‐glucose or fluorodeoxy* or radiopharma* or CATSCAN or positron emission or computer assisted or nuclear magnetic or MRI or FMRI or NMRI or scintigraph* or echograph* or Doppler or sonograph* or ultraso* or magnetic reson*))
#15 ((stage* or staging or metast* or recurrence or sensitivity or specificity or false negative* or thickness*))
#16 #14 AND #15
#17 #16 OR #13
#18 #10 AND #17
Refined by: DOCUMENT TYPES: (MEETING ABSTRACT OR PROCEEDINGS PAPER)
Appendix 6. Full‐text inclusion criteria
Criterion | Inclusion | Exclusion |
Study design | For diagnostic and staging reviews | |
Target condition | | |
Population | For diagnostic reviews; For staging reviews | |
Index tests | For diagnosis; For staging. Any test combination and in any order; any test positivity threshold; any variation in testing procedure (e.g. radioisotope used) | |
Reference standard | For diagnostic studies; For studies of imaging tests for staging; For studies of SLNB accuracy for staging | For diagnostic studies |
BCC: basal cell carcinoma; cSCC: cutaneous squamous cell carcinoma; CT: computed tomography; FNAC: fine needle aspiration cytology; LND: lymph node dissection; MRI: magnetic resonance imaging; PET: positron emission tomography; PET‐CT: positron emission tomography computed tomography; RCT: randomised controlled trial; SCC: squamous cell carcinoma; SLN+: positive sentinel lymph node; SLN−: negative sentinel lymph node; SLNB: sentinel lymph node biopsy. |
Appendix 7. Quality assessment (based on QUADAS‐2)
The QUADAS‐2 checklist was tailored to the review topic as follows below (Whiting 2011).
Participant selection domain (1)
Selective recruitment of study participants can be a key influence on test accuracy. In general terms, all participants eligible to undergo a test should be included in a study, allowing for the intended use of that test within the context of the study. We considered studies that separately sampled malignant and benign lesions to have used a case‐control design; and those that supplemented a series of suspicious lesions with additional malignant or benign lesions to be at unclear risk of bias.
In terms of exclusions, we considered studies that excluded particular lesion types (e.g. lentigo maligna), particular lesion sites, or that excluded lesions on the basis of image quality or lack of observer agreement (e.g. on histopathology) to be at high risk of bias.
In judging the applicability of patient populations to the review question, we considered restriction to particular lesion populations, such as melanocytic, nodular, high risk or restrictions by size or to lesions that had been excised to be of high concern for applicability. For teledermatology, lesions selected from referred populations rather than selected by general practitioners (GPs) in a primary care setting were also judged to be high concern for applicability.
Given that diagnosis of skin cancer is primarily lesion‐based, there is the potential for study participants with multiple lesions to contribute disproportionately to estimates of test accuracy, especially if they are at particular risk of having skin cancer. We considered studies that include a high number of lesions in relation to the number of participants in the study to be less representative than studies conducted in a more general population of participants (i.e. if the difference between the number of included lesions and number of included participants is greater than 5%).
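A minimal formalisation of this 5% criterion, assuming the difference is expressed relative to the number of included participants (the denominator is not stated explicitly in the text above), would be:

\[
\frac{n_{\text{lesions}} - n_{\text{participants}}}{n_{\text{participants}}} > 0.05
\]

For example, under this reading a study including 212 lesions from 200 participants gives (212 − 200)/200 = 6% and would be judged less representative on this item.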
Index test domain (2)
Given the potential for subjective differences in test interpretation for melanoma, the interpretation of the index test blinded to the result of the reference standard is a key means of reducing bias. For prospective studies and retrospective studies that used the original index test interpretation, the diagnosis will by nature be interpreted and recorded before the result of the reference standard is known; however, studies using previously acquired images could be particularly susceptible to information bias. For these studies to be at low risk of bias, we required a clear indication that observers were unaware of the reference standard diagnosis at time of test interpretation. An item was also added to assess the presence of blinding between interpretations of different algorithms; however this item was not included in the overall assessment of risk of bias.
Prespecification of the index test threshold was considered present if the study clearly reported that the threshold used was not data driven, i.e. was not based on study results. Studies that did not clearly describe the threshold used but that required clinicians to record a diagnosis or management decision for a lesion were considered to be unclear on this criterion. Studies reporting accuracy for multiple numeric thresholds, where receiver operating characteristic (ROC) analysis was used to select the threshold, or that reported accuracy for the presence of independently significant lesion characteristics with no separate test set of lesions were considered at high risk of bias.
In terms of applicability of the index test to the review question, we required images for teledermatology assessment to be acquired in primary settings by GPs or other primary care staff rather than by expert dermatologists or medical photographers in a specialist setting. We also required diagnosis to be made by a single observer as opposed to a consensus decision or mean across multiple observers.
Despite the often subjective nature of test interpretation, it is also important for study authors to outline the particular lesion characteristics that were considered to be indicative for melanoma, for basal cell carcinoma (BCC) or for cutaneous squamous cell carcinoma (cSCC), particularly where established algorithms or checklists were not used. Studies were considered of low concern if the threshold used was established in a prior study or sufficient threshold details were presented to allow replication.
Reference standard domain (3)
In an ideal study, consecutively recruited participants should all undergo incisional or excisional biopsy of the skin lesion regardless of level of clinical suspicion of melanoma. In reality, both partial and differential verification bias are likely. Partial verification bias may occur where histology is the only reference standard used, and only those participants with a certain degree of suspicion of malignancy based on the result of the index test undergo verification, the others either being excluded from the study or defined as being disease‐negative without further assessment or follow‐up, as discussed above.
Differential verification bias will be present where other reference standards are used in addition to histological verification of suspicious lesions. A typical example of verification bias in skin cancer occurs when investigators do not biopsy people with benign‐appearing lesions but instead follow them up for a period of time to determine whether any malignancy subsequently develops (these would be false‐negatives on the index test). We defined an 'adequate' reference standard as: all disease‐positive people having a histological reference standard either at the time of application of the index test or after a period of clinical follow‐up; and at least 80% of disease‐negative participants have received a histological diagnosis, with up to 20% undergoing at least three months' follow‐up of benign‐appearing lesions.
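As an illustration of how this definition could be operationalised during data extraction, the sketch below is our own illustrative helper (not part of the review's methods or of any existing software); it simply codifies the stated 80%/20% and three‐month rules.

```python
# Illustrative helper only: codifies the 'adequate' reference standard definition given above;
# the function name, arguments and structure are assumptions for the purposes of illustration.
def adequate_reference_standard(n_disease_neg_histology: int,
                                n_disease_neg_followup: int,
                                followup_months: float,
                                all_disease_pos_have_histology: bool) -> bool:
    """Return True if a study's reference standard meets the stated adequacy definition."""
    total_neg = n_disease_neg_histology + n_disease_neg_followup
    if total_neg == 0 or not all_disease_pos_have_histology:
        return False
    histology_share = n_disease_neg_histology / total_neg
    # At least 80% of disease-negative lesions verified by histology; the remainder
    # (up to 20%) must have at least three months' clinical follow-up.
    return histology_share >= 0.80 and (n_disease_neg_followup == 0 or followup_months >= 3)

# Example: 85% of benign lesions excised, the rest followed for 6 months -> True.
print(adequate_reference_standard(85, 15, 6, True))   # True
print(adequate_reference_standard(60, 40, 6, True))   # False (< 80% histological verification)
```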
A further challenge is the potential for incorporation bias, i.e. where the result of the index test is used to help determine the reference standard diagnosis. It is normal practice for the clinical diagnosis (usually by visual inspection or dermoscopy) to be included on pathology request forms and for the histopathologist to use this diagnosis to help with the pathology interpretation. Although inclusion of such clinical information on the histopathology request form is theoretically a form of incorporation bias, blinded interpretation of the histopathology reference standard is not normal practice, and enforcement of such conditions would significantly limit the generalisability of the study results. For studies comparing teledermatology against a histological reference standard, this item was therefore scored but did not contribute to the overall risk of bias. For studies comparing teledermatology against a face‐to‐face expert diagnosis, however, this item was scored and did contribute to overall risk of bias.
In judging the applicability of the reference standard to our review question, we scored studies as of high concern if they used expert diagnosis (with no follow‐up) as a reference standard in any patient, or if they did not report histology interpretation by a dermatopathologist.
Flow and timing domain (4)
In the ideal study, the acquisition of images for the teledermatology diagnosis and the reference standard diagnosis should be made consecutively, or as near to each other in time as possible, to avoid changes in the lesion over time. We defined a one‐month period as an appropriate interval between application of the index test and the reference standard (either histological or face‐to‐face). For studies using clinical follow‐up, a minimum three‐month follow‐up period was defined as being at low risk of bias for detecting false negatives. This interval was chosen based on a study showing that most false‐negative melanomas will be diagnosed within three months of the initial negative index test, although a small number will be diagnosed up to 12 months later (Altamura 2008).
In assessing whether all patients were included in the analysis, we considered studies at high risk of bias if participants were excluded following recruitment.
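Taken together, these flow‐and‐timing criteria amount to a small decision rule. The following sketch is illustrative only (the function name, arguments and the use of 31 days as 'one month' are our choices); the six‐month option reflects the longer follow‐up applied to BCC in the assessment form below.

```python
from typing import Optional

def timing_judgement(interval_days: Optional[int],
                     follow_up_months: Optional[float] = None,
                     target_is_bcc: bool = False) -> str:
    """Return 'yes', 'no' or 'unclear' for the flow-and-timing criteria: at most
    one month between index test and reference standard and, where clinical
    follow-up forms part of the reference standard, at least 3 months of
    follow-up (6 months when the target condition is BCC)."""
    if interval_days is None:
        return "unclear"          # interval not reported
    if interval_days > 31:
        return "no"               # more than about one month between tests
    if follow_up_months is not None:
        required = 6.0 if target_is_bcc else 3.0
        if follow_up_months < required:
            return "no"           # follow-up too short to detect false negatives
    return "yes"
```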
Item | Response (delete as required) |
Participant selection (1) –risk of bias | |
1. Was a consecutive or random sample of participants or images enrolled? |
Yes – if paper states consecutive or random No – if paper describes other method of sampling Unclear – if participant sampling not described |
2. Was a case‐control design avoided? (note: a diagnostic case‐control study separately recruits participants according to selected final diagnoses, e.g. those with melanoma, with BCC, with severe dysplasia and with mild dysplasia AND will usually deliberately sample certain numbers from each group such that the overall case mix of included participants and disease prevalence is not reflective of usual care.) |
Yes – if case‐control design clearly not used No – if study described as case‐control or describes sampling specific numbers of participants with particular diagnoses Unclear – if not clearly described or you have any concerns that the authors have not selected a series of participants |
3. Did the study avoid inappropriate exclusions, e.g. of 'difficult to diagnose' lesions? |
Yes – if inappropriate exclusions were avoided No – if lesions were excluded that might affect test accuracy, e.g. 'difficult to diagnose' lesions, OR where disagreement between evaluators was observed Unclear – if not clearly reported but there is suspicion that difficult to diagnose lesions may have been excluded |
Could the selection of participants have introduced bias? If answers to all of questions 1. AND 2. AND 3. 'Yes' If answers to any 1 of questions 1. OR 2. OR 3. 'No' If answers to any 1 of questions 1. OR 2. OR 3. 'Unclear' |
Risk is low Risk is high Risk is unclear |
Participant selection (1) –concerns regarding applicability | |
1. Are the included participants and chosen study setting appropriate to answer the review question, i.e. are the study results generalisable? This item is not asking whether exclusion of certain participant groups might bias the study's results (as in 'Risk of bias' above), but is asking whether the chosen study participants and setting are appropriate to answer our review question. Because we are looking to establish test accuracy in both primary presentation and referred participants, a study could be appropriate for 1 setting and not for the other, or it could be unclear as to whether the study can appropriately answer either question. |
Yes – if participants included in the study appear to be generally representative of those who might present in a usual practice setting No – if study participants were restricted to those in lesion subgroups, e.g. melanocytic only, or small lesions only, if only excised lesions were included, or lesions were selected from referred populations rather than selected by general practitioners in a primary care setting Unclear – if insufficient details are provided to determine the generalisability of study participants |
2. Did the study avoid including participants with multiple lesions? |
Yes – if the difference between the number of included lesions and number of included participants is less than 5% No – if the difference between the number of included lesions and number of included participants is greater than 5% Unclear – if it is not possible to assess |
Is there concern that the included participants do not match the review question? If the answers to questions 1. and 2. 'Yes' If the answer to question 1. or 2. 'No' If the answer to question 1. or 2. 'Unclear' |
Concern is low Concern is high Concern is unclear |
Index test (2) –risk of bias (to be completed per test evaluated) | |
1. Was the index test or testing strategy result interpreted without knowledge of the results of the reference standard? |
Yes – if index test described as interpreted without knowledge of the reference standard result or, for prospective studies, if index test is always conducted and interpreted prior to the reference standard No – if index test described as interpreted in knowledge of reference standard result Unclear – if index test blinding is not described |
2. Was the diagnostic threshold at which the test was considered positive (i.e. melanoma, BCC or cSCC present) prespecified? |
Yes – if threshold was prespecified (i.e. prior to analysing study results), i.e. results were not data driven No – if threshold was not prespecified but was selected after analysis of results usually to maximise sensitivity or specificity (or both), or multiple thresholds were tested Unclear – if not possible to tell whether or not diagnostic threshold was prespecified |
Could the conduct or interpretation of the index test have introduced bias? For NC and BPC studies If answers to questions 1. and 2. 'Yes' If answer to either question 1. or 2. 'No' If answer to either question 1. or 2. 'Unclear' For WPC studies If answers to all questions 'Yes' If answer to any 1 question 'No' If answer to any 1 question 'Unclear' |
For NC and BPC studies Risk is low Risk is high Risk is unclear For WPC studies Risk is low Risk is high Risk is unclear |
Index test (2) –concern about applicability | |
1. Was the test applied and interpreted in a clinically applicable manner? |
Yes – in‐person evaluation and single observer result present No – image based, or mean or consensus result presented, or both Unclear – if cannot tell |
2. Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? Study results can only be reproduced if the diagnostic threshold is described in sufficient detail. This item applies equally to studies using pattern recognition and those using checklists or algorithms to aid test interpretation |
Yes – if the criteria for diagnosis of the target disorder were reported in sufficient detail to allow replication. If the study does not describe the threshold in detail BUT evaluates an established test/algorithm AND provides a citation to a previous study of the test in the Methods or Results, then respond Yes. No – if the criteria for diagnosis of the target disorder were not reported in sufficient detail to allow replication. Unclear – if some but not sufficient information on criteria for diagnosis to allow replication were provided. If the study does not describe the threshold in detail BUT evaluates an established test/algorithm but with NO citation to a previous study of the test in the methods, then respond Unclear. |
3. Was the test interpretation carried out by an experienced examiner? |
Yes – if the test was interpreted by ≥ 1 speciality accredited dermatologists, or by examiners of any clinical background with special interest in dermatology and with any formal training in the use of the test No – if the test was not interpreted by an experienced examiner (see above) Unclear – if the experience of the examiner(s) was not reported in sufficient detail to judge OR if examiners described as 'Expert' with no further detail given N/A – if system‐based diagnosis, i.e. no observer interpretation |
Is there concern that the index test, its conduct, or interpretation differ from the review question? If answers to questions 1., 2. AND 3. 'Yes' If answers to questions 1., 2. OR 3. 'No' If answers to questions 1., 2. OR 3. 'Unclear' |
Concern is low Concern is high Concern is unclear |
Reference standard (3) –risk of bias | |
1. Is the reference standard likely to correctly classify the target condition? a) Disease positive – ≥ 1 of:
b) Disease negative – ≥ 1 of:
|
a) Disease positive Yes – if all disease‐positive participants underwent 1 of the listed reference standards No – if a final diagnosis for any disease‐positive participant was reached without histopathology Unclear – if the method of final diagnosis was not reported for any disease‐positive participant OR if the length of clinical follow‐up used was not clear OR if a clinical follow‐up reference standard was reported in combination with a participant‐based analysis and it was not possible to determine whether the detection of a malignant lesion during follow‐up is the same lesion that originally tested negative on the index test b) Disease negative Yes – if ≥ 80% of benign diagnoses were reached by histology and up to 20% were reached by clinical follow‐up for a minimum of 3 (or 6) months following the index test No – if > 20% of benign diagnoses were reached by clinical follow‐up for a minimum of 3 (or 6) months following the index test OR if the clinical follow‐up period was less than 3 (or 6) months Unclear – if the method of final diagnosis was not reported for any participant with a benign or disease‐negative diagnosis |
2. Were the reference standard results interpreted without knowledge of the results of the index test? Please score this item for all studies. Response to the item will not be incorporated into the overall risk of bias assessment for comparisons against a histological reference standard as histopathology interpretation is usually conducted with knowledge of the clinical diagnosis (from visual inspection or dermoscopy (or both)). Response to the item will be incorporated into the overall risk of bias assessment for comparisons against a face‐to‐face reference standard |
For studies comparing teledermatology against a histological reference standard Yes – if the histological reference standard diagnosis was reached blinded to the index test result No – if the histological reference standard diagnosis was reached with knowledge of the index test result Unclear – if blinded reference test interpretation was not clearly reported If the histopathologist is described as 'blinded' with no further detail as to whether the blinding applies to both index test or to clinical information (prior testing), we will assume that blinding is to the index test result only, unless further detail is provided For studies comparing teledermatology against a face‐to‐face expert diagnosis Yes – if the face‐to‐face reference standard diagnosis was described as interpreted without knowledge of the teledermatology diagnosis (e.g. the remote and face‐to‐face diagnosis was made by 2 different dermatologists) No – if the face‐to‐face reference standard diagnosis was made with knowledge of the teledermatology diagnosis or was made by the same dermatologist within a month of the remote image‐based diagnosis Unclear – if it is not possible to tell whether knowledge of the teledermatology diagnosis could have influenced the reference standard diagnosis |
Could the reference standard, its conduct, or its interpretation have introduced bias? For comparisons against a histological reference standard: If answer to question 1. 'Yes' If answer to question 1. 'No' If answer to question 1. 'Unclear' For comparisons against a face‐to‐face reference standard: If answers to questions 1. AND 2. 'Yes' If answers to questions 1. OR 2. 'No' If answers to questions 1. OR 2. 'Unclear' |
For comparisons against a histological reference standard: Risk is low Risk is high Risk is unclear For comparisons against a face‐to‐face reference standard: Risk is low Risk is high Risk is unclear |
Reference standard (3) –concern about applicability | |
1. For studies comparing teledermatology/face‐to‐face clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? |
Yes – if histology interpretation was reported to be carried out by an experienced histopathologist or dermatopathologist No – if histology interpretation was reported to be carried out by a less‐experienced histopathologist Unclear – if the experience/qualifications of the pathologist were not reported |
2. For studies comparing teledermatology to face‐to‐face diagnosis, was the clinical diagnosis carried out by an experienced observer? |
Yes – if face‐to‐face interpretation was reported to be carried out by an experienced dermatologist No – if face‐to‐face interpretation was reported to be carried out by a less‐experienced dermatologist Unclear – if the experience/qualifications of the face‐to‐face clinician were not reported |
Is there concern that the target condition as defined by the reference standard does not match the review question? If answer to either question 1. or 2. 'Yes' If answer to either question 1. OR 2. 'No' If answer to either question 1. OR 2. 'Unclear' |
Concern is low Concern is high Concern is unclear |
Flow and timing (4): risk of bias | |
1. Was there an appropriate interval between index test and reference standard? a) For a histopathological reference standard, was the interval between index test and reference standard ≤ 1 month? b) If the reference standard includes clinical follow‐up of borderline/benign‐appearing lesions, was there a minimum follow‐up following application of the index test(s) of at least 3 months (melanoma, cSCC) or 6 months (BCC)? |
a) Yes – if study reports ≤ 1 month between index and reference standard No – if study reports > 1 month between index and reference standard Unclear – if study does not report interval between index and reference standard b) Yes – if study reports ≥ 3 (or 6) months follow‐up No – if study reports < 3 (or 6) months follow‐up Unclear – if study does not report length of clinical follow‐up |
2. Did all participants receive the same reference standard? |
Yes – if all participants underwent the same reference standard No – if > 1 reference standard was used Unclear – if not clearly reported |
3. Were all participants included in the analysis? |
Yes – if all participants were included in the analysis No – if some participants were excluded from the analysis Unclear – if not clearly reported |
Could the participant flow have introduced bias? For NC and BPC studies If answers to questions 1. AND 2. AND 3. 'Yes' If answers to any 1 of questions 1. OR 2. OR 3. 'No' If answers to any 1 of questions 1. OR 2. OR 3. 'Unclear' |
Risk is low Risk is high Risk is unclear |
Comparative domain: for BPC or WPC (or both) of index tests or testing strategies (i.e. > 1 index test applied per participant) | |
Index tests | |
1. Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? |
Yes – if all index tests were described as interpreted without knowledge of the results of the others No – if the index tests were described as interpreted in the knowledge of the results of the others Unclear – if it is not possible to tell whether knowledge of other index tests could have influenced test interpretation N/A – if only 1 index test was evaluated |
2. Was the interval between application of index tests ≤ 1 month? |
Yes – if study reports ≤ 1 month between index tests No – if study reports > 1 month between index tests Unclear – if study does not report interval between index tests |
Clinical applicability of comparison 1. Were both tests applied and interpreted in a clinically applicable manner? |
— |
BPC: between‐person comparison; BCC: basal cell carcinoma; cSCC: cutaneous squamous cell carcinoma; N/A: not applicable; NC: non‐comparative; WPC: within‐person comparison.
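The domain‐level decision rules set out in the form above reduce to a simple aggregation of the signalling‐question answers: all 'Yes' gives a low‐risk judgement, any 'No' a high‐risk judgement, and otherwise any 'Unclear' an unclear judgement. A minimal sketch (function and argument names are ours; where a domain contains both a 'No' and an 'Unclear', we assume the 'No' takes precedence):

```python
def domain_risk_of_bias(answers: dict) -> str:
    """Aggregate signalling-question answers ('Yes'/'No'/'Unclear') into a
    domain-level judgement: any 'No' -> 'high'; otherwise any 'Unclear'
    -> 'unclear'; all 'Yes' -> 'low'."""
    values = {str(v).strip().lower() for v in answers.values()}
    if "no" in values:
        return "high"
    if "unclear" in values:
        return "unclear"
    return "low"

# Example: participant selection judged from its three signalling questions
print(domain_risk_of_bias({
    "consecutive or random sample": "Yes",
    "case-control design avoided": "Yes",
    "inappropriate exclusions avoided": "Unclear",
}))  # prints: unclear
```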
Appendix 8. Summary study details
Author and year Outcome |
Study type County Setting Participants (lesions) |
Inclusion criteria |
Store and forward teledermatology Image acquisition Image interpretation Diagnostic decision |
Face to face diagnosis Method of diagnosis Diagnostic decision |
Reference standard Final diagnoses Prevalence (malignant) |
Excluded participants (from final analysis) |
Studies of diagnostic accuracy (TDvs histology) | ||||||
Arzberger 2016 Any |
NC P‐CS Secondary Austria 20 (23) |
Moderate‐to‐high risk of melanoma based on ≥ 1 of: personal or first‐degree relative history of melanoma; history of dysplastic naevi; > 5 atypical naevi; > 100 naevi; lesion suspicious for melanoma |
Clinical and dermoscopic images acquired in secondary care with digital camera coupled with dermoscope (equipment described). 4 remote teledermoscopy experts (dermatologists) evaluated the total body images and dermoscopic images, and gave a recommendation for each lesion Decision recorded: "self‐monitoring", "short‐term monitoring" and "excision" |
Not evaluated | Histology
Malignant MM: 8; MiS: 1; BCC: 2 BN: 12 0.478 |
Of 70 eligible participants only 20 were excised; no information reported on the 50 participants who had expert dx. |
Borve 2015 Any |
BPC P‐CS Primary and secondary Sweden 772 (816) |
People with ≥ 1 skin lesions of concern requiring referral to a dermatologist. (Accuracy was compared to a control group of an equal number of consecutive participants referred from other PHCs via the traditional paper‐based referral system during the same period; data included in this review.) PHCs that regularly referred patients with skin lesions of concern were invited to participate. |
Clinical and dermoscopic images acquired in primary care by GPs using smartphone digital camera and portable dermoscope using an iPhone 4 with a FotoFinder Handyscope app 4 Remote dermatologists reviewed images along with clinical information on an online platform and selected from standardised triage responses. Decision recorded:
|
Not evaluated | Histology plus expert Malignant: MM: 19; MiS: 16; cSCC: 17; BCC: 109 Benign: dysplastic: 89; BN: 236; SK: 125; other benign: 137 *AK: 61 *cSCC (in situ): 7 0.281 |
86 excluded from TD group (including 4 with poor image quality, 21 < 18 years old, 50 'no shows;' other reasons also provided) *Authors included in situ cSCC and AK as D+; could only be disaggregated for threshold of malignancy as any differential dx. |
Bowns 2006 MEL BCC |
WPC tests P‐CS Specialist unit UK NR (256) |
People (with skin lesions) who were either referred to the 2‐week wait or 'target' clinics, or those initially referred to the normal outpatient service but who were diverted by the consultant on the basis of the referral form. | Clinical and dermoscopic images acquired in secondary care at a Medical Photography Department using a digital camera (equipment not described). 3 independent dermatologists assessed the images along with clinical information Decision recorded:
|
Not evaluated | Histology plus expert dx Malignant: MM: 19; MiS: 5; BCC: 29; cSCC: 16; other malignant: 1 *Severely dysplastic naevi: 3; BN: 64; *BD/in situ SCC: 9; SK: 70; solar keratosis: 12; *severely dysplastic solar keratosis: 3; other benign: 25 0.332 |
11 excluded: 7 wrong lesion imaged, 3 histology already undertaken, 3 image file lost *Authors included these as D+ for malignancy; data excluded from our 'Any' skin cancer analysis (author contacted) |
Congalton 2015 MEL |
NC P‐CS Secondary and private New Zealand 99 (129) excised Full sample: 310 participants (613 lesions) |
All participants referred with skin lesions suspicious for melanoma assessed at a Virtual Lesion Clinic for triage, instead of being seen FTF at a hospital clinic. Only those excised could be included | Clinical and dermoscopic images acquired in secondary care using a digital camera coupled with dermoscope 2 experienced dermatologists reviewed participant details and images remotely using the MoleMapDiagnose software Decision recorded: management decision;
|
Not evaluated | Histology plus expert dx MM: 47; melanoma metastases: 1*; BCC: 40; cSCC: 9 (including in situ*) Benign: 32 0.403 |
1 participant was excluded from further analysis because he attended the VLC after the referred lesion had been excised *1 MM Mets incl as D+; cannot disaggregate in situ SCC; data excluded from our 'Any' skin cancer analysis |
Coras 2003 MM |
WPC‐tests P‐CS Secondary and private Germany NR (46) |
Pigmented skin lesions undergoing excision due to dx of melanoma or atypical nevus, to rule out melanoma or at the participant's request | Clinical and dermoscopic images acquired at private clinic using digital camera coupled with dermoscope. 1 dermatologist evaluated the images and made a dx based on the images and history of the participant Decision recorded:
|
3 participating dermatologists with experience in dermoscopy established a clinical dx based on pattern analysis after personal consultation with the participant in their private practice clinics. | Histology MM: 16 Benign: 29 0.356 | Reported that many images were of poor quality (10) and that only 45 biopsies were done; 50 participants who did not have histology were excluded.
Ferrara 2004 MEL |
NC CCS Unspecified NR; likely Italy NR (12) |
12 melanocytic lesions with dermoscopic images (a single image per case) and accompanying histological material were retrieved from our consultation files | Dermoscopic images acquired using a film or digital camera coupled with dermoscope; setting for lesion acquisition unclear. Stored images were viewed on a standard‐resolution colour monitor by 3 remote consultants in a single session. Dermoscopic images presented first (dx recorded by single observer), followed by histological image (for teledermatopathology dx), the original histological dx from the consultation file was then presented (apparently along with the original clinical dx). "Dermoscopic–pathological remarks" were made and finally a consensus dx was reached by 2 consultants; the latter was taken as a second 'gold standard' for the study. Decision recorded:
|
Not evaluated | Histology
MM: 4; MiS: 3
'Benign:' 5 (incl junctional nevus: 1; reed naevus: 1; blue naevus: 1; actinic lentigo: 1; SN: 1) 0.583 |
NR |
Grimaldi 2009 MM |
Case series WPC‐tests P‐CS Primary and secondary Italy 197 (235) |
Cutaneous pigmented lesions with digital images forwarded by primary care physicians to a referral centre for confirmation of dx. | Clinical and dermoscopic images acquired in primary care using a digital camera coupled with dermoscope. All photographed lesions uploaded from the peripheral units to the central research unit for telediagnosis. Images appraised at the reference unit by dermatologist and plastic surgeons numbers NR) Decision recorded:
|
Each of 13 GPs was asked to formulate a written first judgement of every lesion before digital acquisition using visual inspection alone and then following dermoscopy. The evaluation method followed the ABCD rule of dermoscopy according to Nachbar et al (Nachbar 1994) Decision recorded:
|
Histology plus follow‐up (208 diagnosed as benign after 6 months' follow‐up)
MM: 5
Benign: 230 0.021 |
NR |
Jolliffe 2001a Any BCC |
WPC‐tests P‐CS Specialist unit UK 138 (144) |
People referred by their GP for dermatological assessment of a pigmented lesion at the PLC | Images acquired at PLC using a single chip video camera. The image was archived using proprietary software and images were transmitted through a Fast Screen Machine2 video overlay card and viewed alongside clinical information on a 15 inch monitor (observer qualifications and expertise NR). Decision recorded: clinical dx |
1 dermatologist at the PLC made a clinical dx (± the use of dermoscopy) based upon information in the referral letter and examination findings | Histology
MM: 2; LM: 2; BCC: 9
Atypical: 5; BN: 89; SK: 9; solar lentigo: 7; blue naevus: 4; SN: 2; other: 15 0.090 |
In 4 cases (2.7%) it was impossible to make a dx from the image, due to poor image quality. |
Kroemer 2011 MEL BCC cSCC Any |
WPC‐tests P‐CS Secondary Austria 88 (113) |
People self‐referred or referred to general outpatient clinic at the Department of Dermatology, Medical University of Graz, Graz, Austria by a local doctor for evaluation of a skin tumour. | Clinical and dermoscopic images (up to 3 each per participant) acquired by dermatologist in secondary care using a mobile phone camera (Nokia N73 with Dermlite II Pro). Images reviewed by 1 board‐certified dermatologist with clinical expertise in TD and dermoscopy; images reviewed blinded to each other Decision recorded:
|
FTF clinical dx at general dermatology outpatient clinic by a single dermatologist. Not clearly reported; most likely visual inspection of the skin (± use of dermoscopy). No algorithm described | Histology
MM: 1; MiS: 1; BCC: 30; cSCC: 10, LM: 3; mel mets: 1; SK: 6; AK: 17; BD: 1; BN: 15; other: 19 0.058 |
9 lesions (8 participants); 3 participants declined participation. Of 322 clinical and 278 dermoscopic images, 2 clinical and 18 dermoscopic images were excluded |
Massone 2014 Any |
NC P‐CS Private care Austria 112 (121) Full sample: 690 (962) |
People undergoing a 2‐day 'health screening holiday' as part of a preventive medical screening programme; images acquired by GPs for any suspicious skin lesions. Only those recommended for excision or FTF assessment by TD were included. | Clinical and dermoscopic images acquired in primary care using a digital camera coupled with a dermoscope. Images were reviewed blinded to other information by 2 dermatologists within 48 hours ("No personal patients' data were transmitted; patients were identified only by a progressive number.") Decision recorded:
Not evaluated |
Histology plus expert clinical dx
Malignant: MM 2; BCC 5
Dysplastic 11; SK 4; *AK 1; other 3 *authors considered AK as D+; could be disaggregated and considered D‐ 0.25 |
82/121 participants lost to follow‐up (i.e. no histology dx), including 4 considered to have melanoma and 8 with BCC | —
Moreno Ramirez 2005 MEL BCC Any |
NC R‐CS Specialist unit Spain 108 referred to PLC, 57 participants included in the final analysis (57) |
People with pigmented, circumscribed lesions fulfilling ≥ 1 of the following criteria:
|
Photographic images acquired in primary care using a digital camera. Images evaluated at PLC by a dermatologist alongside clinical information Decision recorded:
*obviously benign were not referred; all other categories were routinely referred to the PLC for FTF assessment. |
Not evaluated | Histology plus follow‐up (follow‐up period NR)
Malignant: MiS: 1; BCC: 23; LM: 3;
dysplastic: 16; BN: 8; blue nevi: 4 0.105 |
Difficult‐to‐dx cases were excluded (n = 13), as were those with 'malignant or suspicious lesions' (n = 28) |
Piccolo 2000 MM |
WPC tests P‐CS Unspecified Austria (Graz) 40 (43) |
People with pigmented skin lesions selected because of their diagnostic difficulty and subsequently excised for a histopathological evaluation. | Clinical and dermoscopic images acquired in secondary care using a dermoscopic digital camera. Images stored on a prototype TD workstation and distributed to remote centres via email together with basic participant data (initials, age, sex and site of the lesion). Observers included 6 dermatologists, 2 residents in dermatology, 1 internist, 1 GP, 1 oncologist Decision recorded: NR |
All lesions were examined with a dermatoscope by 1 expert dermatologist during the FTF clinical dx (most likely in a secondary care setting). No specific algorithm used. | Histology MM: 11 SK: 3, BN: 25; angiokeratoma: 1; lentigines: 3 0.259 |
Poor quality index test image (all images scoring 4 were excluded from the study) |
Piccolo 2004 MEL |
NC R‐CS Unspecified Images selected from University of Graz Austria and University of L'Aquila Italy 73 (77) |
People with melanocytic acral lesions | Dermoscopic images acquired in secondary care using a dermoscopic digital camera. Images analysed along with information on age, sex and lesion site on a computer monitor by each dermatologist (11; 5 rated highly experienced in dermoscopy, 2 medium experience and 4 low level of experience). Decision recorded:
|
— | Histology Malignant: MM (acral): 6; acral melanocytic naevi: 71 0.078 |
None reported |
Silveira 2014 Any |
Non‐comparative P‐CS Community Brazil NR (416) |
People with skin lesions that were determined to be suspicious after a direct visual inspection by a physician at a Community Mobile Prevention Unit (providing screening of the local population for prostate, cervical and skin cancers). | Photographic images acquired in a community setting using a digital camera. Images were coded, stored and submitted at random to 2 oncologists at Barretos Cancer Hospital, blinded to the MPU physician's dx and pathology report but with age, skin type and lesion site provided. Decision recorded: Management decision
|
— | Histology plus expert (52 were diagnosed by expert opinion) Malignant: MM: 5; BCC: 286, cSCC: 59; malignant other: 14 Benign dx: 52 0.875 |
21 were excluded from the study because of poor quality photos, 23 were excluded because of incomplete data preventing the identification of the participant |
Warshaw 2010b MEL Full methods reported in Warshaw 2009a and Warshaw 2009b which reported data for primary lesions only |
WPC P‐CS Secondary USA NR/1514 (2152 participants with 3021 lesions enrolled in full trial; 1685 lesions were biopsied) |
People with pigmented and non‐pigmented lesions enrolled at the Department of Veteran Affairs dermatology clinic who required (or requested) removal of ≥ 1 skin neoplasm ('high‐risk group') and participants who were referred to general dermatology clinic by non‐dermatology healthcare providers for evaluation of a skin neoplasm (lower‐risk group). Biopsied lesions only were included and, for the 2010 paper, only lesion categories with ≥ 25 lesions. | Macro images* and PLD images were obtained for each lesion by research staff on attendance at Dermatology clinic. Contact immersion dermoscopy images also obtained for a sample of pigmented lesions. 1 of 3 board‐certified dermatologists with clinical expertise in dermoscopy was randomly assigned to review a macro package alone or with a PLD image along with standardised participant and lesion history. Images of all lesions for a single participant visit (i.e. ≥ 1 lesion per participant) were evaluated by the teledermatologist in a single session. Recorded:
|
Clinical assessment by 1 of 11 staff dermatologists; history obtained in usual manner and clinical examination could include all options normally available in the clinical setting (e.g. palpation, diascopy, dermoscopy) Recorded:
|
Histology (board‐certified dermatopathologist) MM: 41; BCC: 410; cSCC: 240 Benign keratoses: 223; DN: 154; AK: 145; BN: 138; cysts: 73; benign appendageal tumours: 35; lentigines: 29; benign vascular neoplasms: 26 0.042 |
171 lesions in histopathologic categories with < 25 lesions *Paper presented accuracy data for each image type; underlying data to allow construction of 2×2 tables obtained from author only for macro images for primary dx of MM/MiS. |
Wolf 2013 MEL |
WPC CCS Secondary USA NR (159) |
People with pigmented lesions that were considered atypical in clinical appearance by ≥ 1 dermatologist and for which a clear histological dx had been rendered by a board‐certified dermatopathologist (invasive melanoma, MiS, lentigo, benign nevus, DF, SK and haemangioma). Excluded equivocal diagnoses and specific lesion types. | Photographic images acquired in routine secondary care prior to skin lesion removal. 4 applications for smartphone devices were evaluated, including 1 store‐and‐forward application (app 4) which can be run on a smartphone or from a website. Images sent to a board‐certified dermatologist (number NR) for evaluation within 24 hours. Decision recorded:
|
Histology Melanoma: 44; MiS: 16 Benign dx: 34; SK: 20; lentigo: 8; haemangioma: 2; DF: 4 0.34 |
29 lesions with a TD response of "send another photograph" or "unable to categorize" were considered unevaluable and excluded from the analysis. (Of 390 images considered for possible inclusion in the study, 202 were excluded as being of poor image quality, containing identifiable participant information or features, or lacking sufficient clinical or histological information.) |
Studies of referral accuracy (TD vs FTF dx) | ||||||
Jolliffe 2001b Refer/no referral |
NC P‐CS Secondary UK 611 (819) |
People referred to the dermatology departments of the Royal Free Hospital or the Whittington Hospital during the study period by their GPs for assessment of a pigmented lesion were seen in clinic. | Photographic images acquired in secondary care using a single chip video camera Images were viewed alongside GP referral information independently by 2 dermatologists and a registrar several months following the FTF encounter. Decision recorded:
|
Not evaluated as an index test | Expert opinion (in clinic dx by the registrar and 1 of either of the 2 consultants) Clinical diagnoses: MM: 9; BCC: 19; LM: 1 SK: 152; BN: 361; atypical: 112; congenital: 27; DF: 25; solar lentigo: 23; KA: 1; angioma: 18; AK: 13; SN: 1; blue: 2; other: 56 Clinical dx: 3.6% Action needed: 17.5% |
23 poor‐quality images (lesion referred on the basis of poor picture quality rather than known clinical need) |
Mahendran 2005 Action/no action |
WPC‐tests P‐CS Primary and secondary UK 163 (163) |
GPs recruited consecutive unselected participants with suspicious skin lesions whom they would refer to the dermatology department in their normal practice |
|
Not evaluated as an index test | Expert clinical dx (unclear but appears to be by the same TD consultants within 2 weeks) Consultant diagnoses (not histologically confirmed) MM: 4, BCC: 37; cSCC: 4; AK: 10; BD: 7; SC: 4; LM: 1; atypical: 6; SK: 27; BN: 20; DF: 11; other: 36 Malignant: 27.6% Action required (FTF): 65.1% |
57 (35%) excluded as no TD decision could be made; 24 (15%) poor‐quality images; 33 (20%) needed to be seen FTF |
Manahan 2015 Action (see FTF or not) |
NC P‐CS Community Australia 49/340 |
People aged 50–64 years at high risk of melanoma (fair skin type, previous skin excisions, personal or family history) recruited via the 'QSkin' study or volunteers who completed a questionnaire and who had a suitable smartphone. Participants were instructed to submit "photos of moles or spots that they did not like the look of" and were given instructions about how to select lesions based on asymmetry and colour. | Study participants used the Handyscope FotoFinder dermoscope smartphone attachment (FotoFinder Systems GmbH, Bad Birnbach, Germany) and Handyscope app to obtain and send a magnified lesion image along with a second clinical (macro) image to verify the anatomical site of each skin lesion. 1 teledermatologist recorded:
|
Not evaluated as an index test | Clinical skin examination by a dermatology registrar under supervision of the dermatologist who undertook the telediagnosis. The same management options were recommended in the FTF consultation FTF diagnoses (not histologically confirmed): BCC: 13; SCC/IEC: 1 Atypical naevus: 4; BN: 165; SL: 22; SK: 81 Non‐pigmented: AK: 34; DF: 2; other: 18 |
8/58 participants did not attend for FTF examination; 1/58 without age restriction; 32/341 lesions did not appear to have a primary TD dx |
Oliveira 2002 dx (malignant vs not) |
NC P?‐CS Primary Brazil 90/90 |
People with suspect dermatological conditions identified by an assistant nurse who had undergone training to identify potentially malignant skin lesions. Only those who attended for FTF assessment were included. | Lesions photographed by the nurse using a Kodak DC265 Zoom digital camera. 2‐hour training in the use of the camera was provided and included instruction on the installation of the camera's software and transferring the images to the computer. Images were sent by nurse with an electronic case report form and included her diagnostic impression whether the lesion was non‐malignant or malignant. All cases were assessed the cases remotely by a dermatologist prior to the in‐person evaluation and assessed as malignant or benign. |
Not evaluated as an index test | Within 1 week the same dermatologist saw the participant in‐person. Participants were referred for biopsy when skin cancer was the suspected dx. The in‐person assessments by the dermatologist (and the biopsy results in a few cases) were used as reference. Malignant: 8 Benign: 84 9 were referred for biopsy of whom 5 attended, including 1 SCC and 3 BCC |
2 lesions without a tele‐dx |
Phillips 1998 Dx SC Malignant/probably malignant Excise |
NC P‐CS Community USA 51/107 |
People attending 4 skin cancer screenings at community hospitals in rural eastern North Carolina. Participants were given a choice of having a total body examination, only the sun‐exposed skin, or a specific lesion(s) evaluated by the on‐site physician. | Live‐link TD using: a full‐body camera, a lens for viewing the lesions close up, and a magnifying lens that allowed even closer views as well as examination with polarised light. It is not clear who operated the cameras during the teleconsultations. If a complete patient skin examination was performed in‐person, representative lesions were selected by the on‐site physician for evaluation by the remote physician. Decision recorded:
|
Not evaluated as an index test | All participants were first evaluated by the on‐site physician who recorded specific lesions on an image of the human body and dx/management decisions as per remote observer. Target condition (FTF diagnoses) Malignant: BCC: 2; SCC: 3; LM: 1 Benign: SK: 27; BN: 32; AK: 14; lentigo: 10; other: 30 |
None reported |
Shapiro 2004 Biopsy |
NC P‐CS Secondary USA 49 (NR) |
Only people with skin growths that posed a true diagnostic challenge, as selected by the PCP (board certified in internal medicine since 1984) | Photographic images acquired in primary care by a network community primary care physician using a digital camera. Reviewed by a board‐certified academic dermatologist for teledermatological consultation alongside clinical information. Decision recorded:
|
Not evaluated as an index test | Expert (local dermatologist in private practice) 23 underwent biopsy: BCC: 5; cSCC: 4; benign: 17 Assumed benign (no biopsy): 23 (including 1 participant who refused biopsy) Malignancy: 18% Action recommended: 49% |
11 (4 failed to present for FTF assessment, 4 saw a different FTF dermatologist, 1 died, and 2 underwent evaluation of different lesions by the SAF teledermatologist and FTF dermatologist) |
ABCD: asymmetry, border, colour, differential structures; BCC: basal cell carcinoma; BN: benign naevi; BPC: between‐person comparison; CCS: case control study; cSCC: cutaneous squamous cell carcinoma; D+: disease positive; DF: dermatofibroma; dx: diagnosis; FTF: face‐to‐face; GP: general practitioner; IEC: intraepithelial carcinoma; KA: keratoacanthoma; LM: lentigo maligna; MEL: invasive melanoma or atypical intraepidermal melanocytic variants; MiS: melanoma in situ; MM: malignant (invasive) melanoma; NR: not reported; P‐CS: prospective case series; PHC: primary health care; PLC: pigmented lesion clinic; PLD: polarised light dermoscopy; SAF: store‐and‐forward; SK: seborrhoeic keratosis; TD: teledermatology; WPC: within‐person comparison.
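For orientation, the 'Prevalence (malignant)' figures in the table above are simply the number of lesions with a malignant final diagnosis divided by all lesions that received a final diagnosis; which categories count as malignant depends on the target condition of the analysis (for example, whether in situ disease is included). A minimal illustrative sketch (names are ours), reproducing the Arzberger 2016 figure:

```python
def malignancy_prevalence(final_diagnoses: dict, malignant_categories: set) -> float:
    """Lesions with a malignant final diagnosis divided by all lesions that
    received a final diagnosis."""
    total = sum(final_diagnoses.values())
    malignant = sum(n for dx, n in final_diagnoses.items()
                    if dx in malignant_categories)
    return malignant / total

# Worked example: the Arzberger 2016 row above (MM: 8; MiS: 1; BCC: 2; BN: 12)
arzberger_counts = {"MM": 8, "MiS": 1, "BCC": 2, "BN": 12}
print(round(malignancy_prevalence(arzberger_counts, {"MM", "MiS", "BCC"}), 3))
# prints: 0.478
```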
Data
Presented below are all the data for all of the tests entered into the review.
Tests. Data tables by test.
Characteristics of studies
Characteristics of included studies [ordered by study ID]
Arzberger 2016.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: May to October 2009 Country: Austria |
||
Patient characteristics and setting |
Inclusion criteria: study participants with at least 1 of the following factors associated with moderate‐to‐high risk of melanoma: personal or first‐degree relative history of melanoma; history of dysplastic naevi; > 5 atypical naevi; > 100 naevi; lesion suspicious for melanoma. Setting: specialist clinic (Pigmented Skin Lesion Clinic Medical University of Graz) Prior testing: NR Setting for prior testing: NR Exclusion criteria: none reported Sample size (participants): number eligible: 70; number included: 20 Sample size (lesions): number eligible: 1922; number included: 23 Participant characteristics: Age: range: 11–81 years Gender: male: 35, female 35 Lesion characteristics: none reported |
||
Index tests |
TD Acquisition and transmission of images: after clinical examination, participants had total body photography performed by an experienced dermatology nurse or dermatology resident (who took on the role of a "melanographer") using the MoleMap program, without regard to the decision made at the FTF examination. The melanographer acquired body‐sector photographs (Nikon D40 and D50 digital SLR, Nikon Corporation Tokyo, Japan) and photographs of selected skin lesions, i.e. those that were: "(i) highly suspicious; (ii) concerning; (iii) changing and/or different; (iv) > 3 mm; (v) itching, bleeding, inflamed; or (vi) suspicious for basal cell carcinoma (BCC) or squamous cell carcinoma." Selected lesions had a close‐up macroscopic and dermoscopic image taken. Images were then uploaded to a centralised server in New Zealand, which was accessible to the participating teledermoscopy experts. Nature of images used: clinical and dermoscopic Any additional participant information provided: total body images Observer qualifications (remote diagnosis): dermatologists (experts in dermoscopy) Diagnosis based on: single observer Number: 4 Method of diagnosis: 4 remote teledermoscopy experts evaluated the total body images and dermoscopic images, and gave a recommendation for each lesion Management options (diagnostic threshold): recommendations for management included: "self‐monitoring," "short‐term monitoring" and "excision"
||
Target condition and reference standard(s) |
Reference standard: histology Details: histology (excision) Lesions were selected for excision after conventional FTF total body and dermoscopic examination of all lesions, performed by individual dermatologists with expertise in the assessment of pigmented lesions. Histopathological examination of excised lesions was performed at the Dermatopathology Laboratory at the Medical University Graz, Graz, Austria
Target condition (final diagnoses)
|
||
Flow and timing | 50 participants were excluded from the final analysis (48 were discharged with the recommendation to perform monthly skin self‐examination and 2 were followed up); 20 participants had their lesions excised and were included in the final analysis. The interval between index test and reference standard was not clearly reported; however, they appeared to be performed simultaneously |
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | |
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Borve 2015.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: January 2011 to December 2012 Country: Sweden |
||
Patient characteristics and setting |
Inclusion criteria: adults > 18 years of age with ≥ 1 skin lesions of concern requiring referral to a dermatologist using the TD referral system Setting: primary (20 primary healthcare centres in western Sweden) Prior testing: N/A Setting for prior testing: N/A Exclusion criteria: did not attend FTF visit(s), did not comply or skin lesions were located on a body part that could not be photographed. Sample size (participants): number eligible: 902; number included: 816 Participant characteristics: Age: mean 54; range: 18–93 years Gender: 474 (61.3%) women Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: GPs used the smartphone TD referral system. The GP took 1 clinical and 1 dermoscopic image using an iPhone 4 with a FotoFinder Handyscope app, and completed a standardised query form including all the relevant clinical information. This was then sent through a secure web‐based TD platform (Tele‐Dermis) with a secure socket layer encryption. Simultaneously the participating dermatologists received an email that a new referral was ready for assessment. Nature of images used: clinical photographs and dermoscopic images Any additional participant information provided: clinical information Diagnosis based on: single; number of examiners: 4 Observer qualifications (remote diagnosis): 3 specialists in dermatology and 1 resident in dermatology Method of diagnosis: dermatologists logged onto the Tele‐Dermis platform to review the referrals on a 17‐ or 19‐inch liquid crystal display monitor. They chose from standardised triage responses including an assessment of the nature of the lesion (benign, malignant or unclear), ≥ 1 possible diagnoses, the priority given (high, within 2 weeks; medium, within 4 weeks or low, within 8–12 weeks), suggested management (none, medical therapy, destructive therapy or surgery) and, finally, a dermoscopic description. |
||
Target condition and reference standard(s) |
Histological plus expert diagnosis Histology: 551 Details: All MMs and SCCs (keratoacanthomas were classified as SCCs) were confirmed histopathologically. Final diagnosis confirmed histopathologically in 292 TD referrals (36%) and 259 paper referrals (35%) Expert diagnosis (FTF diagnosis at dermatology clinic): 265 Method of diagnosis: dermatologists used dermoscopy to evaluate the study lesions and carried out full body skin examination Prior test data: all relevant clinical information (FTF visits were not blinded to the results of the teledermoscopists) Diagnostic threshold: NR Diagnosis based on: single Number of examiners: NR Observer qualifications: dermatologists (specialists in dermoscopy) Experience in practice: high Experience with index test: high Target condition (final diagnoses) Malignant: MM: 19 (2.3%); MiS: 16 (2.0%); SCC: 17 (2.1%); SCC in situ: 7 (0.9%); BCC: 109 (13.4%); AK: 61 (7.5%); other malignant: 0 Benign: dysplastic nevi: 89 (10.9%); BN: 236 (28.9%); SK: 125 (15.3%); other benign: 137 (16.8%) |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Low | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | |
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | No | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | Yes | ||
High |
Bowns 2006.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: NR Country: UK |
||
Patient characteristics and setting |
Inclusion criteria: people either referred to the 2‐week wait or 'target' clinics, or those initially referred to the normal outpatient service but who were diverted by the consultant on the basis of the referral form. Setting: specialist unit (skin cancer/pigmented lesions clinic) Prior testing: clinical or dermatoscopic (or both) suspicion (not clearly described, but all referred with certain degree of concern based on referral to 2‐week wait or urgency graded by consultant based on referral letter) Setting for prior testing: primary Exclusion criteria: NR Sample size (participants): NR but < 256 as a number of participants were referred for > 1 lesion, all treated as independent Sample size (lesions): number eligible: 267; number included: 256 Participant characteristics: NR Age: classified by age band; 61% were aged > 55 years Gender: male: 46.9% Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: lesion images were taken at the Medical Photography Unit (equipment not described) before an outpatient appointment with a dermatologist using both normal photographic methods and a dermatoscope. Nature of images used: clinical and dermoscopic Any additional participant information provided: initial referral forms or letter provided Method of diagnosis: independent dermatologist assessed photographs and gave their most likely diagnosis and level of confidence in the diagnosis. They also gave an opinion on whether the lesion was malignant and a recommendation on whether they would wish to see the participant. Diagnosis based on: single observer Number of examiners: 3 consultants Observer qualifications (remote diagnosis): dermatologist (experience NR) |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis plus other Details: histology undertaken in 164 cases, including 78/85 malignant cases and 86/171 benign cases (50.3%) FTF diagnosis/expert opinion Details: final diagnoses for 92 lesions reached by FTF decision only (VI ± use of dermoscopy (not specified)), including 7/85 malignant ('mainly with diagnoses of BCC or Bowen's disease') and 85/171 benign cases Number of examiners: 7 consultant dermatologists Experience in practice: NR Experience with index test: NR Target condition (final diagnoses as per expert clinical diagnosis and histology)
|
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Low | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Unclear | ||
Low | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | |
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | No | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Congalton 2015.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: 1 April 2012 to 31 March 2013 Country: New Zealand |
||
Patient characteristics and setting |
Inclusion criteria: people referred from primary care with skin lesions suspicious for melanoma triaged via a VLC instead of being seen FTF at a hospital clinic. Referrals that indicated 1–6 lesions of concern were included. Setting: secondary (general dermatology) and private care Prior testing: clinical or dermatoscopic (or both) suspicion Setting for prior testing: primary Exclusion criteria: difficult‐to‐diagnose lesions by location/site – skin lesions on the scalp and genitals were generally excluded, as were those where the body site was not clearly identified in the referral. Sample size (participants): number eligible: 345; number included: 310 Sample size (lesions): number included: 613 Participant characteristics: Age: median: 58 (range: 15–92) years Gender: male: 142; female: 168 Race/ethnicity (%): white: 242 (78%); black or African American: 12 (4%); Hispanic or Latino: 3 (< 1%); Asian: 16 (5%); other: Maori 16 (5%), Pacific islanders 12 (3%); missing: 12 (4%) Lesion characteristics: NR
||
Index tests |
TD Acquisition and transmission of images: participants attended 2 imaging clinics run by a private TD company (MoleMap NZ). Total body photography not offered. Macroscopic and dermoscopic images captured using a Canon G6 camera or MoleCam; regional anatomic views captured using Nikon D3100. Information on the referred lesions such as whether the person had noticed any changes in size, colour, bleeding and itching was recorded. Hair and eye colour, skin type, previous history of sun exposure and family/personal history of skin cancers were also captured. Files were archived using proprietary software (MoleMap NZ) and uploaded to a secure server via a virtual private network. Nature of images used: clinical and dermoscopic images Any additional participant information provided: participant details Diagnosis based on: single observer Number of examiners: 2 Observer qualifications (remote diagnosis): dermatologist Method of diagnosis: using the MoleMapDiagnose software, 2 experienced dermatologists reviewed participant details and images remotely, making a diagnosis and suggesting management. Management options (diagnostic threshold): options included: "(i) specialist assessment or excision of the lesion; (ii) re‐imaging in 3‐months' time to detect change (e.g., atypical naevus without criteria for immediate excision); (iii) discharge to care of general practitioner (GP) e.g., for cryotherapy or topical therapy; (iv) self‐monitoring and (v) lesion of no concern." |
||
Target condition and reference standard(s) |
Reference standard – histology alone; 129 lesions excised; 123 considered suspicious for malignancy on TD, 5 considered benign and 1 'undiagnosable' Target condition (final diagnoses as per expert clinical diagnosis and histology)
|
||
Flow and timing |
|
||
Comparative | — | ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | No | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | No | ||
High | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | |||
If a threshold was used, was it pre‐specified? | |||
Was the test applied and interpreted in a clinically applicable manner? | |||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | |||
Was the test interpretation carried out by an experienced examiner? | |||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | Unclear | ||
High | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | |||
Was the interval between application of the index tests less than 1 month? | |||
Were all tests applied and interpreted in a clinically applicable manner? | |||
Coras 2003.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: 16‐month period (dates not reported) Country: Germany |
||
Patient characteristics and setting |
Inclusion criteria: PSLs undergoing excision due to diagnosis of melanoma or atypical nevus, to rule out melanoma or at the participant's request Setting: secondary (general dermatology) for teledermoscopy diagnosis; private care for FTF diagnosis Prior testing: NR Setting for prior testing: NR Exclusion criteria: none reported Sample size (participants): NR Sample size (lesions): number eligible: 90; number included: 45 Participant characteristics: none reported Lesion characteristics: none reported |
||
Index tests |
In‐person assessment (for those comparing FTF vs histology) Method of diagnosis: participating dermatologists with experience in dermoscopy established a clinical diagnosis based on pattern analysis after personal consultation with the participant in their private practice clinics. Prior test data: NR Diagnostic threshold: NR Diagnosis based on: single Number of examiners: 3 Observer qualifications: dermatologist (experts with great experience in dermoscopy) Experience in practice: high Experience with index test: high TD Acquisition and transmission of images: each of the participating dermatologists acquired digital images after FTF consultation using the same technical equipment (Dermogenius ultra – hand‐held CCD camera with pixel size 512×512), and sent them via an email attachment with corresponding participant data and medical history. Nature of images used: clinical photographs and dermoscopic images Any additional participant information provided: clinical examination or case notes (or both) Observer qualifications (remote diagnosis): physician experienced in dermoscopy Diagnosis based on: single observer Method of diagnosis: a physician evaluated the images and made a diagnosis based on the images and history of the participant Other detail: the participating dermatologists used the same technical equipment for the acquisition of digital images. |
||
Target condition and reference standard(s) |
Reference standard: histology Details: the histological diagnosis of the majority of cases was performed at the Department of Dermatology, Regensburg Target condition (final diagnoses)
|
||
Flow and timing |
|
||
Comparative | Each of the participating dermatologists who conducted an FTF clinical diagnosis acquired digital images of the lesion and sent them via an email attachment with the corresponding participant data and medical history. ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Yes | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Low | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | Unclear | ||
Was the interval between application of the index tests less than 1 month? | Unclear | ||
Were all tests applied and interpreted in a clinically applicable manner? | Yes | ||
Unclear | Low |
Ferrara 2004.
Study characteristics | |||
Patient sampling |
Study design: case control Data collection: retrospective image selection/prospective interpretation Period of data collection: NR Country: NR; likely Italy |
||
Patient characteristics and setting |
Inclusion criteria: pigmented melanocytic lesions with dermoscopic images (a single image per case) and accompanying histological material were retrieved; approach to lesion selection was not described. Setting: unspecified Prior testing: clinical or dermatoscopic (or both) suspicion Setting for prior testing: NR Exclusion criteria: none reported Sample size (participants): NR Sample size (lesions): number included: 12 Participant characteristics: 10 males and 2 females, aged 14–79 (median 41) years. Lesion characteristics: 7 lesions removed from the trunk, 4 from the limbs and 1 from the face. Lesions ranged in size from 4 mm to 14 mm in diameter (median 6 mm). |
||
Index tests |
TD Acquisition and transmission of images: dermoscopic images were either acquired on film (Dermaphot) and then digitised (8) or acquired directly from a digital camera (4; MoleMax or Videocap); photographic images were also available in 9/12 cases. Nature of images used: dermoscopic images Any additional participant information provided: clinical examination or case notes (or both) Observer qualifications (remote diagnosis): dermatologist Diagnosis based on: single observer Number of examiners: 3 Method of diagnosis: stored images were viewed on a standard‐resolution colour monitor by 3 remote consultants in a single session; the teledermoscopy diagnosis (melanoma or other lesion type) was recorded by a single consultant, followed by a teledermatopathology diagnosis (based on the histological image). The original histological diagnosis from the consultation file was then presented (apparently along with the original clinical diagnosis). "Dermoscopic–pathological remarks" were made and finally a consensus diagnosis was reached by 2 consultants; the latter was taken as the 'gold standard' for the study. |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis alone Details: 12 cases with dermoscopic images (1 image per case) and accompanying histological material were retrieved from consultation files. The conventional histopathology diagnosis was regarded as the gold standard: a consensus diagnosis between 2 consultants was requested in order to minimise any influence of the previous dermatopathology diagnosis Target condition (final diagnoses) Melanoma (in situ and invasive, or NR): 7. Invasive melanoma: 4; MiS: 3 'Benign' diagnoses: 5. Junctional nevus: 1; Reed naevus: 1; blue naevus: 1; actinic lentigo: 1; SN: 1 |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | No | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | Unclear | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
High | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
Unclear |
Grimaldi 2009.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: October 2005 to March 2006 Country: Italy |
||
Patient characteristics and setting |
Inclusion criteria: cutaneous pigmented lesions with digital images forwarded by primary care physicians to a referral centre for confirmation of diagnosis. Setting: primary (lesions selected for referral by GPs; accuracy of GP diagnosis assessed); secondary (general dermatology); telediagnosis by expert observer also assessed Prior testing: NR Setting for prior testing: NR Exclusion criteria: lesions whose removal had been explicitly demanded by the patients for aesthetic reasons, and those irritated or subjected to trauma Sample size (participants): number included: 197 Sample size (lesions): number included: 235 Participant characteristics: NR Lesion characteristics: NR |
||
Index tests |
In‐person FTF clinical assessment by GP: not included in this review TD Acquisition and transmission of images: images acquired by PCPs using Konica Minolta Dimage Z10 digital cameras (zoom 0, automatic setting, macro off, flash off) coupled with 3Gen 37 mm dermoscopes (with annular white LED lamp). All PCPs involved in the programme were asked not to start any therapy, but to send the images acquired to the reference centre first. All photographed lesions were uploaded from the peripheral units to the central research unit for telediagnosis with only a 2‐step judgement (before and after dermoscopy) formulated by the sending physician. Nature of images used: clinical photographs; dermoscopic images Any additional participant information provided: unclear Observer qualifications (remote diagnosis): dermatologist; plastic surgeons Number of observers: unclear Experience in practice: high experience or 'Expert' Experience with index test: not described Method of diagnosis: unclear, but telediagnosis may also have followed the ABCD rule; the study stated: "(after second appraisal of the images received made by the reference unit) the appropriate guidelines for every case, established in relation to the formulated and controlled diagnosis... When the diagnosed lesion was considered as 'needing control' by the medical staff of the reference centre, the patient was included in a periodic observation programme, according to which images of the lesions were recorded and compared at set intervals, according to a protocol; if the cutaneous lesion was judged as 'needing surgery' the therapeutic programme (radical removal, sentinel lymph node biopsy, reconstructive surgery, other therapies) was carried out at the Plastic Surgery Unit of the University of Siena in all cases, after verification of the digital images and checks on the patient." |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis + FU Details: 219 benign lesions were investigated by dermoscopic follow‐up (after 2, 4 and 6 months) in the peripheral centres, and in all cases the diagnosis was confirmed (no clinical change) by telediagnosis from the main centre. These cases were subsequently controlled at regular 6‐month FU checks. 16 lesions were labelled as 'to be removed' at the final check. Histology (not further described): number of participants/lesions: 16; disease positive: 5; disease negative: 11 Clinical FU + histology of suspicious lesions: length of FU: 6 months
Target condition (final diagnoses)
|
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | No | ||
Low | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | No | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | Yes | ||
High |
Jolliffe 2001a.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: NR Country: UK |
||
Patient characteristics and setting |
Inclusion criteria: people referred by their GP for dermatological assessment of a pigmented lesion at the PLC Setting: specialist unit (skin cancer/pigmented lesions clinic) Prior testing: not explicitly mentioned, but most likely clinical examination Setting for prior testing: primary Sample size (participants): number eligible: 138; number included: 138 Sample size (lesions): number eligible: 144; number included: 144; clinical diagnosis 140; teledermoscopy Participant characteristics: Age: range 15–94 years Gender: male: 48 (34%); female: 90 (66%) Lesion characteristics: NR |
||
Index tests |
In‐person FTF clinical assessment Method of diagnosis: at the PLC, a clinical diagnosis (± the use of dermoscopy) based upon information in the referral letter and examination findings was made and recorded by the examining doctor Prior test data: clinical examination or case notes (or both) Diagnostic threshold: NR Diagnosis based on: single Number of examiners: 1 Observer qualifications: dermatologist Experience in practice: unclear Experience with index test: unclear TD Acquisition and transmission of images: the examining doctor, using a single‐chip video camera, obtained an image of the pigmented lesion. The image was then archived using proprietary software; images were transmitted through a Fast Screen Machine 2 video overlay card and viewed on a 15‐inch monitor. Nature of images used: clinical Any additional participant information provided: clinical examination or case notes (or both) Observer qualifications (remote diagnosis): dermatologist Diagnosis based on: single observer Number of observers: 1 Method of diagnosis: the anonymous video images and the GP's referral letter were then viewed several months later by the same doctor who performed the in‐person assessment, and a diagnosis was made. Management options: NR |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis alone Lesions had been excised either to confirm or refute clinical suspicion of malignancy or atypia. No participant had a lesion removed on account of the study Target condition (final diagnoses) Malignant: melanoma (in situ and invasive, or NR): 2; Lentigo maligna: 2; BCC: 9. Benign diagnoses: atypical naevus: 5; benign melanocytic naevus: 89; SK: 9; solar lentigo: 7; blue naevus: 4; freckle: 2; SN: 2; dermoid cyst: 2; pyogenic granuloma: 2; congenital naevus: 1; naevus sebaceous: 1; DF: 1; haemangioma: 1; abscess: 1; nodular hidradenoma: 1; non‐caseating granuloma: 1; apocrine hidrocystoma: 1; angiokeratoma circumscriptum: 1 |
||
Flow and timing |
|
||
Comparative | The anonymous video images and the GP's referral letter were viewed several months later by the same doctor who performed the clinical examination, and a diagnosis was made. The doctor's potential memory of a lesion could therefore be a source of bias; however, this doctor had seen > 800 pigmented lesions between the in‐person and video examinations, making recall of a specific lesion unlikely. ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Unclear | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Unclear | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Unclear | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | Unclear | ||
Was the interval between application of the index tests less than 1 month? | No | ||
Were all tests applied and interpreted in a clinically applicable manner? | No | ||
High | High |
Jolliffe 2001b.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: NR Country: UK |
||
Patient characteristics and setting |
Inclusion criteria: people referred to the dermatology departments of the Royal Free Hospital or the Whittington Hospital during the study period by their GPs for assessment of a pigmented lesion Setting: specialist unit (skin cancer/pigmented lesions clinic) Prior testing: GP referral for dermatological assessment at a PLC Setting for prior testing: primary Exclusion criteria: none reported Sample size (participants): number included: 611 Sample size (lesions): number included: 819 Participant characteristics: Age: range 8–94 years Gender: male: 196 (24%); female: 90 (66%) Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: lesion images were taken by the examining doctor using a single‐chip video camera; images were viewed on a 15‐inch monitor and stored as JPEG files with minimum compression; overhead artificial illumination was used throughout and the best image was used if a series was taken. Nature of images used: clinical Any additional participant information provided: clinical examination or case notes (or both) Observer qualifications (remote diagnosis): dermatologist Diagnosis based on: single observer Number of observers: 3 Method of diagnosis: images were viewed several months later, alongside the GP referral information, independently by all 3 doctors. Management options: the clinician decided whether or not a lesion warranted referral on the basis of the image and referral information. |
||
Target condition and reference standard(s) |
Reference standard: expert diagnosis (FTF diagnosis at dermatology clinic) Details: participants were seen in clinic by a registrar or 1 of 2 consultant dermatologists. Following history taking and clinical examination, a treatment plan was formed (reassure, review with photograph or biopsy lesion) and clinical diagnosis recorded. Clinical diagnoses were then grouped into lesions 'not to be missed' (i.e. reference standard positive) including "malignant melanoma, basal cell carcinoma (BCC), atypical naevus, keratoacanthoma and pyogenic granuloma (owing to the potential clinical confusion with amelanotic melanoma)" and benign lesions (i.e. reference standard negative), including benign melanocytic naevus, SK, congenital naevus, DF, solar lentigo and AK. Target condition (final diagnoses as per expert clinical diagnosis) Disease positive: melanoma (invasive): 9; BCC: 19; lentigo maligna: 1 Disease negative: SK: 152; benign melanocytic naevus: 361; postinflammatory hyperpigmentation: 2; blue naevus: 2; atypical naevus: 112; nail infection: 2; congenital naevus: 27; haematoma: 2; DF: 25; eczema: 2; solar lentigo: 23; keratoacanthoma: 1; foreign body: 1; angioma: 18; abscess: 1; AK: 13; SN: 1; fibroepithelial polyp: 11; dermoid cyst: 1; viral wart: 10; apocrine hidradenoma: 1; chloasma: 1; comedone: 5; cutaneous horn: 1; dermatosis papulosis nigrans: 4; congenital arteriovenous malformation: 1; naevus sebaceous: 4; psoriasis: 1; scar/fibrosis: 3; spider naevus: 1; pyogenic granuloma: 2 |
||
Flow and timing | Participants had the FTF consultation first, before having their lesions imaged. The images were then viewed several months later, in conjunction with the GP's referral information, by all 3 doctors independently. ||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | No | ||
Low | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Unclear | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Unclear | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Yes | ||
High | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
Low |
Kroemer 2011.
Study characteristics | |||
Patient sampling |
Study design: NR Data collection: prospective Period of data collection: reported "a 3 month period" – no dates mentioned Country: Austria |
||
Patient characteristics and setting |
Inclusion criteria: people self‐referred or referred by a local doctor for evaluation of a skin tumour. Men or women with benign or malignant (or both) skin tumours of melanocytic or non‐melanocytic origin Setting: secondary (general dermatology) Department of Dermatology, Medical University of Graz, Austria Prior testing: clinical suspicion of malignancy without dermatoscopic suspicion; patient request for evaluation/excision; physician or self‐referral Setting for prior testing: primary; all participants were self‐referred or referred by a local doctor Exclusion criteria: none reported Sample size (participants): number eligible: 88; number included: 80/88 Sample size (lesions): number eligible: 113 lesions; number included: 104/113 tumours from 80/88 participants were reported as tele‐evaluated; however, summing all entries in table 1 gives > 113 (up to 3 clinical (total 322) and 3 dermoscopic (total 278) images were obtained). Participant characteristics: Age: mean: missing; median: 69; range: 3–93 years Gender: male: 41/88 eligible; female: 47/88 eligible (not stated which participants withdrew or remained) Race/ethnicity (%): missing: not stated; study population was Austrian Lesion characteristics: NR |
||
Index tests |
In‐person FTF clinical assessment Method of diagnosis: not clear from the paper how the in‐person assessment was conducted, but most likely VI of the skin (± use of dermoscopy); no algorithm described Prior test data: unclear Diagnostic threshold: unclear Diagnosis based on: single Number of examiners: unclear Observer qualifications: NR Experience in practice: NR Experience with index test: NR TD Acquisition and transmission of images: lesions were selected during the outpatient visit and up to 3 clinical (autofocus mode) and 3 dermoscopic (macro mode) images were obtained by the clinician using a mobile phone with a built‐in 3.2‐megapixel camera for clinical photos (Nokia N73; Nokia, Helsinki, Finland), with a pocket dermoscopy device attached to the camera lens for dermoscopic images (DermLite II PRO HR; 3Gen LLC, Dana Point, CA, USA). Images were stored in JPEG format and saved to a computer via USB. Clinical and dermoscopic datasets for each lesion, together with relevant clinical information (age, sex, tumour onset, location and participant history), were separately transmitted via a virtual private network for online consultation. Nature of images used: clinical photographs and dermoscopic images Any additional participant information provided: clinical examination or case notes (or both); relevant clinical information Observer qualifications (remote diagnosis): dermatologist (board certified with clinical expertise in TD and dermoscopy) Diagnosis based on: single observer Method of diagnosis: a board‐certified dermatologist with clinical expertise in TD and dermoscopy reviewed each set of clinical and dermoscopic images separately. The lesions were grouped into 4 diagnostic categories (benign melanocytic, benign non‐melanocytic, malignant melanocytic and malignant non‐melanocytic skin tumours). The teleconsultant then recorded 1 primary and 1 differential diagnosis on this basis. |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis and expert diagnosis Details: histopathology was used as the gold standard in 78/104 (75%), including for 32/58 benign lesions (55%). According to ethical principles and to the standards of routine practice, the clinical and dermoscopic FTF diagnoses were considered adequate in those participants with clinically and dermoscopically benign and non‐suspicious lesions (i.e. 44% of benign group), and no biopsy procedure was performed. Target condition (final diagnoses) Malignant: melanoma (invasive): 2; melanoma (in situ): 1; BCC: 30; cSCC:10; lentigo maligna: 3 Benign: SK: 6; AK: 17; BD: 1; benign naevus: 15; other: soft tissue tumour: 4; angioma: 4; solar lentigo: 3; virus‐induced tumour: 1; trichilemmoma: 0; other: 7 |
||
Flow and timing |
|
||
Comparative | 1‐month delay between FTF assessment and clinical and dermoscopic telederm evaluation | ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | No | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Yes | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Unclear | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Unclear | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | Unclear | ||
Was the interval between application of the index tests less than 1 month? | Yes | ||
Were all tests applied and interpreted in a clinically applicable manner? | No | ||
Unclear | High |
Mahendran 2005.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: no dates, but over 18 months Country: UK |
||
Patient characteristics and setting |
Inclusion criteria: people with a suspicious skin lesion seen by their local GP who had a lesion considered worthy of dermatologist assessment, were willing to have photographic images taken of the lesion, and were willing to attend a dermatology outpatient clinic Setting: primary (where participants were recruited); secondary (general dermatology) (where the final diagnosis was made). Prior testing: clinical suspicion of malignancy without dermatoscopic suspicion Setting for prior testing: primary Exclusion criteria: poor‐quality index test images (15%) Sample size (participants): number included: unclear Sample size (lesions): number included: 106 Participant characteristics: none reported Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: GPs took images of suspicious skin lesions using a digital camera (Nikon Coolpix 950; 1200 × 1600 pixel resolution); the photograph, together with all the relevant history and details about the skin lesion, was sent via email to the dermatology department. Nature of images used: clinical photographs Any additional participant information provided: past history; demographics; site of lesion Observer qualifications (remote diagnosis): dermatologist (consultant dermatologists) Diagnosis based on: single observer (diagnosis made by 1 of 2 consultants) Method of diagnosis: the remote observer gave a diagnosis or differential diagnosis plus a hypothetical management plan where possible. Management options: reassuring the participant, minor operation, or no action but a further review appointment required |
||
Target condition and reference standard(s) |
Reference standard: expert diagnosis (FTF diagnosis at dermatology clinic) Details: all participants were subsequently seen in the dermatology outpatient clinic by 1 of the same 2 consultants within 2 weeks, and the clinical diagnosis and actual management plan were recorded; lesions were also seen FTF by a trainee dermatologist (specialist registrar year 3) 'blinded' to the consultants' reports, but the 2×2 data appeared to be based on the consultant FTF assessment. Target condition (final diagnoses) Malignant: melanoma (invasive): 4; BCC: 37; cSCC: 4 Benign diagnoses: AK: 10; BD: 7; lentigo maligna: 1; atypical dysplastic nevi: 6; SK: 27; BN: 20; other: DF: 11; inflammatory dermatoses: 8; haemangioma: 3; scar: 3; viral wart: 3; cellular naevus: 2; chondrodermatitis nodularis helices: 2; congenital naevus: 2; dilated pore of Winer: 2; lesion resolved: 2; squamous papilloma: 2; blue naevus: 1; halo naevus: 1; lichenoid keratosis: 1; myxoid cyst: 1; pressure sore: 1; pyogenic granuloma: 1; sebaceous gland hyperplasia: 1 |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Low | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | No | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Yes | ||
High | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | Yes | ||
High |
Manahan 2015.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: NR Country: Australia |
||
Patient characteristics and setting |
Inclusion criteria: participants aged 50–64 years at high risk of melanoma (fair skin type, previous skin excisions, personal or family history) recruited via the 'QSkin' study (500/43,794 participants were mailed an invitation to participate, plus 59 volunteers who requested participation after learning about the study via university websites or in the local news). Only those with a suitable smartphone could participate. Participants were instructed to submit "photos of moles or spots that they 'did not like the look of'," and were given instructions about how to select lesions based on asymmetry and colour. Setting: community Prior testing: none Setting for prior testing: N/A Exclusion criteria: no smartphone Sample size (participants): 500 (invited) plus 58 volunteers; of the 230 who completed the questionnaire, the first 58 who expressed interest and had a suitable smartphone were enrolled. 50 attended for FTF skin examination, 1 of whom was later excluded. Sample size (lesions): 341 lesions included; 309 with a primary TD diagnosis Participant characteristics: Age: 50–64 years Gender: 49% male Other: 31% with a first‐degree family member with melanoma and 90% self‐reported a fair skin type Lesion characteristics: back: 106 (34%); chest/abdomen: 57 (18%); legs: 56 (18%); arms: 46 (15%); head/neck: 44 (14%) |
||
Index tests |
TD Acquisition and transmission of images: study primarily aimed to evaluate skin self‐examination (participants randomised to receive 10‐step guide to skin self‐examination) and mobile TD. All participants used Handyscope FotoFinder dermoscope smartphone attachment (FotoFinder Systems GmbH, Bad Birnbach, Germany) and Handyscope app, to obtain and send magnified lesion image along with a second clinical (macro) image to verify the anatomical site of each skin lesion. Nature of images used: clinical and dermoscopic Any additional participant information provided: Diagnosis based on: single observer Number of examiners: 1 Observer qualifications (remote diagnosis): board‐certified dermatologist, experienced in TD Method of diagnosis: unclear; method of viewing images NR. Dermatologist indicated whether the photograph was suitable to provide a diagnosis before making management recommendation. Management options: primary diagnosis, with up to 2 differential diagnoses (cannot extract 2×2), and whether clinical skin examination (FTF) was required (action). |
||
Target condition and reference standard(s) |
Reference standard: FTF expert diagnosis (referral accuracy) Details: clinical skin examination performed by a dermatology registrar under supervision of the dermatologist who undertook the telediagnosis. The same management options were recommended in the FTF consultation Target condition (clinical diagnoses FTF) Malignant: BCC: 13; SCC/IEC: 1 Benign: atypical naevus: 4; benign naevus: 165; solar lentigo: 22; SK: 81. Non‐pigmented: AK: 34; DF: 2; other: 18 Recommendation to see FTF (reference standard): 35 |
||
Flow and timing |
Excluded participants: 8/58 participants did not attend for FTF examination; 1/58 did not meet the age restriction; 32/341 lesions did not appear to have a primary TD diagnosis Time interval to reference test: NR Time interval between index test(s): N/A |
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Unclear | ||
Did the study avoid including participants with multiple lesions? | No | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Unclear | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | |||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Massone 2014.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: February 2008 to February 2010 Country: Austria |
||
Patient characteristics and setting |
Inclusion criteria: people undergoing health screening for a health insurance company at 3 preventive healthcare centres in Austria, selected by their GP for second‐opinion teleconsulting as part of a preventive medical screening programme Setting: private care Prior testing: clinical or dermatoscopic suspicion (or both) Setting for prior testing: private care Exclusion criteria: poor‐quality index test image Sample size (participants): number eligible: 112; number included: 30 Sample size (lesions): number eligible: 121; number included: 32 Participant characteristics: Age: mean: 47; median: 47; range: 18–84 years Gender: NR for the participants for whom data were presented; only the overall numbers for eligible participants were given: male: 642 (93%); female: 48 (7%) Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: GPs screened patients and, if they noted a suspicious skin lesion, acquired dermoscopic and, if needed, photographic images of the same skin lesion. Photographic images were taken using a digital camera, with the addition of a polarised light contact dermatoscope for the dermoscopic images (Canon Powershot digital camera (Canon Inc., Tokyo, Japan) and DermLite Photo (3Gen LLC, San Juan Capistrano, CA, USA) adjusted from the MoleMax System (Derma Medical Systems, Vienna, Austria)). Images, accompanied only by the age, sex and location of the lesion, were transmitted via a virtual private network for teleconsultation. No personal participant data were transmitted. Nature of images used: clinical photographs and dermoscopic images Any additional participant information provided: age, sex and location of the lesion Diagnosis based on: single observer Number of examiners: 2 Observer qualifications (remote diagnosis): dermatologist (with high experience of dermoscopy) Method of diagnosis: 1 of 2 dermatologists reviewed the images within 48 hours. First, they reviewed the quality of the images on a 3‐point scale, ranging from excellent (1) to low quality (3). Second, they assessed the lesion and grouped it into 1 of 4 groups: (1) benign melanocytic, (2) malignant melanocytic, (3) benign non‐melanocytic and (4) malignant NMLs, defined according to WHO guidelines. Management options: they also recorded management of the participant as follows: "(i) no further treatment or FU in 3, 6 or 12 months interval in case of benign skin lesions, (ii) referral to a local dermatologist for FTF examination in case of suspicious skin lesions and (iii) excision in case of suspected malignancy." |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis plus FTF diagnosis/expert opinion Details: people visiting the healthcare centres came from different towns in Austria; therefore, no institutions were specifically recommended and patients were free to select a dermatologist of their choice for further assessment. As no feedback was requested, FU data could be collected only for participants referred to the department or who responded to a phone call or a letter. Of these cases, only 19 had histology and 13 had an FTF expert assessment. Histology (not further described): number of participants/lesions: 19; disease positive: 7; disease negative: 12 Expert opinion: number of participants: 13; disease positive: 0; disease negative: 13 Target condition (final diagnoses) Malignant: melanoma: 2; BCC: 5 Benign: dysplastic nevi: 11; SK: 4; other: angioma: porokeratosis: 1; AK: 1 |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Unclear | ||
Did the study avoid inappropriate exclusions? | No | ||
Are the included patients and chosen study setting appropriate? | Unclear | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
High | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Yes | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Low | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | No | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Moreno Ramirez 2005.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: retrospective Period of data collection: January to April 2004 Country: Spain |
||
Patient characteristics and setting |
Inclusion criteria: people with pigmented, circumscribed lesions fulfilling ≥ 1 of the following criteria: changing lesion ('ABCD changes'), recent lesion (< 3‐year history), multiple lesions (> 20 MN counted by the GP), symptomatic lesion (pain, itching, bleeding) or concerned about moles. Accuracy data reported only for those subsequently referred to the PLC and for whom pathology results were available. Setting: specialist unit (skin cancer/pigmented lesions clinic) Prior testing: most likely clinical examination (the reasons for teleconsultation were listed as concern about moles; recent pigmented lesion; changing lesions; symptoms; multiple lesions) Setting for prior testing: unspecified Exclusion criteria: difficult‐to‐diagnose lesions were excluded Sample size (participants): number eligible: 219; number included: 108 referred to PLC, 57 participants included in the final analysis Participant characteristics: Age: mean: 43; range: 2–84 years Gender: male: 77 (35%); female: 142 (65%) Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: 2 digital pictures were taken by the GP using a digital camera at a resolution of 1600×1200 pixels (Coolpix 4300, Nikon). A panoramic view of the lesion area and a close‐up of the lesion were taken. Images were inserted into a Word document with relevant clinical information. This was transmitted via the intranet to an email account at the PLC. Nature of images used: clinical photographs Any additional participant information provided: clinical examination or case notes (or both) Diagnosis based on: unclear how many remote observers Number of examiners: NR Observer qualifications (remote diagnosis): dermatologist Method of diagnosis: at the teleconsultation, observers classified the lesions as "benign melanocytic naevus, multiple MN (>20 naevi as seen on teleconsultation), atypical naevus, congenital naevus, blue naevus, solar lentigo, lentigo maligna, melanoma, special melanocytic lesion (genital naevus, acral naevus, recurrent naevus), seborrhoeic keratoses, basal cell carcinoma (BCC), DF, vascular lesion, non‐pigmented lesion, or a 'difficult to diagnose' lesion." After evaluation of the pictures and clinical information, a report was returned to the GP at the primary care centre, with suggestions regarding the diagnosis and management of the case. Management options: limited to 'referral' or 'non‐referral' of the participant to the FTF clinic. Participants who had readily identifiable benign lesions such as benign melanocytic naevus, solar lentigo, SK, DF, vascular lesions and non‐pigmented lesions were not referred to the PLC. All other remaining categories were routinely referred to the PLC for FTF assessment. |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis Details: at the PLC, physical and dermoscopic examinations were carried out, as well as excisional biopsy in suspicious or malignant cases, and FU of participants with risk factors for melanoma (16/25 benign lesions underwent histology) Target condition (final diagnoses) Melanoma (in situ): 1; BCC: 23; lentigo maligna: 3; dysplastic nevi: 16; common nevi: 8; blue nevi: 4 |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | No | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
High | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | No | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Oliveira 2002.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: unclear but appeared prospective Period of data collection: NR Country: Brazil |
||
Patient characteristics and setting |
Inclusion criteria: participants with suspect dermatological lesions identified by an assistant nurse who had undergone training to identify potentially malignant skin lesions. Only those who attended for FTF assessment were included. Setting: primary care; Centro de Saúde Escola Geraldo de Paula Souza (primary care public health service) Prior testing: none Setting for prior testing: N/A Exclusion criteria: none reported Sample size (participants): 103 eligible; 90 included Sample size (lesions): 90 Participant characteristics: NR Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: lesions photographed in primary care by an assistant nurse using a Kodak DC265 Zoom digital camera. 2 hours' training in the use of the camera was provided and included instruction on the installation of the camera's software and transferring the images to the computer. Images were sent by the nurse with an electronic case report form and included her diagnostic impression of whether the lesion was non‐malignant or malignant. Nature of images used: clinical Any additional participant information provided: participant record Diagnosis based on: single observer Number of examiners: 1 Observer qualifications (remote diagnosis): dermatologist from the Department of Dermatology of the Faculty of Medicine of the University of São Paulo Method of diagnosis: not clearly described. All cases were assessed remotely by a dermatologist prior to the in‐person evaluation Management options: malignant or benign; a malignant diagnosis indicated that a biopsy was needed |
||
Target condition and reference standard(s) |
Reference standard: expert diagnosis (referral accuracy) Details: within 1 week the same dermatologist saw the participant in‐person. Participants were referred for biopsy when skin cancer was the suspected diagnosis. The in‐person assessments by the dermatologist (and the biopsy results in a few cases) were used as reference. Target condition (FTF diagnoses) Malignant: 8 Benign: 82 |
||
Flow and timing |
Excluded participants: 2 lesions without a TD diagnosis Time interval to reference test: 1 week from photographs being taken Time interval between index test(s): N/A |
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Low | Low | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Unclear | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | No | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | |||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Phillips 1998.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: NR but appeared prospective Period of data collection: 1996 Country: USA |
||
Patient characteristics and setting |
Inclusion criteria: participants attending 4 skin cancer screenings at community hospitals in rural eastern North Carolina, USA Setting: community Prior testing: none Setting for prior testing: N/A Exclusion criteria: none reported Sample size (participants): 51 Sample size (lesions): 107 Participant characteristics: Age: mean: 46.7 years Gender: male: 8 (15.7%) Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: all sites were on a 1/2 T‐1 link (768 kbit/s). All sites had 3 cameras available, each of which was used in evaluating the participants: a full‐body camera, a lens for viewing the lesions close up, and a magnifying lens that allowed even closer views as well as examination with polarised light (CLI CODEC (Panasonic 3‐chip or Canon 1‐chip)). All monitors offered 620 lines of resolution. It was not clear who operated the cameras during the teleconsultations. The in‐person evaluation was conducted first so that, if a complete skin examination was performed, representative lesions could be selected by the on‐site physician for evaluation by the remote physician. Nature of images used: live link Any additional participant information provided: physicians could communicate directly with the participant Diagnosis based on: single observer Number of examiners: 2; each examiner was the on‐site physician at 2 screenings and the "remote physician" at 2 screenings. Observer qualifications (remote diagnosis): dermatologist Method of diagnosis: live link consultation; physicians could communicate directly with the study participant Management options: most likely diagnosis for a given lesion; the degree of concern that a specific lesion was malignant; and recommendation as to whether to do a biopsy of the lesion. |
||
Target condition and reference standard(s) |
Reference standard: expert diagnosis (referral accuracy) Details: all participants were first evaluated by the on‐site physician. Participants were given a choice of having a total body examination, only the sun‐exposed skin, or a specific lesion(s) evaluated by the on‐site physician. This physician recorded specific lesions on an image of the human body and the most likely diagnosis for a given lesion; the degree of concern that a specific lesion was malignant; and recommendation as to whether to do a biopsy of the lesion. If a complete skin examination was performed, representative lesions were selected by the on‐site physician for evaluation by the remote physician. The participant was subsequently seen by the remote dermatologist, and the same data were recorded. Target condition (FTF diagnoses) Malignant: BCC: 2; SCC: 3; lentigo maligna: 1 Benign: SK: 27; BN: 32; AK: 14; lentigo: 10; other: 30 |
||
Flow and timing |
Excluded participants: none described Time interval to reference test: consecutive Time interval between index test(s): N/A |
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Unclear | Low | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Unclear | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Unclear | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Unclear | ||
Unclear | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Yes | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | |||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
High | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
Low |
Piccolo 2000.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: states 3 months but no specific dates given Country: Austria (Graz) |
||
Patient characteristics and setting |
Inclusion criteria: people with PSL selected because of their diagnostic difficulty and subsequently excised for a histopathological evaluation. Setting: unspecified; described as a multicentre study Prior testing: lesions included in the study were selected because of their diagnostic difficulty; the study did not specify what prior tests were done Setting for prior testing: unspecified Exclusion criteria: poor‐quality index test image (all images scoring 4 were excluded from the study) Sample size (participants): number included: 40 Sample size (lesions): number included: 43 Participant characteristics: Age: median: 39.5 years; range: 3–91 years Gender: male: 21 (53%); female: 19 (47%) Lesion characteristics: site: face: 2; head: 1; neck: 1; trunk: 8; arms: 3; legs: 7; back: 20; buttocks: 1 |
||
Index tests |
In‐person FTF clinical assessment Method of diagnosis: all lesions were examined with a dermatoscope during the FTF clinical diagnosis. Diagnosis was made by an expert dermatologist based on clinical features and dermoscopic findings. No specific algorithm (e.g. the Stolz index) was used for dermoscopic diagnosis. Prior test data: unclear Diagnostic threshold: NR Diagnosis based on: single Number of examiners: 1 Observer qualifications: dermatologist (an expert in the diagnosis of PSL) Experience in practice: high Experience with index test: high TD Acquisition and transmission of images: each image was acquired with a digital camera following the in‐person consultation, at a fixed 10‐fold magnification; 2 different lenses were used to capture the clinical and dermoscopic images. The camera (DCS 460, Kodak, Rochester, NY, USA) used a Nikon body (N90, Nikon, Tokyo, Japan); the original image size was 2036×3060 pixels in RGB colour mode (32 bit/pixel), and images were then compressed to 511×768 pixels (24 bit/pixel). These were stored on a prototype TD workstation and distributed to remote centres via email together with basic participant data (initials, age, sex and site of the lesion). Nature of images used: clinical photographs and dermoscopic images Any additional participant information provided: participant data (initials, age, sex and site of the lesion). Diagnosis based on: single observer Number of examiners: 11 Observer qualifications (remote diagnosis): dermatologists (6), residents in dermatology (2), internist (1), GP (1), oncologist (1) Experience in practice: not described Experience with index test: mixed experience (low and high experience combined) Method of diagnosis: NR |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis alone Details: all lesions were excised for a histopathological evaluation Target condition (final diagnoses) Melanoma (in situ and invasive, or NR): 11; BCC: 3; SK: 2; benign melanocytic naevus: 23; 'benign' diagnoses: angiokeratoma: 1; lentigines: 3 |
||
Flow and timing |
|
||
Comparative | Remote observers received images via email | ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Unclear | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Unclear | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
Unclear | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | Yes | ||
Was the interval between application of the index tests less than 1 month? | Unclear | ||
Were all tests applied and interpreted in a clinically applicable manner? | No | ||
Unclear | High |
Piccolo 2004.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: retrospective image selection/prospective interpretation Period of data collection: NR Country: multicentre study; participating centres were in Italy, Japan, Austria and Slovenia. Images used were from the University of Graz, Austria, and the University of L'Aquila, Italy |
||
Patient characteristics and setting |
Inclusion criteria: people with melanocytic acral lesions (71 common MN and 6 melanomas) from 73 people at the Department of Dermatology, University of Graz and Department of Dermatology, University of L'Aquila Setting: secondary Prior testing: selected for excision (no further detail) Setting for prior testing: unspecified Exclusion criteria: none reported Sample size (participants): number included: 73 Sample size (lesions): number included: 77 Participant characteristics: Age: mean: 28 years; range: 4–77 years Gender: male: 34; female 39 Lesion characteristics: site: 67 lesions located on lower extremities (58 on plantar surface, 4 on external part of foot, 3 on toe, and 2 on dorsal aspect of foot) and 10 on upper extremities (8 on palm and 2 on finger). |
||
Index tests |
TD Acquisition and transmission of images: dermoscopic images of 48 melanocytic acral lesions were acquired at the Department of Dermatology, University of Graz, using the MoleMax II System at 30× magnification. Dermoscopic photographs acquired with Heine Dermaphot equipment at 10× magnification were retrieved from the database of the Department of Dermatology, University of L'Aquila. All images in the study were compressed (to facilitate email transmission) at the Department of Dermatology, University of L'Aquila. The images selected represented all the acral lesions included in the databases of the 2 dermatology departments; clinical photographic images were not included. The dermoscopic images, together with the essential clinical data (age and sex of participant and site of the lesion), were transmitted individually by email to 11 colleagues in 8 remote centres. Nature of images used: dermoscopic images Any additional participant information provided: case notes Diagnosis based on: single observer Number of examiners: 11 dermatologists Observer qualifications (remote diagnosis): dermatologist (varying experience from high to low depending on the number of years of specialisation in PSL) Method of diagnosis: each observer analysed the images on a computer monitor, first to diagnose acral melanoma or atypical lesions, and second to categorise the lesions according to the Saida classification. An acral naevus was considered to be atypical when ≥ 6 of the 11 observers made this diagnosis. Management options: observers made a management recommendation of digital dermoscopy FU or surgical excision |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis alone Details: all lesions were surgically excised and histopathologically diagnosed by 2 dermatopathologists. Target condition (final diagnoses) Malignant: melanoma: 6 Benign: common MN: 71
|
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Unclear | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Yes | ||
Unclear | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Yes | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | Yes | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
Unclear |
Shapiro 2004.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: 10 July 1998 to 4 August 2000 Country: USA |
||
Patient characteristics and setting |
Inclusion criteria: the PCP referred only those people with skin growths that posed a true diagnostic challenge. Setting: primary; recruitment of study participants from a PCP (an estimated 50% of PCP participants had dermatological lesions that were encountered during routine evaluation and 3% presented exclusively for dermatological reasons). Private (FTF consultation with the local dermatologist in private practice). Secondary care (images sent from the PCP to an academic dermatologist for SAF dermatological consultation) Prior testing: a community network PCP recruited participants whom he judged to require dermatological consultation for evaluation of a cutaneous growth. Setting for prior testing: primary Exclusion criteria: people who underwent previous evaluation by a dermatologist Sample size (participants): number eligible: 61; number included: 49 Sample size (lesions): NR Participant characteristics: NR Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: images were acquired by the PCP using an Olympus D‐600L digital camera. The first image captured the head and upper trunk. This was followed by an image of the affected body part. The image and a clinical history were downloaded to a personal computer using a serial port interface and accompanying software. The image transmission was performed via e‐mail using HUPNet, a private encrypted University of Pennsylvania Health System area network. Nature of images used: clinical photographs Any additional participant information provided: clinical history Observer qualifications (remote diagnosis): dermatologist (1) (academic dermatologist with over 20 years' experience in clinical dermatology) Diagnosis based on: single observer Number of observers: 1 Method of diagnosis: the images were reviewed by the PCP immediately before they were sent to a board‐certified academic dermatologist for teledermatological consultation. After assessing the case, the teledermatologist notified the PCP of the diagnosis or differential diagnosis and indicated on a standard data collection sheet whether a sampling biopsy was necessary. The reason for recommending biopsy was specified as well. Management options: the teledermatologist was asked to choose a management plan from 15 entries including 3 biopsy plans (to rule out malignancy, to establish a diagnosis or to remove a benign lesion for cosmetic purposes) |
||
Target condition and reference standard(s) |
FTF diagnosis as reference standard Method of diagnosis: the participants were simultaneously scheduled for an FTF visit with the local dermatologist, who was in private practice, within 1 month. The FTF dermatologist completed a standardised consultation form notifying the PCP of his decision regarding the diagnosis, differential diagnosis and whether a biopsy was indicated. A biopsy was performed at that visit by the FTF dermatologist if the FTF dermatologist or the SAF teledermatologist favoured biopsy of the lesion. Prior test data: the telediagnosis triage decision was contained in a sealed envelope, which was opened by the FTF dermatologist after making his decision. Biopsy was then carried out by the FTF dermatologist if recommended by either dermatologist. Diagnostic threshold: not described Diagnosis based on: single Number of examiners: 1 Observer qualifications: dermatologist Experience in practice: > 20 years of experience in clinical dermatology Experience with index test: high Target condition (for 26 lesions undergoing biopsy; 23 on dermatologist recommendation and 3 at participant request) Malignant: BCC: 5; cSCC: 4 Benign: benign neoplasms: 17 Assumed benign (no biopsy): 23 (including 1 participant who refused biopsy) |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Unclear | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | No | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Yes | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Yes | ||
High | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Silveira 2014.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: April 2010 and July 2011 Country: Brazil |
||
Patient characteristics and setting |
Inclusion criteria: people with skin lesions that were determined to be suspicious after a direct VI by a physician. All of the patients examined at the MPU were previously screened by a nurse from the local municipality who was trained at Barretos Cancer Hospital. Setting: community MPU Prior testing: not described Setting for prior testing: unspecified Exclusion criteria: none reported Sample size (lesions): number included: 416 Participant characteristics: Age: mean: 63.5; range: 19–93 years Gender: NR Lesion characteristics: site: head and neck: 273 (75.0%); trunk: 28 (7.6%); upper limbs: 61 (16.7%); lower limbs: 2 (0.5%); skin type scale: 1–2: 295 (81%); 3–4: 69 (19%); 5–6: 0 (0%) |
||
Index tests |
TD Acquisition and transmission of images: community (participants were evaluated in the MPU, and their lesions were photographed by the MPU physician using a Sony Cybershot DSC‐5780 digital camera with 8.1‐megapixel resolution) Nature of images used: clinical photographs Any additional participant information provided: information such as age, skin complexion, location of the lesion, stage and pathology results were collected. Observer qualifications (remote diagnosis): oncologists at the Barretos Cancer Hospital; both the oncologists and the MPU physician had more than 10 years of experience in skin cancer screening. Number of observers: 2 Diagnosis based on: single observer Method of diagnosis: all digital images were coded, stored and submitted at random to 2 oncologists at Barretos Cancer Hospital, who were blinded to the MPU physician's diagnosis and pathology reports and classified the images using the following options: malignant lesion, oncological treatment is indicated; benign lesion, no treatment required; unknown or a low‐quality image |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis Details: lesions classified as possibly malignant at the mobile unit were excised or biopsied Target condition (final diagnoses)
|
||
Flow and timing |
Excluded participants: 21 (4.6%) were excluded from the study because of poor‐quality photographs, leaving 439 participants with pathological results; 23/439 (5.2%) were excluded because of incomplete data preventing the identification of the participant. Interval between reference standard and index test: appeared consecutive: "lesions were imaged, biopsied/removed and submitted for histopathological examination." 364 (87.5%) were confirmed to be malignant by biopsy; 52 were diagnosed by expert opinion. |
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Unclear | ||
Are the included patients and chosen study setting appropriate? | Yes | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
Unclear | Unclear | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | Unclear | ||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Yes | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
Warshaw 2010b.
Study characteristics | |||
Patient sampling |
Study design: case series Data collection: prospective Period of data collection: November 2002 to August 2005 Country: USA |
||
Patient characteristics and setting |
Inclusion criteria: people enrolled at a Department of Veterans Affairs (VA) dermatology clinic who required (or requested) removal of ≥ 1 skin neoplasm ('high‐risk' group) and participants who were referred to the general dermatology clinic by non‐dermatology healthcare providers for evaluation of a skin neoplasm ('lower‐risk' group). Biopsied lesions only were included. Warshaw 2009 and Warshaw 2009a included data for participants' primary lesions only, whereas Warshaw 2010 included all biopsied lesions, pigmented or non‐pigmented, from histopathologic lesion categories with ≥ 5 lesions Setting: secondary (general dermatology) Prior testing: selected for excision (no further detail) Setting for prior testing: secondary (general dermatology) Exclusion criteria: individuals requesting or referred for skin tag removal only or with papulosquamous or eczematous conditions (non‐neoplastic), previous biopsy of the lesion and inability to comprehend and give informed consent Sample size (participants): number eligible: 2152; number included: NR Sample size (lesions): number eligible: 3021 enrolled; 1685 biopsied and eligible for inclusion; number included: 1514 Participant characteristics: Age: mean: pigmented: 66; non‐pigmented: 71; range: pigmented: 23–94; non‐pigmented: 21–94 years Gender: male: pigmented: 519 (95.8%); non‐pigmented: 712 (97.8%) Race/ethnicity: white: pigmented: 97.1%; non‐pigmented: 98.9%; black or African‐American: pigmented: 1.3%; non‐pigmented: 0.7%; other: pigmented: 1.5%; non‐pigmented: 0.4% High‐risk characteristics
Lesion characteristics
Lesion site
|
||
Index tests |
In‐person assessment Method of diagnosis: VI ± use of dermoscopy ("the clinical examination could include all options normally available in the clinical setting (e.g., palpation, diascopy, dermatoscopy)") Prior test data: selected for excision (no further detail) Diagnostic threshold: qualitative (recorded primary diagnosis and up to 2 differential diagnoses, plus a choice of 4 basic management plans (remove/biopsy/destroy, observe/reassure, antifungal treatment, antibiotic treatment, anti‐inflammatory treatment)) Primary diagnosis: clinicians had a choice of 17 common diagnoses for 1 primary and 1 or 2 differential diagnoses Diagnosis based on: single observer Number of examiners: 11 staff dermatologists Observer qualifications: dermatologist Experience in practice: not described Experience with index test: not described TD Acquisition and transmission of images: macro images (the standard method used in most TD settings at the onset of the study) and PLD images were obtained for each lesion following the in‐person consultation. 2 macro images (distance and close‐up; digital Nikon Coolpix 4500 with a Nikon SL‐1 ring flash (Nikon, Melville, NY)) were obtained of each lesion. In addition, for lesions > 2 mm in height, a macro image at an angle (approximately 90° from the skin surface) was also taken. 1 PLD image (digital Nikon Coolpix 4500 with a 3Gen DermLite lens attachment) was also obtained. Dermoscopy images were taken and their accuracy reported, but insufficient data were obtained from the authors to allow their inclusion in this review. Dermoscopy images were unlikely to have influenced macro image interpretation, as macro images (photographs) alone and macro images plus dermoscopic images were interpreted 2 weeks apart. Nature of images used: clinical photographs Any additional participant information provided: clinical examination or case notes (the standardised participant and lesion history collected by the research assistants), or both Observer qualifications (remote diagnosis): dermatologist (with > 5 years' experience and recognised PSL expert) Diagnosis based on: single observer Number of observers: 3 Method of diagnosis: using the same diagnostic and management categories as used by clinic dermatologists, the teledermatologists recorded 1 primary diagnosis, up to 2 differential diagnoses, and a management plan for each lesion. |
||
Target condition and reference standard(s) |
Histological reference standard: a board‐certified dermatopathologist who was not involved with any clinic assessments coded all histopathological diagnoses based on the pathology report. Number of participants/lesions: 1514 Target condition Malignant: melanoma: 41; BCC: 410; cSCC: 240 Benign: 'benign' diagnoses: benign keratoses: 223; dysplastic nevi: 154; actinic keratoses: 145; benign nevi: 138; cysts: 73; benign appendageal tumours: 35; lentigines: 29; benign vascular neoplasms: 26 |
||
Flow and timing |
Excluded participants: histopathologic categories with < 25 lesions (171) Time interval to reference test: participants (from the VA clinic) were scheduled for a research appointment before, but on the same day as, their dermatology clinic consult appointment. But timing to histology NR. Participants from the general dermatology clinic undergoing a biopsy for a skin neoplasm (because of physician recommendation or participant request) were also invited to participate. Likely that photographs taken on same day as excision but again NR as such Time interval between index test(s): immediate; photographs taken on day of FTF appointment |
||
Comparative | — | ||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | Yes | ||
Did the study avoid inappropriate exclusions? | Yes | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | No | ||
Low | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 2: Index Test FTF diagnosis | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | Yes | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Unclear | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | Unclear | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Yes | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Low | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High | |||
DOMAIN 5: Comparative | |||
Was each index test result interpreted without knowledge of the results of other index tests or testing strategies? | Unclear | ||
Was the interval between application of the index tests less than 1 month? | Unclear | ||
Were all tests applied and interpreted in a clinically applicable manner? | Yes | ||
Unclear | Low |
Wolf 2013.
Study characteristics | |||
Patient sampling |
Study design: case‐control Data collection: retrospective image selection/prospective interpretation Period of data collection: NR Country: USA |
||
Patient characteristics and setting |
Inclusion criteria: people with pigmented lesions that were considered atypical in clinical appearance by ≥ 1 dermatologist and for which a clear histological diagnosis had been rendered by a board‐certified dermatopathologist. Images were selected from an image database in the following categories: invasive melanoma, MiS, lentigo, benign nevus (including compound, junctional and low‐grade dysplastic nevi), DF, SK and haemangioma. Sampling was not described Setting: unspecified Prior testing: selected for excision (no further detail) Setting for prior testing: unspecified Exclusion criteria: poor‐quality index test image. Images that contained any identifiable features, such as facial features, tattoos or labels with participant information, were excluded or cropped to remove the identifiable features or information. Lesions with equivocal diagnoses, such as "melanoma cannot be ruled out" or "atypical melanocytic proliferation," were excluded, as were SN, pigmented spindle cell nevus of Reed, other uncommon or equivocal lesions, and lesions with moderate‐ or high‐grade atypia. Sample size (participants): NR Sample size (lesions): 188 Participant characteristics: NR Lesion characteristics: NR |
||
Index tests |
TD Acquisition and transmission of images: the images of skin lesions were selected from a database of images that are captured routinely before skin lesion removal to allow clinicopathological correlation in making medical management decisions. Only close‐up images of lesions were used; images that contained any identifiable features, such as facial features, tattoos or labels with participant information, were excluded or cropped to remove the identifiable features or information. Nature of images used: photographic images Any additional participant information provided: no further information used Method of diagnosis: an application run on a smartphone sent each image to a board‐certified dermatologist for evaluation, and an assessment was returned to the user within 24 hours. The identity of the dermatologist was not given, and it was unclear whether all the images were read by the same dermatologist or by several different dermatologists. The output given was "atypical," which was considered to be a positive test result, or "typical," which was considered to be a negative test result. Diagnosis based on: unclear how many observers Number of examiners: unclear Observer qualifications (remote diagnosis): board‐certified dermatologist Experience in practice: unclear Experience with index test: unclear |
||
Target condition and reference standard(s) |
Reference standard: histological diagnosis alone Details: histology (not further described) Target condition (final diagnoses) Malignant: melanoma (in situ and invasive): 60 Benign: 'benign' diagnoses: 128 |
||
Flow and timing |
|
||
Comparative | |||
Notes | — | ||
Methodological quality | |||
Item | Authors' judgement | Risk of bias | Applicability concerns |
DOMAIN 1: Patient Selection | |||
Was a consecutive or random sample of patients enrolled? | Yes | ||
Was a case‐control design avoided? | No | ||
Did the study avoid inappropriate exclusions? | No | ||
Are the included patients and chosen study setting appropriate? | No | ||
Did the study avoid including participants with multiple lesions? | Unclear | ||
High | High | ||
DOMAIN 2: Index Test Teledermatology | |||
Were the index test results interpreted without knowledge of the results of the reference standard? | Yes | ||
If a threshold was used, was it pre‐specified? | Yes | ||
Was the test applied and interpreted in a clinically applicable manner? | No | ||
Were thresholds or criteria for diagnosis reported in sufficient detail to allow replication? | Yes | ||
Was the test interpretation carried out by an experienced examiner? | Yes | ||
Low | High | ||
DOMAIN 3: Reference Standard | |||
Is the reference standard likely to correctly classify the target condition? | Yes | ||
Were the reference standard results interpreted without knowledge of the results of the index tests? | Unclear | ||
For studies comparing TD/FTF clinical diagnosis to histology, was histology interpretation carried out by an experienced histopathologist or by a dermatopathologist? | Unclear | ||
For studies comparing TD to FTF diagnosis, was the clinical diagnosis carried out by an experienced observer? | |||
Low | Unclear | ||
DOMAIN 4: Flow and Timing | |||
Was there an appropriate interval between index test and reference standard? | Unclear | ||
Did all patients receive the same reference standard? | Yes | ||
Were all patients included in the analysis? | No | ||
If the reference standard includes clinical FU of borderline/benign appearing lesions, was there a minimum FU following application of index test(s) of at least: 3 months for melanoma or cSCC or 6 months for BCC? | |||
High |
ABCD(E): asymmetry, border, colour, differential structures (enlargement); AK: actinic keratosis; AMN: atypical melanocytic naevi; BCC: basal cell carcinoma; BD: Bowen's disease; BN: benign naevi; cSCC: cutaneous squamous cell carcinoma; DF: dermatofibroma; FTF: face‐to‐face; FU: follow‐up; GP: general practitioner; IEC: intraepithelial carcinoma; MM: malignant melanoma; MiS: melanoma in situ (or lentigo maligna); MN: melanocytic naevi; MPU: Mobile Prevention Unit; N/A: not applicable; NML: non‐melanocytic lesion; NMSC: non‐melanoma skin cancer; NR: not reported; PCP: primary care provider; PLC: pigmented lesion clinic; PLD: polarised light dermoscopy; PSL: pigmented skin lesion; SAF: store‐and‐forward; SCC: squamous cell carcinoma; SK: seborrhoeic keratosis; SN: Spitz nevi; TD: teledermatology; VA: Veterans Affairs; VI: visual inspection; VLC: virtual lesion clinic; WHO: World Health Organization; WPC: within‐person comparison (of tests).
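Several of the exclusion reasons in the table below cite missing '2×2 data', that is, the cross‐tabulation of the index test result against the reference standard needed to calculate diagnostic accuracy. As a minimal worked sketch (using generic cell counts TP, FP, FN and TN rather than any individual study's figures), the quantities assumed in those judgements are:

\[
\begin{array}{l|cc}
 & \text{Reference standard positive} & \text{Reference standard negative} \\
\hline
\text{Index test positive} & \mathrm{TP} & \mathrm{FP} \\
\text{Index test negative} & \mathrm{FN} & \mathrm{TN} \\
\end{array}
\]

\[
\text{Sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, \qquad
\text{Specificity} = \frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}.
\]

Given a reported sensitivity and specificity, the 2×2 table can only be reconstructed if the number of disease‐positive cases \(D^{+} = \mathrm{TP} + \mathrm{FN}\) and the total number of lesions \(N\) are also reported, since \(\mathrm{TP} = \text{sensitivity} \times D^{+}\) and \(\mathrm{TN} = \text{specificity} \times (N - D^{+})\); this is why studies reporting only sensitivity and specificity (e.g. Lewis 1999 below) could not be included without the number of D+ cases.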
Characteristics of excluded studies [ordered by study ID]
Study | Reason for exclusion |
---|---|
Armstrong 2007 | Exclude not a primary study |
Baba 2005 | Exclude on study population |
Badertscher 2015 | Exclude on 2×2 data; for VI/Derm – only gave number of correct diagnoses (not broken down by TP/TN) and gives GP 'score' between T0 and T1 Exclude on index test – not a teledermatology study |
Barnard 2000 | Exclude on target condition; the primary target condition was not relevant to our reviews. |
Bashshur 2015 | Exclude not a primary study; narrative review |
Bataille 2011 | Exclude conference abstract |
Bergmo 2000 | Exclude not a primary study |
Borve 2013 | Exclude on target condition; included 1 melanoma metastasis and 1 in situ SCC as D+ (these made up 5% of the malignant group); author contacted ("Table 2 provided estimates of the diagnostic accuracy of the face‐to‐face dermatologist and the two teledermoscopists, however in order to include the results in our review we would need the underlying 2×2 contingency tables for these statistics. Is it at all possible for you to provide us with them for each observer, particularly in regard to the 'Primary diagnosis' and the 'Benign vs malignant'") |
Boyce 2011 | Exclude on study population; no breakdown given, just 7 suspicious lesions. Exclude on reference standard; no data given but only 7 referred on for formal assessment. Definitely < 50% histology rate |
Braun 2000 | Exclude on 2×2 data Exclude but contacted authors. Sensitivity was reported in Table II but specificity was unclear. The study reports the % of correct identification of each lesion type rather than FPs and does not provide the numbers misclassified as melanoma, or other malignancy. Also possibility of overlap with later publications, e.g. Coras 2003 |
Brown 2000 | Exclude not a primary study |
Burgiss 1997 | Exclude on reference standard; reports actions taken on the telediagnosis (primarily for cost purposes) and did not give final diagnoses (histo) or FTF diagnosis for all lesions |
Chen 2002 | Exclude on sample size Study was based on individual participant images (< 5 cases). Only 4 lesions in total used in study (1 non‐BCC, 1 SK, 1 KA, 1 AK) |
D'Elia 2007 | Exclude on study population; not focused on suspected skin cancer Exclude on 2×2 data; looked at agreement between diagnoses only |
Di Stefani 2007 | Exclude on sample size; < 5 malignant: of 7 excised: 1 melanoma (MiS associated with a nevus), 1 pigmented BCC and 5 melanocytic nevi. Both tele‐dx recommended the 2 malignancies for excision but no further breakdown of agreement/disagreement given Exclude on 2×2 data 7 'D+' according to FTF decision to excise but only reports kappa values for agreement with tele‐dx |
Du Moulin 2003 | Exclude on study population Exclude on target condition |
Edison 2008 | Exclude on study population |
Eminovic 2009 | Exclude, not a primary study Exclude on study population Exclude on target condition |
Fabbrocini 2008 | Exclude on 2×2 data; there was insufficient data provided for each index test to populate 2×2 table Exclude but contacted authors: "As we can only include DTA studies – do you have a cross tabulation of each clinician's diagnosis (e.g. at threshold of >=3 on 7 point checklist) against the histological diagnosis and/or a cross tabulation of the remote diagnosis against the Face to Face diagnoses?" |
Ferrandiz 2007 | Exclude on study population; all had a strong clinical diagnosis of NMSC or fast‐growing tumour, and the purpose of the test was primarily to inform treatment plans. Exclude on 2×2 data; 4 of the original 134 appeared to have missing histology, and it was unclear what the additional clinical cases of each lesion were classed as on histology, e.g. the 14 extra BCCs and 5 KAs. |
Ferrandiz 2012 | Exclude on target condition; only included CM |
Gilmour 1998 | Exclude on study population Exclude on target condition |
Granlund 2003 | Exclude on study population |
Griffiths 2010 | Exclude on reference standard |
Harrison 1998 | Exclude on target condition; no breakdown of final diagnoses (histo) or of recommendations from FTF consultation Exclude on sample size; could not determine how many D+ for either reference standard Exclude on 2×2 data; no underlying data provided; diagnostic 'accuracy' of teleconsultation reported as 71% for 210 participants compared with 49% for the referring GPs. Also stated that "telemedicine was able to detect malignancies in 94% of cases compared with only 70% detected by general practitioners." |
Heffner 2009 | Exclude on target condition |
Hicks 2003 | Exclude not a primary study |
High 2000 | Exclude on study population; not all suspected of skin cancer ('dermatological conditions' included dermatitis, acne, verruca, etc.) Exclude on 2×2 data; reported agreement only; no breakdown of diagnoses/management decisions Exclude on reference standard; table of final diagnoses appeared to be based on FTF diagnoses with histology obtained for 69/106 (65%), so was ineligible for a tele‐dx vs histo comparison (diagnostic accuracy) and was not tele‐dx vs FTF (referral accuracy) either. |
Hue 2016 | Exclude on reference standard; final diagnoses/FTF decisions given only for 17 recommended for rapid referral on tele‐dx; no data for remaining 395 with non‐urgent derm referral/annual FU recommendation/discharge. Exclude on sample size; for tele‐dx vs histo, only 5 lesions were excised, including 1 melanoma Exclude on 2×2; no data for tele‐dx vs FTF |
Hwang 2014 | Exclude on study population |
Ishioka 2009 | Exclude on target condition; no breakdown of the disease‐positive 'malignant' group provided |
Kahn 2013 | Exclude on target condition |
Knol 2006 | Exclude on study population |
Krupinski 1999 | Exclude on 2×2 data; reported only diagnostic concordance/agreement |
Lamel 2012 | Exclude on 2×2 data; reported only summary concordance in diagnosis and management decisions between decisions based on mobile phone image and in person; insufficient detail to work out referral accuracy Exclude but contacted authors,"Study presents data on diagnostic and management concordance between in person and remote (via mobile phone app) diagnoses, are any diagnostic accuracy data available, e.g. observers diagnosis of malignant lesion when assessed remotely versus FtF diagnosis of malignancy?" |
Lamminen 2000 | Exclude not a primary study |
Lesher 1998 | Exclude on study population; included people with 'skin problems' (included wide range of diagnoses) Exclude on sample size; could extract referral accuracy (tele‐dx vs FTF) but only 4 lesions with malignant diagnosis on FTF (4 BCCs) |
Lewis 1999 | Exclude on 2×2 data; study appeared to meet all eligibility criteria but disease prevalence not given alongside sensitivity/specificity Exclude but contacted authors: "(Sensitivity and specificity of remote diagnosis in comparison to FtF diagnosis are provided but we would need number D+ in order to complete 2×2 table)." |
Loane 1998a | Exclude on study population |
Loane 1998b | Exclude not a primary study |
Loane 2000 | Exclude on study population; not focused on potentially malignant skin lesions; could not derive any comparative data for detection of 'tumours' |
Loane 2001a | Exclude not a primary study |
Loane 2001b | Exclude on study population |
Lowitt 1998 | Exclude on study population |
Lyon 1997 | Exclude on 2×2 data; for operative referrals, only the histo diagnosis was given, together with the overall number of disagreements by FTF and tele‐dx; no clear breakdown of index test results to derive a 2×2 table. Exclude on study population; not all suspected of skin cancer
Martinez‐Garcia 2007 | Exclude on 2×2 data; not test accuracy. Exclude on reference standard; no reference standard reported; describes only tele‐dx
Massone 2007 | Exclude on reference standard; did not meet our criteria for a diagnostic accuracy reference standard (only 12/25 (48%) of the benign group had histo, including 1 AK classed as benign instead of malignant), and data were not presented to allow extraction of referral accuracy (tele‐dx vs FTF): the study included 955 lesions and reported tele‐dx recommendations for all 955, but of the 121 recommended for excision or FTF consult, FU data were available for only 32 (19 with histo dx and 13 with FTF diagnosis). Exclude on 2×2 data; data not clearly presented to allow extraction of referral accuracy (tele‐dx vs FTF alone); tele‐dx vs FTF diagnosis of malignancy could be extracted only by assuming that all 7 malignant lesions were diagnosed as such by FTF; Table 2 shows only that all 7 were excised, presumably following FTF consult.
May 2008 | Exclude on 2×2 data; no data for a 2×2 table. Exclude on index test; assessed the effect on consultant‐assigned priority of a GP referral with a photograph vs a referral with no accompanying photograph
McGraw 2009 | Exclude on study population |
McManus 2008 | Exclude on study population |
Moreno‐Ramirez 2006 | Exclude on sample size; comparison is tele‐dx vs final diagnosis (58/61 were biopsied); only 1 melanoma and 2 BCC |
Moreno‐Ramirez 2007 | Exclude on reference standard; did not meet either criterion for an eligible reference standard. Table 1 cross‐tabulates the telediagnosis (refer/not refer) against a gold standard that appeared to be a combination of the FTF diagnosis (for those with clinically and dermoscopically benign and non‐suspicious lesions, and with a diagnostic confidence level of 3 after the FTF diagnosis) and histology (those with higher concern at FTF evaluation were excised).
Moreno‐Ramirez 2009 | Exclude on study population. Exclude on reference standard
Ndegwa 2010 | Exclude not a primary study; technology report |
Nordal 2001 | Exclude on study population. Exclude on target condition
Oakley 1997 | Exclude on study population. Exclude on target condition
Oakley 2006 | Exclude on 2×2 data; insufficient data presented. Exclude but contacted authors: "We are looking to compare telederm dx with FtF diagnosis within a diagnostic accuracy framework (i.e. in a 2×2 contingency table) but in order to include your paper we would need information on the misdiagnoses. Using the FtF diagnosis as the gold standard, we can use the data in Table 2 to derive the 'sensitivity' of the tele‐dx agreement for diagnosis of melanoma, BCC and SCC (% agreement), but we would need to know the % of tele‐dx reports that 'misdiagnosed' the other lesion types as malignant in order to derive 'specificity'. Would you be at all able to supply these data? We could use the data in Table 6 to cross‐tabulate the management decisions between the two approaches if we collapse the tele‐dx cat3 and cat4 groups together; however, the % agreement for the teledermatology classification adds to greater than 100."
Oztas 2004 | Exclude on study population. Exclude on target condition
Pak 1999 | Exclude not a primary study; review of service |
Pak 2002 | Exclude conference abstract; no full text paper found |
Pak 2003a | Exclude on study population |
Pak 2003b | Exclude on study population |
Pak 2007 | Exclude on reference standard; reference standard not clear |
Pak 2009 | Exclude not a primary study |
Patro 2015 | Exclude on target condition |
Perednia 1998 | Exclude on study population; not all suspected of skin cancer. Exclude on index test; not an evaluation of the accuracy of teledermatology but of the effect of access to telemedicine on GP confidence and referral decisions
Phillips 1997 | Exclude on sample size; 68 lesions, with FTF diagnosis of 4 skin cancers (tele‐ and FTF diagnoses concordant for 1 melanoma and 2 BCC; FTF dx of 1 additional SCC in 1 further participant); no final diagnoses (histo) recorded. Exclude on 2×2 data; presents agreement between observers only. Exclude on study population; not all suspected of skin cancer
Piccolo 1999 | Exclude on 2×2 data; reports data for tele‐dx, FTF and histo, and a breakdown of discordant results for tele‐dx vs FTF and tele‐dx vs histo, but gives only the number (%) concordant; the numbers of TP and TN needed to estimate a 2×2 table were not given for either comparison
Piccolo 2002 | Exclude not a primary study; review article |
Rashid 2003 | Exclude on study population |
Ribas 2010 | Exclude on study population |
Romero 2010 | Exclude on study population |
Romero 2014 | Exclude on study population |
Seidenari 2004 | Exclude on 2×2 data; no data to populate a 2×2 table; only ROC curve values given. Exclude but contacted authors: "Table 5 provides AUC values for each diagnosis for both formats and observers; requested data in 2×2 format, e.g. for melanoma 'certain' against final diagnosis and for melanoma 'certain or fairly certain' against final diagnosis?"
Senel 2013 | Exclude on 2×2 data; no accuracy data available; for teledermatology, only the % of correct diagnoses per dermatologist was given (Table 6), with no breakdown by disease positive/negative, so 2×2 data could not be derived.
Shin 2014 | Exclude on target condition |
Tait 1999 | Exclude on study population; not all suspected of skin cancer; included 'people with visible skin lesion or lesions' (wide range of diagnoses recorded)
Tan 2010a | Exclude on reference standard; no reference standard described. Exclude on 2×2 data; not test accuracy; reported agreement between 5 dermatologists' telediagnoses, not against histology or FTF diagnosis
Tan 2010b | Exclude on 2×2 data; study appeared to meet eligibility criteria; however, although sensitivity and specificity values were provided in Table 4 per dermatologist, it was not possible to work back to the underlying 2×2 (final diagnoses by histopath not given and FTF diagnoses for the same 491 lesions differ in Table 1 according to dermatologist A and dermatologist B). |
Tandjung 2015 | Exclude on target condition; 'malignant' included AK, BD, dysplastic nevus, lentigo maligna, SCC, BCC, MM and KA. Exclude on index test; GPs sent images for a teledermatology opinion and were then free to send for biopsy or not; results were shown only for lesions that were biopsied, according to teledermatology advice.
Taylor 2001 | Exclude on study population; not all suspected of skin cancer; wide variety of conditions included. Exclude on 2×2 data; reported % agreement only
Tucker 2005 | Exclude on target condition; no breakdown of either final diagnoses or tele‐dx. Exclude on 2×2 data; reports agreement only. Exclude on study population; not all suspected of skin cancer
van der Heijden 2013 | Exclude on 2×2 data; only reported Kappa values for histology vs FTF and histology vs tele‐dx (for each of 4 teledermatologists) but no underlying data given |
Vano‐Galvan 2011 | Exclude on study population; not specific to skin cancer; population included infectious and inflammatory disease. Exclude on 2×2 data; only gives % agreement between tele‐dx and FTF (gold standard)
Warshaw 2009a | Exclude on 2×2 data; only reports accuracy. Exclude as duplicate or related publication; author contacted in regard to the 2010 paper. Exclude but contacted author: the study presented the diagnostic accuracy of teledermatology and clinic diagnosis in comparison to histopathology; to include it in our review, data would need to be presented as a 2×2 contingency table, either per type of malignancy (e.g. tele‐dx classification of melanoma vs not melanoma against histological diagnosis of melanoma/not melanoma) or with malignant diagnoses grouped together (i.e. tele‐dx of malignancy vs not malignant against the same histological breakdown). We requested these data for the clinic diagnosis and for each method of telediagnosis, for this study and for Warshaw 2009b. The author provided some data for detection of melanoma only and for use of macro images only, for the 2010 paper (pigmented and non‐pigmented lesions combined).
Warshaw 2009b | Exclude on 2×2 data; only reports accuracy. Exclude as duplicate or related publication. Exclude but contacted author; see Warshaw 2009a
Warshaw 2010a | Exclude on 2×2 data; not test accuracy; interobserver agreement for subsample of Warshaw 2009a/Warshaw 2010b trial |
Warshaw 2015 | Exclude on 2×2 data; only gives agreement between teledermatology diagnosis and FTF diagnosis |
Watson 2010 | Exclude on target condition |
Weingast 2013 | Exclude on study population |
Weinstock 2002 | Exclude not a primary study |
Weinstock 2009 | Exclude not a primary study |
Whited 1999 | Exclude on 2×2 data; only gave % agreement between tele‐dx and FTF and % correct diagnoses of tele‐dx vs histo |
Whited 2002 | Exclude not a primary study |
Whited 2003 | Exclude not a primary study |
Whited 2004 | Exclude not a primary study |
Whited 2006 | Exclude not a primary study; review article |
Whited 2010 | Exclude not a primary study |
Whited 2016 | Exclude not a primary study |
Williams 2001 | Exclude not a primary study |
Williams 2007 | Exclude not a primary study |
Wootton 2000 | Exclude on study population. Exclude on target condition
Zelickson 1997 | Exclude on study population |
AK: actinic keratosis; AUC: area under curve; BCC: basal cell carcinoma; BD: Bowen’s disease; CM: cutaneous melanoma; D+: disease positive; Derm: dermoscopy; DTA: Diagnostic Test Accuracy; Dx: diagnosis; FP: false positives; FTF: face‐to‐face; FU: follow‐up; GP: general practitioner; histo: histology; KA: keratoacanthoma; MM: malignant melanoma; MiS: melanoma in situ (or lentigo maligna); NMSC: non‐melanoma skin cancer; ROC: receiver operating characteristic; SCC: squamous cell carcinoma; SK: seborrhoeic keratosis; tele‐Dx: teledermatology diagnosis; TP: true positive; TN: true negative; VI: visual inspection.
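Several of the exclusions above (for example Lewis 1999 and Oakley 2006) turn on the same arithmetic: a study could only contribute if its 2×2 contingency table (TP, FP, FN, TN) could be reconstructed, and reported sensitivity and specificity alone are not enough without the number of disease‐positive cases (D+). The following is a minimal sketch of that back‐calculation using purely hypothetical numbers; it does not use data from any excluded study and is included only to illustrate why D+ was requested from authors.

```python
# Minimal sketch (hypothetical numbers): reconstructing a 2x2 contingency
# table from reported sensitivity, specificity, total lesions (N) and the
# number of disease-positive lesions (D+) on the reference standard.

def reconstruct_2x2(sensitivity, specificity, n_total, n_disease_positive):
    """Return (TP, FP, FN, TN) implied by the reported summary statistics."""
    d_pos = n_disease_positive            # malignant on reference standard
    d_neg = n_total - n_disease_positive  # benign on reference standard
    tp = round(sensitivity * d_pos)       # correctly called malignant
    fn = d_pos - tp                       # malignant lesions missed
    tn = round(specificity * d_neg)       # correctly called benign
    fp = d_neg - tn                       # benign lesions called malignant
    return tp, fp, fn, tn

# Purely illustrative values, not taken from any study in this review:
# 100 lesions, 20 malignant, sensitivity 0.90, specificity 0.80.
print(reconstruct_2x2(0.90, 0.80, 100, 20))  # -> (18, 16, 2, 64)
```

With the illustrative values shown, the implied table is TP = 18, FP = 16, FN = 2 and TN = 64; without D+ (or, equivalently, disease prevalence), the cells of the table cannot be recovered from sensitivity and specificity alone, which is why such studies were excluded or their authors contacted.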
Differences between protocol and review
Due to the small number of studies available, a single review has been produced that evaluates the accuracy of teledermatology for all skin cancers; this replaces the two reviews intended in the protocols, which would have addressed cutaneous melanoma and keratinocyte cancers separately.
Primary objectives and primary target condition have been changed from detection of cutaneous invasive melanoma alone and detection of BCC or cSCC as per the two protocols, to the detection of any skin cancer, as the appropriate triage of any malignant skin lesion to specialist care is the key issue for teledermatology. The detection of the target condition of invasive melanoma alone has instead been included as a secondary objective.
Heterogeneity investigations and sensitivity analyses were limited by the data available.
We amended the text to clarify that studies available only as conference abstracts would be excluded from the review unless full papers could be identified; studies available only as conference abstracts do not allow a comprehensive assessment of study methods or methodological quality.
We proposed to supplement the database searches by searching the annual meetings of appropriate organisations (e.g. British Association of Dermatologists Annual Meeting, American Academy of Dermatology Annual Meeting, European Academy of Dermatology and Venereology Meeting, Society for Melanoma Research Congress, World Congress of Dermatology, European Association of Dermato‐Oncology); however, due to the volume of evidence retrieved from the database searches and to time restrictions, we were unable to do this.
For quality assessment, the QUADAS‐2 tool was further tailored to the review topic. In terms of analysis, the planned restriction to per‐participant data was not performed due to lack of data.
Contributions of authors
JD was the contact person with the editorial base. JD and NC co‐ordinated contributions from the coauthors and wrote the final draft of the review. SB conducted the literature searches. JD, NC, OB and JM screened papers against eligibility criteria. JD and NC obtained data on ongoing and unpublished studies. JD, NC, OB and JM appraised the quality of papers. JD, NC, OB and JM extracted data for the review and sought additional information about papers. JD and NC entered data into Review Manager 5. JD, NC and YT analysed and interpreted data. JD, JJD, NC, YT and CD worked on the methods sections. JD, FW, RM, OB, JM, RNM and HCW drafted the clinical sections of the background and responded to the clinical comments of the referees. JD, JJD, CD and YT responded to the methodology and statistics comments of the referees. KG and CO'S were the consumer coauthors and checked the review for readability and clarity, as well as ensuring outcomes are relevant to consumers. JD is the guarantor of the update.
Disclaimer
This project presents independent research supported by the National Institute for Health Research, via Cochrane Infrastructure funding to the Cochrane Skin Group and Cochrane Programme Grant funding, and the NIHR Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham. The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Systematic Reviews Programme, NIHR, NHS or the Department of Health and Social Care.
Sources of support
Internal sources
No sources of support supplied
External sources
- NIHR Systematic Review Programme, UK.
This project was funded by an NIHR Cochrane Systematic Reviews Programme Grant (13/89/15).
- The National Institute for Health Research (NIHR), UK.
The NIHR, UK, is the largest single funder of the Cochrane Skin Group.
- NIHR clinical fellowship, UK.
- NIHR Birmingham Biomedical Research Centre, UK.
JD, JJD and YT receive support from the NIHR Birmingham Biomedical Research Centre.
Declarations of interest
NC: nothing to declare. JD: nothing to declare. YT: nothing to declare. RNM: "my institution received a grant for a Barco NV commercially sponsored study to evaluate digital dermoscopy in the skin cancer clinic. My institution also received Oxfordshire Health Services Research Charitable Funds for carrying out a study of feasibility of using the Skin Cancer Quality of Life Impact Tool (SCQOLIT) in non melanoma skin cancer. I have received royalties for the Oxford Handbook of Medical Dermatology (Oxford University Press). I have received payment from Public Health England for the "Be Clear on Cancer" skin cancer report. I have no conflicts of interest to declare that directly relate to the publication of this work." SB: nothing to declare. CD: nothing to declare. JM: nothing to declare. OB: nothing to declare. KG: nothing to declare. CO: nothing to declare. FW: nothing to declare. RM: nothing to declare. JJD: nothing to declare. HCW: I am director of the NIHR HTA Programme. HTA is part of the NIHR which also supports the NIHR systematic reviews programme from which this work is funded.
References
References to studies included in this review
Arzberger 2016 {published data only}
- Arzberger E, Curiel‐Lewandrowski C, Blum A, Chubisov D, Oakley A, Rademaker M, et al. Teledermoscopy in high‐risk melanoma patients: a comparative study of face‐to‐face and teledermatology visits. Acta Dermato‐Venereologica 2016;96(6):779‐83. [ER4:25012365; PUBMED: 26776245] [DOI] [PubMed] [Google Scholar]
Borve 2015 {published data only}
- Borve A, Dahlen Gyllencreutz J, Terstappen K, Johansson Backman E, Aldenbratt A, Danielsson M, et al. Smartphone teledermoscopy referrals: a novel process for improved triage of skin cancer patients. Acta Dermato‐Venereologica 2015;95(2):180‐90. [PUBMED: 24923283] [DOI] [PubMed] [Google Scholar]
Bowns 2006 {published data only}
- Bowns IR, Collins K, Walters SJ, McDonagh AJ. Telemedicine in dermatology: a randomised controlled trial. Health Technology Assessment (Winchester, England) 2006;10(43):iii‐iv, ix‐xi, 1‐39. [ER4:15465874; PUBMED: 17049140] [DOI] [PubMed] [Google Scholar]
Congalton 2015 {published data only}
- Congalton AT, Oakley AM, Rademaker M, Bramley D, Martin RC. Successful melanoma triage by a virtual lesion clinic (teledermatoscopy). Journal of the European Academy of Dermatology and Venereology: JEADV 2015;29(12):2423‐8. [ER4:25012348; PUBMED: 26370585] [DOI] [PubMed] [Google Scholar]
Coras 2003 {published data only}
- Coras B, Glaessl A, Kinateder J, Klovekorn W, Braun R, Lepski U, et al. Teledermatoscopy in daily routine ‐ results of the first 100 cases. Current Problems in Dermatology 2003;32:207‐12. [ER4:18375040; PUBMED: 12472014] [DOI] [PubMed] [Google Scholar]
Ferrara 2004 {published data only}
- Ferrara G, Argenziano G, Cerroni L, Cusano F, Blasi A, Urso C, et al. A pilot study of a combined dermoscopic–pathological approach to the telediagnosis of melanocytic skin neoplasms. Journal of Telemedicine and Telecare 2004;10(1):34‐8. [PUBMED: 15006214] [DOI] [PubMed] [Google Scholar]
Grimaldi 2009 {published data only}
- Grimaldi L, Silvestri A, Brandi C, Nisi G, Brafa A, Calabro M, et al. Digital epiluminescence dermoscopy for pigmented cutaneous lesions, primary care physicians, and telediagnosis: a useful tool?. Journal of Plastic, Reconstructive & Aesthetic Surgery 2009;62(8):1054‐8. [ER4:15465940; PUBMED: 18547883] [DOI] [PubMed] [Google Scholar]
Jolliffe 2001a {published data only}
- Jolliffe VM, Harris DW, Whittaker SJ. Can we safely diagnose pigmented lesions from stored video images? A diagnostic comparison between clinical examination and stored video images of pigmented lesions removed for histology. Clinical & Experimental Dermatology 2001;26(1):84‐7. [ER4:15465970; PUBMED: 11260186] [DOI] [PubMed] [Google Scholar]
Jolliffe 2001b {published data only}
- Jolliffe VM, Harris DW, Morris R, Wallace P, Whittaker SJ. Can we use video images to triage pigmented lesions?. British Journal of Dermatology 2001;145(6):904‐10. [PUBMED: 11899143] [DOI] [PubMed] [Google Scholar]
Kroemer 2011 {published data only}
- Kroemer S, Fruhauf J, Campbell TM, Massone C, Schwantzer G, Soyer HP, et al. Mobile teledermatology for skin tumour screening: diagnostic accuracy of clinical and dermoscopic image tele‐evaluation using cellular phones. British Journal of Dermatology 2011;164(5):973‐9. [ER4:15465982; PUBMED: 21219286] [DOI] [PubMed] [Google Scholar]
Mahendran 2005 {published data only}
- Mahendran R, Goodfield MJ, Sheehan‐Dare RA. An evaluation of the role of a store‐and‐forward teledermatology system in skin cancer diagnosis and management. Clinical & Experimental Dermatology 2005;30(3):209‐14. [ER4:15465998; PUBMED: 15807671] [DOI] [PubMed] [Google Scholar]
Manahan 2015 {published data only}
- Manahan MN, Soyer HP, Loescher LJ, Horsham C, Vagenas D, Whiteman DC, et al. A pilot trial of mobile, patient‐performed teledermoscopy. British Journal of Dermatology 2015;172(4):1072‐80. [DOI] [PubMed] [Google Scholar]
Massone 2014 {published data only}
- Massone C, Maak D, Hofmann‐Wellenhof R, Soyer HP, Fruhauf J. Teledermatology for skin cancer prevention: an experience on 690 Austrian patients. Journal of the European Academy of Dermatology and Venereology : JEADV 2014;28(8):1103‐8. [ER4:17941088; PUBMED: 24372877] [DOI] [PubMed] [Google Scholar]
Moreno Ramirez 2005 {published data only}
- Moreno Ramirez D, Ferrandiz L, Bernal AP, Duran RC, Martin JJ, Camacho F. Teledermatology as a filtering system in pigmented lesion clinics. Journal of Telemedicine & Telecare 2005;11(6):298‐303. [ER4:15466016; PUBMED: 16168166] [DOI] [PubMed] [Google Scholar]
Oliveira 2002 {published data only}
- Oliveira MR, Wen CL, Neto CF, Silveira PS, Rivitti EA, Bohm GM. Web site for training nonmedical health‐care workers to identify potentially malignant skin lesions and for teledermatology. Telemedicine Journal and E‐health 2002;8(3):323‐32. [DOI] [PubMed] [Google Scholar]
Phillips 1998 {published data only}
- Phillips CM, Burke WA, Allen MH, Stone D, Wilson JL. Reliability of telemedicine in evaluating skin tumors. Telemedicine Journal 1998;4(1):5‐9. [DOI] [PubMed] [Google Scholar]
Piccolo 2000 {published data only}
- Piccolo D, Smolle J, Argenziano G, Wolf IH, Braun R, Cerroni L, et al. Teledermoscopy ‐ results of a multicentre study on 43 pigmented skin lesions. Journal of Telemedicine & Telecare 2000;6(3):132‐7. [ER4:15466059; PUBMED: 10912329] [DOI] [PubMed] [Google Scholar]
Piccolo 2004 {published data only}
- Piccolo D, Soyer HP, Chimenti S, Argenziano G, Bartenjev I, Hofmann‐Wellenhof R, et al. Diagnosis and categorization of acral melanocytic lesions using teledermoscopy. Journal of Telemedicine and Telecare 2004;10(6):346‐50. [ER4:15466061; PUBMED: 15603633] [DOI] [PubMed] [Google Scholar]
Shapiro 2004 {published data only}
- Shapiro M, James WD, Kessler R, Lazorik FC, Katz KA, Tam J, et al. Comparison of skin biopsy triage decisions in 49 patients with pigmented lesions and skin neoplasms: store‐and‐forward teledermatology vs face‐to‐face dermatology. Archives of Dermatology 2004;140(5):525‐8. [ER4:15466120; PUBMED: 15148095] [DOI] [PubMed] [Google Scholar]
Silveira 2014 {published data only}
- Silveira CE, Silva TB, Fregnani JH, Costa Vieira RA, Haikel RL, Syrjänen K, et al. Digital photography in skin cancer screening by mobile units in remote areas of Brazil. BMC Dermatology 2014;14:19. [PUBMED: 25539949] [DOI] [PMC free article] [PubMed] [Google Scholar]
Warshaw 2010b {published data only}
- Warshaw EM, Gravely AA, Nelson DB. Accuracy of teledermatology/teledermoscopy and clinic‐based dermatology for specific categories of skin neoplasms. Journal of the American Academy of Dermatology 2010;63(2):348‐52. [DOI] [PubMed] [Google Scholar]
Wolf 2013 {published data only}
- Wolf JA, Moreau JF, Akilov O, Patton T, English JC 3rd, Ho J, et al. Diagnostic inaccuracy of smartphone applications for melanoma detection. JAMA Dermatology 2013;149(4):422‐6. [PUBMED: 23325302] [DOI] [PMC free article] [PubMed] [Google Scholar]
References to studies excluded from this review
Armstrong 2007 {published data only}
- Armstrong AW, Dorer DJ, Lugn NE, Kvedar JC. Economic evaluation of interactive teledermatology compared with conventional care. Telemedicine Journal and E‐health 2007;3(2):91‐9. [DOI: 10.1089/tmj.2006.0035] [DOI] [PubMed] [Google Scholar]
Baba 2005 {published data only}
- Baba M, Seckin D, Kapdagli S. A comparison of teledermatology using store‐and‐forward methodology alone, and in combination with Web camera videoconferencing. Journal of Telemedicine and Telecare 2005;11(7):354‐60. [DOI: 10.1258/135763305774472097] [DOI] [PubMed] [Google Scholar]
Badertscher 2015 {published data only}
- Badertscher N, Tandjung R, Senn O, Kofmehl R, Held U, Rosemann T, et al. A multifaceted intervention: no increase in general practitioners' competence to diagnose skin cancer (minSKIN) ‐ randomized controlled trial. Journal of the European Academy of Dermatology and Venereology : JEADV 2015;29(8):1493‐9. [DOI] [PubMed] [Google Scholar]
Barnard 2000 {published data only}
- Barnard CM, Goldyne ME. Evaluation of an asynchronous teleconsultation system for diagnosis of skin cancer and other skin diseases. Telemedicine Journal and E‐health 2000;6(4):379‐84. [DOI] [PubMed] [Google Scholar]
Bashshur 2015 {published data only}
- Bashshur RL, Shannon GW, Tejasvi T, Kvedar JC, Gates M. The empirical foundations of teledermatology: a review of the research evidence. Telemedicine Journal and E‐Health 2015;21(12):953‐79. [DOI] [PMC free article] [PubMed] [Google Scholar]
Bataille 2011 {published data only}
- Bataille V, Hargest E, Brown V, Blackwell V, Dawe S, Cooper A, et al. A teledermatology pilot study in Hertfordshire: triage of 2‐week‐wait referrals. British Journal of Dermatology 2011;165(Suppl 1):137‐8. [Google Scholar]
Bergmo 2000 {published data only}
- Bergmo TS. A cost‐minimization analysis of a realtime teledermatology service in northern Norway. Journal of Telemedicine and Telecare 2000;6(5):273‐7. [DOI] [PubMed] [Google Scholar]
Borve 2013 {published data only}
- Borve A, Terstappen K, Sandberg C, Paoli J. Mobile teledermoscopy ‐ there's an app for that!. Dermatology Practical & Conceptual 2013;3(2):41‐8. [DOI] [PMC free article] [PubMed] [Google Scholar]
Boyce 2011 {published data only}
- Boyce Z, Gilmore S, Xu C, Soyer HP. The remote assessment of melanocytic skin lesions: a viable alternative to face‐to‐face consultation. Dermatology 2011;223(3):244‐50. [DOI] [PubMed] [Google Scholar]
Braun 2000 {published data only}
- Braun RP, Meier M, Pelloni F, Ramelet AA, Schilling M, Tapernoux B, et al. Teledermatoscopy in Switzerland: a preliminary evaluation. Journal of the American Academy of Dermatology 2000;42(5 Pt 1):770‐5. [DOI] [PubMed] [Google Scholar]
Brown 2000 {published data only}
- Brown N. Exploration of diagnostic techniques for malignant melanoma: an integrative review. Clinical Excellence for Nurse Practitioners 2000;4(5):263‐71. [PubMed] [Google Scholar]
Burgiss 1997 {published data only}
- Burgiss SG, Julius CE, Watson HW, Haynes BK, Buonocore E, Smith GT. Telemedicine for dermatology care in rural patients. Telemedicine Journal and E‐health 1997;3(3):227‐33. [DOI: 10.1089/tmj.1.1997.3.227] [DOI] [PubMed] [Google Scholar]
Chen 2002 {published data only}
- Chen K, Lim A, Shumack S. Teledermatology: influence of zoning and education on a clinician's ability to observe peripheral lesions. Australasian Journal of Dermatology 2002;43(3):171‐4. [DOI] [PubMed] [Google Scholar]
D'Elia 2007 {published data only}
- D'Elia PB, Harzheim E, Fisher PD, Ramos MC, Bordin R. Agreement between dermatological diagnoses made by direct observation and digital images [Concordancia entre diagnosticos dermatologicos feitos presencialmente e por imagens digitais]. Anais Brasileiros de Dermatologia 2007;82(6):521‐7. [Google Scholar]
Di Stefani 2007 {published data only}
- Di Stefani A, Zalaudek I, Argenziano G, Chimenti S, Soyer HP. Feasibility of a two‐step teledermatologic approach for the management of patients with multiple pigmented skin lesions. Dermatologic Surgery 2007;33(6):686‐92. [DOI] [PubMed] [Google Scholar]
Du Moulin 2003 {published data only}
- Du Moulin MF, Bullens‐Goessens YI, Henquet CJ, Brunenberg DE, Bruyn‐Geraerds DP, Winkens RA, et al. The reliability of diagnosis using store‐and‐forward teledermatology. Journal of Telemedicine and Telecare 2003;9(5):249‐52. [DOI] [PubMed] [Google Scholar]
Edison 2008 {published data only}
- Edison KE, Ward DS, Dyer JA, Lane W, Chance L, Hicks LL. Diagnosis, diagnostic confidence, and management concordance in live‐interactive and store‐and‐forward teledermatology compared to in‐person examination. Telemedicine Journal and E‐Health 2008;14(9):889‐95. [DOI: 10.1089/tmj.2008.0001] [DOI] [PubMed] [Google Scholar]
Eminovic 2009 {published data only}
- Eminovic N, Keizer NF, Wyatt JC, ter Riet G, Peek N, Weert HC, et al. Teledermatologic consultation and reduction in referrals to dermatologists: a cluster randomized controlled trial. Archives Dermatology 2009;145(5):558‐64. [DOI] [PubMed] [Google Scholar]
Fabbrocini 2008 {published data only}
- Fabbrocini G, Balato A, Rescigno O, Mariano M, Scalvenzi M, Brunetti B. Telediagnosis and face‐to‐face diagnosis reliability for melanocytic and non‐melanocytic 'pink' lesions. Journal of the European Academy of Dermatology and Venereology : JEADV 2008;22(2):229‐34. [DOI] [PubMed] [Google Scholar]
Ferrandiz 2007 {published data only}
- Ferrandiz L, Moreno RD, Nieto‐Garcia A, Carrasco R, Moreno‐Alvarez P, Galdeano R, et al. Teledermatology‐based presurgical management for nonmelanoma skin cancer: a pilot study. Dermatologic Surgery 2007;33(9):1092‐8. [DOI] [PubMed] [Google Scholar]
Ferrandiz 2012 {published data only}
- Ferrandiz L, Ruiz‐de‐Casas A, Martin‐Gutierrez FJ, Peral‐Rubio F, Mendez‐Abad C, Rios‐Martin JJ, et al. Effect of teledermatology on the prognosis of patients with cutaneous melanoma. Archives of Dermatology 2012;148(9):1025‐8. [DOI] [PubMed] [Google Scholar]
Gilmour 1998 {published data only}
- Gilmour E, Campbell SM, Loane MA, Esmail A, Griffiths CE, Roland MO, et al. Comparison of teleconsultations and face‐to‐face consultations: preliminary results of a United Kingdom multicentre teledermatology study. British Journal of Dermatology 1998;139(1):81‐7. [DOI] [PubMed] [Google Scholar]
Granlund 2003 {published data only}
- Granlund H, Thoden CJ, Carlson C, Harno K. Realtime teleconsultations versus face‐to‐face consultations in dermatology: immediate and six‐month outcome. Journal of Telemedicine and Telecare 2003;9(4):204‐9. [DOI] [PubMed] [Google Scholar]
Griffiths 2010 {published data only}
- Griffiths WA. Improving melanoma diagnosis in primary care ‐ a tele‐dermatoscopy project. Journal of Telemedicine and Telecare 2010;16(4):185‐6. [DOI] [PubMed] [Google Scholar]
Harrison 1998 {published data only}
- Harrison PV, Kirby B, Dickinson Y, Schofield R. Teledermatology ‐ high technology or not?. Journal of Telemedicine and Telecare 1998;4(Suppl 1):31‐2. [DOI] [PubMed] [Google Scholar]
Heffner 2009 {published data only}
- Heffner VA, Lyon VB, Brousseau DC, Holland KE, Yen K. Store‐and‐forward teledermatology versus in‐person visits: a comparison in pediatric teledermatology clinic. Journal of the American Academy of Dermatology 2009;60(6):956‐61. [DOI] [PubMed] [Google Scholar]
Hicks 2003 {published data only}
- Hicks LL, Boles KE, Hudson S, Kling B, Tracy J, Mitchell J, et al. Patient satisfaction with teledermatology services. Journal of Telemedicine and Telecare 2003;9(1):42‐5. [DOI] [PubMed] [Google Scholar]
High 2000 {published data only}
- High WA, Houston MS, Calobrisi SD, Drage LA, McEvoy MT. Assessment of the accuracy of low‐cost store‐and‐forward teledermatology consultation. Journal of the American Academy of Dermatology 2000;42(5):776‐83. [DOI] [PubMed] [Google Scholar]
Hue 2016 {published data only}
- Hue L, Makhloufi S, Sall N'Diaye P, Blanchet‐Bardon C, Sulimovic L, Pomykala F, et al. Real‐time mobile teledermoscopy for skin cancer screening targeting an agricultural population: an experiment on 289 patients in France. Journal of the European Academy of Dermatology & Venereology 2016;30(1):20‐4. [DOI] [PubMed] [Google Scholar]
Hwang 2014 {published data only}
- Hwang JS, Lappan CM, Sperling LC, Meyerle JH. Utilization of telemedicine in the U.S. military in a deployed setting. Military Medicine 2014;179(11):1347‐53. [DOI] [PubMed] [Google Scholar]
Ishioka 2009 {published data only}
- Ishioka P, Tenorio JM, Lopes PR, Yamada S, Michalany NS, Amaral MB, et al. A comparative study of teledermatoscopy and face‐to‐face examination of pigmented skin lesions. Journal of Telemedicine and Telecare 2009;15(5):221‐5. [DOI] [PubMed] [Google Scholar]
Kahn 2013 {published data only}
- Kahn E, Sossong S, Goh A, Carpenter D, Goldstein S. Evaluation of skin cancer in Northern California Kaiser Permanente's store‐and‐forward teledermatology referral program. Telemedicine Journal and E‐health 2013;19(10):780‐5. [DOI] [PubMed] [Google Scholar]
Knol 2006 {published data only}
- Knol A, Akker TW, Damstra RJ, Haan J. Teledermatology reduces the number of patient referrals to a dermatologist. Journal of Telemedicine and Telecare 2006;12(2):75‐8. [DOI] [PubMed] [Google Scholar]
Krupinski 1999 {published data only}
- Krupinski EA, LeSueur B, Ellsworth L, Levine N, Hansen R, Silvis N, et al. Diagnostic accuracy and image quality using a digital camera for teledermatology. Telemedicine Journal 1999;5(3):257‐63. [DOI] [PubMed] [Google Scholar]
Lamel 2012 {published data only}
- Lamel SA, Haldeman KM, Ely H, Kovarik CL, Pak H, Armstrong AW. Application of mobile teledermatology for skin cancer screening. Journal of the American Academy of Dermatology 2012;67(4):576‐81. [DOI] [PubMed] [Google Scholar]
Lamminen 2000 {published data only}
- Lamminen H, Tuomi ML, Lamminen J, Uusitalo H. A feasibility study of realtime teledermatology in Finland. Journal of Telemedicine and Telecare 2000;6(2):102‐7. [DOI] [PubMed] [Google Scholar]
Lesher 1998 {published data only}
- Lesher JL Jr, Davis LS, Gourdin FW, English D, Thompson WO. Telemedicine evaluation of cutaneous diseases: a blinded comparative study. Journal of the American Academy of Dermatology 1998;38(1):27‐31. [DOI] [PubMed] [Google Scholar]
Lewis 1999 {published data only}
- Lewis K, Gilmour E, Harrison PV, Patefield S, Dickinson Y, Manning D, et al. Digital teledermatology for skin tumours: a preliminary assessment using a receiver operating characteristics (ROC) analysis. Journal of Telemedicine and Telecare 1999;5(Suppl 1):S57‐8. [DOI] [PubMed] [Google Scholar]
Loane 1998a {published data only}
- Loane MA, Bloomer SE, Corbett R, Eedy DJ, Gore HE, Mathews C, et al. Patient satisfaction with realtime teledermatology in Northern Ireland. Journal of Telemedicine and Telecare 1998;4(1):36‐40. [DOI] [PubMed] [Google Scholar]
Loane 1998b {published data only}
- Loane MA, Corbett R, Bloomer SE, Eedy DJ, Gore HE, Mathews C, et al. Diagnostic accuracy and clinical management by realtime teledermatology. Results from the Northern Ireland arms of the UK Multicentre Teledermatology Trial. Journal of Telemedicine and Telecare 1998;4(2):95‐100. [DOI] [PubMed] [Google Scholar]
Loane 2000 {published data only}
- Loane MA, Bloomer SE, Corbett R, Eedy DJ, Hicks N, Lotery HE, et al. A comparison of real‐time and store‐and‐forward teledermatology: a cost‐benefit study. British Journal of Dermatology 2000;143(6):1241‐7. [DOI] [PubMed] [Google Scholar]
Loane 2001a {published data only}
- Loane MA, Bloomer SE, Corbett R, Eedy DJ, Evans C, Hicks N, et al. A randomized controlled trial assessing the health economics of realtime teledermatology compared with conventional care: an urban versus rural perspective. Journal of Telemedicine and Telecare 2001;7(2):108‐18. [DOI] [PubMed] [Google Scholar]
Loane 2001b {published data only}
- Loane MA, Oakley A, Rademaker M, Bradford N, Fleischl P, Kerr P, et al. A cost‐minimization analysis of the societal costs of realtime teledermatology compared with conventional care: results from a randomized controlled trial in New Zealand. Journal of Telemedicine and Telecare 2001;7(4):233‐8. [DOI] [PubMed] [Google Scholar]
Lowitt 1998 {published data only}
- Lowitt MH, Kessler II, Kauffman C, Hooper FJ, Siegel E, Burnett JW. Teledermatology and in‐person examinations: A comparison of patient and physician perceptions and diagnostic agreement. Archives of Dermatology 1998;134(4):471‐6. [DOI] [PubMed] [Google Scholar]
Lyon 1997 {published data only}
- Lyon CC, Harrison PV. A portable digital imaging system in dermatology: Diagnostic and educational applications. Journal of Telemedicine and Telecare 1997;3(1_suppl):81‐3. [DOI] [PubMed] [Google Scholar]
Martinez‐Garcia 2007 {published data only}
- Martinez‐Garcia S, Boz‐Gonzalez J, Martin‐Gonzalez T, Samaniego‐Gonzalez E, Crespo‐Erchiga V. Teledermatology. Review of 917 teleconsults. Actas Dermo‐Sifiliograficas 2007;98(5):318‐24. [PubMed] [Google Scholar]
Massone 2007 {published data only}
- Massone C, Hofmann‐Wellenhof R, Ahlgrimm‐Siess V, Gabler G, Ebner C, Soyer HP. Melanoma screening with cellular phones. PloS One 2007;2(5):e483. [DOI] [PMC free article] [PubMed] [Google Scholar]
May 2008 {published data only}
- May C, Giles L, Gupta G. Prospective observational comparative study assessing the role of store and forward teledermatology triage in skin cancer. Clinical and Experimental Dermatology 2008;33(6):736‐9. [DOI] [PubMed] [Google Scholar]
McGraw 2009 {published data only}
- McGraw TA, Norton SA. Military aeromedical evacuations from central and southwest Asia for ill‐defined dermatologic diseases. Archives of Dermatology 2009;145(2):165‐70. [DOI] [PubMed] [Google Scholar]
McManus 2008 {published data only}
- McManus J, Salinas J, Morton M, Lappan C, Poropatich R. Teleconsultation program for deployed soldiers and healthcare professionals in remote and austere environments. Prehospital & Disaster Medicine 2008;23(3):210‐6. [PubMed] [Google Scholar]
Moreno‐Ramirez 2006 {published data only}
- Moreno‐Ramirez D, Ferrandiz L, Galdeano R, Camacho FM. Teledermatoscopy as a triage system for pigmented lesions: a pilot study. Clinical and Experimental Dermatology 2006;31(1):13‐8. [DOI] [PubMed] [Google Scholar]
Moreno‐Ramirez 2007 {published data only}
- Moreno‐Ramirez D, Ferrandiz L, Nieto‐Garcia A, Carrasco R, Moreno‐Alvarez P, Galdeano R, et al. Store‐and‐forward teledermatology in skin cancer triage: experience and evaluation of 2009 teleconsultations. Archives of Dermatology 2007;143(4):479‐84. [DOI] [PubMed] [Google Scholar]
Moreno‐Ramirez 2009 {published data only}
- Moreno‐Ramirez D, Ferrandiz L, Ruiz‐de‐Casas A, Nieto‐Garcia A, Moreno‐Alvarez P, Galdeano R, et al. Economic evaluation of a store‐and‐forward teledermatology system for skin cancer patients. Journal of Telemedicine and Telecare 2009;15(1):40‐5. [DOI] [PubMed] [Google Scholar]
Ndegwa 2010 {published data only}
- Ndegwa S, Prichett‐Pejic W, McGill S, Murphy G, Severn M. Teledermatology services: rapid review of diagnostic, clinical management, and economic outcomes. Ottawa: Canadian Agency for Drugs and Technologies in Health (CADTH), 2010. [Google Scholar]
Nordal 2001 {published data only}
- Nordal EJ, Moseng D, Kvammen B, Løchen ML. A comparative study of teleconsultations versus face‐to‐face consultations. Journal of Telemedicine and Telecare 2001;7(5):257‐65. [DOI] [PubMed] [Google Scholar]
Oakley 1997 {published data only}
- Oakley AM, Astwood DR, Loane M, Duffill MB, Rademaker M, Wootton R. Diagnostic accuracy of teledermatology: results of a preliminary study in New Zealand. New Zealand Medical Journal 1997;110(1038):51‐3. [PubMed] [Google Scholar]
Oakley 2006 {published data only}
- Oakley AM, Reeves F, Bennett J, Holmes SH, Wickham H. Diagnostic value of written referral and/or images for skin lesions. Journal of Telemedicine and Telecare 2006;12(3):151‐8. [DOI] [PubMed] [Google Scholar]
Oztas 2004 {published data only}
- Oztas MO, Calikoglu E, Baz K, Birol A, Onder M, Calikoglu T, et al. Reliability of Web‐based teledermatology consultations. Journal of Telemedicine and Telecare 2004;10(1):25‐8. [DOI] [PubMed] [Google Scholar]
Pak 1999 {published data only}
- Pak HS, Welch M, Poropatich R. Web‐based teledermatology consult system: preliminary results from the first 100 cases. Studies in Health Technology and Informatics 1999;64:179‐84. [PubMed] [Google Scholar]
Pak 2002 {published data only}
- Pak HS. Teledermoscopy: a specific application of teledermotology. Skinmed 2002;1(1):18‐9. [DOI] [PubMed] [Google Scholar]
Pak 2003a {published data only}
- Pak HS, Harden D, Cruess D, Welch ML, Poropatich R. Teledermatology: an intraobserver diagnostic correlation study, part I. Cutis 2003;71(5):399‐403. [PubMed] [Google Scholar]
Pak 2003b {published data only}
- Pak HS, Harden D, Cruess D, Welch ML, Poropatich R. Teledermatology: an intraobserver diagnostic correlation study, part II. Cutis 2003;71(6):476‐80. [PubMed] [Google Scholar]
Pak 2007 {published data only}
- Pak H, Triplett CA, Lindquist JH, Grambow SC, Whited JD. Store‐and‐forward teledermatology results in similar clinical outcomes to conventional clinic‐based care. Journal of Telemedicine and Telecare 2007;13(1):26‐30. [DOI] [PubMed] [Google Scholar]
Pak 2009 {published data only}
- Pak HS, Datta SK, Triplett CA, Lindquist JH, Grambow SC, Whited JD. Cost minimization analysis of a store‐and‐forward teledermatology consult system. Telemedicine Journal and E‐health 2009;15(2):160‐5. [DOI] [PubMed] [Google Scholar]
Patro 2015 {published data only}
- Patro B, Tripathy J, De D, Sinha S, Singh A, Kanwar AJ. Diagnostic agreement between a primary care physician and a teledermatologist for common dermatological conditions in North India. Indian Dermatology Online Journal 2015;6(1):21‐6. [DOI] [PMC free article] [PubMed] [Google Scholar]
Perednia 1998 {published data only}
- Perednia DA, Wallace J, Morrisey M, Bartlett M, Marchionda L, Gibson A, et al. The effect of a teledermatology program on rural referral patterns to dermatologists and the management of skin disease. Studies in Health Technology and Informatics 1998;52(Pt 1):290‐3. [PubMed] [Google Scholar]
Phillips 1997 {published data only}
- Phillips CM, Burke WA, Shechter A, Stone D, Balch D, Gustke S. Reliability of dermatology teleconsultations with the use of teleconferencing technology. Journal of the American Academy of Dermatology 1997;37(3):398‐402. [DOI] [PubMed] [Google Scholar]
Piccolo 1999 {published data only}
- Piccolo D, Smolle J, Wolf IH, Peris K, Hofmann‐Wellenhof R, Dell'Eva G, et al. Face‐to‐face diagnosis vs telediagnosis of pigmented skin tumors: a teledermoscopic study. Archives of Dermatology 1999;135(12):1467‐71. [DOI] [PubMed] [Google Scholar]
Piccolo 2002 {published data only}
- Piccolo D, Peris K, Chimenti S, Argenziano G, Soyer HP. Jumping into the future using teledermoscopy. Skinmed 2002;1(1):20‐4. [DOI] [PubMed] [Google Scholar]
Rashid 2003 {published data only}
- Rashid E, Ishtiaq O, Gilani S, Zafar A. Comparison of store and forward method of teledermatology with face‐to‐face consultation. Journal of Ayub Medical College, Abbottabad 2003;15(2):34‐6. [PubMed] [Google Scholar]
Ribas 2010 {published data only}
- Ribas J, Graça Souza Cunha M, Schettini AP, Barros da Rocha Ribas C. Agreement between dermatological diagnoses obtained by in‐person consultation and by analysis of digital images [Concordância entre diagnósticos dermatológicos obtidos por consulta presencial e por análise de imagens digitais]. Anais Brasileiros de Dermatologia 2010;85(4):441‐7. [DOI] [PubMed] [Google Scholar]
Romero 2010 {published data only}
- Romero G, Sanchez P, Garcia M, Cortina P, Vera E, Garrido JA. Randomized controlled trial comparing store‐and‐forward teledermatology alone and in combination with web‐camera videoconferencing. Clinical and Experimental Dermatology 2010;35(3):311‐7. [DOI] [PubMed] [Google Scholar]
Romero 2014 {published data only}
- Romero Aguilera G, Cortina de la Calle P, Vera Iglesias E, Sánchez Caminero P, García Arpa M, Garrido Martín JA. Interobserver reliability of store‐and‐forward teledermatology in a clinical practice setting. Actas Dermo‐sifiliograficas 2014;105(6):605‐13. [DOI] [PubMed] [Google Scholar]
Seidenari 2004 {published data only}
- Seidenari S, Pellacani G, Righi E, Nardo A. Is JPEG compression of videomicroscopic images compatible with telediagnosis? Comparison between diagnostic performance and pattern recognition on uncompressed TIFF images and JPEG compressed ones. Telemedicine Journal and E‐health 2004;10(3):294‐303. [DOI] [PubMed] [Google Scholar]
Senel 2013 {published data only}
- Senel E, Baba M, Durdu M. The contribution of teledermatoscopy to the diagnosis and management of non‐melanocytic skin tumours. Journal of Telemedicine and Telecare 2013;19(1):60‐3. [DOI] [PubMed] [Google Scholar]
Shin 2014 {published data only}
- Shin H, Kim DH, Ryu HH, Yoon SY, Jo SJ. Teledermatology consultation using a smartphone multimedia messaging service for common skin diseases in the Korean army: a clinical evaluation of its diagnostic accuracy. Journal of Telemedicine and Telecare 2014;20(2):70‐4. [DOI] [PubMed] [Google Scholar]
Tait 1999 {published data only}
- Tait CP, Clay CD. Pilot study of store and forward teledermatology services in Perth, Western Australia. Australasian Journal of Dermatology 1999;40(4):190‐3. [DOI] [PubMed] [Google Scholar]
Tan 2010a {published data only}
- Tan E, Oakley A, Soyer HP, Haskett M, Marghoob A, Jameson M, et al. Interobserver variability of teledermoscopy: an international study. British Journal of Dermatology 2010;163(6):1276‐81. [DOI] [PubMed] [Google Scholar]
Tan 2010b {published data only}
- Tan E, Yung A, Jameson M, Oakley A, Rademaker M. Successful triage of patients referred to a skin lesion clinic using teledermoscopy (IMAGE IT trial). British Journal of Dermatology 2010;162(4):803‐11. [DOI] [PubMed] [Google Scholar]
Tandjung 2015 {published data only}
- Tandjung R, Badertscher N, Kleiner N, Wensing M, Rosemann T, Braun RP, et al. Feasibility and diagnostic accuracy of teledermatology in Swiss primary care: process analysis of a randomized controlled trial. Journal of Evaluation in Clinical Practice 2015;21(2):326‐31. [DOI] [PubMed] [Google Scholar]
Taylor 2001 {published data only}
- Taylor P, Goldsmith P, Murray K, Harris D, Barkley A. Evaluating a telemedicine system to assist in the management of dermatology referrals. British Journal of Dermatology 2001;144(2):328‐33. [DOI] [PubMed] [Google Scholar]
Tucker 2005 {published data only}
- Tucker WF, Lewis FM. Digital imaging: a diagnostic screening tool?. International Journal of Dermatology 2005;44(6):479‐81. [DOI] [PubMed] [Google Scholar]
van der Heijden 2013 {published data only}
- van der Heijden JP, Thijssing L, Witkamp L, Spuls PI, Keizer NF. Accuracy and reliability of teledermatoscopy with images taken by general practitioners during everyday practice. Journal of Telemedicine and Telecare 2013;19(6):320‐5. [DOI] [PubMed] [Google Scholar]
Vano‐Galvan 2011 {published data only}
- Vano‐Galvan S, Hidalgo A, Aguayo‐Leiva I, Gil‐Mosquera M, Rios‐Buceta L, Plana MN, et al. (Store‐and‐forward teledermatology: assessment of validity in a series of 2000 observations). Actas Dermo‐Sifiliograficas 2011;102(4):277‐83. [DOI] [PubMed] [Google Scholar]
Warshaw 2009a {published data only}
- Warshaw EM, Lederle FA, Grill JP, Gravely AA, Bangerter AK, Fortier LA, et al. Accuracy of teledermatology for pigmented neoplasms. Journal of the American Academy of Dermatology 2009;61(5):753‐65. [DOI] [PubMed] [Google Scholar]
Warshaw 2009b {published data only}
- Warshaw EM, Lederle FA, Grill JP, Gravely AA, Bangerter AK, Fortier LA, et al. Accuracy of teledermatology for nonpigmented neoplasms. Journal of the American Academy of Dermatology 2009;60(4):579‐88. [DOI] [PubMed] [Google Scholar]
Warshaw 2010a {published data only}
- Warshaw EM, Gravely AA, Bohjanen KA, Chen K, Lee PK, Rabinovitz HS, et al. Interobserver accuracy of store and forward teledermatology for skin neoplasms. Journal of the American Academy of Dermatology 2010;62(3):513‐6. [DOI] [PubMed] [Google Scholar]
Warshaw 2015 {published data only}
- Warshaw EM, Gravely AA, Nelson DB. Reliability of store and forward teledermatology for skin neoplasms. Journal of the American Academy of Dermatology 2015;72(3):426‐35. [DOI] [PubMed] [Google Scholar]
Watson 2010 {published data only}
- Watson AJ, Bergman H, Williams CM, Kvedar JC. A randomized trial to evaluate the efficacy of online follow‐up visits in the management of acne. Archives of Dermatology 2010;146(4):406‐11. [DOI] [PubMed] [Google Scholar]
Weingast 2013 {published data only}
- Weingast J, Scheibböck C, Wurm EM, Ranharter E, Porkert S, Dreiseitl S, et al. A prospective study of mobile phones for dermatology in a clinical setting. Journal of Telemedicine and Telecare 2013;19(4):213‐8. [DOI] [PubMed] [Google Scholar]
Weinstock 2002 {published data only}
- Weinstock MA, Nguyen FQ, Risica PM. Patient and referring provider satisfaction with teledermatology. Journal of the American Academy of Dermatology 2002;47(1):68‐72. [DOI] [PubMed] [Google Scholar]
Weinstock 2009 {published data only}
- Weinstock MA. Evaluation of in‐person dermatology versus teledermatology. Journal of the American Academy of Dermatology 2009;61(5):902‐3. [DOI] [PubMed] [Google Scholar]
Whited 1999 {published data only}
- Whited JD, Hall RP, Simel DL, Foy ME, Stechuchak KM, Drugge RJ, et al. Reliability and accuracy of dermatologists’ clinic‐based and digital image consultations. Journal of the American Academy of Dermatology 1999;41(5):693‐702. [DOI] [PubMed] [Google Scholar]
Whited 2002 {published data only}
- Whited JD, Hall RP, Foy ME, Marbrey LE, Grambow SC, Dudley TK, et al. Teledermatology's impact on time to intervention among referrals to a dermatology consult service. Telemedicine Journal and E‐health 2002;8(3):313‐21. [DOI] [PubMed] [Google Scholar]
Whited 2003 {published data only}
- Whited JD, Datta S, Hall RP, Foy ME, Marbrey LE, Grambow SC, et al. An economic analysis of a store and forward teledermatology consult system. Telemedicine Journal and E‐health 2003;9(4):351‐60. [DOI] [PubMed] [Google Scholar]
Whited 2004 {published data only}
- Whited JD, Hall RP, Foy ME, Marbrey LE, Grambow SC, Dudley TK, et al. Patient and clinician satisfaction with a store‐and‐forward teledermatology consult system. Telemedicine Journal and E‐health 2004;10(4):422‐31. [DOI] [PubMed] [Google Scholar]
Whited 2006 {published data only}
- Whited JD. Teledermatology research review. International Journal of Dermatology 2006;45(3):220‐9. [DOI] [PubMed] [Google Scholar]
Whited 2010 {published data only}
- Whited JD. Economic analysis of telemedicine and the teledermatology paradigm. Telemedicine Journal and E‐health 2010;16(2):223‐8. [DOI] [PubMed] [Google Scholar]
Whited 2016 {published data only}
- Whited JD. Diagnosis and management of pigmented skin lesions using teledermatology. Current Dermatology Reports 2016;5(2):90‐5. [Google Scholar]
Williams 2001 {published data only}
- Williams T, May C, Esmail A, Ellis N, Griffiths C, Stewart E, et al. Patient satisfaction with store‐and‐forward teledermatology. Journal of Telemedicine and Telecare 2001;7(Suppl 1):45‐6. [DOI] [PubMed] [Google Scholar]
Williams 2007 {published data only}
- Williams CM, Qureshi A, Geller A, Kvedar J. Skin cancer education program via teledermatology: Is it effective?. Journal of the American Academy of Dermatology 2007;56(2):AB100. [Google Scholar]
Wootton 2000 {published data only}
- Wootton R, Bloomer SE, Corbett R, Eedy DJ, Hicks N, Lotery HE, et al. Multicentre randomised control trial comparing real time teledermatology with conventional outpatient dermatological care: societal cost‐benefit analysis. BMJ 2000;320(7244):1252‐6. [DOI] [PMC free article] [PubMed] [Google Scholar]
Zelickson 1997 {published data only}
- Zelickson BD, Homan L. Teledermatology in the nursing home. Archives of Dermatology 1997;133(2):171‐4. [PubMed] [Google Scholar]
Additional references
Abbasi 2004
- Abbasi NR, Shaw HM, Rigel DS, Friedman RJ, McCarthy WH, Osman I, et al. Early diagnosis of cutaneous melanoma: revisiting the ABCD criteria. JAMA 2004;292(22):2771‐6. [DOI] [PubMed] [Google Scholar]
ACIM 2017
- Australian Cancer Database. Melanoma of the skin for Australia (ICD10 C43). Australian Cancer Incidence and Mortality (ACIM) Books. Canberra (Australia): Australian Institute of Health and Welfare, 2017. [Google Scholar]
Alam 2001
- Alam M, Ratner D. Cutaneous squamous‐cell carcinoma (review). New England Journal of Medicine 2001;344(13):975‐83. [PUBMED: 11274625] [DOI] [PubMed] [Google Scholar]
Aldridge 2013
- Aldridge RB, Naysmith L, Ooi ET, Murray CS, Rees JL. The importance of a full clinical examination: assessment of index lesions referred to a skin cancer clinic without a total body skin examination would miss one in three melanomas. Acta Dermato‐venereologica 2013;93(6):689‐92. [PUBMED: 23695107] [DOI] [PMC free article] [PubMed] [Google Scholar]
Altamura 2008
- Altamura D, Avramidis M, Menzies SW. Assessment of the optimal interval for and sensitivity of short‐term sequential digital dermoscopy monitoring for the diagnosis of melanoma. Archives of Dermatology 2008;144(4):502‐6. [PUBMED: 18427044] [DOI] [PubMed] [Google Scholar]
Argenziano 2012
- Argenziano G, Zalaudek I, Hofmann‐Wellenhof R, Bakos RM, Bergman W, Blum A, et al. Total body skin examination for skin cancer screening in patients with focused symptoms. Journal of the American Academy of Dermatology 2012;66(2):212‐19. [PUBMED: 21757257] [DOI] [PubMed] [Google Scholar]
Arits 2013
- Arits AH, Mosterd K, Essers BA, Spoorenberg E, Sommer A, Rooij MJ, et al. Photodynamic therapy versus topical imiquimod versus topical fluorouracil for treatment of superficial basal‐cell carcinoma: a single blind, non‐inferiority, randomised controlled trial. Lancet Oncology 2013;14(7):647‐54. [DOI: 10.1016/S1470-2045(13)70143-8] [DOI] [PubMed] [Google Scholar]
Arnold 2014
- Arnold M, Holterhues C, Hollestein LM, Coebergh JW, Nijsten T, Pukkala E, et al. Trends in incidence and predictions of cutaneous melanoma across Europe up to 2015. Journal of the European Academy of Dermatology & Venereology 2014;28(9):1170‐8. [PUBMED: 23962170] [DOI] [PubMed] [Google Scholar]
Baldursson 1993
- Baldursson B, Sigurgeirsson B, Lindelof B. Leg ulcers and squamous cell carcinoma. An epidemiological study and a review of the literature. Acta Dermato‐venereologica 1993;73(3):171‐4. [PUBMED: 8105611] [DOI] [PubMed] [Google Scholar]
Bath‐Hextall 2007a
- Bath‐Hextall F, Leonardi‐Bee J, Smith C, Meal A, Hubbard R. Trends in incidence of skin basal cell carcinoma. Additional evidence from a UK primary care database study. International Journal of Cancer 2007;121(9):2105‐8. [PUBMED: 17640064] [DOI] [PubMed] [Google Scholar]
Bath‐Hextall 2007b
- Bath‐Hextall FJ, Perkins W, Bong J, Williams HC. Interventions for basal cell carcinoma of the skin. Cochrane Database of Systematic Reviews 2007, Issue 1. [DOI: 10.1002/14651858.CD003412.pub2] [DOI] [PubMed] [Google Scholar]
Bath‐Hextall 2014
- Bath‐Hextall F, Ozolins M, Armstrong SJ, Colver GB, Perkins W, Miller PS, et al. Surgical excision versus imiquimod 5% cream for nodular and superficial basal‐cell carcinoma (SINS): a multicentre, non‐inferiority, randomised controlled trial. Lancet Oncology 2014;15(1):96‐105. [PUBMED: 24332516] [DOI] [PubMed] [Google Scholar]
Batra 2002
- Batra RS, Kelley LC. A risk scale for predicting extensive subclinical spread of nonmelanoma skin cancer. Dermatologic Surgery 2002;28(2):107‐12; discussion 112. [PUBMED: 11860418] [DOI] [PubMed] [Google Scholar]
Baxter 2012
- Baxter JM, Patel AN, Varma S. Facial basal cell carcinoma. BMJ 2012;345:e5342. [DOI: 10.1136/bmj.e5342; PUBMED: 22915688] [DOI] [PubMed] [Google Scholar]
Belbasis 2016
- Belbasis L, Stefanaki I, Stratigos AJ, Evangelou E. Non‐genetic risk factors for cutaneous melanoma and keratinocyte skin cancers: an umbrella review of meta‐analyses. Journal of Dermatological Science 2016;84(3):330‐9. [DOI] [PubMed] [Google Scholar]
Benelli 1999
- Benelli C, Roscetti E, Pozzo VD, Gasparini G, Cavicchini S. The dermoscopic versus the clinical diagnosis of melanoma. European Journal of Dermatology 1999;9(6):470‐6. [PUBMED: 10491506] [PubMed] [Google Scholar]
Benelli 2001
- Benelli C, Roscetti E, Dal Pozzo V. Reproducibility of the clinical criteria (ABCDE rule) and dermatoscopic features (7FFM) for the diagnosis of malignant melanoma. European Journal of Dermatology 2001;11(3):234‐9. [ER4:18375028; PUBMED: 11358731] [PubMed] [Google Scholar]
Binder 1997
- Binder M, Schwarz M, Steiner A, Kittler H, Muellner M, Wolff K, et al. Epiluminescence microscopy of small pigmented skin lesions: short‐term formal training improves the diagnostic performance of dermatologists. Journal of the American Academy of Dermatology 1997;36(2 Pt 1):197‐202. [PUBMED: 9039168] [DOI] [PubMed] [Google Scholar]
Bono 2002
- Bono A, Bartoli C, Cascinelli N, Lualdi M, Maurichi A, Moglia D, et al. Melanoma detection. A prospective study comparing diagnosis with the naked eye, dermatoscopy and telespectrophotometry. Dermatology 2002;205(4):362‐6. [PUBMED: 12444332] [DOI] [PubMed] [Google Scholar]
Bono 2006
- Bono A, Tolomio E, Trincone S, Bartoli C, Tomatis S, Carbone A, et al. Micro‐melanoma detection: a clinical study on 206 consecutive cases of pigmented skin lesions with a diameter < or = 3 mm. British Journal of Dermatology 2006;155(3):570‐3. [DOI] [PubMed] [Google Scholar]
Boring 1994
- Boring CC, Squires TS, Tong T, Montgomery S. Cancer statistics, 1994. CA: a Cancer Journal for Clinicians 1994;44(1):7‐26. [PUBMED: 8281473] [DOI] [PubMed] [Google Scholar]
Bossuyt 2015
- Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015;351:h5527. [DOI: 10.1136/bmj.h5527; PUBMED: 26511519] [DOI] [PMC free article] [PubMed] [Google Scholar]
Cancer Research UK 2017
- Cancer Research UK. Skin cancer statistics. www.cancerresearchuk.org/health‐professional/cancer‐statistics/statistics‐by‐cancer‐type/skin‐cancer (accessed prior to 19 July 2017).
Carli 1994
- Carli P, Giorgi V, Donati E, Pestelli E, Giannotti B. Epiluminescence microscopy reduces the risk of removing clinically atypical, but histologically common, melanocytic lesions [La microscopia a epiluminescenza (Elm) riduce il rischio di asportare lesioni melanocitarie clinicamente sospette ma istologicamente comuni]. Giornale Italiano di Dermatologia e Venereologia 1994;129(12):599‐605. [ER4:18375075] [Google Scholar]
Carli 2002
- Carli P, Giorgi V, Argenziano G, Palli D, Giannotti B. Pre‐operative diagnosis of pigmented skin lesions: in vivo dermoscopy performs better than dermoscopy on photographic images. Journal of the European Academy of Dermatology & Venereology 2002;16(4):339‐46. [DOI] [PubMed] [Google Scholar]
Carter 2013
- Carter JB, Johnson MM, Chua TL, Karia PS, Schmults CD. Outcomes of primary cutaneous squamous cell carcinoma with perineural invasion: an 11‐year cohort study. JAMA Dermatology 2013;149(1):35‐41. [DOI: 10.1001/jamadermatol.2013.746; PUBMED: 23324754] [DOI] [PubMed] [Google Scholar]
CCAAC Network 2008
- Cancer Council Australia & Australian Cancer Network. Basal Cell Carcinoma, Squamous Cell Carcinoma (and related lesions) ‐ a guide to clinical management in Australia. www.cancer.org.au/content/pdf/HealthProfessionals/ClinicalGuidelines/Basal_cell_carcinoma_Squamous_cell_carcinoma_Guide_Nov_2008‐Final_with_Corrigendums.pdf. Sydney: Cancer Council Australia & Australian Cancer Network, (accessed 19 May 2015).
Chao 2014
- Chao D, London Cancer North and East. London cancer, guidelines for cutaneous malignant melanoma management August 2014. www.londoncancer.org/media/76373/london‐cancer‐melanoma‐guidelines‐2013‐v1.0.pdf. London: London Cancer North and East Alliance, (accessed 25 February 2015).
Cho 2014
- Cho H, Mariotto AB, Schwartz LM, Luo J, Woloshin S. When do changes in cancer survival mean progress? The insight from population incidence and mortality. Journal of the National Cancer Institute. Monographs 2014;2014(49):187‐97. [PUBMED: 25417232] [DOI] [PMC free article] [PubMed] [Google Scholar]
Chowdri 1996
- Chowdri NA, Darzi MA. Postburn scar carcinomas in Kashmiris. Burns 1996;22(6):477‐82. [PUBMED: 8884010] [DOI] [PubMed] [Google Scholar]
Chu 2006
- Chu H, Cole SR. Bivariate meta‐analysis for sensitivity and specificity with sparse data: a generalized linear mixed model approach (comment). Journal of Clinical Epidemiology 2006;59(12):1331‐2. [PUBMED: 17098577] [DOI] [PubMed] [Google Scholar]
Chuchu 2018
- Chuchu N, Takwoingi Y, Dinnes J, Matin RN, Bassett O, Moreau JF, et al. Smartphone applications for triaging adults with skin lesions that are suspicious for melanoma. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013192] [DOI] [PMC free article] [PubMed] [Google Scholar]
Cristofolini 1994
- Cristofolini M, Zumiani G, Bauer P, Cristofolini P, Boi S, Micciolo R. Dermatoscopy: usefulness in the differential diagnosis of cutaneous pigmentary lesions. Melanoma Research 1994;4(6):391‐4. [PUBMED: 7703719] [DOI] [PubMed] [Google Scholar]
Dabski 1986
- Dabski K, Stoll HL Jr, Milgrom H. Squamous cell carcinoma complicating late chronic discoid lupus erythematosus. Journal of Surgical Oncology 1986;32(4):233‐7. [PUBMED: 3736067] [DOI] [PubMed] [Google Scholar]
Deeks 2005
- Deeks JJ, Macaskill P, Irwig L. The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. Journal of Clinical Epidemiology 2005;58(9):882‐93. [PUBMED: 16085191] [DOI] [PubMed] [Google Scholar]
Dinnes 2018a
- Dinnes J, Deeks JJ, Chuchu N, Ferrante di Ruffano L, Matin RN, Thomson DR, et al. Dermoscopy, with and without visual inspection, for diagnosing melanoma in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD011902.pub2] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dinnes 2018b
- Dinnes J, Deeks JJ, Grainge MJ, Chuchu N, Ferrante di Ruffano L, Matin RN, et al. Visual inspection for diagnosing cutaneous melanoma in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013194] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dinnes 2018c
- Dinnes J, Deeks JJ, Chuchu N, Matin RN, Wong KY, Aldridge RB, et al. Visual inspection and dermoscopy, alone or in combination, for diagnosing keratinocyte skin cancers in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD011901.pub2] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dinnes 2018d
- Dinnes J, Deeks JJ, Saleh D, Chuchu N, Bayliss SE, Patel L, et al. Reflectance confocal microscopy for diagnosing cutaneous melanoma in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013190] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dinnes 2018e
- Dinnes J, Deeks JJ, Chuchu N, Saleh D, Bayliss SE, Takwoingi Y, et al. Reflectance confocal microscopy for diagnosing keratinocyte skin cancers in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013191] [DOI] [PMC free article] [PubMed] [Google Scholar]
Dinnes 2018f
- Dinnes J, Bamber J, Chuchu N, Bayliss SE, Takwoingi Y, Davenport C, et al. High‐frequency ultrasound for diagnosing skin cancer in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013188] [DOI] [PMC free article] [PubMed] [Google Scholar]
Drew 2017
- Drew BA, Karia PS, Mora AN, Liang CA, Schmults CD. Treatment patterns, outcomes, and patient satisfaction of primary epidermally limited nonmelanoma skin cancer. Dermatologic Surgery 2017;43(12):1423‐30. [DOI: 10.1097/DSS.0000000000001225; PUBMED: 28661992] [DOI] [PubMed] [Google Scholar]
Drucker 2017
- Drucker A, Adam GP, Langberg V, Gazula A, Smith B, Moustafa F, et al. Treatments for Basal Cell and Squamous Cell Carcinoma of the Skin. Comparative Effectiveness Reviews, No. 199. Rockville (MD): Agency for Healthcare Research and Quality, 2017. [PubMed] [Google Scholar]
Erdmann 2013
- Erdmann F, Lortet‐Tieulent J, Schuz J, Zeeb H, Greinert R, Breitbart EW, et al. International trends in the incidence of malignant melanoma 1953‐2008 ‐ are recent generations at higher or lower risk?. International Journal of Cancer 2013;132(2):385‐400. [PUBMED: 22532371] [DOI] [PubMed] [Google Scholar]
EUCAN 2012
- EUCAN, International Agency for Research on Cancer. Malignant melanoma of skin: estimated incidence, mortality & prevalence for both sexes, 2012. eco.iarc.fr/eucan/Cancer.aspx?Cancer=20. International Agency for Research on Cancer, (accessed 29 July 2015).
Fasching 1989
- Fasching MC, Meland NB, Woods JE, Wolff BG. Recurrent squamous‐cell carcinoma arising in pilonidal sinus tract ‐ multiple flap reconstructions. Report of a case. Diseases of the Colon and Rectum 1989;32(2):153‐8. [PUBMED: 2914529] [DOI] [PubMed] [Google Scholar]
Ferlay 2015
- Ferlay J, Soerjomataram I, Dikshit R, Eser S, Mathers C, Rebelo M, et al. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. International Journal of Cancer 2015;136(5):E359‐86. [PUBMED: 25220842] [DOI] [PubMed] [Google Scholar]
Ferrante di Ruffano 2018a
- Ferrante di Ruffano L, Dinnes J, Deeks JJ, Chuchu N, Bayliss SE, Davenport C, et al. Optical coherence tomography for diagnosing skin cancer in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013189] [DOI] [PMC free article] [PubMed] [Google Scholar]
Ferrante di Ruffano 2018b
- Ferrante di Ruffano L, Takwoingi Y, Dinnes J, Chuchu N, Bayliss SE, Davenport C, et al. Computer‐assisted diagnosis techniques (dermoscopy and spectroscopy‐based) for diagnosing skin cancer in adults. Cochrane Database of Systematic Reviews 2018, Issue 12. [DOI: 10.1002/14651858.CD013186] [DOI] [PMC free article] [PubMed] [Google Scholar]
Firnhaber 2012
- Firnhaber JM. Diagnosis and treatment of basal cell and squamous cell carcinoma. American Family Physician 2012;86(2):161‐8. [PubMed] [Google Scholar]
Fitzpatrick 1975
- Fitzpatrick TB. Sun and skin [Soleil et peau]. Journal de Médecine Esthétique 1975;2:33‐4. [Google Scholar]
Friedman 1985
- Friedman RJ, Rigel DS, Kopf AW. Early detection of malignant melanoma: the role of physician examination and self‐examination of the skin. CA: a Cancer Journal for Clinicians 1985;35(3):130‐51. [PUBMED: 3921200] [DOI] [PubMed] [Google Scholar]
Garbe 2016
- Garbe C, Peris K, Hauschild A, Saiag P, Middleton M, Bastholt L, et al. Diagnosis and treatment of melanoma. European consensus‐based interdisciplinary guideline ‐ update 2016. European Journal of Cancer 2016;63:201‐17. [PUBMED: 27367293] [DOI] [PubMed] [Google Scholar]
Garcia 2009
- Garcia C, Poletti E, Crowson AN. Basosquamous carcinoma. Journal of the American Academy of Dermatology 2009;60(1):137‐43. [PUBMED: 19103364] [DOI] [PubMed] [Google Scholar]
Gerbert 2000
- Gerbert B, Bronstone A, Maurer T, Hofmann R, Berger T. Decision support software to help primary care physicians triage skin cancer: a pilot study. Archives of Dermatology 2000;136(2):187‐92. [PUBMED: 10677094] [DOI] [PubMed] [Google Scholar]
Gershenwald 2017
- Gershenwald JE, Scolyer RA, Hess KR, Sondak VK, Long GV, Ross MI, et al. Melanoma staging: evidence‐based changes in the American Joint Committee on Cancer eighth edition cancer staging manual. CA: A Cancer Journal for Clinicians 2017;67(6):472‐92. [DOI: 10.3322/caac.21409] [DOI] [PMC free article] [PubMed] [Google Scholar]
Gordon 2013
- Gordon R. Skin cancer: an overview of epidemiology and risk factors. Seminars in Oncology Nursing 2013;29(3):160‐9. [PUBMED: 23958214] [DOI] [PubMed] [Google Scholar]
Gorlin 2004
- Gorlin RJ. Nevoid basal cell carcinoma (Gorlin) syndrome. Genetics in Medicine 2004;6(6):530‐9. [PUBMED: 15545751] [DOI] [PubMed] [Google Scholar]
Grachtchouk 2011
- Grachtchouk M, Pero J, Yang SH, Ermilov AN, Michael LE, Wang A, et al. Basal cell carcinomas in mice arise from hair follicle stem cells and multiple epithelial progenitor populations. Journal of Clinical Investigation 2011;121(5):1768‐81. [PUBMED: 21519145] [DOI] [PMC free article] [PubMed] [Google Scholar]
Green 1988
- Green A, Leslie D, Weedon D. Diagnosis of skin cancer in the general population: clinical accuracy in the Nambour survey. Medical Journal of Australia 1988;148(9):447‐50. [PUBMED: 3283506] [DOI] [PubMed] [Google Scholar]
Griffin 2016
- Griffin LL, Ali FR, Lear JT. Non‐melanoma skin cancer. Clinical Medicine 2016;16(1):62‐5. [PUBMED: 26833519] [DOI] [PMC free article] [PubMed] [Google Scholar]
Griffiths 2005
- Griffiths RW, Suvarna SK, Stone J. Do basal cell carcinomas recur after complete conventional surgical excision?. British Journal of Plastic Surgery 2005;58(6):795‐805. [PUBMED: 16086990] [DOI] [PubMed] [Google Scholar]
Grob 1998
- Grob JJ, Bonerandi JJ. The 'ugly duckling' sign: identification of the common characteristics of nevi in an individual as a basis for melanoma screening. Archives of Dermatology 1998;134(1):103‐4. [PUBMED: 9449921] [DOI] [PubMed] [Google Scholar]
Hanson 2016
- Hanson JL, Kingsley‐Loso JL, Grey KR, Raju SI, Parks PR, Bershow AL, et al. Incidental melanomas detected in veterans referred to dermatology. Journal of the American Academy of Dermatology 2016;74(3):462‐9. [PUBMED: 26612677] [DOI] [PubMed] [Google Scholar]
Hartevelt 1990
- Hartevelt MM, Bavinck JN, Kootte AM, Vermeer BJ, Vandenbroucke JP. Incidence of skin cancer after renal transplantation in The Netherlands. Transplantation 1990;49(3):506‐9. [PUBMED: 2316011] [DOI] [PubMed] [Google Scholar]
Healsmith 1994
- Healsmith MF, Bourke JF, Osborne JE, Graham‐Brown RA. An evaluation of the revised seven‐point checklist for the early diagnosis of cutaneous malignant melanoma. British Journal of Dermatology 1994;130(1):48‐50. [DOI] [PubMed] [Google Scholar]
Hoorens 2016
- Hoorens I, Vossaert K, Pil L, Boone B, Schepper S, Ongenae K, et al. Total‐body examination vs lesion‐directed skin cancer screening. JAMA Dermatology 2016;152(1):27‐34. [PUBMED: 26466155] [DOI] [PubMed] [Google Scholar]
HPA and MelNet NZ 2014
- Health Promotion Agency and the Melanoma Network of New Zealand (MelNet). New Zealand skin cancer primary prevention and early detection strategy 2014 to 2017. www.sunsmart.org.nz//sites/default/files/documents/NZ%20Skin%20Cancer%20PrimaryPrevention%20and%20EarlyDetection%20Strategy%202014%20to%202017%20FINAL%20VERSION%20%23406761.pdf. Cancer Society of New Zealand, (accessed 29 May 2018).
Janda 2015
- Janda M. Teledermatology: its use in the detection and management of actinic keratosis. Current Problems in Dermatology 2015;46:101‐7. [PUBMED: 25561213] [DOI] [PubMed] [Google Scholar]
Jansen 2018
- Jansen MH, Mosterd K, Arits AH, Roozeboom MH, Sommer A, Essers BA, et al. Five‐year results of a randomized controlled trial comparing effectiveness of photodynamic therapy, topical imiquimod, and topical 5‐fluorouracil in patients with superficial basal cell carcinoma. Journal of Investigative Dermatology 2018;138(3):527‐33. [DOI: 10.1016/j.jid.2017.09.033; PUBMED: 29045820] [DOI] [PubMed] [Google Scholar]
Jensen 1999
- Jensen P, Hansen S, Moller B, Leivestad T, Pfeffer P, Geiran O, et al. Skin cancer in kidney and heart transplant recipients and different long‐term immunosuppressive therapy regimens. Journal of the American Academy of Dermatology 1999;40(2 Pt 1):177‐86. [PUBMED: 10025742] [DOI] [PubMed] [Google Scholar]
Kao 1986
- Kao GF. Carcinoma arising in Bowen's disease. Archives of Dermatology 1986;122(10):1124‐6. [PUBMED: 3767398] [PubMed] [Google Scholar]
Kasprzak 2015
- Kasprzak JM, Xu YG. Diagnosis and management of lentigo maligna: a review. Drugs in Context 2015;4:212281. [PUBMED: 26082796] [DOI] [PMC free article] [PubMed] [Google Scholar]
Keefe 1990
- Keefe M, Dick DC, Wakeel RA. A study of the value of the seven‐point checklist in distinguishing benign pigmented lesions from melanoma. Clinical & Experimental Dermatology 1990;15(3):167‐71. [DOI] [PubMed] [Google Scholar]
Kelleners‐Smeets 2017
- Kelleners‐Smeets NW, Mosterd K, Nelemans PJ. Treatment of low‐risk basal cell carcinoma. Journal of Investigative Dermatology 2017;137(3):539‐40. [PUBMED: 28235442] [DOI] [PubMed] [Google Scholar]
Kim 2014
- Kim DD, Tang JY, Ioannidis JP. Network geometry shows evidence sequestration for medical vs. surgical practices: treatments for basal cell carcinoma. Journal of Clinical Epidemiology 2014;67(4):391‐400. [PUBMED: 24491794] [DOI] [PubMed] [Google Scholar]
Kittler 1999
- Kittler H, Seltenheim M, Dawid M, Pehamberger H, Wolff K, Binder M. Morphologic changes of pigmented skin lesions: a useful extension of the ABCD rule for dermatoscopy. Journal of the American Academy of Dermatology 1999;40(4):558‐62. [PUBMED: 10188673] [DOI] [PubMed] [Google Scholar]
Kittler 2001
- Kittler H, Binder M. Risks and benefits of sequential imaging of melanocytic skin lesions in patients with multiple atypical nevi. Archives of Dermatology 2001;137(12):1590‐5. [PUBMED: 11735709] [DOI] [PubMed] [Google Scholar]
Kittler 2002
- Kittler H, Pehamberger H, Wolff K, Binder M. Diagnostic accuracy of dermoscopy (review). Lancet Oncology 2002;3(3):159‐65. [PUBMED: 11902502] [DOI] [PubMed] [Google Scholar]
Kjome 2017
- Kjome RLS, Wright DJ, Bjaaen AB, Garstad KW, Valeur M. Dermatological cancer screening: evaluation of a new community pharmacy service. Research in Social & Administrative Pharmacy 2017;13(6):1214‐7. [DOI: 10.1016/j.sapharm.2016.12.001; PUBMED: 27964893] [DOI] [PubMed] [Google Scholar]
Lansbury 2010
- Lansbury L, Leonardi‐Bee J, Perkins W, Goodacre T, Tweed JA, Bath‐Hextall FJ. Interventions for non‐metastatic squamous cell carcinoma of the skin. Cochrane Database of Systematic Reviews 2010, Issue 4. [DOI: 10.1002/14651858.CD007869.pub2] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lansbury 2013
- Lansbury L, Bath‐Hextall F, Perkins W, Stanton W, Leonardi‐Bee J. Interventions for non‐metastatic squamous cell carcinoma of the skin: systematic review and pooled analysis of observational studies. BMJ 2013;347:f6153. [PUBMED: 24191270] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lear 1997
- Lear JT, Tan BB, Smith AG, Bowers W, Jones PW, Heagerty AH, et al. Risk factors for basal cell carcinoma in the UK: case‐control study in 806 patients. Journal of the Royal Society of Medicine 1997;90(7):371‐4. [PUBMED: 9290417] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lear 2012
- Lear JT. Oral hedgehog‐pathway inhibitors for basal‐cell carcinoma. New England Journal of Medicine 2012;366(23):2225‐6. [PUBMED: 22670909] [DOI] [PubMed] [Google Scholar]
Lear 2014
- Lear JT, Corner C, Dziewulski P, Fife K, Ross GL, Varma S, et al. Challenges and new horizons in the management of advanced basal cell carcinoma: a UK perspective. British Journal of Cancer 2014;111(8):1476‐81. [DOI: 10.1038/bjc.2014.270; PUBMED: 25211660] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lederman 1985
- Lederman JS, Sober AJ. Does biopsy type influence survival in clinical stage I cutaneous melanoma?. Journal of the American Academy of Dermatology 1985;13(6):983‐7. [PUBMED: 4078105] [DOI] [PubMed] [Google Scholar]
Leeflang 2013
- Leeflang MM, Rutjes AW, Reitsma JB, Hooft L, Bossuyt PM. Variation of a test's sensitivity and specificity with disease prevalence. CMAJ 2013;185(11):E537‐44. [PUBMED: 23798453] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lees 1991
- Lees VC, Briggs JC. Effect of initial biopsy procedure on prognosis in stage I invasive cutaneous malignant melanoma: review of 1086 patients. British Journal of Surgery 1991;78(9):1108‐10. [PUBMED: 1933198] [DOI] [PubMed] [Google Scholar]
Leff 2008
- Leff B, Finucane TE. Gizmo idolatry. JAMA 2008;299(15):1830‐2. [PUBMED: 18413879] [DOI] [PubMed] [Google Scholar]
Linos 2009
- Linos E, Swetter SM, Cockburn MG, Colditz GA, Clarke CA. Increasing burden of melanoma in the United States. Journal of Investigative Dermatology 2009;129(7):1666‐74. [PUBMED: 19131946] [DOI] [PMC free article] [PubMed] [Google Scholar]
Lister 1997
- Lister RK, Black MM, Calonje E, Burnand KG. Squamous cell carcinoma arising in chronic lymphoedema. British Journal of Dermatology 1997;136(3):384‐7. [PUBMED: 9115922] [PubMed] [Google Scholar]
Lo 1991
- Lo JS, Snow SN, Reizner GT, Mohs FE, Larson PO, Hruza GJ. Metastatic basal cell carcinoma: report of twelve cases with a review of the literature. Journal of the American Academy of Dermatology 1991;24(5 Pt 1):715‐9. [PUBMED: 1869642] [DOI] [PubMed] [Google Scholar]
Lomas 2012
- Lomas A, Leonardi‐Bee J, Bath‐Hextall F. A systematic review of worldwide incidence of nonmelanoma skin cancer. British Journal of Dermatology 2012;166(5):1069‐80. [PUBMED: 22251204] [DOI] [PubMed] [Google Scholar]
MacKie 1985
- MacKie RM, English J, Aitchison TC, Fitzsimons CP, Wilson P. The number and distribution of benign pigmented moles (melanocytic naevi) in a healthy British population. British Journal of Dermatology 1985;113(2):167‐74. [PUBMED: 4027184] [DOI] [PubMed] [Google Scholar]
MacKie 1990
- MacKie RM. Clinical recognition of early invasive malignant melanoma. BMJ 1990;301(6759):1005‐6. [PUBMED: 2249043] [DOI] [PMC free article] [PubMed] [Google Scholar]
Mackie 1991
- MacKie RM, Doherty VR. Seven‐point checklist for melanoma. Clinical & Experimental Dermatology 1991;16(2):151‐3. [DOI] [PubMed] [Google Scholar]
Madan 2010
- Madan V, Lear JT, Szeimies RM. Non‐melanoma skin cancer. Lancet 2010;375(9715):673‐85. [PUBMED: 20171403] [DOI] [PubMed] [Google Scholar]
Maia 1995
- Maia M, Proenca NG, Moraes JC. Risk factors for basal cell carcinoma: a case‐control study. Revista de Saude Publica 1995;29(1):27‐37. [PUBMED: 8525311] [DOI] [PubMed] [Google Scholar]
Maloney 1996
- Maloney ME. Arsenic in dermatology. Dermatologic Surgery 1996;22(3):301‐4. [PUBMED: 8599743] [DOI] [PubMed] [Google Scholar]
Marsden 2010
- Marsden JR, Newton‐Bishop JA, Burrows L, Cook M, Corrie PG, Cox NH, et al. BAD Guidelines: revised UK guidelines for the management of cutaneous melanoma 2010. British Journal of Dermatology 2010;163(2):238‐56. [PUBMED: 20608932] [DOI] [PubMed] [Google Scholar]
McCormack 1997
- McCormack CJ, Kelly JW, Dorevitch AP. Differences in age and body site distribution of the histological subtypes of basal cell carcinoma. A possible indicator of differing causes. Archives of Dermatology 1997;133(5):593‐6. [PUBMED: 9158412] [PubMed] [Google Scholar]
McCusker 2014
- McCusker M, Basset‐Seguin N, Dummer R, Lewis K, Schadendorf D, Sekulic A, et al. Metastatic basal cell carcinoma: prognosis dependent on anatomic site and spread of disease. European Journal of Cancer 2014;50(4):774‐83. [PUBMED: 24412051] [DOI] [PubMed] [Google Scholar]
McGovern 1992
- McGovern TW, Litaker MS. Clinical predictors of malignant pigmented lesions. A comparison of the Glasgow seven‐point checklist and the American Cancer Society's ABCDs of pigmented lesions. Journal of Dermatologic Surgery & Oncology 1992;18(1):22‐6. [PUBMED: 1740563] [DOI] [PubMed] [Google Scholar]
Mistry 2011
- Mistry M, Parkin DM, Ahmad AS, Sasieni P. Cancer incidence in the United Kingdom: projections to the year 2030. British Journal of Cancer 2011;105(11):1795‐803. [PUBMED: 22033277] [DOI] [PMC free article] [PubMed] [Google Scholar]
Moeckelmann 2018
- Moeckelmann N, Ebrahimi A, Dirven R, Liu J, Low TH, Gupta R, et al. Analysis and comparison of the 8th edition American Joint Committee on Cancer (AJCC) nodal staging system in cutaneous and oral squamous cell cancer of the head and neck. Annals of Surgical Oncology 2018;25(6):1730‐6. [DOI: 10.1245/s10434-018-6340-x; PUBMED: 29352431] [DOI] [PubMed] [Google Scholar]
Motley 2009
- Motley RJ, Preston PW, Lawrence CM. Multi‐professional guidelines for the management of the patient with primary cutaneous squamous cell carcinoma. www.bsds.org.uk/uploads/pdfs/SCCguide2009.pdf (accessed 15 November 2017).
Murchie 2017
- Murchie P, Amalraj Raja E, Brewster DH, Iversen L, Lee AJ. Is initial excision of cutaneous melanoma by general practitioners (GPs) dangerous? Comparing patient outcomes following excision of melanoma by GPs or in hospital using national datasets and meta‐analysis. European Journal of Cancer 2017;86:373‐84. [PUBMED: 29100192] [DOI] [PubMed] [Google Scholar]
Musah 2013
- Musah A, Gibson JE, Leonardi‐Bee J, Cave MR, Ander EL, Bath‐Hextall F. Regional variations of basal cell carcinoma incidence in the U.K. using The Health Improvement Network database (2004‐10). British Journal of Dermatology 2013;169(5):1093‐9. [PUBMED: 23701520] [DOI] [PubMed] [Google Scholar]
Nachbar 1994
- Nachbar F, Stolz W, Merkle T, Cognetta AB, Vogt T, Landthaler M, et al. The ABCD rule of dermatoscopy. High prospective value in the diagnosis of doubtful melanocytic skin lesions. Journal of the American Academy of Dermatology 1994;30(4):551‐9. [DOI] [PubMed] [Google Scholar]
Nart 2015
- Nart IF, Armayones SG, Medina FV, Orti MB, Orpinell XB. Basal cell carcinoma treated with ingenol mebutate. Journal of the American Academy of Dermatology 2015;5(Suppl 1):AB180. [Google Scholar]
Newcombe 1998
- Newcombe RG. Interval estimation for the difference between independent proportions: comparison of eleven methods. Statistics in Medicine 1998;17(8):873‐90. [DOI] [PubMed] [Google Scholar]
NICE 2010
- National Institute for Health and Clinical Excellence. NICE guidance on cancer services. Improving outcomes for people with skin tumours including melanoma (update): the management of low‐risk basal cell carcinomas in the community. www.nice.org.uk/guidance/csg8/resources/improving‐outcomes‐for‐people‐with‐skin‐tumours‐including‐melanoma‐2010‐partial‐update‐773380189. NICE, (accessed 27 November 2017).
NICE 2015a
- National Collaborating Centre for Cancer. Melanoma: assessment and management. www.nice.org.uk/guidance/ng14. London: National Institute for Health and Care Excellence, (accessed prior to 20 February 2018).
NICE 2015b
- National Institute for Health and Care Excellence. Suspected cancer: recognition and referral. www.nice.org.uk/guidance/ng12. London: National Institute for Health and Care Excellence, (accessed prior to 20 February 2018).
NICE 2017
- National Institute for Health and Care Excellence. Vismodegib for treating basal cell carcinoma. www.nice.org.uk/guidance/ta489. London: NICE, (accessed prior to 28 March 2018).
Norman 2009
- Norman G, Barraclough K, Dolovich L, Price D. Iterative diagnosis. BMJ 2009;339:b3490. [DOI: 10.1136/bmj.b3490] [DOI] [PubMed] [Google Scholar]
O'Gorman 2014
- O'Gorman SM, Murphy GM. Photosensitizing medications and photocarcinogenesis. Photodermatology, Photoimmunology & Photomedicine 2014;30(1):8‐14. [PUBMED: 24393207] [DOI] [PubMed] [Google Scholar]
Offidani 2002
- Offidani A, Simonetti O, Bernardini ML, Alpagut A, Cellini A, Bossi G. General practitioners' accuracy in diagnosing skin cancers. Dermatology 2002;205(2):127‐30. [PUBMED: 12218226] [DOI] [PubMed] [Google Scholar]
Pasquali 2018
- Pasquali S, Hadjinicolaou AV, Chiarion Sileni V, Rossi CR, Mocellin S. Systemic treatments for metastatic cutaneous melanoma. Cochrane Database of Systematic Reviews 2018, Issue 2. [DOI: 10.1002/14651858.CD011123.pub2] [DOI] [PMC free article] [PubMed] [Google Scholar]
Pehamberger 1993
- Pehamberger H, Binder M, Steiner A, Wolff K. In vivo epiluminescence microscopy: improvement of early diagnosis of melanoma. Journal of Investigative Dermatology 1993;100(3):356s‐62s. [PUBMED: 8440924] [DOI] [PubMed] [Google Scholar]
Randle 1996
- Randle HW. Basal cell carcinoma. Identification and treatment of the high‐risk patient. Dermatologic Surgery 1996;22(3):255‐61. [PUBMED: 8599737] [DOI] [PubMed] [Google Scholar]
Reitsma 2005
- Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews (Review). Journal of Clinical Epidemiology 2005;58(10):982‐90. [PUBMED: 16168343] [DOI] [PubMed] [Google Scholar]
Review Manager 2014 [Computer program]
- Nordic Cochrane Centre, The Cochrane Collaboration. Review Manager (RevMan). Version 5.3. Copenhagen: Nordic Cochrane Centre, The Cochrane Collaboration, 2014.
Rigel 1993
- Rigel DS, Friedman RJ. The rationale of the ABCDs of early melanoma. Journal of the American Academy of Dermatology 1993;29(6):1060‐1. [DOI] [PubMed] [Google Scholar]
Roozeboom 2012
- Roozeboom MH, Arits AH, Nelemans PJ, Kelleners‐Smeets NW. Overall treatment success after treatment of primary superficial basal cell carcinoma: a systematic review and meta‐analysis of randomized and nonrandomized trials. British Journal of Dermatology 2012;167(4):733‐56. [PUBMED: 22612571] [DOI] [PubMed] [Google Scholar]
Roozeboom 2016
- Roozeboom MH, Arits AH, Mosterd K, Sommer A, Essers BA, Rooij MJ, et al. Three‐year follow‐up results of photodynamic therapy vs. imiquimod vs. fluorouracil for treatment of superficial basal cell carcinoma: a single‐blind, noninferiority, randomized controlled trial. Journal of Investigative Dermatology 2016;136(8):1568‐74. [PUBMED: 27113429] [DOI] [PubMed] [Google Scholar]
Rowe 1992
- Rowe DE, Carroll RJ, Day CL Jr. Prognostic factors for local recurrence, metastasis, and survival rates in squamous cell carcinoma of the skin, ear, and lip. Implications for treatment modality selection. Journal of the American Academy of Dermatology 1992;26(6):976‐90. [PUBMED: 1607418] [DOI] [PubMed] [Google Scholar]
Rozeman 2017
Rutjes 2005
- Rutjes AW, Reitsma JB, Vandenbroucke JP, Glas AS, Bossuyt PM. Case‐control and two‐gate designs in diagnostic accuracy studies (review). Clinical Chemistry 2005;51(8):1335‐41. [PUBMED: 15961549] [DOI] [PubMed] [Google Scholar]
Sekulic 2017
- Sekulic A, Migden MR, Basset‐Seguin N, Garbe C, Gesierich A, Lao CD, et al. Long‐term safety and efficacy of vismodegib in patients with advanced basal cell carcinoma: final update of the pivotal ERIVANCE BCC study. BMC Cancer 2017;17(1):2171‐9. [PUBMED: 28511673] [DOI] [PMC free article] [PubMed] [Google Scholar]
Siegel 2015
- Siegel R, Miller K, Jemal A. Cancer statistics, 2015. CA: a Cancer Journal for Clinicians 2015;65(1):5‐29. [PUBMED: 25559415] [DOI] [PubMed] [Google Scholar]
SIGN 2017
- Scottish Intercollegiate Guidelines Network. Cutaneous melanoma. www.sign.ac.uk/sign‐146‐melanoma.html. Scotland: SIGN, (accessed prior to 19 July 2017).
Sladden 2009
- Sladden MJ, Balch C, Barzilai DA, Berg D, Freiman A, Handiside T, et al. Surgical excision margins for primary cutaneous melanoma. Cochrane Database of Systematic Reviews 2009, Issue 10. [DOI: 10.1002/14651858.CD004835.pub2] [DOI] [PubMed] [Google Scholar]
Slater 2014
- Slater D, Walsh M. Standards and datasets for reporting cancers: dataset for the histological reporting of primary cutaneous malignant melanoma and regional lymph nodes, May 2014. www.rcpath.org/Resources/RCPath/Migrated%20Resources/Documents/G/G125_DatasetMaligMelanoma_May14.pdf. London: Royal College of Pathologists, (accessed 29 July 2015).
Smeets 2004
- Smeets NW, Krekels GA, Ostertag JU, Essers BA, Dirksen CD, Nieman FH, et al. Surgical excision vs Mohs' micrographic surgery for basal‐cell carcinoma of the face: randomised controlled trial. Lancet 2004;364(9447):1766‐72. [PUBMED: 15541449] [DOI] [PubMed] [Google Scholar]
Sober 1979
- Sober AJ, Fitzpatrick TB, Mihm MC, Wise TG, Pearson BJ, Clark WH, et al. Early recognition of cutaneous melanoma. JAMA 1979;242(25):2795‐9. [PUBMED: 501893] [PubMed] [Google Scholar]
Stanganelli 2000
- Stanganelli I, Serafini M, Bucch L. A cancer‐registry‐assisted evaluation of the accuracy of digital epiluminescence microscopy associated with clinical examination of pigmented skin lesions. Dermatology 2000;200(1):11‐6. [PUBMED: 10681607] [DOI] [PubMed] [Google Scholar]
Stata 2017 [Computer program]
- StataCorp. Stata. Version 15. College Station (TX): StataCorp, 2017.
Stratigos 2015
- Stratigos A, Garbe C, Lebbe C, Malvehy J, Marmol V, Pehamberger H, et al. Diagnosis and treatment of invasive squamous cell carcinoma of the skin: European consensus‐based interdisciplinary guideline. European Journal of Cancer 2015;51(14):1989‐2007. [PUBMED: 26219687] [DOI] [PubMed] [Google Scholar]
Takwoingi 2017
- Takwoingi Y, Guo B, Riley RD, Deeks JJ. Performance of methods for meta‐analysis of diagnostic test accuracy with few studies or sparse data. Statistical Methods in Medical Research 2017;26(4):1896‐911. [DOI: 10.1177/0962280215592269] [DOI] [PMC free article] [PubMed] [Google Scholar]
Thomas 1998
- Thomas L, Tranchand P, Berard F, Secchi T, Colin C, Moulin G. Semiological value of ABCDE criteria in the diagnosis of cutaneous pigmented tumors. Dermatology 1998;197(1):11‐7. [PUBMED: 9693179] [DOI] [PubMed] [Google Scholar]
Usher‐Smith 2016
- Usher‐Smith JA, Sharp SJ, Griffin SJ. The spectrum effect in tests for risk prediction, screening, and diagnosis. BMJ 2016;353:i3139. [DOI: 10.1136/bmj.i3139] [DOI] [PMC free article] [PubMed] [Google Scholar]
van Loo 2014
- van Loo E, Mosterd K, Krekels GA, Roozeboom MH, Ostertag JU, Dirksen CD, et al. Surgical excision versus Mohs' micrographic surgery for basal cell carcinoma of the face: a randomised clinical trial with 10 year follow‐up. European Journal of Cancer 2014;50(17):3011‐20. [PUBMED: 25262378] [DOI] [PubMed] [Google Scholar]
Verkouteren 2017
- Verkouteren JAC, Ramdas KHR, Wakkee M, Nijsten T. Epidemiology of basal cell carcinoma: scholarly review. British Journal of Dermatology 2017;177(2):359‐72. [DOI: 10.1111/bjd.15321] [DOI] [PubMed] [Google Scholar]
Walker 2006
- Walker P, Hill D. Surgical treatment of basal cell carcinomas using standard postoperative histological assessment. Australasian Journal of Dermatology 2006;47(1):1‐12. [PUBMED: 16405477] [DOI] [PubMed] [Google Scholar]
Warshaw 2011
- Warshaw EM, Hillman YJ, Greer NL, Hagel EM, MacDonald R, Rutks IR, et al. Teledermatology for diagnosis and management of skin conditions: a systematic review. Journal of the American Academy of Dermatology 2011;64(4):759‐72. [PUBMED: 21036419] [DOI] [PubMed] [Google Scholar]
Whiting 2011
- Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS‐2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine 2011;155(8):529‐36. [PUBMED: 22007046] [DOI] [PubMed] [Google Scholar]
WHO 2003
- World Health Organization. INTERSUN: The Global UV Project. A guide and compendium, 2003. www.who.int/uv/publications/en/Intersunguide.pdf (accessed 20 May 2015).
Williams 2017
- Williams HC, Bath‐Hextall F, Ozolins M, Armstrong SJ, Colver GB, Perkins W, et al. Surgery versus 5% imiquimod for nodular and superficial basal cell carcinoma: 5‐year results of the SINS randomized controlled trial. Journal of Investigative Dermatology 2017;137(3):614‐9. [PUBMED: 27932240] [DOI] [PubMed] [Google Scholar]
Wong 2017
- Wong KY, Fife K, Lear JT, Price RD, Durrani AJ. Vismodegib for locally advanced periocular and orbital basal cell carcinoma: a review of 15 consecutive cases. Plastic and Reconstructive Surgery. Global Open 2017;5(7):e1424. [PUBMED: 28831360] [DOI] [PMC free article] [PubMed] [Google Scholar]
Zak‐Prelich 2004
- Zak‐Prelich M, Narbutt J, Sysa‐Jedrzejowska A. Environmental risk factors predisposing to the development of basal cell carcinoma. Dermatologic Surgery 2004;30(2 Pt 2):248‐52. [PUBMED: 14871217] [DOI] [PubMed] [Google Scholar]
References to other published versions of this review
Dinnes 2015a
- Dinnes J, Matin RN, Moreau JF, Patel L, Chan SA, Wong KY, et al. Tests to assist in the diagnosis of cutaneous melanoma in adults: a generic protocol. Cochrane Database of Systematic Reviews 2015, Issue 10. [DOI: 10.1002/14651858.CD011902] [DOI] [Google Scholar]
Dinnes 2015b
- Dinnes J, Wong KY, Gulati A, Chuchu N, Leonardi‐Bee J, Bayliss SE, et al. Tests to assist in the diagnosis of keratinocyte skin cancers in adults: a generic protocol. Cochrane Database of Systematic Reviews 2015, Issue 10. [DOI: 10.1002/14651858.CD011901] [DOI] [Google Scholar]