The British Journal of Radiology. 2016 Aug 16; 89(1066): 20160301. doi: 10.1259/bjr.20160301

Comparing the performance of trained radiographers against experienced radiologists in the UK lung cancer screening (UKLS) trial

Arjun Nair 1,, Natalie Gartland 2, Bruce Barton 2, Diane Jones 3, Leigh Clements 4, Nicholas J Screaton 4, John A Holemans 3, Stephen W Duffy 5, John K Field 6, David R Baldwin 7, David M Hansell 2, Anand Devaraj 2
PMCID: PMC5124804  PMID: 27461068

Abstract

Objective:

To compare the performance of radiographers against that of radiologists for CT lung nodule detection in the UK Lung Cancer Screening (UKLS) pilot trial.

Methods:

Four radiographers, trained in CT nodule detection, and three radiologists were prospectively evaluated. 290 CTs performed for the UKLS were independently read by 2 radiologists and 2 radiographers. The reference standard comprised all radiologist-identified positive nodules after arbitration of discrepancies. For each radiographer and radiologist, relative sensitivity and average false positives (FPs) per case were compared for all cases read, as well as for subsets of cases read by each radiographer–radiologist combination (10 combinations).

Results:

599 nodules in 209/290 (72.1%) CT studies comprised the reference standard. The relative mean (± standard deviation) sensitivity of the four radiographers was 71.6 ± 8.5% compared with 83.3 ± 8.1% for the three radiologists. Radiographers were less sensitive and detected more FPs per case than radiologists in 7/10 and 8/10 radiographer–radiologist combinations, respectively (ranges of difference 11.2–33.8% and 0.4–2.6; p < 0.05). In the remaining 3/10 and 2/10 combinations, respectively, there was no significant difference in sensitivity or FPs per case between radiographers and radiologists. For nodules ≥100 mm3 in volume or ≥5 mm in maximum diameter, radiographers were relatively less sensitive than radiologists in only 5/10 radiographer–radiologist combinations (range of difference 16.1–30.6%; p < 0.05) and not significantly different in the remaining 5/10 combinations.

Conclusion:

Although overall radiographer performance was lower than that of experienced radiologists in this study, some individual radiographers performed comparably with radiologists.

Advances in Knowledge:

Overall, radiographers were less sensitive than radiologists reading the same CTs and also displayed higher average FP detections per case when compared with a reference standard derived from radiologist readings. However, some radiographers compared favourably with radiologists, especially when considering larger potentially clinically relevant nodules. Thus, while probably not sensitive enough to function as first readers, radiographers may still be able to fulfil the role of an assistant reader—that is, as a first or concurrent reader, who presents detected nodules for verification to a reading radiologist.

INTRODUCTION

A CT lung screening programme requires a reading radiologist to dedicate a significant amount of time to the task of lung nodule detection. The finding by the National Lung Screening Trial that CT lung cancer screening could decrease lung cancer-specific mortality by 20%1 has prompted recommendations for screening implementation in the USA.2–4 Nevertheless, outstanding questions remain as to whether a national lung screening programme in Europe would be cost effective5 and whether there are sufficient numbers of thoracic radiologists in Europe to implement it successfully. Based on current working practices, the number of radiologists reading lung screening CTs would need to increase substantially to implement such a programme. Radiologists are arguably best suited to the task of lung nodule identification on CT because they have medical knowledge, an understanding of CT anatomy and reading experience. However, these attributes help mainly with the interpretation and the assignment of some level of clinical significance to a particular finding, rather than with the detection of the finding itself. If the task of detection could be reliably and consistently performed by a suitably trained non-radiologist, a greater pool of readers would be available for a CT lung cancer screening programme. Radiologists could then focus on the important tasks of interpretation of findings, recommendations for follow-up and arbitration,6 rather than on nodule detection alone.

An alternative reader to a radiologist is a radiographer (or technologist as they are designated in the USA). Radiographers may be suited for this task, as they have a basic understanding of both the technical and anatomical aspects of thoracic CT. Radiographers have already been assessed as readers in screening mammography7,8 and screening CT colonography,9,10 but their role as readers in CT lung cancer screening has not yet been fully explored in a prospective study.11,12 However, before radiographers can be incorporated into the CT reading process, it is imperative that: (1) they are provided with a basic level of training; and (2) their performance is then assessed against the established methodology of various screening trials, in which radiologists perform all reading. The aim of this investigation was therefore to evaluate the performance of radiographers in lung nodule detection on CT (following a period of training) compared with radiologists in the setting of the UK Lung Screening (UKLS) pilot trial.

METHODS AND MATERIALS

Study population and case selection

The UKLS trial is a randomized controlled trial evaluating low-dose multidetector CT for lung cancer screening using a single-screen design.13–15 Recruitment procedures, selection criteria and screening protocols have been published previously.13 Briefly, subjects aged 50–75 years were approached with a questionnaire to determine lung cancer risk. Those with an estimated risk of at least 5% of developing lung cancer in the next 5 years (using the Liverpool Lung Project risk model)16 were invited to enrol in the trial. Between 2011 and 2013, the pilot phase of the trial randomized 2028 subjects to CT screening at 2 participating sites (Site 1 and Site 2) and 2030 subjects to no screening.14

This substudy was performed prospectively: 290 consecutive CT studies performed for the UKLS pilot trial were read between November 2011 and April 2012.

Ethical approval for the UKLS trial was granted by the National Research Ethics Service Committee, National Health Service Research and Development, the National Information Governance Board for Health and Social Care and the Administration of Radioactive Substances Advisory Committee [application no. 0521, reference ECC 2-02(a)/2011, approved 15/03/2011]. In addition, all participating sites had undergone site-specific assessment conducted by their local research and development department. Informed consent had been obtained from all participants. In consenting to the trial, participants also consented to further investigations, treatment, follow-up and data collection resulting from the trial, including permission to use data for future research.

CT imaging protocol

At both sites, examinations were performed on a Siemens Definition 128-slice scanner (Siemens, Erlangen, Germany), with thin collimation (0.5–0.625 mm) and a pitch of 0.9–1.1. Images were acquired at maximal inspiration during breath-holding, after appropriate instruction of the subjects and without i.v. contrast. Exposure factors were tailored to patient height and weight, to achieve a CT dose index between 0.8 and 3.2 mGy, with an effective radiation dose below 2 mSv. All appropriate dose modulations were used according to manufacturer guidelines and local practice. Data were reconstructed at 1.0-mm slice thickness with 0.7-mm reconstruction increment, using a moderate spatial frequency kernel.
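
For illustration only, the acquisition constraints quoted above can be expressed as a simple parameter check; the function and parameter names below are hypothetical and this is not part of the trial's workflow:

```python
def within_ukls_acquisition_limits(collimation_mm: float, pitch: float,
                                   ctdi_mgy: float, effective_dose_msv: float) -> bool:
    """Check one scan's parameters against the limits described above
    (illustrative sketch; not the trial's actual quality-control code)."""
    return (0.5 <= collimation_mm <= 0.625
            and 0.9 <= pitch <= 1.1
            and 0.8 <= ctdi_mgy <= 3.2
            and effective_dose_msv < 2.0)
```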

Classification of nodules

The UKLS categorized nodules according to diameter and volume, as described previously.13 Briefly, non-calcified nodules with volumes ≥15 mm3 or with maximum diameters >3 mm (when unreliably segmented) were considered positive.
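
As a rough illustration, the positivity rule can be written as a simple threshold check; this is a sketch with hypothetical field names, not code used in the trial:

```python
def is_positive_nodule(calcified: bool, segmentation_reliable: bool,
                       volume_mm3: float | None, max_diameter_mm: float) -> bool:
    """UKLS positivity rule as summarized above (illustrative only):
    non-calcified nodules are positive if their segmented volume is
    >=15 mm^3, or, when segmentation is unreliable, if their maximum
    diameter exceeds 3 mm."""
    if calcified:
        return False
    if segmentation_reliable and volume_mm3 is not None:
        return volume_mm3 >= 15.0
    return max_diameter_mm > 3.0
```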

CT reading method

All studies were read on a workstation (Leonardo; Siemens Medical Solutions, Erlangen, Germany) using a commercially available software package capable of performing semi-automated volumetric nodule segmentation (Syngo LungCare, v. Somaris/5 VB 10A; Siemens, Erlangen, Germany). Studies loaded into LungCare were presented in the following manner: a 2 × 2 viewing partition with a default window level of −500 HU and width of 1500 HU, a default display of transverse maximum intensity projections at 10-mm thickness in cine mode, 1-mm collimation transverse images, 0.7-mm collimation coronal images and a panel for display of semi-automated volumetric segmentation analysis, if performed. Readers could alter the maximum intensity projection thickness and window settings.

Semi-automated volumetry was used in all cases where segmentation of a nodule could be reliably performed: the reader marked a candidate nodule with a mouse click, and a second click then initiated the automated volume measurement programme. Where segmentation was unreliable or impossible, manual measurements were performed.

Information including nodule category, location and size (both semi-automated volume and diameter measurements) was initially recorded as a structured Digital Imaging and Communications in Medicine report using a customized electronic database entry pro forma (Artex VOF, Capelle aan den IJssel, Netherlands). In addition, the nodule marks made by each reader were stored on the workstation. At the end of each CT reading, data were exported in Extensible Markup Language (XML) format to the UKLS database via a personal computer with web access. The UKLS database is a web-based database that holds information on all trial participants and is managed and maintained by the UKLS project team.
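
The export step could resemble the following sketch; the element and attribute names are hypothetical, as the actual UKLS XML schema is not described here:

```python
import xml.etree.ElementTree as ET

def reading_to_xml(study_id: str, nodules: list[dict]) -> str:
    """Serialize one CT reading to XML for upload (hypothetical schema)."""
    root = ET.Element("ukls_reading", study_id=study_id)
    for n in nodules:
        ET.SubElement(root, "nodule",
                      category=n["category"],            # e.g. UKLS nodule category
                      location=n["location"],            # e.g. lobe or segment
                      volume_mm3=str(n.get("volume_mm3", "")),
                      diameter_mm=str(n["diameter_mm"]))
    return ET.tostring(root, encoding="unicode")
```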

CT evaluation by readers

Each CT examination was read by a single radiologist at each of the two participating sites (Radiologist A at Site 1 and Radiologist B at Site 2, both with more than 10 years' specialist thoracic imaging experience). The CTs were then transmitted to a central reading site on the same day for a second independent reading (by Radiologist C with 10 years' experience).

Four radiographers who had had experience in thoracic CT scan acquisition, and who were able to commit at least 4 h a week over the study period, were selected as readers. Radiographer 1 read CTs at Site 1 and Radiographer 2 read CTs at Site 2. Two radiographers (Radiographers 3 and 4) read CTs at the central site. Each radiographer underwent a period of training, as detailed in the Supplementary Material, based on a training methodology that we have previously described.17

Each CT was thus read by two radiologists and at least one radiographer, with a maximum of two radiographers (one participating site radiographer and one central site radiographer).

Reference standard

As the aim of the study was to compare radiographer performance against that of radiologists, the reference standard for this study was that used for the UKLS pilot trial15—that is, it was derived from the radiologist readings, as follows: after the second reading, discrepancies between the central and participating site radiologist readings were identified from the database. Arbitration of discrepancies was provided at the central site by a fourth thoracic radiologist with more than 20 years' experience, who had not been involved in reading the CT examinations (D.M.H.).

Classification of discrepancies

For each CT, a list of each reader's readings was generated and compared against the reference standard. A nodule was considered to have been missed by a reader if it was included in the reference standard but not recorded by that reader.
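
In code, this classification reduces to a set comparison between the reference standard and a reader's recorded nodules; a minimal sketch, assuming each nodule carries a unique identifier once readings have been matched:

```python
def classify_reading(reference_ids: set[str], reader_ids: set[str]) -> dict:
    """Missed nodules are in the reference standard but not recorded by the
    reader; detections outside the reference standard count as false
    positives (illustrative sketch)."""
    return {
        "detected": reference_ids & reader_ids,
        "missed": reference_ids - reader_ids,
        "false_positives": reader_ids - reference_ids,
    }
```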

Statistical analysis

For descriptive purposes, both the diameters and volumes of reference standard nodules as recorded on the UKLS database are given as medians and interquartile ranges (IQRs) between the 25th and 75th percentiles, as neither measurement followed a normal distribution according to the D'Agostino and Pearson omnibus test for normality.
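
As an illustration, the descriptive summary and normality check could be reproduced as follows; scipy.stats.normaltest implements the D'Agostino and Pearson omnibus test, and the nodule sizes are assumed here to be available as a simple array:

```python
import numpy as np
from scipy import stats

def summarize_sizes(values: np.ndarray) -> dict:
    """Median and 25th-75th percentile IQR, reported because the
    D'Agostino-Pearson omnibus test rejected normality (sketch)."""
    q25, median, q75 = np.percentile(values, [25, 50, 75])
    _, p_normal = stats.normaltest(values)
    return {"median": float(median), "iqr": float(q75 - q25),
            "normality_p": float(p_normal)}
```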

The relative sensitivity (the percentage of reference standard nodules identified) and the relative average false-positive (FP) detections per case (expressed as mean and standard deviation) were calculated:

  • for each reader

  • for each radiographer and radiologist within a particular radiographer–radiologist combination (10 combinations in total), taking into account only cases read by that combination, to enable direct comparison between the radiographer and radiologist (comparisons of sensitivity and average FPs were performed using McNemar's test and the paired Student's t-test, respectively; a code sketch of this per-combination comparison follows the list below)

  • for each reader in the first 10 weeks (Period 1) of the study and compared with that in the second 10 weeks (Period 2) of the study (comparisons of sensitivity and average FPs per case were performed using the χ2 test and independent samples Student's t-test, respectively).
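
To make the per-combination comparison concrete, the following is a minimal sketch of how relative sensitivity, average FPs per case, McNemar's test and the paired t-test could be computed; the trial's analyses were run in MedCalc (noted below), and the continuity-corrected chi-square form of McNemar's test used here is an assumption:

```python
import numpy as np
from scipy import stats

def compare_combination(hits_radiographer: np.ndarray, hits_radiologist: np.ndarray,
                        fps_radiographer: np.ndarray, fps_radiologist: np.ndarray) -> dict:
    """Compare one radiographer-radiologist pair on the cases read by both.

    hits_*: boolean arrays, one entry per reference-standard nodule
            (True = nodule detected by that reader).
    fps_*:  false-positive counts per case, paired by case.
    """
    sens_rg = 100.0 * hits_radiographer.mean()
    sens_rl = 100.0 * hits_radiologist.mean()

    # McNemar's test uses only the discordant detections.
    b = int(np.sum(hits_radiologist & ~hits_radiographer))   # radiologist only
    c = int(np.sum(hits_radiographer & ~hits_radiologist))   # radiographer only
    chi2 = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) else 0.0  # continuity-corrected
    p_sensitivity = stats.chi2.sf(chi2, df=1)

    # Paired Student's t-test on FPs per case.
    _, p_fp = stats.ttest_rel(fps_radiographer, fps_radiologist)

    return {"sensitivity_radiographer_%": sens_rg,
            "sensitivity_radiologist_%": sens_rl,
            "mean_fp_radiographer": float(fps_radiographer.mean()),
            "mean_fp_radiologist": float(fps_radiologist.mean()),
            "p_sensitivity": float(p_sensitivity),
            "p_fp": float(p_fp)}
```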

Subsequently, we also performed a post hoc comparison of the relative sensitivity of radiographers and radiologists within the 10 radiographer–radiologist combinations when confined to nodules which may be considered clinically relevant within a lung cancer screening programme—that is, nodules ≥100 mm3 in volume or ≥5 mm in maximum diameter.18
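
The post hoc filter can be sketched as a simple rule; this is a simplified form of the criterion (the Table 3 analysis applies the volume cut-off to reliably segmented intraparenchymal nodules and the diameter cut-off to unreliably segmented or pleural nodules):

```python
def clinically_relevant(volume_mm3: float | None, max_diameter_mm: float) -> bool:
    """Post hoc threshold: >=100 mm^3 volume or >=5 mm maximum diameter
    (illustrative; volume may be None if segmentation was unreliable)."""
    if volume_mm3 is not None and volume_mm3 >= 100.0:
        return True
    return max_diameter_mm >= 5.0
```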

We therefore report on 10 reading experiments. In each, a specific radiologist is compared with a specific radiographer in terms of sensitivity in reading the same set of images; we thus avoid comparing, say, Radiologist A reading Case X with Radiographer 1 reading Case Y, so the comparisons are not confounded by case mix. The number of images read in each experiment varied from 49 to 155.

All analyses were performed using Medcalc v. 12.5.0.0 (MedCalc Software, Mariakerke, Belgium). A p-value of <0.05 was assumed to be statistically significant.

RESULTS

Reference standard

81 (27.9%) of the 290 CT studies did not contain any nodules. The reference standard thus consisted of 599 nodules in the remaining 209 (72.1%) CT studies. The majority of CTs had 1 (35.4%), 2 (23.0%) or 3 (21.5%) nodules per CT. The median number of nodules per scan was 2, with a range of 1–18 nodules. Reference standard nodules had a median diameter of 4.4 mm (IQR 2.7 mm) and a median volume of 33.7 mm3 (IQR 40.9 mm3). These nodules were mostly solid [567/599 (94.7%) nodules], with 7 (1.2%) part-solid nodules and 25 (4.2%) pure ground-glass nodules.

Overall performance of radiographers and radiologists

Radiographers 1, 2, 3 and 4 had relative sensitivities of 67.6, 77.8, 79.4 and 61.6%, respectively (mean sensitivity 71.6 ± 8.5%). Radiologists A, B and C had relative sensitivities of 88.9, 87.0 and 74.0%, respectively (mean sensitivity 83.3 ± 8.1%).

The relative average FPs per case for Radiographers 1, 2, 3 and 4 were 1.2 ± 2.1, 2.9 ± 2.8, 0.6 ± 1.0 and 1.1 ± 1.3, respectively, while that of Radiologists A, B and C were 0.5 ± 0.8, 0.7 ± 1.0 and 0.2 ± 0.5, respectively.

Comparison of radiographer and radiologist performance

The relative sensitivities of each radiographer compared with those of the corresponding radiologists within a particular radiographer–radiologist combination are illustrated in Table 1. Radiographers 1 and 2 could be compared with only their corresponding participating site radiologists (i.e. Radiologists A and B, respectively) and the central site radiologist (Radiologist C). Radiographer relative sensitivity was significantly lower than that of radiologists in 7 of 10 radiographer–radiologist combinations (range of difference, 9.7–32.8%; p < 0.05) and not significantly different in 3/10 combinations.

Table 1.

Comparison of radiographer and radiologist relative sensitivity for the 10 radiographer–radiologist combinations

Combination | Number of CTs read | Radiographer sensitivity (%) | Radiologist sensitivity (%) | Difference (%) | p-value
1-A | 130 | 67.6 | 88.0 | 20.4 | 0.0008
1-C | 130 | 67.6 | 74.5 | 6.9 | 0.30
2-B | 139 | 77.8 | 87.4 | 9.6 | <0.0001
2-C | 139 | 77.8 | 74.5 | −3.3 | 0.20
3-A | 68 | 81.0 | 92.2 | 11.2 | 0.01
3-B | 87 | 78.5 | 88.2 | 9.7 | 0.0087
3-C | 155 | 79.4 | 76.2 | −3.2 | 0.32
4-A | 64 | 53.8 | 86.6 | 32.8 | <0.0001
4-B | 49 | 68.7 | 85.5 | 16.8 | 0.0051
4-C | 113 | 61.6 | 72.0 | 10.4 | 0.0119

A negative value for the difference indicates a lower relative sensitivity for a radiologist compared with a radiographer.

p-values are those derived from McNemar's test.

p-values <0.05 are statistically significant.

Radiographers had significantly higher relative average FPs per case than radiologists in 8/10 combinations (range of difference, 0.4–2.6; p < 0.05), and there was no significant difference in the remaining 2 combinations (Table 2).

Table 2.

Comparison of radiographer and radiologist relative average false positives (FPs) per case for the 10 radiographer–radiologist combinations

Combination | Radiographer FPs per case (mean ± SD) | Radiologist FPs per case (mean ± SD) | Difference | p-value
1-A | 1.1 ± 1.3 | 0.5 ± 0.8 | −0.6 | <0.0001
1-C | 1.1 ± 1.3 | 0.1 ± 0.5 | −1.0 | <0.0001
2-B | 2.8 ± 2.8 | 0.7 ± 1.1 | −2.1 | <0.0001
2-C | 2.8 ± 2.8 | 0.2 ± 0.5 | −2.6 | <0.0001
3-A | 0.9 ± 1.4 | 0.5 ± 0.8 | −0.4 | 0.0176
3-B | 1.4 ± 2.5 | 0.6 ± 1.0 | −0.8 | 0.0015
3-C | 1.2 ± 2.1 | 0.1 ± 0.5 | −1.1 | <0.0001
4-A | 0.4 ± 0.8 | 0.5 ± 0.8 | 0.1 | 0.2009
4-B | 0.8 ± 1.2 | 0.8 ± 1.3 | 0 | 0.71
4-C | 0.6 ± 1.0 | 0.2 ± 0.5 | −0.4 | 0.0001

A negative value for the difference indicates a lower relative average FPs per case for a radiologist compared with a radiographer.

p-values are those derived from the paired Student's t-test.

p-values <0.05 are statistically significant.

In the post hoc analysis, when analysis was confined to the nodules ≥100 mm3 in volume or ≥5 mm in maximum diameter, radiographers were relatively less sensitive than radiologists in 5/10 radiographer–radiologist combinations (range of difference 16.1–30.6%; p < 0.05) and not significantly different in the remaining 5/10 combinations (Table 3).

Table 3.

Comparison of radiographer and radiologist sensitivity for the 10 radiographer–radiologist combinations for reliably segmented and intraparenchymal nodules measuring ≥100 mm3 in volume and unreliably segmented or pleural nodules ≥ 5-mm diameter

Combination | Number of nodules ≥100-mm3 volume or ≥5-mm diameter | Radiographer sensitivity (%) | Radiologist sensitivity (%) | Difference (%) | p-value
1-A | 56 | 62.5 | 78.6 | 16.1 | 0.049
1-C | 56 | 62.5 | 83.9 | 21.4 | 0.023
2-B | 94 | 81.9 | 92.6 | 10.6 | 0.053
2-C | 94 | 81.9 | 80.9 | −1.1 | 1.000
3-A | 25 | 80.0 | 92.0 | 12.0 | 0.375
3-B | 60 | 75.0 | 91.7 | 16.7 | 0.021
3-C | 85 | 76.5 | 83.5 | 7.0 | 0.238
4-A | 36 | 44.4 | 75.0 | 30.6 | 0.013
4-B | 31 | 77.4 | 90.3 | 12.9 | 0.344
4-C | 67 | 59.7 | 79.1 | 19.4 | 0.011

A negative value for the difference indicates a lower relative sensitivity for a radiologist compared with a radiographer.

p-values are those derived from McNemar's test.

p-values <0.05 are statistically significant.

Reader performance in the first 10 weeks vs second 10 weeks

The two radiographers with the lowest overall relative sensitivity (Radiographers 1 and 4) showed a significant improvement in sensitivity between the first and second 10-week periods (Radiographer 1: 50.0% in Period 1 vs 74.1% in Period 2; Radiographer 4: 41.8% vs 67.2%; p < 0.005). However, their Period 2 relative sensitivity still did not reach the level of Radiographers 2 and 3, whose sensitivity did not differ significantly between the two periods (Radiographer 2: 75.2% vs 79.0%, p = 0.54; Radiographer 3: 78.1% vs 80.0%, p = 0.82). The relative sensitivity of the three radiologists also did not differ significantly between the two periods (Radiologist A: 84.1% vs 90.4%, p = 0.25; Radiologist B: 86.1% vs 87.4%, p = 0.88; Radiologist C: 69.6% vs 75.7%, p = 0.15). No radiographer or radiologist demonstrated a significant difference in relative average FPs per case between the two periods; as such, the improved sensitivity of Radiographers 1 and 4 in the second 10 weeks did not come at the expense of increased average FPs per case.

DISCUSSION

The importance of a high rate of nodule detection (i.e. high sensitivity) in lung cancer screening is underscored by the fact that many failures in lung cancer diagnosis are due to errors of detection rather than of interpretation.19,20 Based on our initial data, therefore, we could not conclude that radiographers are suitable to act as first readers in CT lung cancer screening, because radiographer sensitivity was, in the majority of combinations, statistically significantly lower than that of the radiologists reading the same CTs. Nevertheless, we also found that, after an initial period of dedicated training, some radiographers achieved sensitivities in nodule detection comparable with those of experienced thoracic radiologists (in 3 of 10 combinations). Further work is therefore required to establish whether there is a role for radiographers as concurrent readers in lung cancer screening, and whether radiographer sensitivity can be further enhanced by the use of computer-aided detection/diagnosis or alternative methods of training. Furthermore, radiographer performance may depend on inherent characteristics rather than training, and further work using a different pool of radiographers would be important to support any conclusions on the role of radiographers in a lung cancer screening programme.

It could be argued that although radiographer performance was generally inferior to that of the thoracic radiologists in this study, it was nevertheless comparable with radiologist performance reported in other studies in the literature (Table 4).11,12,21–27 Caution should always be exercised when comparing sensitivities between nodule detection studies, as differences in the derivation and stringency of the reference standard (as indicated in Table 4)28 and in the types of patients undergoing CT examination (e.g. patients with multiple metastases vs lung screening studies) may profoundly affect sensitivity.

Table 4.

Sensitivities of radiologists in a selection of nodule detection studies

Authors | Year | Number and type of readers | Mean sensitivity of radiologists (%) | Range (%) | Reference standard used
Marten et al21 | 2004 | 4 radiologists, CAD | 40 | 21–57 | 2 independent radiologists in consensus performing free search as well as reviewing readings of CAD and 4 reading radiologists
Rubin et al22 | 2005 | 3 radiologists, CAD | 50 | 41–60 | 2 independent radiologists in consensus performing free search as well as reviewing readings of CAD and 3 reading radiologists
Wormanns et al23 | 2005 | 3 radiologists | 64 | NR | 1 independent radiologist reviewing readings of 3 reading radiologists, but did not perform free search
Beigelman-Aubry et al24 | 2007 | 2 radiologists, CAD | 52 | 46–58 | Both reading radiologists jointly reviewed their readings, CAD readings and those from clinical reports of the analyzed CTs
Fraioli et al25 | 2007 | 3 radiologists | 57 | 46–68 | 2 independent radiologists in consensus performing free search as well as reviewing readings of CAD and 3 reading radiologists
Brochu et al26 | 2007 | 3 radiologists, CAD | 54 | 38–70 | 2 of the 3 reading radiologists interpreting images together, with CAD assistance
Roos et al27 | 2010 | 3 radiologists, CAD | 53 | 44–59 | 2 independent radiologists in consensus performing free search as well as reviewing readings of CAD and 3 reading radiologists
Kakinuma et al11 | 2012 | 11 radiologists, 10 radiographers | 79a | 73–85b | Each CT previously double-read by 2 independent radiologists as part of a lung cancer screening programme; a third radiologist verified nodules but did not perform free search; radiographer readings not included in reference standard
Ritchie et al12 | 2016 | 1 radiographer using CAD as a concurrent reader | 98 | 96–99b | Each CT previously single-read by radiologists at participating sites in two lung cancer screening programmes; determination of single radiologist used as reference standard
Current study | 2016 | 3 radiologists, 4 radiographers | 83 (radiologists), 72 (radiographers) | 74–89 (radiologists), 62–79 (radiographers) | 2 of the 3 reading radiologists independently identified nodules as part of a lung cancer screening programme, a third radiologist arbitrated discrepant nodules but did not perform free search; radiographer readings not included in the reference standard

CAD, computer-aided detection/diagnosis; NR, not reported.

a Sensitivity for solid nodules ≥ 5-mm diameter.

b 95% confidence interval reported for sensitivity, rather than a range.

It has recently been suggested that pulmonary nodules smaller than 100 mm3 in volume or 5 mm in diameter are not predictive of lung cancer18 and do not require surveillance. We therefore performed a post hoc analysis of nodules above this threshold and found an improvement in radiographer sensitivity, with no significant difference in detection rate compared with radiologists in 5 of 10 combinations. It is worth bearing in mind, however, that it is still arguably more useful to ensure that radiographers can detect nodules as small as 3 mm in diameter than to confine assessment to nodules ≥5 mm alone, given that the variation in measured diameter for small pulmonary nodules may be as much as 1.73 mm.29

The relative average FP detections per case of the radiographers in this study were significantly higher than those of the radiologists. It is reassuring, however, that the radiographers in general did not exceed 3 average FP detections per CT, which compares favourably with some computer-aided detection/diagnosis systems, for which average FP rates between 0.3 and 15 per case have been reported.30 Viewed in this context, the higher average FP detection rate of radiographers compared with radiologists in this study does not disqualify them from being used as aids to reading.

Kakinuma et al11 previously evaluated the performance of technologists in comparison with radiologists in Japan. It is noteworthy that although technologists were as sensitive as radiologists for part-solid and non-solid nodules ≥5 mm in diameter in that study, on the whole they were significantly less sensitive than radiologists for solid nodules ≥5 mm in diameter, on either 2-mm or 5-mm slice thickness CT images. It is difficult to draw meaningful comparisons between their results and ours, owing to multiple differences in study design and in the performance metrics measured. For instance, radiographers in their study were not asked to measure detected nodules; volumetric analysis was not used (nodule categorization was based on diameter); mean sensitivities and FPs with confidence intervals were compared between the radiologists and technologists as groups, rather than as head-to-head evaluations between each radiographer and radiologist; and the total number of FPs, rather than the average FPs per case, was reported. A more recent study, in which a single radiographer and a computer vision tool acting in combination as a first reader retrospectively identified nodules in CTs performed for the Pan-Canadian Early Detection of Lung Cancer Study, found that the sensitivity of such a combination was 97.8% for nodules ≥1 mm when compared against that study's radiologists.12 This highlights one possible way in which radiographer performance could be augmented to make radiographers a vital part of the initial CT reading workflow in lung cancer screening.

A limitation of our study is that the reference standard presumed that any nodule detected by a radiographer but not by a radiologist was not a true positive; this potentially means that there were fewer true positives and more FPs within the radiographer readings, thereby potentially exaggerating their FP detection rate. However, our a priori aim was to establish whether radiographers were non-inferior to radiologists according to the clinically practised reference standard against which a result would be issued for a baseline screening CT in the UKLS pilot trial, that is, a reference standard based on the reading and arbitration performed by radiologists.15 Mindful of this, we have used the terms “relative sensitivity” and “relative average FP detections per case” to acknowledge that the performance metrics we describe are comparative rather than absolute. It could be argued that a reference standard based on histopathological verification or on the absence or presence of malignant growth rates would provide a more robust indication of “ground truth”. However, such a reference standard would, by definition, be inapplicable to baseline screening CTs and as such would be much less clinically relevant.

In conclusion, the radiographers in this study displayed a lower mean relative sensitivity for nodule detection and higher average FP detection rates than radiologists reading the same CTs, when reading lung cancer screening CTs in real time and under actual clinical conditions. Radiographers are thus probably not sensitive enough to be used as first readers. However, the fact that the performance of some radiographers compares favourably with radiologists suggests that radiographers could fulfil the role of an assistant reader—that is, as a first or concurrent reader, who presents detected nodules for verification to a reading radiologist—and such roles should be evaluated in future investigations.

Conflicts of interest

DRB declares relationships with the following companies: Boehringer Ingelheim, Agfa HealthCare, Irwin Mitchell and Roche for an education grant to run the Cambridge Chest Meeting.

Funding

The UKLS received funding from the National Institute for Health Research, Health Technology Assessment (reference number HTA 09/61/01).

Contributor Information

Arjun Nair, Email: arjun7764@gmail.com.

Natalie Gartland, Email: N.Gartland@rbht.nhs.uk.

Bruce Barton, Email: B.Barton@rbht.nhs.uk.

Diane Jones, Email: Diane.Jones@lhch.nhs.uk.

Leigh Clements, Email: leigh.clements@papworth.nhs.uk.

Nicholas J Screaton, Email: Nicholas.Screaton@papworth.nhs.uk.

John A Holemans, Email: John.Holemans@lhch.nhs.uk.

Stephen W Duffy, Email: s.w.duffy@qmul.ac.uk.

John K Field, Email: J.K.Field@liverpool.ac.uk.

David R Baldwin, Email: David.Baldwin@nuh.nhs.uk.

David M Hansell, Email: davidhansell@rbht.nhs.uk.

Anand Devaraj, Email: A.Devaraj@rbht.nhs.uk.

REFERENCES

  • 1.National Lung Screening Trial Research Team; Aberle DR, Adams AM, Berg CD, Black WC, Clapp JD, Fagerstrom RM, et al. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med 2011; 365: 395–409. doi: 10.1056/NEJMoa1102873 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines): Lung Cancer Screening v. 1. 2015: NCCN. [updated on 2015; Accessed on 13 January 2015]. Available from: http://www.nccn.org/professionals/physician_gls/pdf/lung_screening.pdf [Google Scholar]
  • 3.Moyer VA; U.S. Preventive Services Task Force. Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2014; 160: 330–8. doi: 10.7326/M13-2771 [DOI] [PubMed] [Google Scholar]
  • 4.Mazzone P, Powell CA, Arenberg D, Bach P, Detterbeck F, Gould MK, et al. Components necessary for high-quality lung cancer screening: American College of Chest Physicians and American Thoracic Society Policy Statement. Chest 2015; 147: 295–303. doi: 10.1378/chest.14-2500 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Field JK, Oudkerk M, Pedersen JH, Duffy SW. Prospects for population screening and diagnosis of lung cancer. Lancet 2013; 382: 732–41. doi: 10.1016/S0140-6736(13)61614-1 [DOI] [PubMed] [Google Scholar]
  • 6.Heuvelmans MA, Oudkerk M, de Jong PA, Mali WP, Groen HJ, Vliegenthart R. The impact of radiologists' expertise on screen results decisions in a CT lung cancer screening trial. Eur Radiol 2015; 25: 792–9. doi: 10.1007/s00330-014-3467-4 [DOI] [PubMed] [Google Scholar]
  • 7.Bennett RL, Sellars SJ, Blanks RG, Moss SM. An observational study to evaluate the performance of units using two radiographers to read screening mammograms. Clin Radiol 2012; 67: 114–21. doi: 10.1016/j.crad.2011.06.015 [DOI] [PubMed] [Google Scholar]
  • 8.Pauli R, Hammond S, Cooke J, Ansell J. Radiographers as film readers in screening mammography: an assessment of competence under test and screening conditions. Br J Radiol 1996; 69: 10–14. doi: 10.1259/0007-1285-69-817-10 [DOI] [PubMed] [Google Scholar]
  • 9.European Society of Gastrointestinal and Abdominal Radiology CT Colonography Group Investigators. Effect of directed training on reader performance for CT colonography: multicenter study. Radiology 2007; 242: 152–61. [DOI] [PubMed] [Google Scholar]
  • 10.Lauridsen C, Lefere P, Gerke O, Gryspeerdt S. Effect of a tele-training programme on radiographers in the interpretation of CT colonography. Eur J Radiol 2012; 81: 851–6. doi: 10.1016/j.ejrad.2011.02.028 [DOI] [PubMed] [Google Scholar]
  • 11.Kakinuma R, Ashizawa K, Kobayashi T, Fukushima A, Hayashi H, Kondo T, et al. Comparison of sensitivity of lung nodule detection between radiologists and technologists on low-dose CT lung cancer screening images. Br J Radiol 2012; 85: e603–8. doi: 10.1259/bjr/75768386 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Ritchie AJ, Sanghera C, Jacobs C, Zhang W, Mayo J, Schmidt H, et al. ; Pan-Canadian Early Detection of Lung Cancer Study Group. Computer vision tool and technician as first reader of lung cancer screening CT scans. J Thorac Oncol 2016; 11: 709–17. DOI: 10.1016/j.jtho.2016.01.021 [DOI] [PubMed] [Google Scholar]
  • 13.Baldwin DR, Duffy SW, Wald NJ, Page R, Hansell DM, Field JK. UK Lung Screen (UKLS) nodule management protocol: modelling of a single screen randomised controlled trial of low-dose CT screening for lung cancer. Thorax 2011; 66: 308–13. doi: 10.1136/thx.2010.152066 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Field JK, Hansell DM, Duffy SW, Baldwin DR. CT screening for lung cancer: countdown to implementation. Lancet Oncol 2013; 14: e591–600. doi: 10.1016/S1470-2045(13)70293-6 [DOI] [PubMed] [Google Scholar]
  • 15.Field JK, Duffy SW, Baldwin DR, Whynes DK, Devaraj A, Brain KE, et al. UK Lung Cancer RCT Pilot Screening Trial: baseline findings from the screening arm provide evidence for the potential implementation of lung cancer screening. Thorax 2016; 71: 161–70. doi: 10.1136/thoraxjnl-2015-207140 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Cassidy A, Myles JP, van Tongeren M, Page RD, Liloglou T, Duffy SW, et al. The LLP risk model: an individual risk prediction model for lung cancer. Br J Cancer 2008; 98: 270–6. doi: 10.1038/sj.bjc.6604158 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Nair A, Gartland N, Barton B. Feasibility of training radiographers to detect nodules in CT lung cancer screening. Insights Imaging 2013; 4(Suppl. 1): S216. Updated 12 July 2016. Updated 31 March 2013. Available from: http://www.myesr.org/cms/website.php?id=/en/ecr_2013/book_of_abstracts.htm [Google Scholar]
  • 18.Horeweg N, van Rosmalen J, Heuvelmans MA, van der Aalst CM, Vliegenthart R, Scholten ET, et al. Lung cancer probability in patients with CT-detected pulmonary nodules: a prespecified analysis of data from the NELSON trial of low-dose CT screening. Lancet Oncol 2014; 15: 1332–41. doi: 10.1016/S1470-2045(14)70389-4 [DOI] [PubMed] [Google Scholar]
  • 19.Kakinuma R, Ohmatsu H, Kaneko M, Eguchi K, Naruke T, Nagai K, et al. Detection failures in spiral CT screening for lung cancer: analysis of CT findings. Radiology 1999; 212: 61–6. doi: 10.1148/radiology.212.1.r99jn1461 [DOI] [PubMed] [Google Scholar]
  • 20.White CS, Romney BM, Mason AC, Austin JH, Miller BH, Protopapas Z. Primary carcinoma of the lung overlooked at CT: analysis of findings in 14 patients. Radiology 1996; 199: 109–15. doi: 10.1148/radiology.199.1.8633131 [DOI] [PubMed] [Google Scholar]
  • 21.Marten K, Seyfarth T, Auer F, Wiener E, Grillhösl A, Obenauer S, et al. Computer-assisted detection of pulmonary nodules: performance evaluation of an expert knowledge-based detection system in consensus reading with experienced and inexperienced chest radiologists. Eur Radiol 2004; 14: 1930–8. doi: 10.1007/s00330-004-2389-y [DOI] [PubMed] [Google Scholar]
  • 22.Rubin GD, Lyo JK, Paik DS, Sherbondy AJ, Chow LC, Leung AN, et al. Pulmonary nodules on multi-detector row CT scans: performance comparison of radiologists and computer-aided detection. Radiology 2005; 234: 274–83. doi: 10.1148/radiol.2341040589 [DOI] [PubMed] [Google Scholar]
  • 23.Wormanns D, Ludwig K, Beyer F, Heindel W, Diederich S. Detection of pulmonary nodules at multirow-detector CT: effectiveness of double reading to improve sensitivity at standard-dose and low-dose chest CT. Eur Radiol 2005; 15: 14–22. doi: 10.1007/s00330-004-2527-6 [DOI] [PubMed] [Google Scholar]
  • 24.Beigelman-Aubry C, Raffy P, Yang W, Castellino RA, Grenier PA. Computer-aided detection of solid lung nodules on follow-up MDCT screening: evaluation of detection, tracking, and reading time. AJR Am J Roentgenol 2007; 189: 948–55. doi: 10.2214/AJR.07.2302 [DOI] [PubMed] [Google Scholar]
  • 25.Fraioli F, Bertoletti L, Napoli A, Pediconi F, Calabrese FA, Masciangelo R, et al. Computer-aided detection (CAD) in lung cancer screening at chest MDCT: ROC analysis of CAD versus radiologist performance. J Thorac Imaging 2007; 22: 241–6. doi: 10.1097/RTI.0b013e318033aae8 [DOI] [PubMed] [Google Scholar]
  • 26.Brochu B, Beigelman-Aubry C, Goldmard JL, Raffy P, Grenier PA, Lucidarme O. Computer-aided detection of lung nodules on thin collimation MDCT: impact on radiologists' performance. [In French.] J Radiol 2007; 88: 573–8. doi: 10.1016/S0221-0363(07)89857-X [DOI] [PubMed] [Google Scholar]
  • 27.Roos JE, Paik D, Olsen D, Liu EG, Chow LC, Leung AN, et al. Computer-aided detection (CAD) of lung nodules in CT scans: radiologist performance and reading time with incremental CAD assistance. Eur Radiol 2010; 20: 549–57. doi: 10.1007/s00330-009-1596-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Armato SG, III, Roberts RY, Kocherginsky M, Aberle DR, Kazerooni EA, Macmahon H, et al. Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of “truth”. Acad Radiol 2009; 16: 28–38. doi: 10.1016/j.acra.2008.05.022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Revel MP, Bissery A, Bienvenu M, Aycard L, Lefort C, Frija G. Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology 2004; 231: 453–8. doi: 10.1148/radiol.2312030167 [DOI] [PubMed] [Google Scholar]
  • 30.Li Q. Recent progress in computer-aided diagnosis of lung nodules on thin-section CT. Comput Med Imaging Graph 2007; 31: 248–57. doi: 10.1016/j.compmedimag.2007.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
