Abstract
Background:
Therapeutic decisions in the pediatric intensive care unit are made by pediatric intensivists (PI) based on their interpretation of chest radiographs before the formal interpretation by a pediatric radiologist (PR). This study was designed to determine the adequacy of chest radiograph interpretations by PI and their effects on patient care. The PI recorded their chest radiograph interpretations, documenting support devices and thoracic abnormalities. Concordance and discordance were determined by a pediatric pulmonologist, who was not involved in the care of the patients, by comparing the interpretations of the PI and PR. Clinically significant discordance was defined as an interpretation by the PR that differed from that of the PI and that may have required a therapeutic intervention.
Results:
The evaluation of 291 chest radiographs demonstrated an overall concordance rate of 82.5% (240 out of 291; P < 0.05). There was no significant difference in the ability of the PI to identify atelectasis, infiltrates, pleural effusions or airleaks (P > 0.05). Support devices were correctly identified in 100% of the cases. Discordant interpretations included 20 that were clinically significant, 17 that were clinically insignificant and 14 films that were over-interpreted by the PI. A chart review of the patients with discordant findings revealed only one finding that required an alteration in therapy.
Conclusions:
These findings demonstrate significant agreement between the interpretation of chest radiographs by PI and PR in selected clinical situations. These data support the current practice of the PI making therapeutic decisions based on their interpretations of chest radiographs.
Keywords: children, critical care, radiographic interpretation
Introduction
Chest radiographs are obtained in the pediatric intensive care unit to assess cardiopulmonary abnormalities, to evaluate acute clinical deterioration, and to determine the position of invasive life support devices such as central venous catheters and endotracheal tubes. Immediate interpretation of these chest radiographs is often necessary to assess whether further diagnostic or therapeutic interventions are necessary and to determine the proper position of invasive devices. The pediatric intensivists (PI) at the bedside are often the first physicians to interpret a radiograph and frequently base diagnostic and therapeutic interventions on their interpretations. With fewer than 30% of hospitals having a radiologist available in the hospital 24 h a day [1], a formal interpretation by the radiologist is not readily available until after most acute interventions have occurred. Accurate interpretation of chest radiographs by a PI when a radiologist is not immediately available is crucial for optimum patient care. Few centers have mechanisms to determine whether discrepancies exist between the radiologist and the treating physician or whether these discrepancies lead to inappropriate changes in therapy.
To our knowledge no previous studies have evaluated the accuracy with which board-certified PI interpret chest radiographs. This study was undertaken to determine the concordance of chest radiograph interpretation between PI and pediatric radiologists (PR) and to determine whether discordant interpretations resulted in adverse patient outcomes.
Methods
Population
All patients admitted to the pediatric intensive care unit (PICU) at Egleston Children's Hospital who received a chest radiograph from August 1995 to July 1996 were eligible for the study. Egleston is a 235-bed tertiary care children's hospital affiliated with an academic medical center (Emory University School of Medicine), admitting 1200 patients to the PICU per year. Following each chest radiograph, the patient's medical record number, age, diagnosis and the indication for the radiograph were recorded by a board-certified PI responsible for the patient's care. The PI reviewed each chest radiograph and documented in a log book any abnormal findings (eg atelectasis, infiltrates, effusions and airleaks), the position of invasive support devices (eg endotracheal tubes and central venous catheters) and any therapeutic interventions performed. Formal review and interpretation of all chest radiographs were independently performed daily by a board-certified or board-eligible staff PR who was not involved in the study. The PR did not have access to any preliminary reports, but was provided with a routine description of the clinical situation and the indication for the chest radiograph. The formal interpretation by the PR was retrospectively obtained from the patient's medical record and served as the `gold standard' for statistical analysis. The paired interpretations by the PI and PR were reviewed by a board-certified pediatric pulmonologist (PP). The PP was not involved in the care of the patients and was responsible for categorizing the two interpretations as concordant or discordant. Discordant interpretations were further assessed by retrospective chart review to determine whether subsequent alterations in patient management were made by the PI on the basis of the official interpretation of the radiograph.
An interpretation was considered concordant if the PI reading was consistent with the formal PR interpretation; concordance required correct identification by the PI of the presence and location of atelectasis, infiltrates, invasive support devices or other clinically significant findings. An interpretation was discordant if, in the opinion of the PP, the interpretation of the PI did not coincide with the formal interpretation by the PR. Discordant interpretations were further subdivided by the PP into clinically significant (those that may have altered therapy) or clinically insignificant (those requiring no change in patient management). Using the formal interpretation by the PR to determine accuracy, the data were analyzed for sensitivity, specificity and positive-predictive value (PPV). Sensitivity was taken as the proportion of positive (true-positive) findings identified by the PI in relation to the total number of positive findings identified by the PR (true-positives + false-negatives). Specificity was defined as the proportion of negative findings by the PI (true-negatives) to the total number of negative films read by the PR (true-negatives + false-positives). Finally, the proportion of positive findings by the PI (true-positives) to true radiographic findings by the radiologist (true-positives + false-positives) was taken as the PPV.
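These definitions can be expressed compactly; the following is a minimal illustrative sketch (a hypothetical helper, not the analysis code used in the study), with the PR interpretation serving as the reference standard:

```python
# Illustrative only: tp, fp, fn and tn are counts of findings classified
# against the PR interpretation as the reference standard.
def interpretation_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # true-positives among all PR-positive findings
    specificity = tn / (tn + fp)  # true-negatives among all PR-negative films
    ppv = tp / (tp + fp)          # PR-confirmed among all PI-positive findings
    return sensitivity, specificity, ppv

# Example call with hypothetical counts:
# interpretation_metrics(tp=90, fp=4, fn=7, tn=20)
```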
Data and variables of the two groups were compared using the two-sample test of binomial proportions for matched-pair data (McNemar's test). A value of P < 0.05 was considered statistically significant.
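This paired comparison can be reproduced with standard statistical software. A minimal sketch is shown below; it is not the analysis code used in the study, the statsmodels call is simply one widely available implementation of McNemar's test, and the counts are hypothetical:

```python
# McNemar's test on paired PI/PR readings (hypothetical counts, illustrative only).
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2 x 2 table: rows = PI reading (positive, negative),
# columns = PR reading (positive, negative).
table = [[90, 4],   # both positive | PI positive, PR negative
         [7, 20]]   # PI negative, PR positive | both negative

# The test compares only the two discordant cells (4 vs 7 here);
# exact=True uses the exact binomial distribution, suitable for small counts.
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, P = {result.pvalue:.3f}")
```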
Results
Nine hundred and thirty-nine chest radiographs were obtained for the 958 admissions to the PICU during the study period. A convenience sample of 291 chest radiographs was obtained from 161 patients with a median age of 3.5 years. Primary indications for obtaining a chest radiograph are listed in Table 1.
Table 1. Primary indications for obtaining a chest radiograph
Indication | Number |
Respiratory distress | 65 |
R/O pneumonia | 16 |
Cystic fibrosis | 3 |
R/O acute chest syndrome | 1 |
R/O congestive heart failure | 1 |
R/O pleural effusion or pneumothorax | 2 |
Follow-up | 144 |
Procedures | |
Intubation | 39 |
Line placement | 18 |
Chest tube placement | 1 |
Thoracentesis | 3 |
R/O, rule out.
Overall interpretation
There was concordance in 240 out of 291 (82.5%) of the chest radiographs interpreted by the PI (P < 0.05; Table 2). Twenty of the discordant interpretations were assessed as clinically significant and 17 as clinically insignificant; 14 were assessed as over-reads by the PI (Table 2). Using the radiologist's interpretation as the `gold standard', the overall sensitivity, specificity and PPV were 83.3%, 79.7% and 93.0%, respectively (Table 2).
Table 2. Overall interpretation of chest radiographs

Concordant | 240 |
PR and PI positive findings concordant | 185 |
PR and PI negative findings concordant | 55 |
Discordant | 51 |
PR ≠ PI, clinically significant | 20 |
PR ≠ PI, clinically insignificant | 17 |
Over-read (PR negative, PI positive) | 14 |
Sensitivity (%) | 83.3 |
Specificity (%) | 79.7 |
PPV (%) | 93.0 |
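The overall values in Table 2 can be reproduced directly from the counts shown, assuming that the 14 over-reads represent the false-positive readings and that the remaining 37 discordant readings (20 clinically significant plus 17 clinically insignificant) represent the false-negative readings: sensitivity = 185/(185 + 37) = 83.3%, specificity = 55/(55 + 14) = 79.7% and PPV = 185/(185 + 14) = 93.0%.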
Assessment of abnormal findings
There was no statistical difference in the ability of the PI to identify infiltrates, atelectasis, effusions or air leak syndrome (Fig 1). Considering each abnormal finding separately, the PI was able to accurately identify the presence of pulmonary infiltrates (sensitivity 85.1%, specificity 96.1%, PPV 96.6%) and atelectasis (sensitivity 84.6%, specificity 97%, PPV 88.7%). The identification of airleaks had a sensitivity of 64.3%, with 100% specificity and 100% PPV, and the identification of pleural effusions had a sensitivity of 66.6%, a specificity of 99.2% and a PPV of 90.9%.
Finally, we analyzed the ability of the PI to accurately determine the correct position of life-support devices. There was 100% concordance between the interpretations of the PI and PR regarding the appropriate or inappropriate position of endotracheal tubes and central venous catheters (sensitivity 100%, specificity 100% and PPV 100%; Figs 2 and 3).
Discordant findings
Fifty-one discordant findings were identified. Seventeen were classified as clinically insignificant and 20 were classified as clinically significant by the PP. Clinically significant discordant chest radiograph findings are listed in Table 3. On review of the medical records of the patients with clinically significant discordant chest radiograph findings, therapy was found to have been altered in one case. In this patient, the reaccumulation of a pneumothorax was not recognized by the PI. Identification of the pneumothorax by the PR and subsequent communication with the PI resulted in the patient's chest tube being returned to suction, although the patient was clinically unchanged.
Table 3. Clinically significant discordant chest radiograph findings

PI interpretation | Number of patients | PR interpretation | Number of patients | Management |
Normal or clear lung fields | 8 | Pneumonia/infiltrate | 3 | No change in therapy |
| | Atelectasis | 1 | No change in therapy |
| | Pleural effusion | 1 | No change in therapy |
| | Airleak | 2 | One chest tube placed to suction |
| | Fracture | 1 | No change in therapy |
Pneumonia or infiltrate | 6 | (+) Airleak | 3 | No change in therapy |
| | Multilobe infiltrates | 3 | No change in therapy |
Pleural effusion | 2 | (+) Atelectasis | 1 | No change in therapy |
| | (+) Small pneumothorax | 1 | No change in therapy |
Atelectasis | 3 | Pleural effusion | 1 | No change in therapy |
| | Multilobed | 1 | No change in therapy |
| | (+) Infiltrate | 1 | No change in therapy |
Hyperinflation | 1 | (+) Atelectasis | 1 | No change in therapy |
PI, pediatric intensivist; PR, pediatric radiologist; (+), PI interpretation plus an additional finding.
In the other clinically significant and insignificant (Table 4) discordant cases, changes in therapy were not felt to be indicated based on the interpretation of the PR and the patient's clinical condition. Of the 14 radiograph over-reads by the PI, seven had clinically insignificant areas of atelectasis that did not receive additional therapy. Additionally, five films were interpreted as having an infiltrate, but other clinical indicators were used to assist with therapeutic intervention (ie fever, leukocyte count and physical findings). Finally, the PI assessed two radiographs as having small pleural effusions that did not require intervention.
Table 4. Clinically insignificant discordant chest radiograph findings

PI interpretation | Number of patients | PR interpretation | Number of patients | Management |
Normal or no change | 7 | Increasing infiltrate | 3 | No change in therapy |
| | Small pleural effusion | 1 | No change in therapy |
| | RUQ density, R/O gall stones | 1 | No change in therapy |
| | Mild cardiomegaly | 1 | No change in therapy |
| | Small patchy RUL density | 1 | No change in therapy |
Infiltrate | 4 | (+) Pleural effusion | 1 | No change in therapy |
| | Bilateral infiltrates | 1 | No change in therapy |
| | (+) Healing clavicle fracture | 1 | No change in therapy |
| | RUL and RML infiltrates | 1 | No change in therapy |
Resolved pleural effusion | 3 | Small pleural effusion | 2 | No change in therapy |
| | (+) Atelectasis | 1 | No change in therapy |
Residual pneumothorax | 1 | No pneumothorax | 1 | No change in therapy |
Atelectasis LLL | 1 | LLL and RUL | 1 | No change in therapy |
RUL | 1 | RUL alveolar density | 1 | No change in therapy |
PI, pediatric intensivist; PR, pediatric radiologist; RUQ, right upper quadrant; R/O, rule out; RUL, right upper lobe; RML, right middle lobe; LLL, left lower lobe; (+), PI interpretation plus an additional finding.
Discussion
These findings demonstrate agreement between PI and PR at our institution in greater than 80% of the chest radiographs reviewed. Our findings are consistent with other studies reporting radiographic interpretation concordance rates for non-radiologists of 77-97% [2,3]. Reviews of pediatric radiographic interpretation typically report higher discordance rates (8.9-16.4%) because of the inherent difficulty in interpreting these radiographs given the wide variations in patient size, physical development and the variety of disease processes. Based on our data, PI and PR at our institution showed similar abilities to determine the proper position of central venous lines and endotracheal tubes on radiographs. As shown in Fig 1, PI appeared to have difficulty identifying small pneumothoraces and pleural effusions. Very few patients in our population developed airleaks, making it difficult to draw any statistical conclusions from the low concordance rate presented. Although we found a statistically significant number of discordant interpretations, discordance produced a change in therapy in < 1% (1 out of 161) of the enrolled patients. Discordance did not affect PICU length of stay, significantly alter clinical care or result in any adverse patient outcomes in this series.
Three potential reasons may have contributed to the discordance between PI and PR in this study: (1) interobserver variability; (2) physician bias; and (3) the ability to correlate clinical findings with radiographic findings. Variability between multiple observers reviewing a radiograph is recognized as a significant cause of discrepancy [3,4,5]. Studies have shown discrepancy rates as high as 30% amongst radiologists reviewing the same radiograph [3]. Interobserver variability should be considered when discrepancies occur between radiologists and non-radiologists. Fleisher et al [6] demonstrated that interobserver variability contributed to the discrepancies in radiograph interpretation seen between emergency room pediatricians and radiologists. These discrepancies frequently occurred while the reviewers were attempting to distinguish normal pulmonary markings from peribronchial thickening, or atelectasis from small infiltrates [6]. Although interobserver variability was not specifically addressed in this study, it may account for some of the discordant interpretations.
PR usually receive limited information about the patient's clinical condition and their interpretation is based primarily on the radiographic findings present. Because of this limited information, radiologists tend to over-read radiographs to minimize errors, to meet community standards and to avoid potential malpractice litigation [3]. In teaching institutions, as many as 11% of radiographs may be over-read by staff radiologists [3]. Furthermore, when radiologists have access to the treating physician's interpretation they are more likely to be biased by it. Kramer et al [7] reported that radiologists were more likely to overdiagnose pneumonia in febrile children when they knew the treating physician's interpretation of the chest radiograph. This bias was eliminated when the radiologist was only supplied with information on vital signs or physical findings. Although the effects of overinterpretation on patient outcome have not been determined, routine overinterpretation has the potential to result in inappropriate therapies and increased hospitalizations and procedures. Because we used the PR as the `gold standard', we could not determine how frequently over-interpretation by the PR occurred or how it may have affected patient care and outcome.
Finally, we theorize that discordance could be secondary to the treating PI having first-hand knowledge of both the patient's clinical and radiographic findings. The ability of the treating physician to correlate the patient's clinical findings with the radiographic findings may have resulted in under- or over-interpretation of the radiograph compared with that of the PR, leading to a higher discordance rate.
The PR were not aware of this study at the time of its performance, whereas all of the PI were involved in the study. It is therefore possible that the PI tended to be more attentive to detailed chest radiograph findings than in a typical ICU setting, potentially improving concordance rates in an unrealistic fashion.
These results suggest that board-certified PI in our institution are capable of making appropriate therapeutic decisions based on their interpretations of chest radiographs obtained in the PICU. The appropriateness of acute interventions based on the radiographic findings in our PICU thus reflects both the PI's interpretation of the chest radiograph and their assessment of the patient's clinical findings. Further studies in other PICUs will be necessary to validate our findings, as results may depend on the specific abilities and experience of the PI.
Future implications
In today's healthcare environment, capitation is stimulating the move towards reducing costs and eliminating the duplication of services. Is the routine review of all radiographs by a radiologist therefore still cost-effective or clinically necessary? In a study comparing the interpretation of plain orthopedic films, Turen et al [8] found no difference in interpretations between radiologists and orthopedists. They concluded that the review of orthopedic films by the radiologist did not influence patient care and that concurrent review by both radiologists and orthopedists was redundant, resulting in unnecessary expense to the patient [8]. In another study, comparing radiographic interpretations by pediatric emergency room physicians and radiologists at our institution, Simon et al [9] concluded that a substantial cost saving could be obtained by eliminating the radiologist's routine interpretation of all radiographs and consulting the radiologist only for difficult or high-risk cases.
From the data that we have presented, it could be concluded that the radiologist may not need to review all chest radiographs ordered in the PICU. Radiographs that are obtained solely for determining central venous line or endotracheal tube position may only need to be reviewed by the PI for the accuracy of placement. The charges associated with interpreting the radiograph could then be bundled with the charges associated with the procedure. If other studies show similar results, perhaps the current policies requiring radiologists' review of all chest radiographs, as in our institution, can be reconsidered. Additional cost reductions would be generated if, for example, radiologists were required to review the initial chest radiograph upon patient admission to the PICU and were then consulted for specific clinical questions by the PI on subsequent radiographs. For this change to be truly effective, additional emphasis on radiographic interpretation will be required during the training of PI. Future studies will be required to more accurately determine clinical predictors that could help clinicians decide which radiographs need further evaluation by a radiologist.
Acknowledgements
The authors would like to thank Jean Wright, MD, MBA, KS Anand, MBBS, James Fortenberry, MD, Robert Pettignano, MD, Jana Stockwell, MD, Critical Care, Egleston Pediatric Group, and Turner Ball, Department of Radiology, Emory University, for their assistance with the preparation of this manuscript.
References
1. O'Leary MR, Smith MS, O'Leary DS, et al. Application of clinical indicators in the emergency department. JAMA. 1989;262:3444–3447.
2. Walter RS. Radiology practices in emergency departments associated with pediatric residency training programs. Pediatr Emerg Care. 1995;11:78–81. doi: 10.1097/00006565-199504000-00005.
3. Warren JS, Lara K, Connor PD, Cantrell J, Hahn RG. Correlation of emergency department radiographs: results of a quality assurance review in an urban community hospital setting. J Am Board Fam Pract. 1993;6:255–259.
4. Zir LM, Miller SW, Dinsmore RE. Interobserver variability in coronary angiography. Circulation. 1976;53:627–632. doi: 10.1161/01.cir.53.4.627.
5. Mata AG, Rosengant RM. Interobserver variability in the radiographic diagnosis of necrotizing enterocolitis. Pediatrics. 1980;66:69–71.
6. Fleisher G, Ludwig S, McSorley M. Interpretation of pediatric X-ray films by emergency department pediatricians. Ann Emerg Med. 1983;12:153–158. doi: 10.1016/s0196-0644(83)80557-5.
7. Kramer MS, Roberts-Brauer R, Williams RL. Bias and `overcall' in interpreting chest radiographs in young febrile children. Pediatrics. 1992;90:11–13.
8. Turen CH, Mark JB, Bozman R. Comparative analysis of radiographic interpretation of orthopedic films: is there redundancy? J Trauma. 1995;39:720–721. doi: 10.1097/00005373-199510000-00019.
9. Simon HK, Khan NS, Nordenberg DF, Wright JA. Pediatric emergency physician interpretation of plain radiographs: is routine review by a radiologist necessary and cost-effective? Ann Emerg Med. 1996;27:295–298. doi: 10.1016/s0196-0644(96)70262-7.