Abstract
Background
In low- and middle-income countries, chest radiographs are most frequently interpreted by non-radiologist clinicians.
Objective
We examined the reliability of chest radiograph interpretations performed by non-radiologist clinicians in Botswana and conducted an educational intervention aimed at improving chest radiograph interpretation accuracy among non-radiologist clinicians.
Materials and methods
We recruited non-radiologist clinicians at a referral hospital in Gaborone, Botswana, to interpret de-identified chest radiographs for children with clinical pneumonia. We compared their interpretations with those of two board-certified pediatric radiologists in the United States. We evaluated associations between level of medical training and the accuracy of chest radiograph findings between groups, using logistic regression and kappa statistics. We then developed an in-person training intervention led by a pediatric radiologist. We asked participants to interpret 20 radiographs before and immediately after the intervention, and we compared their responses to those of the facilitating radiologist. For both objectives, our primary outcome was the identification of primary endpoint pneumonia, defined by the World Health Organization as presence of endpoint consolidation or endpoint effusion.
Results
Twenty-two clinicians interpreted chest radiographs in the primary objective; there were no significant associations between level of training and correct identification of endpoint pneumonia; concordance between respondents and radiologists was moderate (κ=0.43). After the training intervention, participants improved agreement with the facilitating radiologist for endpoint pneumonia from fair to moderate (κ=0.34 to κ=0.49).
Conclusion
Non-radiologist clinicians in Botswana do not consistently identify key chest radiographic findings of pneumonia. A targeted training intervention might improve non-radiologist clinicians’ ability to interpret chest radiographs.
Keywords: Chest radiographs, Children, Pneumonia, Radiology, Training intervention
Introduction
Pneumonia is the leading cause of death in children younger than 5 years, and more than 99% of these deaths occur in low- and middle-income countries [1–5]. Accurate diagnosis of pneumonia, correct identification of its etiology, and implementation of appropriate preventive or treatment strategies are crucial to managing the burden of pneumonia [6]. While chest radiographs are the primary modality for diagnosing pneumonia, less than half of the world’s population is able to access basic radiology services [7, 8]. Additionally, when radiology services are available in low- and middle-income countries, chest radiographs are most frequently interpreted by clinicians without formal radiology training, which might impact their ability to accurately diagnose pneumonia [9]. Consolidation, other opacities and pleural effusion are useful in the risk stratification of children with pneumonia [10]. As such, the ability to accurately identify these radiographic findings could inform the management of children with pneumonia.
Studies suggest that pediatricians, emergency physicians and radiologists can reliably interpret pneumonia on chest radiographs, although agreement is highest among radiologists [11]. Inter-observer variability in chest radiograph interpretations might reflect a lack of standardized imaging criteria [12–14]. Standardized definitions improve identification of radiologic pneumonia [14]. Given the role of chest radiography in the diagnosis of pneumonia, many previous studies highlighted the need for training physicians to accurately interpret chest radiographs [15–17]. While literature suggests that non-radiologists can interpret chest radiographs [15–17], less is known about how the inter-observer reliability varies by level of training among non-radiologist clinicians in low- and middle-income countries. Moreover, few studies have evaluated the utility of educational interventions to improve the accuracy of chest radiograph interpretation by non-radiologist clinicians in these settings.
In Botswana, despite a high burden of lower-respiratory infections, few radiologists are available to interpret chest radiographs [9]. Our primary objective was to examine the reliability of chest radiograph interpretations performed by non-radiologist clinicians in Botswana, and to explore whether accuracy varied by level of medical training. Our secondary objective was to develop and assess an educational intervention to improve chest radiograph interpretation among medical students and non-radiologist clinicians.
Materials and methods
This study comprised two objectives. The primary objective assessed the interpretation of chest radiographs by non-radiologist clinicians; this informed the development of the secondary objective, a training intervention to increase the accuracy of chest radiograph interpretation among medical students and non-radiologist clinicians (Fig. 1).
Fig. 1.

Diagram of study objectives
Interpretation of chest radiographs by clinicians
We recruited non-radiologist clinicians (medical officers, pediatric residents and pediatricians) to interpret de-identified chest radiographs. Medical officers were defined as generalist physicians who had completed their internship year but had no further specialty training. These clinicians were recruited from Princess Marina Hospital and the University of Botswana in Gaborone, Botswana. Princess Marina Hospital is Botswana’s primary teaching hospital and is the largest referral hospital in the country, with more than 530 beds [18]. The chest radiographs used in this objective were selected from a prospective cohort study of children 1 month to 23 months of age with World Health Organization (WHO)-defined clinical pneumonia [9]. In total, 50 chest radiographs, 21 of which met criteria for endpoint pneumonia, were selected to represent a variety of chest radiographic findings. These images were downloaded as Digital Imaging and Communications in Medicine (DICOM) images from the hospital’s picture archiving and communication system (PACS). Non-radiologist clinicians were asked to complete a brief written questionnaire assessing their demographics and medical training. Each clinician then viewed 10 randomly selected chest radiographs and completed a standardized chest radiograph interpretation form assessing the image quality and radiographic findings of each chest radiograph (Table 1). Participants were informed that the chest radiographs were from children 1 month to 23 months of age who met WHO clinical criteria for pneumonia [9]. All images were also reviewed by two board-certified pediatric radiologists (18 (E.J.C.) and 9 (M.S.R.) years in practice) in the United States using the same interpretation form. Clinicians who did not complete the demographics questionnaire or who did not complete any chest radiograph interpretations were excluded from analyses.
Table 1.
Chest Radiograph Data Collection Form for the Primary Objective
| Question | Responses |
|---|---|
| Which images are available for review? | AP/LAT AP only LAT only |
| How would you categorize the image quality? | Adequate = features allow confident interpretation of “primary endpoint” as well as other infiltrates Suboptimal = features allow interpretation of primary endpoint, but not of other infiltrates or findings Uninterpretable = features of the image are not interpretable with respect to presence or absence of endpoint without additional images |
| Is there alveolar consolidation? Dense or fluffy opacity that occupies a portion or whole of a lobe or of the entire lung that may or may not contain air bronchograms. Atelectasis of an entire lobe that produces a dense opacity and a positive silhouette sign with the mediastinal border is considered to be consolidation | No Yes |
| Please indicate the primary pattern of alveolar consolidation. | Lobar opacification Bronchopneumonia Confluent nodular opacification |
| Is there an interstitial infiltrate? Lacy pattern involving both lungs featuring peribronchial thickening and multiple areas of atelectasis. It also includes minor patchy infiltrates that are not of sufficient magnitude to constitute primary endpoint consolidation and small areas of atelectasis that in children may be difficult to distinguish from consolidation | No Yes |
| Indicate predominant pattern. | Atelectasis Interstitial infiltrate |
| Are air bronchograms present? | No Yes |
| Is there hilar adenopathy? | No Yes |
| Do you appreciate any pleural effusion? Presence of fluid in the lateral pleural space between the lung and chest wall; in most cases, this will be seen at the costophrenic angle or as a layer of fluid adjacent to the lateral chest wall; this does not include fluid seen in the horizontal or oblique fissures | No Yes |
| Please indicate the size of the pleural effusion. | Trace (minimal blunting of the costophrenic angle) Small (height of effusion between “trace” and 25% of height of thoracic cavity) Moderate (height of effusion between 25% and 50% of height of thoracic cavity) Large (height of effusion >50% of height of thoracic cavity) |
AP anteroposterior, LAT lateral
Definitions
Our primary outcome was accurate identification of primary endpoint pneumonia, which is defined by the WHO as the presence of either endpoint consolidation or endpoint effusion [19]. Endpoint consolidation (alveolar consolidation in Table 1) was defined as “a dense opacity that may be a fluffy consolidation of a portion or whole of a lobe or of the entire lung, often containing air bronchograms and sometimes associated with pleural effusion.” Endpoint effusion (pleural effusion in Table 1) was defined as any fluid that is “in the lateral pleural space (and not just in the minor or oblique fissure) and is spatially associated with a pulmonary parenchymal opacity (including other opacities), or if the effusion obliterates enough of the hemithorax to obscure an opacity.” Other abnormalities participants were asked to evaluate included other (non-endpoint) opacities, air bronchograms and hilar adenopathy. Other opacities were defined by the WHO as “linear and patchy densities in a lacy pattern involving both lungs, featuring peribronchial thickening and multiple areas of atelectasis” [19]. Chest radiographs were classified as: (1) primary endpoint pneumonia, (2) other opacity/abnormality or (3) no significant pathology.
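As an illustration of how the three mutually exclusive categories above relate to the WHO endpoint definitions, the classification logic can be sketched in a few lines of Python (the function name and boolean inputs are our own illustration, not part of the study protocol):

```python
def classify_radiograph(endpoint_consolidation: bool,
                        endpoint_effusion: bool,
                        other_opacity: bool) -> str:
    """Three-category classification based on WHO endpoint definitions.

    Primary endpoint pneumonia takes precedence and is defined as the
    presence of either endpoint consolidation or endpoint effusion [19].
    """
    if endpoint_consolidation or endpoint_effusion:
        return "primary endpoint pneumonia"
    if other_opacity:
        return "other opacity/abnormality"
    return "no significant pathology"
```

For example, a radiograph with endpoint effusion but no consolidation is still classified as primary endpoint pneumonia, whereas one showing only interstitial (other) opacities falls into the second category.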
Educational intervention
For our secondary objective, we developed and implemented an educational intervention with the goal of improving chest radiograph interpretation skills among medical students and non-radiologist clinicians. The intervention consisted of a 45-minute in-person training session with PowerPoint (Microsoft, Redmond, WA) materials led by an experienced pediatric radiologist (S.A., 20 years in practice). Participants were medical students, non-radiologist clinicians and radiographers at Princess Marina Hospital. Some of these participants might also have taken part in the primary objective; responses from the two objectives could not be linked. The presenting radiologist designed the educational intervention with input from other members of the study team. The presentation materials were divided into four 7-min sessions that consisted of: (1) general concepts of interpreting chest radiographs, (2) reading anteroposterior (AP) chest radiographs and evaluating the lungs, (3) diagnosing bacterial pneumonia and pulmonary tuberculosis using AP chest radiographs and (4) diagnosing bacterial pneumonia and pulmonary tuberculosis using lateral chest radiographs.
Before the radiologist started the presentation, participants were asked to interpret 20 radiographs and identify whether the radiograph was normal or contained consolidation, other parenchymal abnormalities not qualifying as consolidation, lymphadenopathy, cavitation or effusion. Thirteen of the 20 radiographs met criteria for endpoint pneumonia. De-identified chest radiographs were provided by the facilitating radiologist and were different from the chest radiographs used in the primary objective. Participants had 45 s to interpret each radiograph. The same 20 radiographs, albeit in a different order, were presented to the participants for interpretation after the training intervention. Endpoint pneumonia, endpoint consolidation and endpoint effusion were classified using standardized WHO criteria as in the first objective. Final chest radiograph classification was (1) primary endpoint pneumonia, (2) other opacity/abnormality or (3) no significant pathology. “Other opacity/abnormality” included other parenchymal abnormalities not qualifying as consolidation, lymphadenopathy or cavitation.
Statistical analysis
Descriptive characteristics were generated for each group of participants in both objectives. Categorical variables were described by frequencies and proportions, and continuous variables were presented as medians and interquartile ranges. To determine whether there were statistically significant differences between included and excluded participants in the primary objective, we used two-sample t-tests to compare the mean number of years participants had worked in their current position and at their current level of training, and chi-square tests to compare the place where participants completed their undergraduate medical education and their current level of training. Overcall rates (instances in which participants diagnosed endpoint pneumonia when the radiologists did not) were also described by proportions for the primary objective as well as before and after the educational intervention.
We used univariate logistic regression to evaluate associations between level of training and the identification of endpoint consolidation, endpoint effusion and primary endpoint pneumonia on chest radiograph in the primary objective. To account for multiple testing across these three outcomes, we applied a Bonferroni correction, so the threshold for statistical significance in the univariate logistic regression analyses was P≤0.017 (0.05/3) rather than P≤0.05. Variability in interpretation of the chest radiographs within and between groups, as compared to the consensus interpretation of the two board-certified radiologists, was measured using the kappa (κ) statistic, with values indicating slight (0‒0.20), fair (0.21‒0.40), moderate (0.41‒0.60), substantial (0.61‒0.80) and almost perfect (0.81‒1.00) agreement [20].
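For readers less familiar with the κ statistic, the agreement calculation and the interpretation bands above can be sketched as follows (a minimal illustration in Python; the helper names are ours, and the study's analyses were performed in Stata):

```python
from collections import Counter

# Agreement bands used in the study [20]: each tuple is (upper bound, label)
BANDS = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
         (0.80, "substantial"), (1.00, "almost perfect")]

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence, from marginal frequencies
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in categories) / n**2
    if p_e == 1:  # both raters used a single category; agreement is trivial
        return 1.0
    return (p_o - p_e) / (1 - p_e)

def interpret(kappa):
    """Map a kappa value to its agreement band (values <=0.20 are 'slight')."""
    for upper, label in BANDS:
        if kappa <= upper:
            return label
    return "almost perfect"
```

Under these bands, the study's primary-objective result of κ=0.43 for endpoint pneumonia falls in the "moderate" range, while the pre-intervention κ=0.34 in the secondary objective is "fair".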
All study data for both the primary and secondary objectives were collected and managed using REDCap tools hosted at the Children’s Hospital of Philadelphia (CHOP) [21]. The University of Pennsylvania institutional review board, the Princess Marina Hospital ethics committee, and the Health and Research Development Committee of the Botswana Ministry of Health approved the study. Analyses were conducted using Stata version 15.0 (StataCorp, College Station, TX).
Results
Clinicians’ interpretation of chest radiographs
Thirty-three clinicians were approached for this study; 11 were excluded (Fig. 2), and the remaining 22 made up the final study sample (Table 2). Seven of these (32%) were medical officers, 6 (27%) were pediatric residents and 9 (41%) were pediatricians. More than half of the participants (55%) obtained their medical training in Africa, and only one clinician self-identified as having had formal radiology training outside of medical school. Study participants in the primary objective reviewed a total of 199 chest radiographs. These participants rated 130 chest radiographs (65%) as adequate, 63 (32%) as suboptimal and 6 (3%) as uninterpretable, whereas the radiologists rated all reviewed radiographs as adequate. Compared with excluded participants, included participants had worked fewer years in their current position (P=0.01) and had more recently reached their current level of training (P=0.01). There were no differences between included and excluded participants based on where they received their medical education (P=0.13) or their current level of training (P=0.09).
Fig. 2.

Flow diagram for inclusion of participants in the primary objective
Table 2.
Demographics of Study Population in Primary Objective
| Gender | |
| Male | 11 (50.0) |
| Female | 11 (50.0) |
| Years Since Completing Undergraduate Degree [Median (IQR)] | 5.5 (1-10) |
| Place of Medical Education | |
| Africa | 12 (54.6) |
| Europe | 3 (13.6) |
| North America | 3 (13.6) |
| Caribbean/Central/South America | 4 (18.2) |
| Formal Radiology Training | |
| Yes | 1 (4.5) |
| No | 21 (95.5) |
| Level of Training | |
| Medical Officer | 7 (31.8) |
| Resident | 6 (27.3) |
| Pediatrician | 9 (40.9) |
| Years in Current Position [Median (IQR)] | 1.75 (0.66-7) |
| Years Worked at Current Level of Training [Median (IQR)] | 3.50 (0.33-7) |
Concordance between respondents and board-certified radiologists was moderate for endpoint pneumonia (κ=0.43; Table 3). The overcall rate for endpoint pneumonia was 19%. Considered individually, endpoint consolidation and endpoint effusion showed fair agreement between participants and radiologists (κ=0.35 and κ=0.24, respectively). In contrast, participants performed poorly in identifying other abnormalities, with only slight agreement with radiologists for interstitial opacities (κ=0.19), air bronchograms (κ=0.14) and hilar adenopathy (κ=0.013).
Table 3.
Comparison of Mean Kappa Values for Endpoint Consolidation, Effusion, and Pneumonia among Participants by Level of Training Compared to Radiologists in Primary Objective (N=Completed Chest Radiograph Reads)
| Mean Kappa (95% Confidence Interval) | |
|---|---|---|
| Endpoint Consolidation | (N=198) | |
| Combined | 0.35 (0.23-0.48) | |
| Medical Officer | 0.20 (−0.02-0.41) | |
| Resident | 0.40 (0.16-0.64) | |
| Pediatrician | 0.41 (0.22-0.61) | |
| Endpoint Effusion | (N=198) | |
| Combined | 0.24 (0.07-0.41) | |
| Medical Officer | 0.20 (−0.12-0.52) | |
| Resident | 0.20 (−0.04-0.44) | |
| Pediatrician | 0.33 (0.07-0.60) | |
| Endpoint Pneumonia | (N=198) | |
| Combined | 0.43 (0.31-0.55) | |
| Medical Officer | 0.31 (0.10-0.52) | |
| Resident | 0.46 (0.22-0.69) | |
| Pediatrician | 0.47 (0.28-0.65) |
After accounting for multiple testing, level of training was not associated with the likelihood of correct identification of primary endpoint pneumonia (Table 4).
Table 4.
Logistic Regression for Correct Interpretation of Endpoint Consolidation, Effusion, and Pneumonia by Level of Training in Primary Objective
| Odds Ratio | P-Value* | 95% CI | |
|---|---|---|---|
| Endpoint Consolidation | 0.33 | ||
| Medical Officer | 0.48 | 0.05 | 0.23-0.99 |
| Resident | 0.92 | 0.83 | 0.44-1.94 |
| Pediatrician | 1.00 | Base | Base |
| Endpoint Effusion | 0.50 | ||
| Medical Officer | 0.63 | 0.29 | 0.27-1.49 |
| Resident | 0.68 | 0.37 | 0.29-1.59 |
| Pediatrician | 1.00 | Base | Base |
| Endpoint Pneumonia | 0.42 | ||
| Medical Officer | 0.62 | 0.21 | 0.30-1.30 |
| Resident | 0.95 | 0.90 | 0.45-2.03 |
| Pediatrician | 1.00 | Base | Base |
*Bonferroni-adjusted threshold for statistical significance from multiple testing: P≤0.017
We provide two examples of discordant chest radiograph interpretations, one in which the pediatric radiologists classified the chest radiographs as having endpoint pneumonia and at least one participant did not (Fig. 3), and another in which the pediatric radiologists classified the chest radiographs as not having endpoint pneumonia and at least one participant did (Fig. 4).
Fig. 3.


Discordant interpretation. Anteroposterior (a) and lateral (b) radiographs in a 20-month-old girl (reader blinded to age/gender) who met World Health Organization clinical criteria for pneumonia. Pediatric radiologists reported the images as having endpoint pneumonia but at least one participant in the primary objective reported them as not showing pneumonia
Fig. 4.


Discordant interpretation. Anteroposterior (a) and lateral (b) radiographs in a 7-month-old boy (reader blinded to age/gender) who met World Health Organization clinical criteria for pneumonia. Pediatric radiologists reported the images as not having endpoint pneumonia or opacities but at least one participant in the primary objective classified the images as having endpoint pneumonia
Educational intervention
Fifty-four participants attended the chest radiograph training intervention: 18 (33%) pediatric residents, 13 (24%) medical students, 9 (17%) pediatricians, 7 (13%) interns, 3 (6%) medical officers, 1 (2%) radiographer and 3 (6%) unspecified participants. Overall, participants improved agreement with the radiologist for endpoint pneumonia after the intervention (Table 5), moving from fair agreement with the teaching radiologist before the intervention (κ=0.34) to moderate agreement afterward (κ=0.49). The overcall rate for endpoint pneumonia decreased after the intervention (13% pre vs. 10% post). When each variable was considered in isolation, participants had fair agreement with the radiologist for endpoint consolidation (κ=0.23 pre vs. κ=0.37 post) and endpoint effusion (κ=0.18 pre vs. κ=0.26 post).
Table 5.
Mean Kappa Statistics for Interpretation of Endpoint Consolidation, Effusion, and Pneumonia Pre- and Post-Intervention by Level of Training Compared to the Radiologist in Secondary Objective (N=Completed Chest Radiograph Reads)
| Pre-Intervention (95% CI) | Post-Intervention (95% CI) | ||
|---|---|---|---|
| Endpoint Consolidation | N=922 | N=888 | |
| Combined | 0.23 (0.16-0.29) | 0.37 (0.30-0.43) | |
| Medical Student | 0.13 (0.01-0.26) | 0.30 (0.16-0.44) | |
| Intern | 0.24 (0.06-0.42) | 0.48 (0.31-0.65) | |
| Medical Officer | 0.30 (0.11-0.49) | 0.35 (0.13-0.57) | |
| Resident | 0.22 (0.11-0.34) | 0.35 (0.25-0.46) | |
| Pediatrician | 0.29 (0.15-0.44) | 0.38 (0.24-0.52) | |
| Endpoint Effusion | N=860 | N=796 | |
| Combined | 0.18 (0.11-0.25) | 0.26 (0.18-0.33) | |
| Medical Student | 0.12 (0.01-0.23) | 0.18 (0.07-0.28) | |
| Intern | 0.16 (−0.06-0.37) | 0.25 (0.02-0.47) | |
| Medical Officer | 0.14 (−0.20-0.48) | 0.22 (−0.10-0.54) | |
| Resident | 0.24 (0.08-0.39) | 0.37 (0.21-0.53) | |
| Pediatrician | 0.30 (0.11-0.49) | 0.33 (0.15-0.52) | |
| Endpoint Pneumonia | N=915 | N=870 | |
| Combined | 0.34 (0.27-0.40) | 0.49 (0.43-0.55) | |
| Medical Student | 0.26 (0.13-0.39) | 0.47 (0.34-0.61) | |
| Intern | 0.29 (0.11-0.47) | 0.49 (0.32-0.66) | |
| Medical Officer | 0.46 (0.26-0.66) | 0.49 (0.24-0.74) | |
| Resident | 0.36 (0.25-0.48) | 0.51 (0.41-0.61) | |
| Pediatrician | 0.36 (0.21-0.51) | 0.46 (0.32-0.60) |
The improved accuracy of interpretation of endpoint pneumonia was observed across all levels of training. Pediatric residents demonstrated the most improvement in interpretation of endpoint pneumonia, increasing from fair agreement prior to the intervention (κ=0.36) to moderate agreement after the intervention (κ=0.51). For endpoint consolidation, interns experienced the sharpest increase in accuracy, and improved from fair agreement (κ=0.24) before the intervention to moderate agreement (κ=0.48) after the intervention. Pediatric residents also showed the most improvement for endpoint effusion, although agreement remained fair before (κ=0.24) and after (κ=0.37) the intervention.
Participants also improved their accuracy of interpreting chest radiographs with no significant pathology and radiographs with other abnormalities. Most notably, participants improved their ability to detect normal chest radiographs from slight agreement before the intervention to moderate agreement after the intervention (κ=0.25 pre vs. κ=0.50 post). Participants’ ability to detect lymphadenopathy also increased moderately post-intervention (κ=0.24 pre vs. κ=0.38 post).
Discussion
Among a cohort of pediatric clinicians and trainees at a large teaching hospital in a low- to middle-income country, our results suggest that non-radiologist clinicians do not consistently identify key radiographic findings in children with pneumonia. Furthermore, concordance between participants and board-certified radiologists for a variety of radiographic features was poor across all levels of training.
Participants were asked to interpret a variety of chest radiographs and generally showed only slight to moderate agreement across radiologic findings, and there was only 65% concordance between participants and radiologists regarding the image quality of the reviewed radiographs. As in previous reports, there was greater inter-observer variability in detecting interstitial opacities, air bronchograms and hilar lymphadenopathy than in identifying endpoint consolidation, endpoint effusion and endpoint pneumonia [16, 22–25]. In our study, training level was not consistently associated with accurate identification of endpoint pneumonia, suggesting that clinical experience alone might not be sufficient to improve chest radiograph interpretation, perhaps because chest radiograph interpretation is not a standard component of clinical training.
Results from our educational intervention indicate that short, targeted trainings have the potential to achieve short-term improvements in the accuracy of chest radiograph interpretation. Specifically, after the intervention, we observed improved identification of endpoint pneumonia, other opacities, lymphadenopathy and chest radiographs with no significant pathology among non-radiologist clinicians. Participants showed a marked improvement in their ability to recognize normal chest radiographs and the absence of endpoint pneumonia, as evidenced by the decrease in the overcall rate for endpoint pneumonia after the educational intervention. The literature has likewise shown the potential for radiologic trainings to improve clinicians’ ability to interpret chest radiographs [26–29].
Increased radiology services are thus needed in low- and middle-income countries to help increase correct identification of radiologic pneumonia. These services might include short, targeted trainings such as the one used in this intervention, which might help improve clinician skills. Additionally, other diagnostic modalities that are less observer-dependent might help to decrease variability in interpretation of primary endpoint pneumonia. While we did not assess ultrasound (US) imaging in our study, US appears to be an easier diagnostic tool for non-radiologists to diagnose pneumonia and warrants further exploration in low- and middle-income countries [30, 31]. Shifts to digital platforms might offer a path toward sustainable learning when a radiologist is not readily available to conduct a training intervention in person. Digital platforms for non-radiologists (i.e. orthopedic surgeons, pediatricians, emergency medicine physicians) and radiologists alike have yielded significant improvements in radiograph interpretation skills [27–29]. This indicates an area for further research, especially around trainings that can be integrated into medical education curricula or continuing medical education opportunities in resource-limited settings. Because less than half of the world’s population is able to access basic radiology services, emphasis should also be placed on expanding access to radiology services and board-certified radiologists, as well as improving physicians’ experience and comfort with using radiology services [7, 8].
Educational literature suggests that short-term improvements might not be sustained over time [32]. However, given the short format of the training intervention used in this study, this could be integrated into routine trainings or departmental activities on an ongoing basis. The next steps might be to implement weekly or monthly trainings to improve and sustain skills for chest radiograph interpretation among clinicians at the hospital to evaluate whether the short-term gains observed in this study can be sustained.
Our study has several strengths, notably the use of benchmark data from the primary objective to develop a training intervention within a single tertiary-care hospital setting, and the use of standardized criteria for endpoint determinations. Our study also has several limitations. For the primary objective, we did not evaluate associations between level of training and correct interpretation of chest radiographs for medical students and interns because of logistical constraints, but we were able to compare clinicians based on completion of pediatric training. Additionally, included participants had worked significantly fewer years in their current positions and at their current level of training than excluded participants, so the aggregate level of agreement with radiologists in the primary objective might have been underestimated relative to a more experienced cohort. Responses for the training intervention were collected anonymously, so individual improvement could not be tracked for those who participated in both objectives; this limits the ability to evaluate sustained improvements over time. Additionally, the training intervention assessed short-term improvement in inter-observer reliability by using the same set of 20 chest radiographs before and after the intervention, and participants’ familiarity with these chest radiographs might have led to an overestimation of the improvement in interpretation accuracy. While we kept the training intervention short to maximize the chance that it could be repeated on a regular basis, the short duration also makes sustained improvement less likely. Because we intended to teach participants how to identify endpoint pneumonia in the educational intervention, we selected radiographs demonstrating significant pathology, which might have introduced bias toward detection of these findings.
Last, this study was carried out in one study site that is also a large teaching facility with a relatively young, inexperienced cohort of clinicians. As such, findings might not be generalizable to other hospitals in low- and middle-income countries with more experienced clinicians.
Conclusion
Our results suggest that non-radiologists in low- and middle-income countries are not able to consistently identify the key radiographic findings that enable risk stratification of children with suspected pneumonia, highlighting the need for more radiologists and improved diagnostic radiology services in these countries. Targeted training interventions might improve non-radiologist clinicians’ ability to interpret chest radiographs immediately after the training, although future studies should evaluate whether such improvements can be sustained over time.
Acknowledgments
This work was supported by the Thrasher Research Fund, the Children’s Hospital of Philadelphia, the Pincus Family Foundation, and through core services from the Penn Center for Acquired Immunodeficiency Syndrome (AIDS) Research, a program funded by the National Institutes of Health. Funding for this project was also made possible in part by a Collaborative Initiative for Paediatric HIV Education and Research (CIPHER) grant (to M.S.K.) from the International AIDS Society, supported by ViiV Healthcare. The views expressed in this publication do not necessarily reflect the official policies of the International AIDS Society or ViiV Healthcare. M.S.K. received financial support from the National Institutes of Health through the Duke Center for AIDS Research and a Career Development Award. A.P.S. and T.A.M. received financial support from the National Institutes of Health through the Penn Center for AIDS Research.
Footnotes
Conflicts of interest None
References
- 1. Walker CLF, Rudan I, Liu L et al. (2013) Global burden of childhood pneumonia and diarrhoea. Lancet 381:1405–1416
- 2. Chang AB, Ooi MH, Perera D, Grimwood K (2013) Improving the diagnosis, management, and outcomes of children with pneumonia: where are the gaps? Front Pediatr 1:29
- 3. Xavier-Souza G, Vilas-Boas AL, Fontoura M-SH et al. (2013) The inter-observer variation of chest radiograph reading in acute lower respiratory tract infection among children. Pediatr Pulmonol 48:464–469
- 4. Kelly MS, Smieja M, Luinstra K et al. (2015) Association of respiratory viruses with outcomes of severe childhood pneumonia in Botswana. PLoS One 10:e0126593
- 5. Scott JAG, Wonodi C, Moïsi JC et al. (2012) The definition of pneumonia, the assessment of severity, and clinical standardization in the Pneumonia Etiology Research for Child Health Study. Clin Infect Dis 54:S109–S116
- 6. Zar HJ, Andronikou S, Nicol MP (2017) Advances in the diagnosis of pneumonia in children. BMJ 358:j2739
- 7. Lynch T, Bialy L, Kellner JD et al. (2010) A systematic review on the diagnosis of pediatric bacterial pneumonia: when gold is bronze. PLoS One 5:e11989
- 8. Andronikou S, McHugh K, Abdurahman N et al. (2011) Paediatric radiology seen from Africa. Part I: providing diagnostic imaging to a young population. Pediatr Radiol 41:811–825
- 9. Kelly MS, Crotty EJ, Rattan MS et al. (2016) Chest radiographic findings and outcomes of pneumonia among children in Botswana. Pediatr Infect Dis J 35:257–262
- 10. Patel A, Mamtani M, Hibberd PL et al. (2008) Value of chest radiography in predicting treatment response in children aged 3-59 months with severe pneumonia. Int J Tuberc Lung Dis 12:1320–1326
- 11. McClain L, Hall M, Shah SS et al. (2014) Admission chest radiographs predict illness severity for children hospitalized with pneumonia. J Hosp Med 9:559–564
- 12. Cherian T, Mulholland EK, Carlin JB et al. (2005) Standardized interpretation of paediatric chest radiographs for the diagnosis of pneumonia in epidemiological studies. Bull World Health Organ 83:353–359
- 13. Zar HJ, Andronikou S (2015) Chest X-rays for screening of paediatric PTB: child selection and standardised radiological criteria are key. Int J Tuberc Lung Dis 19:1411
- 14. Ben Shimol S, Dagan R, Givon-Lavi N et al. (2012) Evaluation of the World Health Organization criteria for chest radiographs for pneumonia diagnosis in children. Eur J Pediatr 171:369–374
- 15. Levinsky Y, Mimouni FB, Fisher D, Ehrlichman M (2013) Chest radiography of acute paediatric lower respiratory infections: experience versus interobserver variation. Acta Paediatr 102:e310–e314
- 16. Bada C, Carreazo NY, Chalco JP, Huicho L (2007) Inter-observer agreement in interpreting chest X-rays on children with acute lower respiratory tract infections and concurrent wheezing. Sao Paulo Med J 125:150–154
- 17. Gatt ME, Spectre G, Paltiel O et al. (2003) Chest radiographs in the emergency department: is the radiologist really necessary? Postgrad Med J 79:214–217
- 18. The Electives Network: Princess Marina Hospital (2019) https://www.electives.net/hospital/73/preview. Accessed 16 Apr 2018
- 19. World Health Organization (2001) Standardization of interpretation of chest radiographs for the diagnosis of pneumonia in children. WHO, Geneva
- 20. Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33:159–174
- 21. Harris PA, Taylor R, Thielke R et al. (2009) Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 42:377–381
- 22. Correia MA, Mello MJG, Petribu NC et al. (2011) Agreement on radiological diagnosis of acute lower respiratory tract infection in children. J Trop Pediatr 57:204–207
- 23. Pauls S, Krüger S, Richter K et al. (2007) [Interobserver agreement in the assessment of pulmonary infiltrates on chest radiography in community-acquired pneumonia]. Rofo 179:1152–1158
- 24. Elemraid MA, Muller M, Spencer DA et al. (2014) Accuracy of the interpretation of chest radiographs for the diagnosis of paediatric pneumonia. PLoS One 9:e106051
- 25. Neuman MI, Lee EY, Bixby S et al. (2012) Variability in the interpretation of chest radiographs for the diagnosis of pneumonia in children. J Hosp Med 7:294–298
- 26. Seddon JA, Padayachee T, Du Plessis A-M et al. (2014) Teaching chest X-ray reading for child tuberculosis suspects. Int J Tuberc Lung Dis 18:763–769
- 27. Patel AB, Amin A, Sortey SZ et al. (2007) Impact of training on observer variation in chest radiographs of children with severe pneumonia. Indian Pediatr 44:675–681
- 28. Semakula-Katende NS, Andronikou S, Lucas S (2016) Digital platform for improving non-radiologists' and radiologists' interpretation of chest radiographs for suspected tuberculosis – a method for supporting task-shifting in developing countries. Pediatr Radiol 46:1384–1391
- 29. Buijze GA, Guitton TG, van Dijk CN et al. (2012) Training improves interobserver reliability for the diagnosis of scaphoid fracture displacement. Clin Orthop Relat Res 470:2029–2034
- 30. Shah VP, Tunik MG, Tsung JW (2013) Prospective evaluation of point-of-care ultrasonography for the diagnosis of pneumonia in children and young adults. JAMA Pediatr 167:119
- 31. Pereda MA, Chavez MA, Hooper-Miele CC et al. (2015) Lung ultrasound for the diagnosis of pneumonia in children: a meta-analysis. Pediatrics 135:714–722
- 32. Custers EJFM (2010) Long-term retention of basic science knowledge: a review study. Adv Health Sci Educ 15:109–128
