The British Journal of Radiology. 2020 Jun 2;93:20200055. doi: 10.1259/bjr.20200055

A comparison of manually populated radiology information system digital radiographic data with electronic dose management systems

Nathan Dickinson 1, Matthew Dunn 1
PMCID: PMC7336068  PMID: 32462887

Abstract

Objective:

To assess the accuracy and agreement of radiology information system (RIS) kerma–area product (KAP) data with respect to automatically populated dose management system (DMS) data for digital radiography (DR).

Methods:

All adult radiographic examinations over 12 months were exported from the RIS and DMS at three centres. Examinations were matched by unique identifier fields, and grouped by examination type. Each centre’s RIS sample completeness was calculated, as was the percentage of the RIS examination KAP values within 5% of their DMS counterparts (used as an accuracy metric). For each centre, the percentage agreement between the RIS and DMS examination median KAP values was computed using a Bland–Altman analysis. At two centres, up to 42.5% of the RIS KAP unit entries were blank or invalid; corrections were attempted to improve data quality in these cases.

Results:

Statistically significant intersite variation was seen in RIS data accuracy and the agreement between the uncorrected RIS and DMS median KAP data, with a Bland–Altman bias of up to 11.1% (with a −31.7% to 53.9% 95% confidence interval) at one centre. Attempts to correct invalid KAP units increased accuracy but produced worse agreement at one centre, a slight improvement at another and no significant change in the third.

Conclusion:

The RIS data poorly represented the DMS data.

Advances in knowledge:

RIS KAP data are a poor surrogate for DMS data in DR. RIS data should only be used in patient dose surveys with an understanding of their limitations and potential inaccuracies.

Introduction

The surveying of patient examination dose data is a well-established process, with survey methods having developed over the decades since the Royal College of Radiologists and National Radiological Protection Board’s Patient Dose Reduction in Diagnostic Radiology1 document, and the subsequent National Protocol for Patient Dose Measurements in Diagnostic Radiology2 in the early 1990s. Regular local dose auditing (and the subsequent generation of diagnostic reference levels, or DRLs) is a legal requirement in the UK [most recently implemented via the Ionising Radiations (Medical Exposure) Regulations 20173].

For many years, data were collected manually by radiographers and submitted to medical physics departments for local analysis. The sample collated would have needed to be large enough to provide 10 patients per examination after removing obvious errors and patients with body masses that fell outside the 60–80 kg range, or that pulled the sample mean body mass outside the 65–75 kg range, to produce a representative sample of the “average” patient. Such methods suffered the risks of transcription errors, logistical issues with weighing patients, a significant training overhead for those collecting the data and potential “cherry picking,” to project the image of good local protocol optimisation. Furthermore, as the average UK adult body mass has increased over time,4 finding large numbers of adult patients in the required body mass range may be challenging at some centres.

To mitigate these issues, the use of large samples from electronic systems has been explored in several studies. When comparing radiology information system (RIS) and Digital Imaging and Communications in Medicine (DICOM) file header kV, mAs and kerma–area product (or KAP, following the naming convention in IAEA Report 457,5 also known as DAP in older literature) values for a month’s data in a single room, Wilde et al6 found that 17.1% of the examinations had KAP data in the RIS, while 91% were present in the DICOM sample (the missing 9% in the DICOM sample were attributed to a fault in the system that was corrected by a service engineer). The accuracy of the KAP data, defined as the percentage of the RIS sample that “matched” the DICOM sample, was 98% (though the criteria for a match, e.g. the threshold used to differentiate between rounding differences between the systems and erroneous data input, were not detailed). The mean difference between the RIS and DICOM KAP where one existed was only 0.8%, and the mean difference across all records was 0.04%.

The use of electronic records was extended by Charnock et al,7 who proposed a method for using large RIS samples to calculate local and regional entrance surface dose (ESD) DRLs following the process used for national patient dosimetry audits. Part of their study compared RIS and matching DICOM data (assumed by those authors to be the “real” patient ESD values) for 7,500 examinations from five sites over six months, using 11 digital X-ray rooms for abdomen, cervical spine, lumbar spine, pelvis and chest anteroposterior (AP) projections. Descriptive statistics were produced for both the RIS and DICOM data. Chauvenet’s criterion was applied to the RIS data, to examine whether removing outliers improved RIS and DICOM data equivalence where it was initially poor, or degraded equivalence where it was initially good. Equivalence was quantified using linear correlation coefficients between the matched samples. After rejecting outliers, correlation coefficients greater than 0.93 were obtained, and the RIS examination mean ESD values were within the 95% confidence intervals of the DICOM mean ESD values. This was taken as evidence that the RIS data were an accurate representation of the DICOM header data.

However, by the authors’ admission, there were several limitations to this RIS and DICOM data comparison. While the two methods of measuring the factors used to calculate patient ESD values gave a good correlation, correlation does not quantify the agreement between the two methods. Indeed, one may expect that the DICOM and RIS data would inherently be strongly related when looking at a matched sample; something would have needed to be significantly amiss in the population of one or both of the data sets to get a poor correlation. Furthermore, the RIS derived mean ESD values being within the 95% confidence limits of the mean from their DICOM counterparts again merely showed that the two methods were strongly related, as one would intuit.

In recent years, electronic dose management systems (DMSs), which automatically collate dose and examination data into large databases via modality performed procedure steps (MPPSs), radiation dose structured reports (RDSRs) or optical character recognition (OCR), have come to market. As DMS use has become more widespread, an easily mined resource for a quantitative analysis of manually populated RIS data accuracy and agreement with an automatically populated data set is now in place in many centres (provided RIS use has continued alongside the DMS and that the relevant fields of the DMS have been correctly populated). Furthermore, developments in how patient data are analysed make such a comparison timely. In Publication 135, the ICRP recommended the use of median averaging in patient dose surveys,8 given the significant effect relatively few outliers can have on mean averages. Indeed, one would hope that with big DMS data sets, minimal data cleaning would be required when using median values.

While fully digital radiology departments may now use data exclusively from a DMS in patient dose surveys, many centres still use a mix of digital and computed radiography (CR) equipment, where dose data from the latter modality will be stored in a manually populated RIS due to the inability to connect CR systems to a DMS. Further, there may be hospitals still exclusively using CR and manual RIS data entry (particularly internationally). A quantitative evaluation of the use of RIS data for patient dose analysis is therefore useful, to understand the limitations of RIS data when a DMS is not available.

The aim of this study is to:

  1. Quantify the completeness and accuracy of RIS KAP values with respect to DMS data across multiple centres in England (since the KAP gives the most accessible indication of the risk associated with a patient’s radiation exposure).

  2. Quantify the agreement between RIS and DMS median examination KAP at the centres studied.

  3. Ascertain the effect of correcting obvious errors in the RIS data, where corrections can easily be applied without the use of the DMS as a reference to identify, correct or exclude potentially erroneous data.

  4. Determine if systematic variations in RIS use between centres affect accuracy and agreement with the DMS data.

Methods and materials

Data exporting

12 months’ data were obtained from three centres in central England. For each centre, all radiographic and fluoroscopic examinations were exported from the local DMS (all centres used GE Dosewatch, GEMS, Milwaukee, WI). At the centres used, the DMS was populated either via MPPS or RDSR. When the DMS systems were set up, the mapping of KAP data to the DMS was validated by each centre, giving assurance that the DMS data used in this study provided an accurate value for the true examination KAP values. The unique examination identifier (accession number), examination KAP, unit of the KAP quantity and Exam Name were exported from the RIS (all centres used CRIS, Healthcare Software Solutions Ltd, UK) for all digital radiography examinations over the same time period.

It should be noted that fluoroscopic data were exported from the DMS only because the DMS provided a combined radiography and fluoroscopy export. Given that fluoroscopy units display the combined fluoroscopic and fluorographic imaging KAP on their KAP meters, and that auto-ranging KAP meters are routinely used in fluoroscopy, it was felt that sources of error in fluoroscopic RIS use may differ from those in digital radiography. Similarly, the CT data held in the DMS, which are only ever expressed in standard units, may again have yielded other modality-specific RIS use findings. To reduce such confounding factors between modalities, this study was restricted to digital radiography rooms only.

Data cleaning and matching

All data processing was carried out using the statistical computing programming language R.9 For each centre, all adult digital radiography examinations (defined as patients 18 years or over on the examination date) with a non-zero KAP and non-blank accession number were selected from the DMS data, providing the “reference” KAP data against which the RIS data could be matched and compared. Adult examinations were chosen to give a large sample size with a consistent image acquisition technique; by the same logic that fluoroscopy and CT data were not considered here, paediatric data, which are often segregated by patient size in dose surveys (potentially yielding multiple, relatively small samples), were excluded so as to minimise potential confounding factors.

RIS examination entries were matched to their DMS counterparts using the accession number. In the RIS data, some accession numbers were present twice, with two different KAP values. This was determined to be due to the examination (e.g. a pelvis examination) forming part of a wider investigation (e.g. a skeletal survey), with the KAP value for both the examination and wider investigation recorded against a single RIS accession number. Accession numbers that appeared twice in the RIS were therefore excluded, to avoid incorrectly matching whole investigation KAP values from the RIS data to examination KAP values in the DMS data.
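
The matching and duplicate-exclusion steps above can be sketched as follows. This is a minimal Python/pandas illustration with hypothetical column names and invented values; the published analysis was carried out in R.

```python
import pandas as pd

# Hypothetical column names and illustrative KAP values (mGy cm2).
dms = pd.DataFrame({
    "accession": ["A1", "A2", "A3", "A4"],
    "dms_kap":   [120.0, 45.2, 300.1, 88.0],
})
ris = pd.DataFrame({
    # A2 appears twice: once as the examination, once as the wider investigation.
    "accession": ["A1", "A2", "A2", "A4"],
    "ris_kap":   [118.5, 45.2, 910.0, 87.9],
})

# Exclude accession numbers that appear more than once in the RIS, to avoid
# matching whole-investigation KAP values to single-examination DMS values.
ris_unique = ris[~ris["accession"].duplicated(keep=False)]

# Inner join on the unique examination identifier (accession number).
matched = dms.merge(ris_unique, on="accession", how="inner")
print(sorted(matched["accession"]))  # the duplicated A2 is excluded
```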

The RIS examination name was used to group data by the type of examination carried out. To maximise the number of data points used per examination median, examinations of contralateral body parts were grouped together (e.g. left and right elbows became “elbows”), as were examinations with single named digits (e.g. index, middle, ring and little finger examinations became “fingers”), as there would be no practical difference in how the exposures were carried out. We did not limit the examination types considered here to those commonly featured in national DRL surveys (e.g. by considering only abdomen, cervical spine, lumbar spine, pelvis and chest AP projections7), preferring instead to examine the quality of adult digital radiography RIS data against a reference dataset at the largest possible scale.

While the DMS KAP data were always presented in a standardised unit (mGy cm2 in the case of radiography), each centre had a different approach to the assignment of the KAP unit value in the RIS. At centre 1, the KAP unit value was manually selected from a list of all possible radiation dosage units in the RIS by the operator. At centre 2, the KAP units available in the RIS were restricted to the KAP meter units in a given room, but still required manual selection. At centre 3, the KAP unit value was not populated; the operators only entered the numerical KAP value. When dose surveys were carried out, the medical physics department at centre 3 assigned the KAP unit to each room’s data using KAP meter QA records. These KAP unit values were therefore substituted into the RIS data for centre 3 for all data.

An inspection of the RIS data at centres 1 and 2 found entries with no KAP unit recorded, multiple KAP unit values in a single room, or KAP unit values that did not represent a KAP quantity (such as “mGy,” “MBq” or “AK”). Since the objective of this study was to validate the use of RIS data in the absence of a DMS, obvious KAP unit errors at centres 1 and 2 were not corrected by comparison with the DMS data. Therefore, in addition to using the “raw” RIS KAP data (which necessarily excluded entries with blank or non-KAP units, since no meaningful KAP can be discerned from the numerical value alone), two strategies were devised to attempt to correct erroneous KAP unit values and so increase the number of meaningful RIS KAP values for centres 1 and 2:

  1. to assume that the blank KAP unit entries should contain the most common RIS KAP unit for a given room

  2. to assume that all KAP units in a given room should be the most common KAP unit found in the RIS data (i.e. assuming that other KAP unit values were input in error by the operator)
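
The two strategies can be sketched as follows. This is an illustrative Python/pandas sketch with hypothetical column names and invented unit strings; the published analysis used R.

```python
import pandas as pd

# Illustrative RIS export: room identifier and the recorded KAP unit,
# with None for blank entries and "mGy" as a non-KAP unit entered in error.
ris = pd.DataFrame({
    "room":     ["R1", "R1", "R1", "R1", "R2", "R2"],
    "kap_unit": ["mGy cm2", "mGy cm2", None, "mGy", "cGy cm2", None],
})

# Most common non-blank KAP unit recorded in each room.
modal_unit = ris.groupby("room")["kap_unit"].agg(lambda u: u.mode().iat[0])

# Strategy 1: fill only the blank unit entries with the room's modal unit.
strategy1 = ris["kap_unit"].fillna(ris["room"].map(modal_unit))

# Strategy 2: assume every entry in a room should carry the modal unit
# (i.e. treat any other recorded unit as an operator input error).
strategy2 = ris["room"].map(modal_unit)
```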

The correction attempts described above were not applied at centre 3, since the KAP unit values in the RIS data for centre 3 were retrospectively populated for all data using QA records for each room’s displayed KAP unit. No easily identifiable errors, such as “O” instead of “0” or “,” instead of “.” were found in the RIS numerical KAP fields, so no corrections or substitutions were made to numerical KAP data in the RIS. All RIS entries with a numerical KAP value were included; no attempts to identify and reject outlying or anomalous data were made, following the rationale that the median KAP value for each examination should be insensitive to such data (assuming that data entry errors are randomly distributed). Similarly, cases with KAP values in the RIS much greater than the typical exposure for the examination carried out were not excluded, as in some cases very high or low RIS KAP data were correct, and in other cases they were incorrect. For example, some examinations were transcribed with an errant decimal place, giving an order of magnitude error in the RIS KAP, whereas in some cases the examination KAP was really ~10 times the typical value; identification of such individual cases without reference to the DMS data was not possible.

Table 1 details the number of examinations obtained following the cleaning and matching process described, and the number of rooms included at each centre. The matched data were then ready for an analysis of the RIS examination KAP accuracy and the agreement of the RIS median exam KAP with the DMS median exam KAP. Figure 1 shows a flowchart of this analysis.

Table 1.

The records exported from each centre

|  | Centre 1 | Centre 2 | Centre 3 |
| Number of adult, digital radiographic DMS examinations | 121,552 | 83,659 | 194,785 |
| Matching RIS examinations with valid KAP units prior to any correction (DMS percentage coverage) | 93,458 (76.9%) | 46,729 (55.9%) | 186,314 (96%) |
| Matching RIS examinations with valid KAP units once blank units were corrected (DMS percentage coverage) | 110,938 (91.3%) | 81,236 (97.1%) | – |
| Matching RIS examinations with valid KAP units once the commonest KAP unit was assumed for each room (DMS percentage coverage) | 110,949 (91.3%) | 82,277 (98.4%) | – |
| Number of digital radiographic rooms | 9 | 8 | 12 |

DMS, dose management system; KAP, kerma–area product; RIS, radiology information system.

Figure 1.

A flow chart showing the data reduction and analysis process employed in this study.

Completeness and accuracy quantification

For each centre, RIS sample completeness was quantified as the percentage of the matched DMS examination sample with a corresponding entry in the RIS. The percentage error for each RIS examination KAP value with respect to its DMS counterpart was calculated, and the percentage of each centre’s RIS sample with an error of less than or equal to 5% was quantified to give a metric of RIS data accuracy. This was also carried out for each RIS KAP unit correction method at centres 1 and 2.
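
As a concrete illustration, the two metrics could be computed as below. Variable names and values are invented for the sketch (the original analysis used R); ris_kap holds NaN where the RIS had no entry for a matched DMS examination.

```python
import numpy as np

dms_kap = np.array([120.0, 45.2, 300.1, 88.0, 10.0])   # matched DMS reference values
ris_kap = np.array([118.5, 45.2, np.nan, 87.9, 25.0])  # RIS values, NaN where absent

# Completeness: percentage of the matched DMS sample with a RIS entry.
present = ~np.isnan(ris_kap)
completeness = 100 * present.sum() / dms_kap.size

# Accuracy metric: percentage of RIS entries within 5% of the DMS value.
pct_error = np.abs(ris_kap[present] - dms_kap[present]) / dms_kap[present] * 100
accuracy = 100 * (pct_error <= 5).sum() / present.sum()

print(completeness, accuracy)  # 80.0 75.0
```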

Agreement quantification

For each centre, the median KAP using the RIS and DMS data for each examination was calculated, where 10 or more cases were carried out. To quantify the agreement between the RIS and DMS data, a percentage Bland–Altman analysis10 was used. The percentage rather than absolute Bland–Altman method was chosen, due to the large range in relative effect a given absolute difference in examination KAP values would have across all examinations.

The difference between the RIS and DMS median KAP values as a percentage of the DMS median KAP value was plotted against the mean of the RIS and DMS median KAP values at each centre. The bias (dm) was computed as the mean of the differences. If a bias of 0% was found, it would have signified that the median KAP values for a centre agreed across the entire sample, or that the over and under estimates in the RIS balanced so that there was no net disagreement. Alternatively, a positive bias would have signified a systematic overestimate in the RIS sample, while a negative bias would have occurred if the RIS data significantly under reported the median examination KAP values. The standard deviation (SD) was used to compute the standard error (SD/√n, where n was the number of examinations) on the bias; where the bias was separated from 0% by more than the standard error, it was deemed significant.11 The standard deviation was also used to obtain the 95% confidence limits on the bias (dm ± 1.96 SD). Even with a bias of 0%, a wide confidence interval would signify a large spread of differences, indicating that the agreement between the RIS and DMS median KAP for a given examination may still have been poor.

This was repeated for each of the RIS KAP unit correction methods for centres 1 and 2. Changes in the bias following a KAP unit correction were deemed significant when the two bias values being compared were separated by more than their standard errors. Similarly, the standard error on the confidence limits (quantified as √(3SD²/n)) was used to assess whether changes in confidence limits were significant when there was no change in the bias, following the methodology detailed by Giavarina.11
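
The percentage Bland–Altman statistics described above can be sketched as follows, using illustrative per-examination median KAP values (the original analysis used R; dm, SD and n follow the definitions in the text).

```python
import numpy as np

# Invented per-examination median KAP values from each system (mGy cm2).
ris_med = np.array([110.0, 48.0, 310.0, 90.0, 12.0])
dms_med = np.array([120.0, 45.2, 300.1, 88.0, 10.0])

# Percentage difference with respect to the DMS median; the mean of the two
# medians forms the x-axis of the Bland–Altman plot.
diff_pct = (ris_med - dms_med) / dms_med * 100
mean_kap = (ris_med + dms_med) / 2

n = diff_pct.size
bias = diff_pct.mean()                      # dm, the Bland–Altman bias
sd = diff_pct.std(ddof=1)                   # sample standard deviation
se_bias = sd / np.sqrt(n)                   # standard error on the bias
ci = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% confidence limits
se_ci = np.sqrt(3 * sd**2 / n)              # SE on each limit (Giavarina)

# The bias is deemed significant when it is separated from 0% by more
# than its standard error.
significant = abs(bias) > se_bias
```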

Results

The matched sample

Table 1 details the number of records at each centre following the methodology described in “Materials and methods,” after each processing stage.

Completeness, accuracy and agreement quantification

The completeness (i.e. the percentage of the DMS examinations with a RIS KAP value) and the percentage of each centre’s RIS KAP values within 5% of the DMS values are detailed in Table 2, while Figure 2 shows the percentage of the RIS sample with a KAP value within 5% of the corresponding DMS examination KAP value for each centre and, for centres 1 and 2, each correction method. The Bland–Altman results are shown in Table 3 for each centre and correction method (a representative Bland–Altman plot is shown in Figure 3). Figures 4–6 show how the Bland–Altman statistics varied by centre and, for centres 1 and 2, correction method.

Table 2.

The percentage completeness for the RIS sample, and the percentage of the sample with a KAP within 5% of the DMS KAP, for each centre and KAP unit correction (at centres 1 and 2)

| RIS KAP unit correction | Centre 1: completeness (%) | Centre 1: RIS KAP within 5% of DMS KAP (%) | Centre 2: completeness (%) | Centre 2: RIS KAP within 5% of DMS KAP (%) | Centre 3: completeness (%) | Centre 3: RIS KAP within 5% of DMS KAP (%) |
| None | 76.9 | 58.2 | 55.9 | 53.2 | 95.7 | 87.3 |
| Blank KAP units substituted | 91.3 | 69.1 | 97.1 | 91.3 | – | – |
| Commonest room KAP unit assumed | 91.3 | 66.1 | 98.4 | 91.9 | – | – |

DMS, dose management system; KAP, kerma–area product; RIS, radiology information system.

Figure 2.

The accuracy of the RIS KAP data (defined as the percentage of the RIS examination KAP values within 5% of the corresponding DMS examination KAP) for each centre and, for centres 1 and 2, KAP unit correction method. DMS, dose management system; KAP, kerma–area product; RIS, radiology information system.

Table 3.

The number of RIS examinations with ≥10 cases, the percentage bias and 95% confidence limits for the median examination KAP for each centre (and RIS KAP unit correction at centres 1 and 2), with standard errors (SE)

| RIS KAP unit correction | Centre 1: n | Centre 1: bias (SE; %) | Centre 1: 95% CI (SE; %) | Centre 2: n | Centre 2: bias (SE; %) | Centre 2: 95% CI (SE; %) | Centre 3: n | Centre 3: bias (SE; %) | Centre 3: 95% CI (SE; %) |
| Uncorrected RIS KAP | 49 | 11.1 (3.1) | −31.7 (5.0) to 53.9 (5.0) | 49 | 0.7 (2.3) | −30.5 (4.0) to 31.8 (4.0) | 63 | 5.1 (2.6) | −35.8 (5.0) to 46.1 (5.0) |
| Blank KAP units substituted | 49 | 17.9 (4.2) | −39.3 (7.0) to 75.1 (7.0) | 49 | 1.1 (0.8) | −10.3 (1.0) to 12.4 (1.0) | – | – | – |
| Commonest room KAP unit assumed | 49 | 35.1 (6.5) | −54.7 (11.0) to 125.0 (11.0) | 49 | 0.8 (0.9) | −11.3 (2.0) to 12.9 (2.0) | – | – | – |

CI, confidence interval; DMS, dose management system; KAP, kerma–area product; RIS, radiology information system; SE, standard error.

Figure 3.

A representative percentage Bland–Altman plot illustrating the examination median RIS and DMS KAP agreement for centre 3. 0% difference is indicated with a dotted line. The bias is shown with a solid line, while the 95% confidence limits are denoted by dashed lines. The standard errors on the bias and 95% confidence limits are shaded in grey. DMS, dose management system; KAP, kerma–area product; RIS, radiology information system.

Figure 4.

The change in bias and 95% confidence limits at centre one when RIS KAP unit corrections were applied. The standard error of each quantity is shaded in grey. A 0% bias is shown with a dotted line. KAP, kerma–area product; RIS, radiology information system.

Figure 5.

The change in bias and 95% confidence limits at centre two when RIS KAP unit corrections were applied. The plotting convention is the same as for Figure 4. KAP, kerma–area product; RIS, radiology information system.

Figure 6.

The bias and 95% confidence limits at centre 3; the plotting convention is the same as for Figure 4.

Centre 1

At centre 1, the completeness of the RIS was only 76.9% (Table 2). Only 58.2% of the RIS examinations had KAP values within 5% of the matched DMS data (Table 2). The agreement between the RIS and DMS room examination median KAP values indicated a significant overestimate in the RIS, with a bias of 11.1% (with a standard error of 3.1%) and 95% confidence limits of −31.7% to 53.9% (with a standard error of 5.0%).

While filling the blank RIS KAP unit entries with the most frequent RIS KAP unit in a given room increased the sample completeness to 91.3% and the “accuracy” metric employed here to 69.1%, the change in the Bland–Altman bias (now 17.9% with a standard error of 4.2%) was not deemed significant and the confidence limits widened (−39.3% to 75.1% with a standard error of 7.0%), indicating that the magnitude of the over and under estimation for individual examinations increased.

Only 13 examinations had an invalid KAP unit, so against a matched sample size of 110,949 the completeness when assuming each room’s most frequent RIS KAP unit for all examinations remained at the 91.3% achieved when correcting for blank RIS KAP units. With this correction, the percentage of the RIS KAP values within 5% of the DMS value dropped to 66.1%. As each room’s most frequent KAP unit was applied to all the RIS examinations in a given room, the drop in the accuracy metric is interpreted as being due to the overwriting of KAP unit values in the RIS that were correctly entered but were not the most frequent unit used, rather than due to the addition of 13 more records.

Furthermore, the bias increased significantly to 35.1% (standard error 6.5%) and the confidence interval widened again, with limits from −54.7% to 125.0% (standard error 11.0%). This was again interpreted as being due to entries in the RIS whose KAP units truly varied within a room (e.g. where an auto-ranging KAP meter was in use), which this substitution then made incorrect.

Centre 2

At centre 2, the RIS data completeness without any corrections was only 55.9%, and the percentage of the RIS KAP values within 5% of their DMS counterparts was 53.2%. However, the bias of only 0.7% (standard error 2.3%) was not significantly different from 0%, and the 95% confidence interval (with limits at −30.5% to 31.8%, with a standard error of 4.0%), while tighter than at centre 1, still indicated a large spread in agreement across the examinations.

Correcting for blank KAP units increased the completeness and the percentage of the RIS KAP values within 5% of the DMS KAP values to 97.1% and 91.3%, respectively. Again, applying this correction did not significantly change the bias on the median examination KAP agreement (1.1% with a standard error of 0.8%), but the confidence limits narrowed significantly (to −10.3% to 12.4%, with a standard error of 1.0%).

Correcting for invalid KAP units produced marginal gains in completeness (to 98.4%) and the percentage of RIS KAP values within 5% of the DMS values (to 91.9%). No significant change was seen in the agreement.

Centre 3

At centre 3, the completeness was 95.7%, and the percentage of the sample within 5% of the DMS value was 87.3%. The bias of 5.1% (with a standard error of 2.6%) was between the uncorrected bias values found at centres 1 and 2, indicating an overestimate less severe than at centre 1. The 95% confidence limits were also wide, at −35.8% to 46.1% (with a standard error of 5.0%). Further RIS KAP unit substitution was not carried out for this centre, since the same KAP unit was applied to every RIS examination in a given room.

Discussion

The findings of this study strongly suggest that the simplified use of median averaging to provide a robust, “outlier-proof” metric of examination dose in a patient dose survey cannot be confidently applied to manually entered RIS data, due to the large variation in RIS data accuracy and the disagreement seen between the RIS and DMS examination median KAP values across multiple centres.

At centre 1, radiography staff manually entered the numerical KAP value and unit for each examination. The need to manually enter all quantities maximised the potential for transcription error. At centre 2, the KAP units available in the RIS were restricted to those found on the equipment in each room, reducing the opportunity for a transcription error for this quantity. While it is tempting to therefore hypothesise that not having to freely enter the KAP unit made a significant difference to the agreement between the RIS and DMS KAP data, the method used to populate the RIS at centre 3 (i.e. entering only the numerical value and having KAP meter units recorded elsewhere) required no manual inputting of the KAP unit, and this centre had a significant bias and 95% confidence limits that were similar to those found at centre 1. This suggests that other local factors may be at play, such as staff training and their approach to the use of the RIS; the findings here suggest that an emphasis on staff training and diligent culture may be fruitful in improving the quality of RIS KAP data.

Furthermore, variation in whether examination KAP data were noted on paper and then entered into the RIS, entered directly into the RIS on a computer within sight of the KAP meter, or memorised from the KAP meter and entered at a distant RIS terminal, as well as interoperator variation, may have had an effect on the quality of the RIS KAP data. However, attempting to retrospectively disentangle such variation across 399,996 examinations (Table 1) taken in 29 rooms at three centres over the course of 12 months, by numerous radiographers, would have been a complex task with a strong chance of making errors, and was therefore not attempted here.

The fact that attempting to correct for incomplete or invalid KAP units made the agreement poorer at centre 1 and yielded little change at centre 2 indicates not only that such corrections were of limited value or actively harmful, but also that, without validating such a correction against a reference data set, one would have no way of knowing whether the RIS data were improving or deteriorating. Therefore, without a reference data source against which to benchmark local practice, great care should be exercised when assuming that RIS median examination KAP data for a given centre are accurate and agree with the true, underlying patient exposures, given that bias values as large as 11.1% (with a −31.7% to 53.9% 95% confidence interval) were seen in the uncorrected RIS data in this study.

This is at odds with the conclusions of earlier work.6,7 The discrepancy may stem from the significant differences between those studies’ approaches to measuring the agreement between the RIS and other electronic systems and the approach used here. The Bland–Altman methods used here better quantify the level of disagreement between two methods of measuring an underlying quantity (in this case the true median examination KAP values), rather than verifying a relationship between the quantities. Indeed, in spite of the bias and confidence intervals found for the RIS data without KAP unit corrections (Table 3), correlation coefficients of 0.98, 0.99 and 0.98 were still obtained when using linear regression between the uncorrected RIS data and the DMS data at centres 1, 2 and 3, respectively (e.g. Figure 7). The lack of outlier rejection here compared to Charnock et al7 is also a notable methodological difference. However, only the RIS accuracy and the agreement in the median examination KAP values between the two data sets were of interest here, so very high and low values were not in themselves problematic, as long as the values in both datasets were consistent with each other.

Figure 7.

When plotting the RIS median KAP values against the DMS median KAP values with no RIS KAP unit correction at centre 1, a correlation coefficient of 0.98 was obtained. Unity is shown with a dashed line. DMS, dose management system; KAP, kerma–area product; RIS, radiology information system.

If DMSs are to be routinely used in the longer term, great care must be exercised when mapping to DMS fields. While the DMS KAP field population had been validated at the centres in this study, it was found that the DMS local study description was mapped from various sources, such as the RIS code or the protocol selected on the radiographic control system. In some cases, this led to the examination description in the DMS being inconsistent (e.g. where an acquisition protocol named after a physical region was selected). For example, at centre 1, 17 examinations in the DMS had local study descriptions of “Upper extremity,” of which 8 were fingers, 4 were hands, 4 were wrists and 1 was a scaphoid in the RIS (after consolidating left- and right-sided examinations). Furthermore, the DMS local study description was empty in 583 examinations at centre 1 (0.5% of that centre’s sample), 116 examinations at centre 2 (0.1% of the sample) and 1,062 examinations at centre 3 (0.6% of the sample). This was accounted for in this study by grouping by the RIS examination names. While the number of affected records in this work was small compared to the overall sample sizes at the centres in this study (Table 1), differences in examination filtering methods yielded significant differences in mean dose–length product in Nicol et al’s assessment of DMS CT data,12 even with large samples, suggesting care must be taken when categorising examinations. Where a DMS can be used alongside a RIS, as in this study, one can benefit from both the automated KAP data in the DMS and the wider clinical data on the patient referral and examination in the RIS.

Conclusions

The RIS examination accuracy and median examination KAP agreement in matched RIS and DMS data for three centres were quantified. The accuracy and agreement varied significantly between centres, and attempts to correct obvious errors in the RIS data yielded either little benefit or exacerbated the disagreement between the RIS and DMS. The use of RIS data for median KAP evaluation should therefore be approached with caution; the use of an automated, properly validated DMS is advised as centres transition towards digital radiography in the longer term.

Footnotes

Acknowledgement: The authors would like to thank Leicester Radiation Safety Service and Sherwood Forest Hospitals NHS Trust for contributing data to this study. Andy Rogers, Paul Morgan and the referees of this paper are also thanked for useful feedback and discussions.

Contributor Information

Nathan Dickinson, Email: nathan.dickinson@nuh.nhs.uk.

Matthew Dunn, Email: Matthew.Dunn@nuh.nhs.uk.

REFERENCES

  • 1. Royal College of Radiologists and National Radiological Protection Board. Patient dose reduction in diagnostic radiology. Documents of the NRPB, Vol. 1. Chilton, UK: NRPB; 1990. p. 3.
  • 2. Dosimetry Working Party of the Institute of Physical Sciences in Medicine. National protocol for patient dose measurements in diagnostic radiology. Chilton, UK: National Radiological Protection Board; 1992.
  • 3. Department of Health. The Ionising Radiation (Medical Exposure) Regulations 2017. Statutory Instruments 2017 No. 1322. London, UK: The Stationery Office; 2017.
  • 4. Baker C. Obesity statistics. Briefing paper number 3336. London, UK: House of Commons Library; 2019.
  • 5. International Atomic Energy Agency. Dosimetry in diagnostic radiology: an international code of practice. Technical Reports Series No. 457. Vienna, Austria: IAEA; 2007.
  • 6. Wilde R, Charnock P, McDonald S, Moores BM. Qualifying the use of RIS data for patient dose by comparison with DICOM header data. Radiat Prot Dosimetry 2011; 147(1-2): 329–32. doi: 10.1093/rpd/ncr352
  • 7. Charnock P, Moores BM, Wilde R. Establishing local and regional DRLs by means of electronic radiographical X-ray examination records. Radiat Prot Dosimetry 2013; 157: 62–72. doi: 10.1093/rpd/nct125
  • 8. ICRP. Diagnostic reference levels in medical imaging. ICRP Publication 135. Ann ICRP 2017; 46.
  • 9. R Core Team. The R Project for Statistical Computing [homepage on the internet]. 2017. Available from: https://www.R-project.org/ [updated 2019 July 05; cited 2019 November 12].
  • 10. Altman DG, Bland JM. Measurement in medicine: the analysis of method comparison studies. The Statistician 1983; 32: 307–17. doi: 10.2307/2987937
  • 11. Giavarina D. Understanding Bland Altman analysis. Biochemia Medica 2015; 25: 141–51. doi: 10.11613/BM.2015.015
  • 12. Nicol RM, Wayte SC, Bridges AJ, Koller CJ. Experiences of using a commercial dose management system (GE DoseWatch) for CT examinations. Br J Radiol 2016; 89: 20150617. doi: 10.1259/bjr.20150617
