PLoS One. 2021 Nov 8;16(11):e0259667. doi: 10.1371/journal.pone.0259667

The impact of errors in medical certification on the accuracy of the underlying cause of death

U S H Gamage 1,#, Tim Adair 1,*,#, Lene Mikkelsen 1,#, Pasyodun Koralage Buddhika Mahesh 1,#, John Hart 1, Hafiz Chowdhury 1, Hang Li 1, Rohina Joshi 1,2,3, W M C K Senevirathna 4, H D N L Fernando 4, Deirdre McLaughlin 1, Alan D Lopez 5,#
Editor: Ritesh G Menezes
PMCID: PMC8575485  PMID: 34748575

Abstract

Background

Correct certification of cause of death by physicians (i.e. completing the medical certificate of cause of death or MCCOD) and correct coding according to International Classification of Diseases (ICD) rules are essential to produce quality mortality statistics to inform health policy. Despite clear guidelines, errors in medical certification are common. This study objectively measures the impact of different medical certification errors upon the selection of the underlying cause of death.

Methods

A sample of 1592 error-free MCCODs were selected from the 2017 United States multiple cause of death data. The ten most common types of errors in completing the MCCOD (according to published studies) were individually simulated on the error-free MCCODs. After each simulation, the MCCODs were coded using Iris automated mortality coding software. Chance-corrected concordance (CCC) was used to measure the impact of certification errors on the underlying cause of death. Weights for each error type and Socio-demographic Index (SDI) group (representing different mortality conditions) were calculated from the CCC and categorised (very high, high, medium and low) to describe their effect on cause of death accuracy.

Findings

The only very high impact error type was reporting an ill-defined condition as the underlying cause of death. High impact errors were reporting competing causes in Part 1 [of the death certificate] and illegibility; medium impact errors were reporting the underlying cause in Part 2 [of the death certificate], incorrect or absent time intervals, and reporting contributory causes in Part 1; low impact errors were multiple causes per line and incorrect sequence. There was only a small difference in error importance between SDI groups.

Conclusions

Reporting an ill-defined condition as the underlying cause of death can seriously affect the coding outcome, while other certification errors were mitigated through the correct application of mortality coding rules. Training of physicians in not reporting ill-defined conditions on the MCCOD and mortality coders in correct coding practices and using Iris should be important components of national strategies to improve cause of death data quality.

Introduction

Accurate cause of death statistics are a fundamental component of the evidence base to inform population health policy. They are dependent upon deaths that are correctly certified by a qualified medical practitioner using the World Health Organization’s (WHO) International Form of Medical Certificate of Cause of Death (MCCOD), and correctly coded by a trained coder adhering to the rules of the International Classification of Diseases–Version 10 (ICD-10) (S1 Text) [1]. The WHO-recommended medical certificate is comprised of two parts; Part 1 and Part 2. In Part 1, the certifier reports the logical sequence of events leading to death, including the underlying cause of death (UCOD), defined by the WHO as “the disease or injury which initiated the train of morbid events leading directly to death or the circumstances of the accident or violence which produced the fatal injury” [1]. Part 2 is used to report any other significant conditions that may have contributed to death, but that were not part of the morbid sequence initiated by the UCOD. The MCCOD is the primary source of cause of death statistics for much of the world, and thus the basis of interventions to strengthen health and health information systems [2].

High quality cause of death certification (i.e. completing the MCCOD) is strongly dependent on accurately recording the chain of morbid events leading to death in an acceptable sequence, legibly, and without use of nonstandard abbreviations, symptoms, modes of dying and other ill-defined causes that can make coding very difficult [2, 3]. Despite the very clear rules of ICD-10, however, errors in death certification are common and have been noted worldwide [1, 4–13]. The types of errors range from reporting multiple causes on a line of the MCCOD and using abbreviations, to the selection of ill-defined conditions for the UCOD [2]. The more commonly reported errors in the literature that can affect the correct selection of the cause of death are presented in S2 Text [2, 9, 11, 13–18]. Each of these errors can potentially adversely affect the selection of the correct UCOD, and thus, the policy utility of the resultant cause of death statistics.

Efforts to assess and improve the quality of medical certification, especially the training of physicians in accurate certification, should be informed by an evidence-based understanding of the relative importance of medical certification errors on cause of death statistics. Many studies have categorised certification errors according to their perceived potential for affecting the UCOD selection. A common classification method used by many researchers is to subjectively classify errors as being either major or minor, in terms of their perceived impact on diagnostic and coding accuracy [2, 5, 11, 12, 19]. Major errors usually include: (a) reporting multiple causes per line; (b) illegible handwriting; (c) incorrect causal sequence; (d) reporting ill-defined conditions as the underlying cause of death; (e) missing external cause for deaths due to accidents and violence; and (f) missing important information about neoplasms [2, 5, 11, 19, 20]. Minor errors typically include: (a) use of abbreviations; (b) missing time intervals; and (c) leaving blank lines in Part 1 of the medical certificate [2, 5, 20]. Some studies have described the presence of individual error types without categorising them into broad groups [10, 15, 21], while other researchers have studied the impact of errors on specific diseases such as cancer, cardiovascular diseases and sudden unexplained deaths, or according to their severity [6, 22–24]. Another study assessed the accuracy of death certificate entries compared with postmortem findings, where the researchers graded clinician errors in recording ‘Other significant conditions’, ‘causes of death’ and ‘manner of death’ into categories ranging from no errors to wrong manner of death [13]. However, this kind of error identification and categorization is only possible when the death certificate entries are validated with medical records or autopsy findings; the errors identified in other studies are those specific to completing the MCCOD. None of these studies used empirical evidence to objectively measure the relative importance of the different error types on the accuracy of the resultant cause of death data.

To address this knowledge gap and to improve understanding about the relative importance of medical certification errors, this study uses existing MCCODs to measure the extent to which each type of certification error affects the selection of the UCOD. The results are then used to develop weights to classify the relative importance of each error for countries with different epidemiological profiles. This objective information on the impact of certification errors could subsequently be applied to assess the quality of medical certification in countries and to inform prioritisation of training interventions for physicians.

Methods

We measured the importance of each error by taking a sample of existing error-free MCCODs and then individually simulating each error type. The error-free MCCODs are those where no medical certification errors were made by the physician in completing the MCCOD. It does not necessarily mean, however, that the UCOD is in fact true, when compared with an autopsy; issues such as diagnostic equipment, physician biases or training, information available for the physician, etc, can affect the accuracy of the UCOD even if there are no certification errors. The resultant underlying cause of death pattern after each error simulation was compared with the pattern from the error-free records using summary validation metrics.

The error-free MCCODs were obtained from the United States’ (US) Multiple Cause of Death data file for 2017, which is the only publicly available digitised database of completed MCCODs (i.e. listing every cause reported by the physician in both Part 1 and Part 2 of the death certificate, and identifying the line number and the underlying cause) (S1 Text) [25]. The 2017 dataset comprised 2,820,034 MCCODs, covering deaths that occurred in all settings: in hospital, at home, in public places, etc. First, we extracted MCCODs and formed three sample groups that replicated mortality conditions in low, middle and high Socio-demographic Index (SDI) populations. The SDI is an overall index, used in the Global Burden of Disease (GBD) study, of the level of development of a population, defined by a composite measure of income per capita, average educational attainment, and fertility rate prevailing in that population [26]. For each sample group, the percentage of deaths in each age group (0–4 years, 5–44 years, 45–64 years, 65–84 years and 85 years and above), sex, and the distribution of deaths across five broad cause categories (communicable/maternal/neonatal/nutritional conditions; cardiovascular diseases; cancers; other non-communicable diseases; and injuries and accidents) replicated that reported by the GBD study for each SDI population [27]. This process allowed assessment of the impact of each error type in populations with different epidemiological profiles. For example, in high SDI populations there is typically a higher proportion of deaths at older ages, and from cancer, than in low SDI populations; therefore, if a certain error type has a larger impact on diagnostic accuracy for cancer deaths at older ages, this error would be expected to be more important in high rather than low SDI populations.
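A minimal sketch of this stratified sampling step is shown below; it is not the study’s actual code, and the column names (age_group, sex, broad_cause) and the gbd_fractions input are illustrative assumptions.

```python
import pandas as pd

def sample_to_sdi_profile(mccods: pd.DataFrame, gbd_fractions: dict,
                          n_total: int, seed: int = 1) -> pd.DataFrame:
    """Draw MCCODs so the age-sex-broad-cause mix matches a GBD SDI profile.

    gbd_fractions maps (age_group, sex, broad_cause) -> fraction of deaths in
    that stratum for the target SDI population; fractions should sum to 1.
    """
    parts = []
    for (age_group, sex, broad_cause), fraction in gbd_fractions.items():
        stratum = mccods[(mccods["age_group"] == age_group)
                         & (mccods["sex"] == sex)
                         & (mccods["broad_cause"] == broad_cause)]
        n_stratum = min(round(fraction * n_total), len(stratum))
        if n_stratum > 0:
            parts.append(stratum.sample(n=n_stratum, random_state=seed))
    return pd.concat(parts, ignore_index=True)
```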

The sampled MCCODs were then manually screened by trained MCCOD certifiers to remove MCCODs with any type of certification error or those assigned a ‘garbage’ code (i.e. a code unusable for policy, as defined by the WHO’s list of ill-defined causes) as the UCOD [28]. This resulted in 1592 eligible MCCODs, a number of which were allocated to more than one SDI group to reduce the time needed for the manual screening for error-free MCCODs. The final sample comprised 952 deaths representative of high SDI populations, 971 of middle SDI populations, and 972 of low SDI populations (S1 Table).

Our study assessed the following types of certification errors, identified as being the most common according to published studies (S2 Text) [2, 5, 8, 29, 30]:

  1. Incorrect or clinically improbable sequencing of causes in Part 1 of the death certificate

  2. Reporting multiple causes on a single line of Part 1

  3. Reporting an ill-defined condition as the underlying cause of death in the lowest used line of Part 1

  4. Illegible entries (assigned no code or assigned code R99)

  5. Incorrect or absent time intervals

  6. Reporting competing causes in Part 1

  7. Reporting contributory causes in Part 1

  8. Reporting underlying causes in Part 2

  9. Unspecified neoplasms

  10. Poorly defined external cause of death.

To define how each error would be simulated, a set of business rules was adopted (S3 Text). These rules were written to replicate, as closely as possible, how these errors are made in practice by certifiers. For the illegibility error, given that the error-free database is digitised, we assumed that if a line was illegible the coder would either skip the illegible entry and not assign a code, or assign code R99 (other ill-defined or unknown causes of mortality). Other potential responses by coders include misreading an entry and assigning an incorrect ICD-10 code, or attempting to replace the entry with a plausible alternative based on the other causes reported. These two responses, however, were not feasible to simulate in our study.
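To make the business-rule idea concrete, the sketch below shows how the illegibility rule could be applied to a digitised record. The record structure (a dict with a "part1" list of code lists) and the even split between the two coder responses are assumptions for illustration only, not the study’s implementation.

```python
import random

def simulate_illegibility(mccod: dict, line_index: int, rng: random.Random) -> dict:
    """Make one Part 1 line 'illegible': the coder either skips the entry
    (assigns no code) or assigns R99, per the assumption described above."""
    simulated = dict(mccod)
    simulated["part1"] = [list(line) for line in mccod["part1"]]
    if rng.random() < 0.5:
        simulated["part1"][line_index] = []        # entry skipped, no code assigned
    else:
        simulated["part1"][line_index] = ["R99"]   # coded to an ill-defined/unknown cause
    return simulated

# Example: line (b) of Part 1 becomes illegible on a hypothetical record.
record = {"part1": [["J81"], ["I21.9"], ["I25.1"]], "part2": ["E11.9"]}
print(simulate_illegibility(record, line_index=1, rng=random.Random(0))["part1"])
```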

Once MCCODs were simulated with each error, the Iris automated mortality coding software (version 5.7) was applied to select the UCOD [31]. We used the ICD-10 Mortality Tabulation List 1 (103 causes) as the cause list to assess the impact of certification errors on the accuracy of diagnosis [28].

The primary metric used to measure the impact of certification errors was chance-corrected concordance (CCC) [32]. CCC is a measure of the accuracy of individual cause assignment, calculated as:

$$\mathrm{CCC} = \frac{\frac{TP}{TP+FN} - \frac{1}{N}}{1 - \frac{1}{N}} \qquad (1)$$

where TP is true positives (i.e. MCCODs where the underlying cause is the same before and after introduction of the error), FN is false negatives (i.e. MCCODs where the underlying cause changed after the introduction of the error), and N is the number of MCCODs. Please note that “true positives” is the conventional terminology used for this metric; it does not necessarily mean that the underlying cause of death of an error-free MCCOD is in fact true (when compared to an autopsy).

CCC ranges from 0 (i.e. all diagnoses are wrong) to 1 (i.e. the underlying cause does not change for any MCCODs); the lower the CCC, the greater the adverse impact of the error type on the accuracy of the underlying cause. CCC was calculated for each error type and SDI group (high, middle and low). Weights were calculated for each error type and SDI group as 1.0 minus CCC, with a higher weight implying an error type of greater importance.
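A minimal sketch of Eq (1) and the weight derivation, using hypothetical counts rather than study data, is:

```python
def chance_corrected_concordance(tp: int, fn: int, n: int) -> float:
    """CCC as in Eq (1). tp: MCCODs whose UCOD is unchanged after the error is
    introduced; fn: MCCODs whose UCOD changed; n: number of MCCODs."""
    raw_concordance = tp / (tp + fn)
    return (raw_concordance - 1.0 / n) / (1.0 - 1.0 / n)

def error_weight(ccc: float) -> float:
    """Weight of an error type: 1.0 minus CCC (higher weight = greater impact)."""
    return 1.0 - ccc

# Hypothetical example: 800 of 1000 MCCODs keep the same UCOD after an error.
ccc = chance_corrected_concordance(tp=800, fn=200, n=1000)
print(round(ccc, 3), round(error_weight(ccc), 3))  # CCC ~0.800, weight ~0.200
```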

An aggregate measure of the accuracy of cause-specific mortality fractions (CSMFs) in a population is CSMF accuracy, calculated as [32]:

$$\mathrm{CSMF\ Accuracy} = 1 - \frac{\sum_{j}\left|\mathrm{CSMF}_{j}^{\mathrm{true}} - \mathrm{CSMF}_{j}^{\mathrm{pred}}\right|}{2\left(1 - \mathrm{Minimum}\!\left(\mathrm{CSMF}_{j}^{\mathrm{true}}\right)\right)} \qquad (2)$$

where CSMF^true_j is the cause-specific mortality fraction for cause j from the error-free certificates and CSMF^pred_j is the corresponding fraction from the certificates containing a particular error. Again, we note that the “true” CSMF is simply the conventional terminology used for this metric and does not necessarily mean that the CSMFs from error-free MCCODs are in fact true when compared with an autopsy. CSMF Accuracy also ranges from 0 (i.e. the maximum possible error) to 1 (i.e. the certification errors had no impact on the cause of death distribution). We used CCC, a measure of individual cause assignment, as the primary metric for assessing the impact of certification errors rather than CSMF Accuracy, because at the population level the impact of certification errors on CSMFs can be masked by “swapping” deaths between causes.
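A minimal sketch of Eq (2), with a hypothetical three-cause example, is shown below; it assumes each CSMF is supplied as a dict of cause fractions summing to 1.

```python
def csmf_accuracy(csmf_true: dict, csmf_pred: dict) -> float:
    """CSMF Accuracy as in Eq (2): 1 minus the total absolute CSMF error scaled
    by its maximum possible value, 2 * (1 - smallest true CSMF)."""
    causes = set(csmf_true) | set(csmf_pred)
    total_abs_error = sum(abs(csmf_true.get(c, 0.0) - csmf_pred.get(c, 0.0))
                          for c in causes)
    return 1.0 - total_abs_error / (2.0 * (1.0 - min(csmf_true.values())))

# Hypothetical example with three broad causes.
error_free = {"IHD": 0.50, "Stroke": 0.30, "Injuries": 0.20}
with_error = {"IHD": 0.45, "Stroke": 0.35, "Injuries": 0.20}
print(round(csmf_accuracy(error_free, with_error), 3))  # 0.938
```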

We categorised error types into one of four categories to describe their effect on the accuracy of cause of death data: (1) Very high impact (weight 0.40 and higher), (2) High impact (weight 0.25 to less than 0.40), (3) Medium impact (weight 0.10 to less than 0.25), and (4) Low impact (weight less than 0.10). The category thresholds were chosen to identify key discontinuities in the distribution of weights (Fig 1). The external causes and neoplasms error types were not categorised because their importance in any given population is dependent on the proportion of deaths that are due to these causes.

Fig 1. Error weights of each error type and SDI category.


Very high impact (red): weight 0.40 and higher. High impact (orange): weight 0.25 to <0.40. Medium impact (yellow): weight 0.10 to <0.25. Low impact (green): weight <0.10.
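As a minimal sketch of the categorisation step (thresholds taken from the text above), a weight can be mapped to its impact category as follows:

```python
def impact_category(weight: float) -> str:
    """Map an error weight (1 - CCC) to the impact categories used in Fig 1."""
    if weight >= 0.40:
        return "Very high impact"
    if weight >= 0.25:
        return "High impact"
    if weight >= 0.10:
        return "Medium impact"
    return "Low impact"

# Example: the all-SDI weight for the ill-defined UCOD error reported in Table 1.
print(impact_category(0.573))  # "Very high impact"
```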

We demonstrated the application of the weights for assessing the quality of certification of individual MCCODs. Transformed weights for each error type were calculated as the proportion of the sum of weights across all relevant error types. For deaths not due to neoplasms or external causes, only the weights of the remaining eight error types were included in the calculation. For the calculation of transformed weights for neoplasm deaths, weights for the same eight error types were included, and a weight of 1.0 was applied to the unspecified neoplasms error, because the presence of this error means that the UCOD of a neoplasm death is certainly incorrect. Similarly, for deaths due to external causes, a weight of 1.0 was applied to the poorly defined external causes error. For each individual MCCOD, a composite error score was calculated as the sum of the transformed weights of each error type present, with a maximum score of 1.0 if all errors were present on the MCCOD and a minimum score of 0.0 if no errors were present.
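A minimal sketch of the transformed weights and the composite error score is given below; the dict-based representation and error-type labels are illustrative assumptions, not the study’s code.

```python
def transformed_weights(weights: dict, cause_group: str = "other") -> dict:
    """Express each error weight as a proportion of the sum of all relevant
    weights. For neoplasm or external-cause deaths, a weight of 1.0 is added
    for the corresponding cause-specific error, as described above."""
    relevant = dict(weights)  # weights for the eight generic error types
    if cause_group == "neoplasm":
        relevant["unspecified neoplasms"] = 1.0
    elif cause_group == "external":
        relevant["poorly defined external causes"] = 1.0
    total = sum(relevant.values())
    return {error: w / total for error, w in relevant.items()}

def composite_score(errors_present: list, t_weights: dict) -> float:
    """Composite error score of one MCCOD: sum of the transformed weights of
    the errors present (0.0 if none, 1.0 if all relevant errors are present)."""
    return sum(t_weights[error] for error in errors_present)
```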

Results

Table 1 presents the CCC and weights for each error type and SDI level. For all causes in our sample (high, middle and low SDI populations combined), the only very high impact error type was an ill-defined underlying cause of death, where over half of all MCCODs had their UCOD changed due to this error (weight 0.573). High impact errors across all MCCODs were reporting competing causes in Part 1 (0.354) and illegibility (0.334); medium impact errors were reporting the underlying cause in Part 2 (0.187), time interval errors (0.186) and reporting contributory causes in Part 1 (0.118); low impact errors were incorrect sequence (0.082) and multiple causes per line (0.074). Relatively low weights were calculated for the unspecified neoplasms (0.147) and poorly defined external causes (0.059) errors, both being influenced by the proportion of deaths in the sample due to these causes.

Table 1. Chance-corrected concordance (CCC) and weights by error type and SDI level.

| Error type | All: CCC | All: Weight | High SDI: CCC | High SDI: Weight | Middle SDI: CCC | Middle SDI: Weight | Low SDI: CCC | Low SDI: Weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Contributory cause in Part 1 | 0.882 | 0.118 | 0.870 | 0.130 | 0.872 | 0.128 | 0.900 | 0.100 |
| Underlying cause in Part 2 | 0.813 | 0.187 | 0.801 | 0.199 | 0.793 | 0.207 | 0.810 | 0.190 |
| Multiple causes per line | 0.926 | 0.074 | 0.916 | 0.084 | 0.920 | 0.080 | 0.925 | 0.075 |
| Incorrect sequence | 0.918 | 0.082 | 0.920 | 0.080 | 0.854 | 0.146 | 0.922 | 0.078 |
| Ill-defined UCOD | 0.427 | 0.573 | 0.321 | 0.679 | 0.354 | 0.646 | 0.511 | 0.489 |
| Competing causes in Part 1 | 0.646 | 0.354 | 0.666 | 0.334 | 0.666 | 0.334 | 0.609 | 0.391 |
| Illegibility (R99 or blank line) | 0.666 | 0.334 | 0.622 | 0.378 | 0.631 | 0.369 | 0.704 | 0.296 |
| Time interval | 0.814 | 0.186 | 0.788 | 0.212 | 0.788 | 0.212 | 0.821 | 0.179 |
| Poorly defined external causes* | 0.941 | 0.059 | 0.950 | 0.050 | 0.921 | 0.079 | 0.925 | 0.075 |
| Unspecified neoplasms* | 0.853 | 0.147 | 0.773 | 0.227 | 0.858 | 0.142 | 0.924 | 0.076 |

Very high impact (red): weight 0.40 and higher. High impact (orange): weight 0.25 to <0.40. Medium impact (yellow): weight 0.10 to <0.25. Low impact (green): weight <0.10.

* External causes and neoplasms are error types related only to specific causes of death, and their importance will depend on the percentage of deaths due to each of these causes. Hence, they are not included in the error impact categories.

The relative importance of the various error types remained broadly consistent across SDI levels. Ill-defined underlying cause of death was the most important error type at all SDI levels, with its weight being highest in the high SDI category and lowest in the low SDI category. The ill-defined error type also had a lower CCC when only one cause was reported on the death certificate than when more than one cause was reported (0.454), and this was particularly low in the high (0.138) and middle (0.200) SDI levels (S2 Table). The other notable difference across SDI categories was that the incorrect sequence error weight in middle SDI populations was almost double that of high and low SDI populations, and hence fell in the medium rather than low impact category. The unspecified neoplasms error was largest in high SDI populations, largely because of the typically higher proportion of neoplasm deaths in these populations. A similar observation was made for poorly defined external causes in middle and low SDI populations.

As shown in S3 Table, the weights based on CSMF Accuracy as the summary metric of impact were lower than those based on CCC, but the relative importance of the error types was largely consistent across the two impact measures. Of note, reporting competing causes in Part 1 was only the third most important error according to CSMF Accuracy, but was ranked second when CCC was used. Incorrect sequence was the least important error according to CSMF Accuracy rather than the second least important according to CCC, with its importance in middle SDI populations declining. These are very minor differences, however, and suggest that the weights are relatively robust across different impact metrics.

Transformed weights were calculated by expressing each weight as a proportion of the sum of all weights. Table 2 presents the transformed weights for each error type and SDI level. For deaths due to neoplasms or external causes, the transformed weights (transformed weights 2) for all other error types were lower than for other causes (transformed weights 1), because all transformed weights must sum to 1.0. As an example, if an MCCOD that specified a cardiovascular disease as the cause of death in a high SDI population had incorrect sequence (transformed weight 0.038), reporting contributory causes in Part 1 (0.062) and multiple causes per line (0.040) as errors, the composite score of that MCCOD would be 0.140. The same errors on an MCCOD for an external cause of death would result in a composite score of 0.095 (0.026 + 0.042 + 0.027); a short snippet reproducing this calculation follows Table 2.

Table 2. Weights and transformed weights by error type and SDI level.

| Error type | All: Weight | All: T/f weight 1 | All: T/f weight 2 | High SDI: Weight | High SDI: T/f weight 1 | High SDI: T/f weight 2 | Middle SDI: Weight | Middle SDI: T/f weight 1 | Middle SDI: T/f weight 2 | Low SDI: Weight | Low SDI: T/f weight 1 | Low SDI: T/f weight 2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Contributory causes in Part 1 | 0.118 | 0.062 | 0.041 | 0.130 | 0.062 | 0.042 | 0.128 | 0.060 | 0.041 | 0.100 | 0.056 | 0.036 |
| Underlying cause in Part 2 | 0.187 | 0.098 | 0.064 | 0.199 | 0.095 | 0.064 | 0.207 | 0.098 | 0.066 | 0.190 | 0.106 | 0.068 |
| Multiple causes per line | 0.074 | 0.039 | 0.025 | 0.084 | 0.040 | 0.027 | 0.080 | 0.038 | 0.026 | 0.075 | 0.042 | 0.027 |
| Incorrect sequence | 0.082 | 0.043 | 0.028 | 0.080 | 0.038 | 0.026 | 0.146 | 0.069 | 0.047 | 0.078 | 0.043 | 0.028 |
| Ill-defined UCOD | 0.573 | 0.300 | 0.197 | 0.679 | 0.324 | 0.219 | 0.646 | 0.304 | 0.207 | 0.489 | 0.272 | 0.175 |
| Competing causes in Part 1 | 0.354 | 0.186 | 0.122 | 0.334 | 0.159 | 0.108 | 0.334 | 0.157 | 0.107 | 0.391 | 0.217 | 0.140 |
| Illegibility (R99 or blank line) | 0.334 | 0.175 | 0.115 | 0.378 | 0.180 | 0.122 | 0.369 | 0.174 | 0.118 | 0.296 | 0.165 | 0.106 |
| Time interval | 0.186 | 0.097 | 0.064 | 0.212 | 0.101 | 0.068 | 0.212 | 0.100 | 0.068 | 0.179 | 0.100 | 0.064 |
| Poorly defined external causes / Unspecified neoplasms | 1.000 | – | 0.344 | 1.000 | – | 0.323 | 1.000 | – | 0.320 | 1.000 | – | 0.357 |

T/f weight 1: Transformed weights for MCCODs of deaths not due to neoplasms or external causes. T/f weight 2: Transformed weights for MCCODs of deaths due to either neoplasms or external causes.
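To make the arithmetic of the worked example above explicit, the relevant high SDI transformed weights (T/f weight 1) can simply be summed; this is a minimal check, not part of the study’s analysis.

```python
# High SDI T/f weight 1 values (Table 2) for the three errors in the example.
errors_on_certificate = {
    "incorrect sequence": 0.038,
    "contributory causes in Part 1": 0.062,
    "multiple causes per line": 0.040,
}
print(round(sum(errors_on_certificate.values()), 3))  # 0.140, the composite score
```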

Discussion

We believe that this is the first study to use empirical evidence to measure the relative impact of various common medical certification errors on the UCOD, generating important information to inform efforts to improve medical certification. Common certification errors were identified by searching the published literature [2, 6–9, 11, 13–18, 24, 30], business rules were developed to reflect how these errors occur in practice, and each error was simulated separately on 1592 error-free death certificates and then coded using Iris automated coding software. Weights were calculated for different SDI levels (low, middle, high) to measure the impact of the errors for countries at different stages of development and to enable the monitoring of trends in the quality of medical certification.

Overall, the impact of each error type upon the selection of the UCOD appears to be smaller than expected. This was particularly evident for errors such as incorrect sequence and multiple causes per line, which were previously considered ‘major errors’ by researchers [2, 5, 11, 19]. This could in part be due to the mitigating effect of cause of death coding rules. While in most cases the UCOD is the condition identified as initiating the causal sequence, international coding rules sometimes select a different condition as the UCOD, taking into account epidemiological and other public health considerations. The ICD-10 mortality coding rules are designed in such a way that they can partially mitigate the impact of possible certification errors on the selection of the UCOD [28].

The one error type consistently identified as having a very high impact was the reporting of an ill-defined condition as the underlying cause of death, most notably in high SDI populations and where only one cause was reported on the death certificate. This finding is to be expected given that an ill-defined underlying cause of death, by definition, should lead to a misdiagnosis of the true cause of death; something we observed for more than half of the deaths. Illegibility and reporting competing causes in Part 1 of the death certificate were the next most important errors (affecting the underlying cause in approximately 30 to 40% of MCCODs), followed by time interval errors and reporting contributory causes in Part 1. Multiple causes on the same line and specifying an incorrect causal sequence generally had a low impact, or, for the latter in middle SDI countries, a medium impact, on the selection of the true UCOD, even though ‘incorrect causal sequencing’ and ‘multiple causes in a single line’ have been considered major errors in the published literature [2, 19]. This empirical finding is perhaps surprising, and demonstrates the beneficial impact of coding rules SP4 and SP5 [28] on correctly specifying the underlying cause. Rule SP4 allows the coder to search for another possible sequence ending in the first-mentioned condition (the terminal condition) in Part 1 and to select the originating cause of that sequence as the tentative starting point. Further, when there is no sequence ending in the terminal condition, rule SP5 allows the coder to select the terminal condition itself as the tentative starting point (see the sketch following this paragraph). For most error types, there was only a small difference in impact across SDI levels. This demonstrates that the impact of error types is largely invariant to differences in the age, sex and cause composition of deaths in a population, and that their relative weights are consistent irrespective of the country to which they are applied.
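As a loose illustration only of the fallback logic just described (it is not the full ICD-10 selection procedure, and the sequences_ending_in helper is hypothetical), rules SP4 and SP5 can be sketched as:

```python
def tentative_starting_point(part1_conditions, sequences_ending_in):
    """Simplified sketch of SP4/SP5 as described above. part1_conditions lists
    the Part 1 entries with the terminal (first-mentioned) condition first;
    sequences_ending_in returns candidate cause chains ending in a given
    condition, each ordered from that condition back to its originating cause."""
    terminal_condition = part1_conditions[0]
    candidates = sequences_ending_in(part1_conditions, terminal_condition)
    if candidates:                   # SP4: another sequence ends in the terminal condition
        return candidates[0][-1]     # use that sequence's originating cause
    return terminal_condition        # SP5: fall back to the terminal condition itself
```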

For two errors specific to particular causes of death, neoplasms and external causes, the results need to be interpreted in isolation from other errors because their importance is dependent on the proportion of MCCODs with each of these causes. When neoplasms are insufficiently specified in terms of site, morphology, and behaviour, the resultant UCOD is always incorrect. The unspecified neoplasms error was largest in high SDI populations because the highest proportion of neoplasm deaths occurs in these populations. Similarly, the poorly defined external causes error was most prevalent in middle and low SDI populations, where the highest proportion of deaths due to this cause occur.

The comparative magnitude of the transformed weights derived in this study has significant implications for guiding efforts to improve the quality of medical certification practices. By measuring the relative importance of each certification error type and categorising them into groups based on their impact, the weights inform prioritisation of the errors that medical certification training should focus on to have the maximum impact on the quality of resultant cause of death data. The weights also allow development of a composite index of certification quality, which can be used to monitor trends and differentials in overall certification practices from a database of MCCODs. This index is a better measure of cause of death certification quality than indices based on judgmental error weights, because it is based on empirical evidence about the comparative effects of individual errors on the selection of the UCOD.

One limitation of this study is that the Iris automated coding software, although ensuring consistent application of mortality coding rules after each error simulation, is not widely used in many low- and middle-income countries [33]. Instead, coding is performed manually, a process which may be subject to incorrect or inconsistent application of coding rules. Because the correct application of coding rules mitigates, to some extent, the importance of errors, these errors may have a greater impact on UCOD selection in many settings than we have shown in this study. Additionally, the relatively high quality of the error-free certificates obtained from the US database, which is of higher quality than data in many low- and middle-income countries, may also have contributed to the lower than expected impact of errors. Taking the incorrect sequence error as an example, on a US death certificate the correct UCOD is very likely to be reported by the doctor somewhere on the certificate even if the sequence is incorrect, making it possible for Iris to select the correct UCOD despite the jumbled causes. In a setting where the quality of medical certification is lower and the correct UCOD may not be mentioned anywhere on the certificate, this may not occur. Hence, the impact of each error on the accuracy of cause of death statistics in low- and middle-income countries may be higher than presented in this study. However, the results are likely to remain relevant for countries with different levels of certification quality because we measured impact for the epidemiological profiles of low, middle and high SDI populations.

Another potential limitation of our study is that the errors were introduced individually to measure the impact of each separately, rather than assessing the combined effects of multiple errors, which is what is likely to occur in practice [2, 24]. Assessing the diagnostic impact of clusters of errors would, however, require a much larger dataset and would involve a vast number of possible combinations of two or more of the ten errors. For example, assessment of the ill-defined error might need to be conducted in combination with each of the other errors individually, with different combinations of the other errors, and with up to all eight other errors simultaneously; this would make disentangling the relative importance of each error a major methodological challenge that is beyond the scope of this study. Additionally, combining multiple errors would need to be done with some empirical understanding of the composition of error clusters, which are also likely to differ by cause. On the other hand, individual assessment of errors, as we have done, provides comparative information on the relative importance of various errors in a more robust way, because it is not subject to the impact of one error being reduced by the presence of another.

We also note that the use of a digitised database without time intervals reduced our ability to assess the illegibility and time interval errors. Nonetheless, we were able to simulate these errors to closely mimic how they appear in practice. Further, while ‘use of abbreviations’ is another commonly identified error type, it was not simulated in this study because of the difficulty in predicting coders’ interpretations of the non-standard abbreviations used by certifying physicians. We also measured error weights using samples of MCCODs with the age-sex and cause patterns of large populations. For analyses of MCCOD quality in settings with different cause of death profiles, such as specialist (e.g. cancer) hospitals, some transformed weights may be higher or lower than their actual impact (e.g. the unspecified neoplasms error would have a lower score than its true impact). Finally, the UCOD of the error-free MCCODs may not be the “true” UCOD when measured against a gold standard. However, this should not affect the relative impact of each type of certification error on the cause of death statistics.

Despite these limitations, the empirical evidence generated on the comparative importance of different certification errors on cause of death accuracy will have, we believe, important implications for guiding efforts to improve diagnostic accuracy. In particular, the finding that reporting an ill-defined condition as the underlying cause of death substantially and consistently has a high impact on the accuracy of cause of death statistics, particularly in high SDI populations, suggests clear priorities for physician training programs in all countries. In addition, medical certification quality assessment tools can now use these evidence-based findings to objectively assess the quality of medical certification training and practice.

Conclusion

Different certification errors have a variable impact on the selection of the underlying cause of death. The greatest impact results from reporting an ill-defined condition as the underlying cause of death. Illegibility of entries and reporting competing causes in Part 1 of the death certificate have a high impact on the selection of the underlying cause of death. Other errors, including reporting multiple causes on a single line, incorrect sequencing of causes, reporting contributory causes in Part 1, reporting underlying causes in Part 2, insufficiently specified neoplasms, and insufficiently specified external causes, have a medium or low impact on the selection of the underlying cause of death. The impact of certification errors on the selection of the underlying cause of death does not vary greatly between low, middle and high SDI populations. Although we were not able to use a database from a low- or middle-income country, our findings are generalizable to such countries because we sampled MCCODs with the cause, age and sex profiles of low, middle and high SDI countries. Correct application of mortality coding rules can substantially mitigate the effects of some certification errors. Training mortality coders in correct coding practices and in using automated coding software should, therefore, be an important component of national strategies to improve the quality of cause of death data, along with training physicians in correct certification practices.

Supporting information

S1 Text. International form of the Medical Certificate of Cause of Death (MCCOD).

(DOCX)

S2 Text. Certification errors.

(DOCX)

S3 Text. Business rules used.

(DOCX)

S1 Table. Age, sex and cause distribution of MCCODs in final sample (%), low, middle, high SDI countries and all.

(DOCX)

S2 Table. Chance-corrected concordance for ill-defined UCOD error, by number of causes reported on death certificate and SDI level.

(DOCX)

S3 Table. CSMF accuracy and weights by error type and SDI level.

(DOCX)

S1 Data

(CSV)

Acknowledgments

The authors wish to acknowledge the contribution of Sara Hudson for her assistance with the editing and review of the final format of the document.

Data Availability

All relevant data are within the manuscript and its Supporting information files.

Funding Statement

This study was funded under an award from Bloomberg Philanthropies and the Australian Department of Foreign Affairs and Trade to the University of Melbourne to support the Data for Health Initiative. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. WHO. International Statistical Classification of Diseases and Related Health Problems. 10th Revision. Vol. 2. Geneva: World Health Organization; 1993.
  • 2. Hart JD, Sorchik R, Bo KS, Chowdhury HR, Gamage S, Joshi R, et al. Improving medical certification of cause of death: effective strategies and approaches based on experiences from the Data for Health Initiative. BMC Medicine. 2020;18(1):74. doi: 10.1186/s12916-020-01519-8
  • 3. Maudsley G, Williams EM. Death certification by house officers and general practitioners—practice and performance. J Public Health Med. 1993;15(2):192–201.
  • 4. Aung E, Rao C, Walker S. Teaching cause-of-death certification: lessons from international experience. Postgrad Med J. 2010;86(1013):143–52. doi: 10.1136/pgmj.2009.089821
  • 5. McGivern L, Shulman L, Carney JK, Shapiro S, Bundock E. Death Certification Errors and the Effect on Mortality Statistics. Public Health Rep. 2017;132(6):669–75. doi: 10.1177/0033354917736514
  • 6. Jimenez-Cruz A, Leyva-Pacheco R, Bacardi-Gascon M. [Errors in the certification of deaths from cancer and the limitations for interpreting the site of origin]. Salud Publica Mex. 1993;35(5):487–93.
  • 7. Pritt BS, Hardin NJ, Richmond JA, Shapiro SL. Death Certification Errors at an Academic Institution. Archives of Pathology & Laboratory Medicine. 2005;129(11):1476–9. doi: 10.5858/2005-129-1476-DCEAAA
  • 8. Cina SJ, Selby DM, Clark B. Accuracy of death certification in two tertiary care military hospitals. Mil Med. 1999;164(12):897–9.
  • 9. Middleton D, Anderson R, Billingsly T, Virgil NBM, Wimberly Y, Lee R. Death certification: issues and interventions. Open J Prev Med. 2011;1(3):167–70.
  • 10. Katsakiori PF, Panagiotopoulou EC, Sakellaropoulos GC, Papazafiropoulou A, Kardara M. Errors in death certificates in a rural area of Greece. Rural Remote Health. 2007;7(4):822.
  • 11. Filippatos G, Andriopoulos P, Panoutsopoulos G, Zyga S, Souliotis K, Gennimata V, et al. The quality of death certification practice in Greece. Hippokratia. 2016;20(1):19–25.
  • 12. Brooks EG, Reed KD. Principles and Pitfalls: a Guide to Death Certification. Clinical Medicine & Research. 2015;13(2):74–82; quiz 3–4. doi: 10.3121/cmr.2015.1276
  • 13. Schuppener LM, Olson K, Brooks EG. Death certification: errors and interventions. Clinical Medicine & Research. 2020;18(1):21–6. doi: 10.3121/cmr.2019.1496
  • 14. Gamage USH, Mahesh PKB, Schnall J, Mikkelsen L, Hart JD, Chowdhury H, et al. Effectiveness of training interventions to improve quality of medical certification of cause of death: systematic review and meta-analysis. BMC Medicine. 2020;18(1):384. doi: 10.1186/s12916-020-01840-2
  • 15. Miki J, Rampatige R, Richards N, Adair T, Cortez-Escalante J, Vargas-Herrera J. Saving lives through certifying deaths: assessing the impact of two interventions to improve cause of death data in Perú. BMC Public Health. 2018;18(1):1329. doi: 10.1186/s12889-018-6264-1
  • 16. El-Nour AE, Mohammed A, Yousif I, Ali AH, Makki M. Evaluation of death certificates in the pediatric hospitals in Khartoum state during 2004. Sudan J Public Health. 2007;2(1):29–37.
  • 17. Madadin M, Alhumam AS, Bushulaybi NA, Alotaibi AR, Aldakhil HA, Alghamdi AY, et al. Common errors in writing the cause of death certificate in the Middle East. J Forensic Leg Med. 2019;68:101864. doi: 10.1016/j.jflm.2019.101864
  • 18. Pattaraarchachai J, Rao C, Polprasert W, Porapakkham Y, Pao-in W, Singwerathum N, et al. Cause-specific mortality patterns among hospital deaths in Thailand: validating routine death certification. Population Health Metrics. 2010;8(1):12. doi: 10.1186/1478-7954-8-12
  • 19. Myers KA, Farquhar DR. Improving the accuracy of death certification. CMAJ. 1998;158(10):1317–23.
  • 20. Azim A, Singh P, Bhatia P, Baronia AK, Gurjar M, Poddar B, et al. Impact of an educational intervention on errors in death certification: an observational study from the intensive care unit of a tertiary care teaching hospital. Journal of Anaesthesiology, Clinical Pharmacology. 2014;30(1):78–81. doi: 10.4103/0970-9185.125708
  • 21. Walker S RR, Wainiqolo I, Aumua A. Improving cause of death certification practices in the Pacific: findings from a pilot study of the World Health Organization web-based ICD training tool. Health Information Systems Knowledge Hub. University of Queensland; 2011.
  • 22. Lakkireddy DR, Gowda MS, Murray CW, Basarakodu KR, Vacek JL. Death certificate completion: how well are physicians trained and are cardiovascular causes overstated? Am J Med. 2004;117(7):492–8. doi: 10.1016/j.amjmed.2004.04.018
  • 23. Crandall LG, Lee JH, Friedman D, Lear K, Maloney K, Pinckard JK, et al. Evaluation of Concordance Between Original Death Certifications and an Expert Panel Process in the Determination of Sudden Unexplained Death in Childhood. JAMA Netw Open. 2020;3(10):e2023262. doi: 10.1001/jamanetworkopen.2020.23262
  • 24. Haque AS, Shamim K, Siddiqui NH, Irfan M, Khan JA. Death certificate completion skills of hospital physicians in a developing country. BMC Health Services Research. 2013;13(1):205. doi: 10.1186/1472-6963-13-205
  • 25. National Center for Health Statistics. Mortality Multiple Cause Files. Hyattsville, MD: National Center for Health Statistics; 2020.
  • 26. GBD 2017 Causes of Death Collaborators. Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392:1736–88. doi: 10.1016/S0140-6736(18)32203-7
  • 27. Global Burden of Disease Collaborative Network. Global Burden of Disease Study 2019 (GBD 2019) Socio-Demographic Index (SDI) 1950–2019. Seattle: Institute for Health Metrics and Evaluation (IHME); 2020.
  • 28. World Health Organization. International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10): 2016 Version. Geneva: World Health Organization; 2016.
  • 29. Selinger CP, Ellis RA, Harrington MG. A good death certificate: improved performance by simple educational measures. Postgraduate Medical Journal. 2007;83(978):285–6. doi: 10.1136/pgmj.2006.054833
  • 30. Selby DM, Clark B, Cina SJ. Accuracy of Death Certification in Two Tertiary Care Military Hospitals. Military Medicine. 1999;164(12):897–9.
  • 31. Iris User Reference Manual V5.7.0S2. Iris Institute; 2020.
  • 32. Murray CJ, Lozano R, Flaxman AD, Vahdatpour A, Lopez AD. Robust metrics for assessing the performance of different verbal autopsy cause assignment methods in validation studies. Population Health Metrics. 2011;9(1):28. doi: 10.1186/1478-7954-9-28
  • 33. Rooney C, Devis T. Mortality trends by cause of death in England and Wales 1980–94: the impact of introducing automated cause coding and related changes in 1993. Popul Trends. 1996;(86):29–35.

Decision Letter 0

Ritesh G Menezes

29 Jun 2021

PONE-D-21-14961

The impact of errors in medical certification on the diagnostic accuracy of the underlying cause of death

PLOS ONE

Dear Dr. Adair,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by 19-July-2021. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Prof. Ritesh G. Menezes, M.B.B.S., M.D., Diplomate N.B.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. 

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an extremely important topic.

Rejection causes:

Major:

1. Multiple recognized (recent and high impact) Major and Minor diagnosis errors systems have been published but ignored.

2. The basis of the study is the impact of the errors. This can NOT be calculated if the true cause is UNKNOWN. If there is a focus on reading recent (and old) publications the authors will understand that this is the reason data and statistics from current death certificates can NOT be used. Regardless what sophisticated statistics are used with "garbage" data, it will still result in erroneous results, discussion and conclusion (garbage in, garbage out).

Minor:

1. Grammar and typo errors (multiple). Many that would be highlighted and corrected by Word or Google Documents

2. Old references (except those by the authors). Self-referencing (while excluding others) gives a false impression of importance without recognizing others

3. Repetition of reference 14 and 15 (30 and 32)

Reviewer #2: This is a very interesting paper dealing with an essential aspect of public health monitoring: the impact of errors in medical certification on the diagnostic accuracy of the underlying cause of death. This is the first study I read about measuring the relative impact of various common medical certification errors on the UCOD and that gives important results to improve medical certification.

Following are my comments:

1. The error-free MCCODs were obtained from the United States’ (US) Multiple Cause of Death data file for 2017 and after that a sample of 1592 MCCODs were used. It would be important to know how many MCCODs were available in the US data file.

2. About the reporting of an ill-defined condition as the UCOD error, I would like to know if there was more impact in the MCCODs that had only one line written in part 1 or if there were other ill-defined cause of death in all the other lines of part 1.

3. About the illegibility error, you mentioned that if there were an illegible line, the coder would skip the illegible entry and not assign a code, how do you difference that from a blank space left by the medical certifier.

4. It would have been important to analyse the use of abbreviations, at least for the countries using Iris coding.

Reviewer #3: 1.A well-written, neatly structured manuscript

2.The manuscript is technically sound and the data does support the conclusions

3. Statistical analysis and explanation of results is clear

4. Conclusion needs to be redone to incorporate lot more information

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Janet Miki

Reviewer #3: Yes: Dr Kavya Rangaswamy

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Dr Kavya R reviewer comments.docx

PLoS One. 2021 Nov 8;16(11):e0259667. doi: 10.1371/journal.pone.0259667.r002

Author response to Decision Letter 0


15 Jul 2021

Responses to reviewers’ comments

Review Comments to the Author

We thank the Editor for providing us the opportunity to respond to the reviewers’ valuable comments. Please find our responses below in red. Line numbers refer to the clean version of the manuscript.

Reviewer #1: This is an extremely important topic.

We thank the reviewer for their comments. Please find our responses below. Line numbers refer to the tracked version of the manuscript.

We would like to clarify that our study assesses the impact on the accuracy of cause of death statistics of errors made by physicians in completing the Medical Certificate of Cause of Death, as according to the WHO standard guidelines in certification. We did not specifically assess the diagnostic accuracy of the diseases and conditions reported on the certificate against an autopsy. Hence, the “error-free” death certificates are those where the physician has made no errors in completing the death certificate, based on the WHO standard guidelines in certification. It is likely that many of these “error-free” death certificates would not have the “true” cause of death, as assessed when compared with an autopsy, for various reasons (e.g. diagnostic limitations, biases of the physician, etc). Our assessment of the accuracy of cause of death statistics compares, for each death in the database, the underlying cause of death (after the application of ICD coding rules) where there is each type of certification error versus the underlying cause of death where there is no error on the death certificate.

To reflect this, we have amended the title of the manuscript (removing the term “diagnostic”) and also clarified the specific nature of the study and methods used in these places:

- Abstract Lines 26-27 and Line 33-34

- Introduction Line 69

- Methods Lines 111-115, 171-173, 185-87

- Discussion Lines 359-61

Rejection causes:

Major:

1. Multiple recognized (recent and high impact) Major and Minor diagnosis errors systems have been published but ignored.

We based our categorisation of certification error types on those identified as the most common according to published studies. The reviewer has stated that there are other diagnosis errors systems, however the errors that we are focusing on are errors in completing the death certificate according to WHO guidelines and how these impact on the accuracy of causes of death (after the application of ICD coding rules), rather than errors specifically related to diagnostic accuracy of the diseases and conditions reported on the certificate. We note that a recent article by Schuppener et al. (Death certification: errors and interventions. Clinical Medicine & Research, 2020;18(1):21-6) identified “Wrong COD” as an error type (we have added a description of this study in Lines 92-99). However, this kind of error identification and categorization is only possible when the death certificate entries are validated with medical records or autopsy findings, and is not applicable to assessment of errors in completing the MCCOD.

2. The basis of the study is the impact of the errors. This can NOT be calculated if the true cause is UNKNOWN. If there is a focus on reading recent (and old) publications the authors will understand that this is the reason data and statistics from current death certificates can NOT be used. Regardless what sophisticated statistics are used with "garbage" data, it will still result in erroneous results, discussion and conclusion (garbage in, garbage out).

As mentioned above, our aim was to measure the impact of each different type of error in completing the death certificate (according to WHO guidelines) on the resultant cause of death (after application of ICD coding rules). We do this by comparing the cause of death of the death certificates with an error against those without an error, using the metric of chance-corrected concordance. However, we do recognise that we do not know the true cause of each death, which may be affected by diagnostic equipment, physician biases or training, the information available to the physician, etc. We do not claim that the error-free certificates have the ‘true’ cause of death (as compared with an autopsy), but rather the cause of death where none of the certification errors is present. We have clarified this issue in Lines 111-115 and 359-61.
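
To make this comparison concrete, the following is a minimal, hypothetical sketch of the per-certificate workflow described above. The helpers `code_underlying_cause` and `simulate_part2_error` are toy stand-ins introduced for illustration only; the study itself used Iris version 5.7 and the full ICD-10 selection rules, which are not reproduced here.

```python
# Hypothetical toy sketch: each error-free MCCOD is coded twice, once as
# reported and once with a single simulated certification error, and the two
# underlying causes are paired for later comparison. These helpers are gross
# simplifications, not the study's actual coding step (Iris 5.7).

from copy import deepcopy

def code_underlying_cause(cert):
    # Toy stand-in for the coding step: take the first condition entered on
    # the lowest used line of Part 1 as the underlying cause.
    used_lines = [line for line in cert["part1"] if line]
    return used_lines[-1][0]

def simulate_part2_error(cert):
    # Toy simulation of one error type: move the originating cause from the
    # lowest used line of Part 1 into Part 2.
    perturbed = deepcopy(cert)
    moved_line = perturbed["part1"].pop()
    perturbed["part2"].extend(moved_line)
    return perturbed

example = {
    "part1": [["acute myocardial infarction"],   # line (a)
              ["coronary atherosclerosis"]],     # line (b), lowest used line
    "part2": ["type 2 diabetes mellitus"],
}

reference_ucod = code_underlying_cause(example)
perturbed_ucod = code_underlying_cause(simulate_part2_error(example))
print(reference_ucod, "->", perturbed_ucod)
# coronary atherosclerosis -> acute myocardial infarction
```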

We do not agree, however, that data and statistics from current death certificates cannot be used. In fact, they are used widely, including by the WHO, the Global Burden of Disease Study and numerous national statistics offices. There are established methods to assess the extent and severity of garbage codes and reallocate these to non-garbage causes (see Naghavi M et al. Improving the quality of cause of death data for public health policy: are all 'garbage' codes equally problematic? BMC Med. 2020;18(1):55, and Naghavi M, et al. Algorithms for enhancing public health utility of national causes-of-death data. Popul Health Metr. 2010;8:9). We also recognise that it is not possible to conduct an autopsy on all deaths, so physician-completed death certificates are the primary source of information on causes of death. Finally, we recognise that continued efforts to improve the training of physicians in death certification are important to ensure that the data are as useful for policy as possible.

Minor:

1. Grammar and typo errors (multiple). Many that would be highlighted and corrected by Word or Google Documents

We have amended grammar and typo errors in multiple places.

2. Old references (except those by the authors). Self-referencing (while excluding others) gives a false impression of importance without recognizing others

We did not intend to create a false impression of importance. We have now added the more recent references of Schuppener et al., 2020 (reference #13, described in lines 92-97) and Brooks & Reed, 2015 (reference #12, Lines 73, 84). We note that other references we have used are relatively recent: McGivern 2017, Filippatos 2016, Miki 2018, Crandall 2020, Azim 2014.

3. Repetition of reference 14 and 15 (30 and 32)

Thank you for identifying these issues - we have deleted these duplicate references.

Reviewer #2: This is a very interesting paper dealing with an essential aspect of public health monitoring: the impact of errors in medical certification on the diagnostic accuracy of the underlying cause of death. This is the first study I have read that measures the relative impact of various common medical certification errors on the UCOD, and it gives important results to improve medical certification.

We thank the reviewer for providing valuable comments on the manuscript. Please find our responses below. Line numbers refer to the tracked version of the manuscript.

Following are my comments:

1. The error-free MCCODs were obtained from the United States’ (US) Multiple Cause of Death data file for 2017 and after that a sample of 1592 MCCODs were used. It would be important to know how many MCCODs were available in the US data file.

There were a total of 2,820,034 MCCODs in the US multiple cause of death data file in 2017 – we have now included this figure in Line 121. We selected a sample of MCCODs because each MCCOD needed to be individually screened for errors by trained MCCOD certifiers so that they could select the error-free certificates.

2. About the reporting of an ill-defined condition as the UCOD error, I would like to know if there was more impact in the MCCODs that had only one line written in part 1 or if there were other ill-defined cause of death in all the other lines of part 1.

The reviewer raises a good point. We conducted a further analysis of whether the ill-defined error had a greater impact on MCCODs with only one cause written in Part 1 of the death certificate compared with more than one cause. The table below shows that the chance-corrected concordance for the ill-defined error with only one cause reported is 0.353, compared with 0.454 when more than one cause is reported. Notably, the chance-corrected concordance for the ill-defined error with only one cause reported is particularly low in high SDI (0.138) and middle SDI (0.200) countries, with less of an impact in low SDI countries, where the problem of only one cause being reported may be more pronounced.

We have included this table in S2 Table and briefly described the results in Lines 229-231 and 286-287.

S2 Table. Chance-corrected concordance for ill-defined UCOD error, by number of causes reported on death certificate and SDI level

Number of causes reported      All     High SDI   Middle SDI   Low SDI
Ill-defined UCOD: 1 cause      0.353   0.138      0.200        0.457
Ill-defined UCOD: >1 cause     0.454   0.385      0.401        0.532
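
For context, the following is a minimal sketch of how chance-corrected concordance could be computed for such reference-versus-perturbed comparisons, assuming the standard definition used in cause-of-death validation work (per-cause sensitivity corrected for chance, averaged across causes). The function `chance_corrected_concordance` is illustrative only; the study's exact cause list and implementation are not reproduced here.

```python
# Minimal sketch of chance-corrected concordance (CCC), assuming the standard
# definition: for each cause j, CCC_j = (sensitivity_j - 1/N) / (1 - 1/N),
# with N the number of causes in the cause list, and overall CCC the
# unweighted mean across causes.

def chance_corrected_concordance(pairs, n_causes):
    """pairs: (reference_ucod, perturbed_ucod) tuples, one per death."""
    reference_causes = sorted({ref for ref, _ in pairs})
    per_cause_ccc = []
    for cause in reference_causes:
        subset = [(ref, per) for ref, per in pairs if ref == cause]
        sensitivity = sum(ref == per for ref, per in subset) / len(subset)
        per_cause_ccc.append((sensitivity - 1 / n_causes) / (1 - 1 / n_causes))
    return sum(per_cause_ccc) / len(per_cause_ccc)

# Toy usage with three reference causes and a three-cause list.
pairs = [("I21", "I21"), ("I21", "C34"), ("C34", "C34"), ("J18", "R99")]
print(round(chance_corrected_concordance(pairs, n_causes=3), 3))  # 0.25
```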

3. About the illegibility error, you mentioned that if there were an illegible line, the coder would skip the illegible entry and not assign a code; how do you differentiate that from a blank space left by the medical certifier?

In our simulation of the illegibility error, one assumption we made was that the coder would skip the illegible entry and not assign a code. In practice, this may be the same as if the certifier did leave a blank space. Leaving a blank space will generally cause a lesser impact on coding and the selection of the underlying cause than the removal of a condition from the certificate due to illegibility. Blank spaces are usually ignored during the coding process, but the inability to code due to illegibility means a removal of a condition which had been originally reported by the certifier. The latter causes a higher impact on the selection of underlying cause than blank spaces.

4. It would have been important to analyse the use of abbreviations, at least for the countries using Iris coding.

We agree that it would have been valuable to analyse the use of abbreviations; however, we were unable to do so because we were using a digitised database and we could not identify a plausible means of simulating this error. We have mentioned this issue already in Lines 353-355. However, any abbreviation can be included in the Iris dictionary using standardization tables, which can lessen the impact of abbreviations on cause of death statistics.
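
As a purely conceptual illustration of this kind of pre-coding standardization, abbreviation expansion might look like the sketch below. The `ABBREVIATIONS` mapping and `expand_abbreviations` function are hypothetical and do not reproduce the actual format of the Iris standardization tables.

```python
# Conceptual sketch only: expanding common certificate abbreviations before
# coding. The mapping and its application are hypothetical, not Iris's format.

import re

ABBREVIATIONS = {
    r"\bMI\b": "myocardial infarction",
    r"\bCOPD\b": "chronic obstructive pulmonary disease",
    r"\bCVA\b": "cerebrovascular accident",
}

def expand_abbreviations(line):
    # Replace each whole-word abbreviation with its full condition name.
    for pattern, expansion in ABBREVIATIONS.items():
        line = re.sub(pattern, expansion, line)
    return line

print(expand_abbreviations("Acute MI; known COPD"))
# Acute myocardial infarction; known chronic obstructive pulmonary disease
```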

Reviewer #3: 1.A well-written, neatly structured manuscript

2.The manuscript is technically sound and the data does support the conclusions

3. Statistical analysis and explanation of results is clear

4. Conclusion needs to be redone to incorporate lot more information

We thank the reviewer for their valuable comments.

1. Since the study uses error-free MCCODS from US database, generalizing the results across the globe (primarily in medium and low income countries) would be wrong. In fact, the study states that the impact of errors on selection of UCOD is lesser than expected. if the sample was MCCODs collected from diverse countries the results would be different altogether. The authors need to emphasize on this point lot more both in the discussion and conclusion

We would like to clarify how we sought to generalise the results from our study to middle and low income (Socio-Demographic Index or SDI) countries. In the study, we selected samples of MCCODs that represent the age, sex and cause profile of low, middle and high SDI countries, to provide results that estimate how each error type would impact cause of death accuracy across the different epidemiological profiles present in the world. For example, the Low SDI sample has a higher proportion of deaths from infectious diseases compared with the High SDI sample (see S1 Table). Unfortunately, we were unable to use a similar database from a low- or middle-income country. In the discussion, we have already stated in Lines 325-327 that “Additionally, the relatively high quality of the error-free certificates, obtained from a US database which is of higher quality than databases in many low- and middle-income countries, may also have played a part in the lower than expected impact of errors.” We have however now added in the Discussion in Lines 332-334 that the magnitude of the results may be different if we were able to use data from a low- or middle-income country. We have also mentioned this overall limitation in the Conclusion (Lines 381-383).

3. Though the list of errors in death certification are similar across countries, their impact on diagnostic accuracy of UCOD is varied especially in medium and low income countries. Emphasis on this differentiation is not evident in the manuscript.

We primarily based our interpretation of our results in terms of their differential impact on high, middle and low SDI countries from Table 1. This shows that there are considerable differences in the chance-corrected concordance of the ill-defined error, especially where only one cause is reported in the MCCOD (see new S2 Table and Lines 229-231). Other clear differences are found for the unspecified neoplasms error, which is already reported in Lines 234-235. However, for none of the other errors is there a clear difference in the Chance-corrected concordance or Weight (and impact category) between any of the SDI levels, so we believe that it is not necessary to report these.

4. As mentioned in the limitation of the study, the impact of combined errors is not studied. The author needs to articulate much further the advantages of studying individual errors

The reviewer makes a good point. As we have mentioned in Lines 338-349, the main advantages of analysing individual errors are that it allows individual measurement of the impact of each error without being biased by the presence of other errors, that we do not need to make assumptions about the composition or combination of errors on MCCODs, and that it does not require using a vast number of possible combinations of causes. We have now expanded on the last of these issues in Lines 340-345, specifically that:

“Assessing the diagnostic impact of clusters of errors would however require a much larger data set and would involve vast possible combinations of two or more of the nine errors. For example, assessment of the ill-defined error may need to be conducted with one of each of the other errors, different combinations of each of the other errors, up to all eight other errors; this would make disentangling the relative importance of each error a major methodological challenge that is beyond the scope of this study.”

5. Conclusion needs to be redone to incorporate lot more information

We are unsure which specific information should be added to the Conclusion, keeping in mind that it should be relatively brief. However, in the Conclusion we have now noted the overall limitation that the samples of MCCODs representing the age, sex and cause profile of low, middle and high SDI countries were selected from the US database, because no alternative database was available from low- or middle-income countries.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Janet Miki

Reviewer #3: Yes: Dr Kavya Rangaswamy

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Ritesh G Menezes

7 Oct 2021

PONE-D-21-14961R1

The impact of errors in medical certification on the accuracy of the underlying cause of death

PLOS ONE

Dear Dr. Adair,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by 14-October-2021. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A 'Response to Reviewers' letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Prof. Ritesh G. Menezes, M.B.B.S., M.D., Diplomate N.B.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #4: All comments have been addressed

Reviewer #5: (No Response)

Reviewer #6: All comments have been addressed

********** 

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #4: Yes

Reviewer #5: Partly

Reviewer #6: Yes

********** 

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #4: Yes

Reviewer #5: Yes

Reviewer #6: Yes

********** 

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #4: Yes

Reviewer #5: Yes

Reviewer #6: Yes

********** 

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #4: Yes

Reviewer #5: Yes

Reviewer #6: Yes

********** 

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

Reviewer #4: (No Response)

Reviewer #5: A well written and novel approach to estimating the extent to which MCCODS errors affect the UCOD. Kindly find herein my comments on the article:

• Line 43-42 : The authors can mention what errors, if any, came under the “Low impact error type” as the paragraph mentions the classification of error types into very high, high, medium, and low.

• Line 49 : Explanation of what Iris stands for, (the automated coding software), would give an uninitiated reader a better understanding as it may be a new term for many young physicians.

• Line 60 : The S1 text reference only shows the guidelines for filling out the WHO MCCOD. However, there should also be a reference for the guidelines for Coding mentioned here as it has been specifically mentioned in lines 59-60.

• Line 72 : Does the “very clear rules” mentioned here refer to lines 70-71? If not, then does it refer to S1 text reference? Or is it mentioned among the other references (4-13)? More clarity is needed on this.

• S2 text : “Reporting multiple causes in a single line of part 1” and “Reporting competing causes in part 1” appear at first glance to be separate entities. So, does that imply that the multiple causes in the single line are not mutually exclusive but equally possible? And the competing causes of part 1 may also be in a single line. Which type of error would it then come under? As the result is essentially wrongly coding 1 of the causes listed as the UCOD. A better explanation/differentiation between the two types of errors would be helpful.

• Line 123 and footnote 1 : Though the explanation for how SDI is calculated is mentioned, since the authors have mentioned that it is used by Global burden of disease study, a reference to a GBD capstone article/ related GBD reference may be included.

• Line 136: Would the decision to include the MCCODs in multiple SDI groups not affect the results? The rationale for including them in multiple SDI groups needs to be mentioned for better clarity. The authors also need to clarify here whether MCCODs with “unspecified neoplasms” error and “poorly defined external causes” error were also put in multiple SDI groups or not.

• Fig 1 : SDI categories are not labelled.

• Lines 272-273 : Though one of the stated goals was “to monitor trends in the quality of medical certification”, subsequent sections fail to elaborate on this. Though there is the mention of the use of Iris in mainly high SDI countries and the possibility of manual coding causing more errors, there isn’t enough discussion to correlate the study findings and the “quality of medical certification” in other countries.

• Lines 279-281 : More details need to be mentioned regarding situations where epidemiological and other public health factors are considered in the UCOD selection. Since these coding rules are mentioned as the main reason for mitigating errors, the situations where they are applied instead of using the condition initiating the causal sequence needs to be elaborated. And if according to the coding rules, some other condition is selected as the UCOD and not the condition initiating the causal sequence, should it still be considered as an error for the purpose of this study?

• Line 296 : A brief explanation of Coding rules SP4 and SP5 is needed even though a reference for the same is given (24). This is because of the importance of these rules in understanding why the effect of published “major errors” may not be as high as expected.

Reviewer #6: Dear Authors,

The authors have satisfactorily addressed all the queries by the previous reviewers. This article is very relevant in the present scenario of the COVID pandemic. The authors are requested to address the queries below.

Abstract:

• Lines 32 and 33: The author needs to clarify whether the “1592 error-free reports” selected were “either the death occurred in the medical institution or non-institutional death”, since WHO has recommended two types of forms specifically, i.e. FORM 4 and FORM 4A.

Discussion:

• Line 328: The author needs to write the type of version and make of the “Iris automated coding software” and the calibration to avoid errors in coding.

********** 

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Janet Miki

Reviewer #4: Yes: Prateek rastogi

Reviewer #5: No

Reviewer #6: Yes: Dr Jagadish Rao Padubidri

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]


PLoS One. 2021 Nov 8;16(11):e0259667. doi: 10.1371/journal.pone.0259667.r004

Author response to Decision Letter 1


21 Oct 2021

PONE-D-21-14961R1: The impact of errors in medical certification on the accuracy of the underlying cause of death

Responses to reviewers’ comments

We thank the Editor for the opportunity to respond to the reviewers’ comments. Please find below our responses. All line numbers refer to the document with tracked changes.

Reviewer #5: A well written and novel approach to estimating the extent to which MCCODS errors affect the UCOD. Kindly find herein my comments on the article:

We thank the reviewer for their valuable comments. Please find below our responses. All line numbers refer to the document with tracked changes.

• Line 43-42 : The authors can mention what errors, if any, came under the “Low impact error type” as the paragraph mentions the classification of error types into very high, high, medium, and low.

We have now added that the low impact errors were multiple causes per line and incorrect sequence.

• Line 49 : Explanation of what Iris stands for, (the automated coding software), would give an uninitiated reader a better understanding as it may be a new term for many young physicians.

Iris is automated mortality coding software – we now state this in Line 35.

• Line 60 : The S1 text reference only shows the guidelines for filling out the WHO MCCOD. However, there should also be a reference for the guidelines for Coding mentioned here as it has been specifically mentioned in lines 59-60.

We have now added Reference 1 (WHO ICD-10 Volume 2) in Line 62.

• Line 72 : Does the “very clear rules” mentioned here refer to lines 70-71? If not, then does it refer to S1 text reference? Or is it mentioned among the other references (4-13)? More clarity is needed on this.

These “very clear rules” are those in WHO ICD-10 Volume 2 (reference #1) – we have now stated this in Lines 74-75.

• S2 text : “Reporting multiple causes in a single line of part 1” and “Reporting competing causes in part 1” appear at first glance to be separate entities. So, does that imply that the multiple causes in the single line are not mutually exclusive but equally possible? And the competing causes of part 1 may also be in a single line. Which type of error would it then come under? As the result is essentially wrongly coding 1 of the causes listed as the UCOD. A better explanation/differentiation between the two types of errors would be helpful.

Competing causes are mutually exclusive conditions reported in different lines of part 1. Multiple causes in a single line are considered a different error category, irrespective of whether the multiple causes in the single line are mutually exclusive or not. If two competing causes are reported on a single line in Part 1, then we classified the error as “Reporting multiple causes in a single line of part 1”. We have clarified this in S2 Text, certification error type 7 “Reporting competing causes in Part 1”.

• Line 123 and footnote 1 : Though the explanation for how SDI is calculated is mentioned, since the authors have mentioned that it is used by Global burden of disease study, a reference to a GBD capstone article/ related GBD reference may be included.

We have added a reference (27) in Line 129.

• Line 136: Would the decision to include the MCCODs in multiple SDI groups not affect the results? The rationale for including them in multiple SDI groups needs to be mentioned for better clarity. The authors also need to clarify here whether MCCODs with “unspecified neoplasms” error and “poorly defined external causes” error were also put in multiple SDI groups or not.

The decision to include MCCODs in multiple SDI groups is not expected to affect the results because deaths with the same causal sequence would occur in high, medium and low SDI countries. We included the same MCCODs in multiple SDI groups because the process for identifying error-free death certificates was conducted manually and was very time-intensive; hence, this reduced the amount of time needed to identify error-free death certificates. We have now clarified this in Lines 139-141. The “unspecified neoplasms” error and “poorly defined external causes” error were both simulated on MCCODs that were included in multiple SDI groups.

• Fig 1 : SDI categories are not labelled.

The SDI categories in Figure 1 are now labeled.

• Lines 272-273 : Though one of the stated goals was “to monitor trends in the quality of medical certification”, subsequent sections fail to elaborate on this. Though there is the mention of the use of Iris in mainly high SDI countries and the possibility of manual coding causing more errors, there isn’t enough discussion to correlate the study findings and the “quality of medical certification” in other countries.

This sentence should have read “…to enable the monitoring of trends in the quality of medical certification”. We have subsequently modified the existing sentence in Lines 318-320 to be “The weights also allow development of a composite index of certification quality which can be used to monitor trends and differentials in overall certification practices from a database of MCCODs.”

• Lines 279-281 : More details need to be mentioned regarding situations where epidemiological and other public health factors are considered in the UCOD selection. Since these coding rules are mentioned as the main reason for mitigating errors, the situations where they are applied instead of using the condition initiating the causal sequence needs to be elaborated. And if according to the coding rules, some other condition is selected as the UCOD and not the condition initiating the causal sequence, should it still be considered as an error for the purpose of this study?

In mortality coding, whether a causal relationship is considered acceptable is based not only on medical assessment but also on epidemiological and public health considerations. Therefore, a medically acceptable relationship might be listed as unacceptable in the coding instructions because a later step in the sequence is more important from a public health point of view. However, a medically acceptable sequence was never considered as an error for the purposes of this study.

• Line 296 : A brief explanation of Coding rules SP4 and SP5 is needed even though a reference for the same is given (24). This is because of the importance of these rules in understanding why the effect of published “major errors” may not be as high as expected.

Usually, the first cause reported in the lowest used line of Part 1 is the underlying cause of death intended by the physician. In an incorrectly reported sequence, the first condition reported in the lowest used line may not explain all the conditions reported above it. Rule SP4 allows the coder to search for another possible sequence ending in the first-mentioned condition (the terminal condition) in Part 1 and to select the originating cause of that sequence as the tentative starting point. Further, when there is no sequence ending in the terminal condition, rule SP5 allows the coder to select the terminal condition itself as the tentative starting point. We have added this information in a footnote (footnote 2).
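
As a simplified illustration of this logic, the tentative starting point might be chosen as sketched below. The `tentative_starting_point` function and the `leads_to` lookup of acceptable causal relationships are hypothetical simplifications; the real decision tables applied by ICD-10 and Iris are far more detailed.

```python
# Simplified illustration of the tentative-starting-point logic (rules SP4
# and SP5). `leads_to` maps a condition to the set of conditions it is
# accepted to cause; this is a sketch, not the full selection procedure.

def tentative_starting_point(part1_lines, leads_to):
    """part1_lines: first-mentioned condition on each used Part 1 line,
    ordered from line (a) downwards."""
    terminal = part1_lines[0]                      # condition on line (a)
    # General principle / SP4: scanning upwards from the lowest used line,
    # take the first condition that can start a sequence ending in the
    # terminal condition.
    for condition in reversed(part1_lines[1:]):
        if terminal in leads_to.get(condition, set()):
            return condition
    # SP5: no reported sequence ends in the terminal condition, so the
    # terminal condition itself becomes the tentative starting point.
    return terminal

leads_to = {"diabetes mellitus": {"pneumonia", "sepsis"}}

# SP4: the lowest-line condition does not explain the terminal condition,
# but another reported condition does, so that condition is selected.
print(tentative_starting_point(
    ["pneumonia", "diabetes mellitus", "sepsis"], leads_to))  # diabetes mellitus

# SP5: nothing reported below line (a) explains the terminal condition.
print(tentative_starting_point(
    ["pneumonia", "fractured femur"], leads_to))              # pneumonia
```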

Reviewer #6: Dear Authors,

The authors have satisfactorily addressed all the queries by the previous reviewers. This article is very relevant in the present scenario of the COVID pandemic. The authors are requested to address the queries below.

We thank the reviewer for their valuable comments. Please find below our responses. All line numbers refer to the document with tracked changes.

Abstract:

• Lines 32 and 33: The author needs to clarify whether the “1592 error-free reports” selected were “either the death occurred in the medical institution or non-institutional death”, since WHO has recommended two types of forms specifically, i.e. FORM 4 and FORM 4A.

The 1592 error-free MCCODs were US registered deaths that occurred in all settings, i.e. in hospital, at home, in public places, etc. We have now clarified this in Lines 124-125.

Discussion:

• Line 328: The author needs to write the type of version and make of the “Iris automated coding software” and the calibration to avoid errors in coding.

Iris version 5.7 was used for coding – this is now mentioned in Line 164.

Attachment

Submitted filename: Response to reviewers final.docx

Decision Letter 2

Ritesh G Menezes

25 Oct 2021

The impact of errors in medical certification on the accuracy of the underlying cause of death

PONE-D-21-14961R2

Dear Dr. Adair,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Prof. Ritesh G. Menezes, M.B.B.S., M.D., Diplomate N.B.

Academic Editor

PLOS ONE

Acceptance letter

Ritesh G Menezes

29 Oct 2021

PONE-D-21-14961R2

The impact of errors in medical certification on the accuracy of the underlying cause of death

Dear Dr. Adair:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Dr. Ritesh G. Menezes

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Text. International form of the Medical Certificate of Cause of Death (MCCOD).

    (DOCX)

    S2 Text. Certification errors.

    (DOCX)

    S3 Text. Business rules used.

    (DOCX)

    S1 Table. Age, sex and cause distribution of MCCODs in final sample (%), low, middle, high SDI countries and all.

    (DOCX)

    S2 Table. Chance-corrected concordance for ill-defined UCOD error, by number of causes reported on death certificate and SDI level.

    (DOCX)

    S3 Table. CSMF accuracy and weights by error type and SDI level.

    (DOCX)

    S1 Data

    (CSV)

    Attachment

    Submitted filename: Dr Kavya R reviewer comments.docx

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: Response to reviewers final.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting information files.

