In this issue of the Journal a study is published in which reporting odds ratios (RORs), one of the disproportionality measures, are presented for associations between suspected drugs and memory disorders, based on a spontaneous reporting database [1]. I have serious doubts about the clinical importance of this type of research. Clearly, hundreds of such papers could ‘simply’ be published by selecting a reported suspected adverse drug reaction (ADR) from the database, constructing the relevant two-by-two tables and presenting all the RORs of the suspected drugs [2]. But what do we learn from such publications?
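For reference, a standard formulation of the measure (exact cell definitions vary slightly between publications): with a the number of reports of the drug of interest mentioning the ADR of interest, b the reports of the drug of interest mentioning other adverse events, c the reports of other drugs mentioning the ADR of interest and d the reports of other drugs mentioning other adverse events,

```latex
\mathrm{ROR} \;=\; \frac{a/b}{c/d} \;=\; \frac{a\,d}{b\,c},
\qquad
95\%\ \mathrm{CI} \;=\; \exp\!\Bigl(\ln(\mathrm{ROR}) \pm 1.96\,\sqrt{\tfrac{1}{a}+\tfrac{1}{b}+\tfrac{1}{c}+\tfrac{1}{d}}\Bigr).
```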
First, Chavant et al. [1] present RORs for well known drug–ADR relations (such as tricyclic antidepressants and memory disorders). Obviously we do not need these RORs to tell us that these causal drug–ADR relations exist: we already know. It would of course be helpful if they newly informed us about the absolute or relative risk of the ADR on exposure to the drug, for all exposed patients or for certain subgroups. The problem, however, is that in this respect the ROR has no quantitative meaning, since the calculated number depends on the number of reported cases and on the number of other adverse drug events (ADEs) reported for the same suspected drug [2]. These numbers are always too low (under-reporting) but can also fluctuate strongly, depending on whether or not there was, for instance, media attention for this specific drug–ADE or drug–ADR association [3]. Thus our knowledge does not increase by presenting RORs for ADRs well known to be causally related to certain drugs, as presented in the paper of Chavant et al. [1].
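A hypothetical numerical illustration of this dependence (all counts are invented for the purpose of the example): if media attention doubles the reporting of the specific drug–ADR combination while the true risk in the population is unchanged, the ROR roughly doubles as well.

```python
def ror(a, b, c, d):
    """Reporting odds ratio from a two-by-two table of reports.

    a: reports of the drug of interest with the ADR of interest
    b: reports of the drug of interest with other ADEs
    c: reports of other drugs with the ADR of interest
    d: reports of other drugs with other ADEs
    """
    return (a * d) / (b * c)

# Invented counts, before and after hypothetical media attention that
# doubles reporting of this drug-ADR combination only.
print(ror(a=20, b=400, c=500, d=50_000))   # 5.0
print(ror(a=40, b=400, c=500, d=50_000))   # 10.0 - same true risk, doubled ROR
```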
Second, the authors also present RORs for drugs for which the relation with memory disorders has not yet been published [1]. What do we learn from these figures? Again, we do not need RORs to make clear that these drugs were brought into relation with memory disorders by health-care providers; this can be done by simply presenting the observed counts. After collection of a number of reports describing the same drug–ADE relation, the first question is whether there is a causal relation between the suspected drug and the specific ADE. If there is a causal relation, the second question is ‘What is the absolute risk of the ADR?’ The ROR answers neither question. Because of the above-mentioned problem of selective reporting, the outcome of the ROR calculation says nothing about a potential causal relation, nor does it quantify the risk (irrespective of causality) of this specific medical event when using the drug of interest. It is even possible that causal relations are missed (in the sense that the ROR does not pass an arbitrary threshold and/or does not become statistically significant) when reporting is still low or when another suspected ADR of the same drug is frequently reported [2]. It would be more informative to readers to present, for each suspected drug–ADE relation, the causality assessments of the reports. Based on these assessments it becomes clear how plausible it is that a drug is causally related to the adverse event. When the suspicion of causality is high enough, a first estimate of the absolute risk (because of under-reporting, in reality it will be higher) can be presented, based on rough estimates of drug exposure in the population (easy to obtain from population-based drug use databases) and the number of reported ADRs. When the information from the database does not allow a conclusion on causality, or when it does but more quantitative information is needed, pharmacoepidemiological studies such as a follow-up or case-control study should be performed. This will help to explore causality further, but also allows valid estimates of absolute risk (follow-up study) or relative risk (follow-up and case-control study) to be obtained.
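A minimal sketch of such a first estimate, under the assumptions stated above (the symbols are introduced here only for illustration; because of under-reporting the numerator, and hence the estimate, is a lower bound on the true risk):

```latex
\text{first estimate of absolute risk} \;\approx\;
\frac{N_{\text{reported ADRs judged plausibly causal}}}{N_{\text{patients exposed to the drug in the same period}}}.
```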
Disproportionality measures were developed to detect signals of unexpected or unknown ADRs in spontaneous reporting databases [4]. Bate & Evans already stated that such alerts warrant further investigation but not wider communication [2]. I think the study of Chavant et al. is an example in which the data may be less informative. As mentioned above, even as a signal detection method the approach should be used with great care. All disproportionality measures look for unexpected frequencies of reports in the dataset in comparison with the general reporting frequencies [2]. The problem is that the causality of a drug–ADE combination and the clinical relevance of the ADE do not depend on the frequency of reporting of that specific combination. As stated above, the ROR will not flag a signal for serious ADEs that have low reporting rates, or when another suspected ADE of the same drug is frequently reported. Solutions have been proposed for this last problem (recalculation of a disproportionality measure after excluding reports on a frequently reported event) [5, 6], but when they should be used is quite arbitrary. Another problem is that when an ADR occurs in a specific subgroup, a ROR without stratification can easily miss such a relation, which again can cause false reassurance. Therefore, I wonder whether the use of disproportionality measures is really helpful for detecting unknown ADRs in spontaneous reporting databases. Why would this be more helpful than simply listing the frequencies of specific suspected drug–ADE combinations? In that way the number of reports becomes clear and the clinical relevance of the ADE can be judged by balancing it against the beneficial effects of the compound of interest: cytopenia as an ADE is, for example, a more important signal for a simple analgesic drug than for an anticancer drug. When the ADE is potentially relevant, a causality assessment should follow, by evaluating each report and/or starting a systematic pharmacoepidemiological study.
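By way of illustration, a minimal sketch of the recalculation mentioned above [5, 6]: the ROR for the event of interest is recomputed after discarding all reports that mention the frequently reported competing event. The report structure and field names are hypothetical; real spontaneous-reporting databases are considerably more complex (one report may list several suspected drugs and events).

```python
def ror_from_reports(reports, drug, event, exclude_event=None):
    """Crude ROR for a drug-event pair from a list of reports.

    Each report is assumed to be a dict with the sets 'drugs' and
    'events' (hypothetical structure for illustration). If
    exclude_event is given, reports mentioning that event are
    discarded first, i.e. the recalculation proposed to counter
    'competition' by a frequently reported event.
    """
    if exclude_event is not None:
        reports = [r for r in reports if exclude_event not in r["events"]]

    a = b = c = d = 0
    for r in reports:
        has_drug = drug in r["drugs"]
        has_event = event in r["events"]
        if has_drug and has_event:
            a += 1
        elif has_drug:
            b += 1
        elif has_event:
            c += 1
        else:
            d += 1
    return (a * d) / (b * c) if b and c else float("nan")
```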
I do not want to suggest that studies using RORs from spontaneous reporting databases should not be published at all. There are examples in the literature in which the use of disproportionality measures gave new, and even causal, insights. These studies had in common that they tried to circumvent the problem of selection bias caused by selective reporting. For instance, Stricker et al. [7] studied the association between the antibiotics cefaclor, amoxicillin and cephalexin and serum sickness-like reaction. By restricting the ROR calculation to cases vs. non-cases within reports in which the ADRs were related to the suspected drugs cefaclor (index drug), amoxicillin (control drug) and cephalexin (control drug), they tried to reduce the problem of selective reporting. Obviously there was selective reporting, but when it can be assumed that this was similar for three compounds with a similar indication and ADR pattern, differences in reporting suggest the existence of a causal relationship. Other examples are the evaluation of potential drug–drug interactions [8] and the performance of subgroup analyses. Suppose that a particular drug–ADE combination, whether or not already known to be causally related, is predominantly reported in children and less in an older age group (with a significant difference in RORs), while the drug is prescribed equally in both age groups; this might indicate a causal relation in children. Obviously, a pharmacological explanation for the difference is relevant in this respect. De Bruin et al. [9] and Egberts et al. [10] took a pharmacological mechanism (level of anti-HERG activity and serotonergic vs. non-serotonergic antidepressants, respectively) as the starting point for calculating RORs in spontaneous reporting databases. By using a pharmacological characteristic as the exposure of interest, the influence of selective reporting was again reduced. Another useful application of the ROR would be to analyze not the suspected drugs but the co-medication that was also reported. There should be a pharmacological basis for such an analysis, but an important advantage would be that the case/non-case selection is neutral with respect to the exposure. I am not aware of such a study in the literature, but this is potentially a strong design. It mimics the case-control study design, in which selection bias is reduced by two important measures. First, the selection of cases and controls should not be influenced by knowledge of the exposure status. Second, the respective exposure odds in the selected cases and controls need to be representative of the exposure odds of all cases and potential controls in the population from which the cases and controls originated.
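A sketch of the restricted design along the lines of Stricker et al. [7], using the same hypothetical report structure as in the previous sketch: only reports in which one of the comparable drugs is suspected enter the two-by-two table, with one index exposure and the remaining drugs pooled as control exposure.

```python
def restricted_ror(reports, index_drug, control_drugs, event):
    """ROR restricted to reports of the index or control drugs only.

    Selective reporting is assumed to be similar for drugs with a
    comparable indication and ADR pattern, so a raised ROR within this
    restricted set is more suggestive of a causal relation.
    """
    a = b = c = d = 0
    for r in reports:
        is_index = index_drug in r["drugs"]
        is_control = any(drug in r["drugs"] for drug in control_drugs)
        if not (is_index or is_control):
            continue  # restrict to reports of the drugs of interest
        has_event = event in r["events"]
        if is_index and has_event:
            a += 1
        elif is_index:
            b += 1
        elif has_event:
            c += 1
        else:
            d += 1
    return (a * d) / (b * c) if b and c else float("nan")

# e.g. restricted_ror(reports, "cefaclor", ["amoxicillin", "cephalexin"],
#                     "serum sickness-like reaction")
```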
In summary, disproportionality measures as used in spontaneous reporting databases have important limitations. They can be used as signal detection algorithms, generating alerts that warrant further investigation but not wider communication of the numerical outcome as such. When used in a more advanced way, dealing with the problem of selective reporting, these measures may generate new, relevant ADR knowledge worth publishing.
Competing Interests
There are no competing interests to declare.
REFERENCES
1. Chavant F, Favrelière S, Lafay-Chebassier C, Plazanet C, Pérault-Pochat MC. Memory disorders associated with drugs consumption: updating through a case/non-case study in the French PharmacoVigilance Database. Br J Clin Pharmacol. 2011 May 10. doi: 10.1111/j.1365-2125.2011.04009.x. [Epub ahead of print]
2. Bate A, Evans SJW. Quantitative signal detection using spontaneous ADR reporting. Pharmacoepidemiol Drug Saf. 2009;18:427–36. doi: 10.1002/pds.1742.
3. Pariente A, Gregoire F, Fourrier-Reglat A, Haramburu F, Moore N. Impact of safety alerts on measures of disproportionality in spontaneous reporting databases. The notoriety bias. Drug Saf. 2007;30:891–8. doi: 10.2165/00002018-200730100-00007.
4. Moore N, Thiessard F, Begaud B. The history of disproportionality measures (reporting odds ratio, proportional reporting rates) in spontaneous reporting of adverse drug reactions. Pharmacoepidemiol Drug Saf. 2005;14:285–6. doi: 10.1002/pds.1058.
5. Pariente A, Didailler M, Avillach P, Miremont-Salamé F, Moore N. A potential competition bias in the detection of safety signals from spontaneous reporting databases. Pharmacoepidemiol Drug Saf. 2010;19:1166–71. doi: 10.1002/pds.2022.
6. Ooba N, Kubota K. Selected control events and reporting odds ratio in signal detection methodology. Pharmacoepidemiol Drug Saf. 2010;19:1159–65. doi: 10.1002/pds.2014.
7. Stricker BHCH, Tijssen JGP. Serum sickness-like reactions to cefaclor. J Clin Epidemiol. 1992;45:1177–84. doi: 10.1016/0895-4356(92)90158-j.
8. Van Puijenbroek EP, Egberts ACG, Heerdink ER, Leufkens HGM. Detecting drug–drug interactions using a database for spontaneous adverse drug reactions: an example with diuretics and non-steroidal anti-inflammatory drugs. Eur J Clin Pharmacol. 2000;56:733–8. doi: 10.1007/s002280000215.
9. De Bruin ML, Pettersson M, Meyboom RHB, Hoes AW, Leufkens HGM. Anti-HERG activity and the risk of drug-induced arrhythmias and sudden death. Eur Heart J. 2005;26:590–7. doi: 10.1093/eurheartj/ehi092.
10. Egberts ACG, Meyboom RHB, De Koning FHP, Bakker A, Leufkens HGM. Non-puerperal lactation associated with antidepressant drug use. Br J Clin Pharmacol. 1997;44:277–81. doi: 10.1046/j.1365-2125.1997.00652.x.