BMJ Open. 2019 Oct 28;9(10):e028133. doi: 10.1136/bmjopen-2018-028133

Use of real-world evidence in postmarketing medicines regulation in the European Union: a systematic assessment of European Medicines Agency referrals 2013–2017

Jeremy Philip Brown 1, Kevin Wing 1, Stephen J Evans 1, Krishnan Bhaskaran 1, Liam Smeeth 1, Ian J Douglas 1
PMCID: PMC6830614  PMID: 31662354

Abstract

Objectives

To assess the use, and evaluate the usefulness, of non-interventional studies and routinely collected healthcare data in postmarketing assessments conducted by the European Medicines Agency (EMA).

Design

We reviewed and systematically assessed all referrals to the EMA made due to safety or efficacy concerns that were evaluated between 1 January 2013 and 30 June 2017. We extracted information from the assessment report and the referral notification. Two reviewers independently assessed the contribution of non-interventional evidence to decision-making.

Results

The preliminary evidence leading to the assessment in 52 eligible referrals was mostly from spontaneous reports (cited in 26 of 52 referrals) and randomised trials (22/52). In contrast, many evidence types were used for the full assessment. Non-interventional studies were frequently used in the full assessment for the evaluation of product safety (31/52) and product efficacy (18/52). In particular, non-interventional studies were relied on for the evaluation of safety and efficacy in subgroups, the evaluation of safety relating to rare adverse events, understanding product usage and misuse, and evaluating the effectiveness of risk minimisation measures. The most common recommendations were changes to product information (43/52) and marketing authorisation withdrawal or suspension (12/52). In the majority of referrals, non-interventional evidence was judged to contribute to the decision made (30/52), and in three referrals it was the primary source of evidence.

Conclusions

European regulatory decision-making relies on multiple evidence types, particularly randomised trials, spontaneous reports and non-interventional studies. Non-interventional studies played an important role, particularly in the characterisation and quantification of adverse events, the evaluation of product usage and the evaluation of the effectiveness of regulatory action to minimise risk.

Keywords: real world evidence, non-interventional studies, medicines regulation


Strengths and limitations of this study.

  • We assessed all safety and efficacy postmarketing authorisation referrals completed through the European Medicines Agency between January 2013 and June 2017. Previous studies focused on marketing authorisation withdrawal only, but we included referrals regardless of referral outcome.

  • While previous studies investigated which different evidence types are used in regulatory decision-making, these did not look in depth at the role of these different evidence types, and in particular at the role of non-interventional evidence, which we examined in detail.

  • Though the majority of studies cited in the referral assessment reports could be identified, occasionally referencing was incomplete and there was insufficient detail to determine basic study information.

  • Judgement on the role of non-interventional evidence in each assessment was to some extent subjective and is dependent on what is recorded in the assessment report. However, close agreement between two independent reviewers was observed.

Introduction

There is an ongoing public debate about the use of routinely collected healthcare data in research, particularly regarding concerns over patient confidentiality.1 2 Conducting research that meets strict confidentiality requirements is of paramount importance, but for public trust to be established and maintained there is also a need for evidence that research using patient records provides clear benefits for the wider public. One potentially important and generally agreed benefit is in evaluating the safety of drugs in real-world use. Surprisingly, however, there is no comprehensive and systematic evidence of how data from patient records are currently used in this context, with previous summaries focusing largely on safety assessments resulting in marketing authorisation (MA) withdrawal or suspension.3–14

Real-world evidence has been defined in a number of ways. The US 21st Century Cures Act defines it as ‘data regarding the usage, or the potential benefits or risks, of a drug derived from sources other than traditional trials’.15 An alternative definition of real-world evidence is evidence derived from information collected for purposes other than research (ie, routinely collected healthcare data such as electronic healthcare records and insurance claims data).16 While this evidence can be generated by (pragmatic) randomised controlled trials, currently non-interventional studies are the predominant source of real-world evidence, and these are the focus of our study.16 17

Regulatory authorities increasingly require non-interventional evidence of drug effects. As a result of the US 21st Century Cures Act, the US Food and Drug Administration is developing a framework for the use of non-randomised ‘real-world evidence’ in the approval of new indications and in post-authorisation medicinal product assessment.15 18 Similarly, the European Medicines Agency’s (EMA) adaptive pathways approach forms a new route of approval for medicines, blurring the lines between premarketing and postmarketing data collection; it seeks to facilitate conditional approval in areas of unmet need, subject to further evidence collection, particularly of non-randomised real-world evidence.19 European Union (EU) legislation now mandates the assessment of medication effectiveness in routine clinical care where warranted.20 The focus on using non-interventional data to evaluate the expected effectiveness of medicines is relatively new; there are concerns over the validity of such data for measuring causal associations, and agreed methodologies and experience are limited.

The aim of this study was to systematically assess the type of evidence used in post-authorisation drug regulation by the EMA to give a better understanding of the contribution of non-interventional evidence and routinely collected data in this setting.

Methods

We identified and reviewed all EMA post-MA referrals made for safety and/or efficacy concerns that were evaluated by an assessment committee between 1 January 2013 and 30 June 2017. The EMA is the EU agency responsible for the scientific evaluation, supervision and safety monitoring of medicines used in the EU. Its work includes the evaluation of applications for MA and the monitoring of approved medicines. We evaluated referrals which concluded after 2012 because EU medicines regulation changed that year, with legislation strengthening pharmacovigilance through many measures, including the introduction of the Pharmacovigilance Risk Assessment Committee and increased regulatory requirements.21 The evaluated referrals were made under the following EU legal provisions: Article 107(i) of Directive 2001/83/EC, Article 31 of Directive 2001/83/EC and Article 20 of Regulation (EC) No 726/2004 (online supplementary table 1).

Supplementary data

bmjopen-2018-028133supp001.pdf (79.8KB, pdf)

When an EU member state or the European Commission has a significant concern regarding the safety or efficacy of an approved medicine, a referral process is initiated. The EMA initially publishes a notification, which details the reasons for the referral. The safety and/or efficacy of the medicine is then assessed in depth by designated member states and subsequently evaluated by one or more of the EMA committees which include the Pharmacovigilance Risk Assessment Committee, the Committee for Medicinal Products for Human Use and the Co-ordination Group for Mutual Recognition and Decentralised Procedures-Human. Finally, an assessment report is published by the EMA for each referral, providing information on the recommendations made by the assessment committee and the reasons for these recommendations.

Eligible referrals were identified from the EMA website. One reviewer (JPB) evaluated the notification and assessment report of each referral using a form (available in the online supplementary appendix). Information was extracted about the notification, the referral, the medicinal product, the adverse events under study and the types of evidence assessed (preclinical, non-randomised trials, randomised trials, non-interventional studies, spontaneous reports and systematic reviews; definitions in online supplementary appendix). In addition, the reviewer assessed how different study types were used within the referral process and categorised usage into: mechanism of action, pharmacokinetics/pharmacodynamics, efficacy, risk, product usage and the effectiveness of risk minimisation measures (see the online supplementary appendix for an example). The referral outcome was categorised into: no change, further evidence before decision-making, suspension or withdrawal of MA, change to availability and change to product information (or a combination of these categories).

For each referral, the adverse events under study were recorded and categorised into their respective Medical Dictionary for Regulatory Activities (MedDRA) system organ class.22 Drugs were categorised by Anatomical Therapeutic Chemical (ATC) classification system code.23
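
For illustration, the sketch below shows one way the per-referral extraction described above could be represented programmatically. The field names and example values are hypothetical and do not reproduce the authors' actual extraction form, which is provided in the online supplementary appendix.

    # Minimal sketch of a per-referral extraction record (hypothetical field
    # names; the authors' actual form is in the online supplementary appendix).
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ReferralRecord:
        procedure_number: str                                   # eg "EMEA/H/A-31/XXXX" (placeholder)
        legal_basis: str                                         # eg "Article 31 of Directive 2001/83/EC"
        concern: str                                             # "safety", "efficacy" or "both"
        atc_codes: List[str] = field(default_factory=list)       # drug classification (ATC)
        meddra_socs: List[str] = field(default_factory=list)     # adverse event system organ classes
        evidence_leading_to_referral: List[str] = field(default_factory=list)
        evidence_in_assessment: List[str] = field(default_factory=list)
        outcome: List[str] = field(default_factory=list)         # eg ["change to product information"]

    def count_evidence(records: List[ReferralRecord]) -> Dict[str, int]:
        """Count how many referrals cite each evidence type in the assessment report,
        giving the kind of tabulation reported in table 1."""
        counts: Dict[str, int] = {}
        for record in records:
            for evidence_type in set(record.evidence_in_assessment):
                counts[evidence_type] = counts.get(evidence_type, 0) + 1
        return counts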

Two reviewers (JPB and IJD) independently assessed the recommendations made in the assessment report, and judged the extent to which non-interventional studies were both cited and contributed to the recommendation made, with disagreements resolved through discussion. We aimed to determine whether evidence from non-interventional studies, and in particular, non-interventional studies using routinely collected data, had an important or pivotal role in the assessment, in order to determine the contribution of this type of evidence in this context.
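
The paper reports reviewer agreement only qualitatively, with disagreements resolved through discussion. Purely as an illustration of how such agreement could be quantified, the sketch below computes Cohen's kappa for two raters assigning referrals to the contribution categories used in table 3; this statistic is not part of the published analysis.

    # Cohen's kappa for two raters over the same set of referrals
    # (illustrative only; not reported in the paper).
    from collections import Counter
    from typing import Sequence

    def cohens_kappa(ratings_a: Sequence[str], ratings_b: Sequence[str]) -> float:
        assert len(ratings_a) == len(ratings_b) and len(ratings_a) > 0
        n = len(ratings_a)
        observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n ** 2
        if expected == 1:  # both raters used a single identical category throughout
            return 1.0
        return (observed - expected) / (1 - expected)

    # Example with two hypothetical raters over four referrals:
    # cohens_kappa(["primary", "cited", "cited", "not cited"],
    #              ["primary", "cited", "consistent", "not cited"])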

Patient involvement

No patients were involved in the development of the research question, definition of study outcomes or study design. We will disseminate our study findings to patients through social media and using patient groups with an interest in data.

Results

Referrals

Sixty potentially eligible referrals were identified with a committee opinion date between 1 January 2013 and 30 June 2017. Of these 60 referrals, eight were excluded: four related to bioequivalence and three to manufacturing concerns rather than safety/efficacy concerns, and for one no assessment report was available as of 31 October 2017 (the full list of included referrals is provided in the online supplementary appendix).

The most frequent initiators of referrals were the European Commission (n=13), France (n=12), the UK (n=8), Germany (n=4) and Italy (n=4). According to the referral notification and assessment report, 21 of 52 referrals (40%) were made due to a combination of safety and efficacy concerns, 29 (56%) due to safety concerns only and 2 (4%) due to efficacy concerns only.

Drug groups and adverse events

The most common drug groups defined according to ATC code were sex hormones and modulators of the genital system, and analgesics (six referrals each), followed by drugs used in diabetes, cough and cold preparations, anti-inflammatory and antirheumatic products and cardiac therapies (three referrals each) (online supplementary table 2). The most common body systems on which referred products acted were, based on ATC code, the nervous system (n=13), the cardiovascular system (n=9), the alimentary tract and metabolism (n=8) and the genitourinary system and sex hormones (n=8) (online supplementary table 3).

The most commonly investigated adverse events included arterial thromboembolism (n=5), venous thromboembolism (n=4), hypersensitivity (n=4) and renal impairment (n=3). The most frequent categories of adverse events according to MedDRA system organ class were cardiac and vascular disorders (n=16); nervous system disorders (n=15); respiratory, thoracic and mediastinal disorders (n=7); and skin and subcutaneous tissue disorders (n=7) (online supplementary table 4).

Evidence usage

Evidence cited by the initial notification and the referral assessment report was categorised by type (table 1). Where no notification was available (in 12 of 52 referrals), information on the evidence leading to the referral was extracted from the EMA website and the assessment report. The evidence leading to referral was most commonly spontaneous reports (50%, 26/52) and randomised trials (42%, n=22). Assessment reports frequently cited spontaneous reports (73%, n=38) and randomised trials (92%, n=48), but also frequently cited non-interventional studies (79%, n=41). Among the 52 referrals, 31 assessment reports (60%) cited non-interventional studies using pre-existing routinely collected data (eg, electronic medical records) and 33 (63%) cited studies using data collected specifically for research. Evidence was also frequently cited from non-randomised trials (63%, 33/52), preclinical studies (56%, n=29) and systematic reviews of randomised trials (52%, n=27). The quality of study description and referencing varied considerably between assessment reports; it was not always possible to find a corresponding publication or to ascertain the design of every study mentioned, and 63% (33/52) of assessment reports referred to at least one study of unclear design.

Table 1.

Evidence leading to referral and evidence cited in assessment report for the 52 included referrals

Type of evidence | Evidence leading to referral*: No (%) | In assessment report: No (%)
Preclinical evidence | 4 (8) | 29 (56)
Non-randomised trials | 1 (2) | 33 (63)
Randomised trials | 22 (42) | 48 (92)
Non-interventional studies | 13 (25) | 41 (79)
 1. Using routinely collected data | 8 (15) | 31 (60)
 2. Using data collected for research | 6 (12) | 33 (63)
Spontaneous reports | 26 (50) | 38 (73)
Systematic review of randomised trials | 7 (13) | 27 (52)
Systematic review of non-interventional studies | 1 (2) | 4 (8)
Systematic review combining randomised trials and non-interventional studies | 0 (0) | 8 (15)
Unclear | 11 (21) | 33 (63)

*This was primarily based on the referral notification. However, for 12 of 52 referrals, no notification was available and evidence leading to initiation was instead obtained from the assessment report and from the description of the referral on the EMA website.

EMA, European Medicines Agency.

Table 2 summarises how each type of evidence contributed to different aspects of the assessments. The efficacy of medications was largely determined through evidence from randomised trials (cited with regard to efficacy in 77% (40/52) of referrals). Non-interventional studies contributed information on efficacy in a smaller proportion of assessments (35%, 18/52), mostly when clinical trial data were limited, such as in a subgroup (eg, hydroxyethyl starch in patients with trauma—EMEA/H/A-107i/1376; intravenous nicardipine in children and pregnant women—EMEA/H/A-31/1339), for a product developed prior to current regulatory requirements (eg, polymyxin—EMEA/H/A-31/1383), or where a clinical trial would be difficult to run owing to a sporadic and unpredictable need for therapy (eg, epinephrine autoinjectors—EMEA/H/A-31/1398; methysergide for cluster headache—EMEA/H/A-31/1335).

Table 2.

Number and percentage of all referrals (n=52) that use each type of evidence for each purpose

Type of evidence | Mechanism, n (%) | PK/PD, n (%) | Efficacy, n (%) | Risk—overall, n (%) | Risk—subgroup, n (%) | Usage of product, n (%) | Effectiveness of risk minimisation measures, n (%)
Preclinical evidence | 16 (31) | 6 (12) | 2 (4) | 10 (19) | 1 (2) | 0 (0) | 0 (0)
Non-randomised trials | 1 (2) | 10 (19) | 18 (35) | 14 (27) | 2 (4) | 0 (0) | 0 (0)
Randomised trials | 3 (6) | 9 (17) | 40 (77) | 36 (69) | 7 (13) | 0 (0) | 1 (2)
Non-interventional | 3 (6) | 4 (8) | 18 (35) | 31 (60) | 5 (10) | 14 (27) | 0 (0)
 Non-interventional using routinely collected data | 0 (0) | 1 (2) | 8 (15) | 25 (48) | 4 (8) | 10 (19) | 0 (0)
 Non-interventional using data collected for research | 2 (4) | 4 (8) | 13 (25) | 20 (38) | 3 (6) | 7 (13) | 0 (0)
Spontaneous reports | 2 (4) | 0 (0) | 3 (6) | 37 (71) | 6 (12) | 4 (8) | 0 (0)
Systematic review of randomised trials | 0 (0) | 0 (0) | 19 (37) | 10 (19) | 1 (2) | 0 (0) | 0 (0)
Systematic review of non-interventional studies | 0 (0) | 0 (0) | 0 (0) | 4 (8) | 1 (2) | 0 (0) | 0 (0)
Systematic review of randomised trials and non-interventional studies | 0 (0) | 1 (2) | 2 (4) | 4 (8) | 0 (0) | 0 (0) | 0 (0)
Unclear study design | 1 (2) | 8 (15) | 12 (23) | 10 (19) | 0 (0) | 1 (2) | 0 (0)

*Usage was categorised, as detailed in the table, into: mechanism of adverse event with product usage, PK/PD of product, efficacy of product, risk of adverse events with product, risk of adverse events with product in a subpopulation, usage/misuse of a product and effectiveness of regulatory risk minimisation measures.

PK/PD, pharmacokinetics/pharmacodynamics.

For overall risks, both randomised trials (69%, 36/52) and non-interventional studies (60%, n=31) were commonly cited, alongside evidence from spontaneous reports (71%, n=37). Product usage, where assessed, was based almost entirely on non-interventional evidence (27%, n=14). Mechanistic evidence was largely obtained from preclinical sources (31%, n=16), while pharmacokinetics and pharmacodynamics were addressed through non-randomised trials (19%, n=10), randomised trials (17%, n=9) and preclinical studies (12%, n=6).

Investigation of product usage and misuse was almost entirely based on non-interventional data (table 2). Non-interventional evidence was also cited for estimating background incidence rates of the adverse event in the population, and for characterising the prevalence of additional risk factors and effect modifiers for the outcome under study.

Role of non-interventional evidence

Over half of the assessments relied at least in part on evidence from non-interventional studies to be able to make recommendations for regulatory action (eg, MA suspension or change in product information) (table 3). Only in 11 of 52 assessments (21%) were no non-interventional studies cited. In a further 11 referrals, non-interventional studies were cited, but the reports did not indicate that they contributed significantly to the decision made, either because only a few pertinent non-interventional studies were cited (n=9), or due to limitations of the non-interventional studies (n=2).

Table 3.

Usage of non-interventional studies in referral assessment reports

Usage of non-interventional studies | All referrals (n=52), No (%) | Referrals leading to MA withdrawal/suspension (n=12), No (%) | Referrals leading to changes to product information (n=43), No (%)
No evidence from non-interventional studies was cited in the report | 11 (21) | 4 (33) | 7 (16)
Evidence from non-interventional studies was cited, but made little to no contribution to the decision | 11 (21) | 4 (33) | 9 (21)
The decision was consistent with evidence from non-interventional studies and also consistent with other evidence | 27 (52) | 4 (33) | 24 (56)
The decision was consistent with evidence from non-interventional studies and this evidence was the primary or only factor involved in the decision (for example, there were some spontaneous reports and some large non-interventional studies) | 3 (6) | 0 (0) | 3 (7)

MA, marketing authorisation.

In three referrals (combined hormonal contraceptives and thromboembolism; valproate, birth defects and developmental disorders (EMEA/H/A-31/1387); and Kogenate Bayer/Helixate NexGen and factor VIII inhibition (EMEA/H/C/275/A20/150/EMEA/H/C/276/A20/143)), non-interventional studies alone were the primary source of evidence. When stratified by the outcome of the assessment, non-interventional evidence appeared to contribute to decision-making more often in referrals leading to changes to product information (63%, 27/43) than in those leading to suspension or withdrawal (33%, 4/12), though only 12 assessments led to suspension or withdrawal of MA (table 3).

Non-interventional studies were used for the evaluation of safety in subpopulations largely or completely excluded from clinical trials, such as pregnant women. They were also used to estimate the risk of rare adverse outcomes, such as venous thromboembolism with oral contraceptives, for which clinical trials were underpowered. Relative to spontaneous reports, non-interventional studies contributed more to decision-making when reporting was strongly influenced by the media, such as with human papillomavirus (HPV) vaccines (EMEA/H/A-20/1421), and when the outcome was unlikely to be picked up by case reports, such as exposure–outcome associations with a long latency period (eg, Caustinerf arsenical and cancer (EMEA/H/A-31/1382)). Non-interventional studies using routinely collected data were mostly used in a similar way to studies using data collected for research (table 2). Studies using routinely collected data were used more often when the outcome was rare, whereas studies using data collected for research purposes contributed more when the outcome was poorly recorded in clinical records (eg, Numeta G13%E/G16%E and hypermagnesaemia—EMEA/H/A-107i/1373).

Referral outcomes

The majority (98%, 51/52) of referrals led to regulatory action, with the assessment committee most often recommending changes to the product information (83%, n=43), particularly to the warnings, posology, undesirable effects and indication sections of the summary of product characteristics (table 4). In 12 of 52 referrals (23%), suspension or withdrawal of MA was recommended. No change was recommended for only one referral, into the safety of HPV vaccines.

Table 4.

Recommendations made as a result of assessment for the 52 included referrals

Recommendation | No of referrals | % of all referrals
No change | 1 | 2
Further evidence before decision-making | 2 | 4
Suspension or withdrawal of marketing authorisation | 12 | 23
Change to availability | 0 | 0
Change to product information | 43 | 83
By section of the summary of product characteristics:
 Indication | 24 | 46
 Posology | 28 | 54
 Contraindications | 22 | 42
 Warnings | 39 | 75
 Interactions | 14 | 27
 Pregnancy | 10 | 19
 Driving/machinery | 2 | 4
 Undesirable effects | 26 | 50
 Overdose | 3 | 6
 Studies | 13 | 25
 Nature and contents | 3 | 6

For many referrals (42%, n=22), the assessment committee required further specific studies to be conducted, generally to elucidate safety, product usage or the effectiveness of risk minimisation measures. Based on a review of the assessment reports and the EU register of post-authorisation studies, most of the required studies were non-interventional, using routinely collected data or data collected for research purposes (required in 19 referrals).

Discussion

In this comprehensive evaluation, we have shown that a wide range of evidence sources are used to aid decision-making during EU drug regulatory referrals. The three types cited in the majority of assessments were randomised trials, spontaneous reports and non-interventional studies. Although non-interventional evidence is rarely cited in notifications leading to a referral, it is cited substantially during the detailed assessment of most issues, and in a few referrals was the primary evidence type used in decision-making. Notably, at the end of an assessment when recommendations were made for evidence gaps to be filled, further non-interventional evidence was required more often than any other type.

Each type of evidence appears to contribute to different aspects of a drug safety/efficacy referral, allowing for a well-rounded assessment of medication risks and benefits. Unsurprisingly, given their unique inferential advantages, randomised trials are relied on more than any other evidence type to provide evidence of drug efficacy. Current usage of non-interventional evidence for efficacy largely occurs where clinical trial data are limited. Increasingly, however, regulators require measures of drug effectiveness in routine clinical care, for which well-designed non-interventional studies and pragmatic clinical trials using routinely collected data could be highly informative.15 19 20

To assess safety issues, non-interventional evidence is heavily relied on alongside randomised trials and spontaneous reports. Although less frequently cited, evidence from sources such as preclinical studies is occasionally relied on to provide information about mechanisms of effect or pharmacokinetics/pharmacodynamics.

Real-world evidence can be generated from trials, such as from pragmatic trials conducted using routinely collected data. We did not identify any such trials in the assessment reports. This study design could, however, be of considerable utility given the potential for increased generalisability relative to traditional trials, and the minimisation of confounding, through randomisation, relative to non-interventional studies.24

Strengths and limitations

We were able to assess almost all referrals completed between 2013 and 2017, making this the most comprehensive summary of recent postmarketing drug regulatory decision-making in Europe. The assessment reports are a comprehensive summary of the evidence used in decision-making, meaning we were able to determine how each type of evidence contributed to the final recommendations.

We were unable to directly assess the quality and validity of individual studies included in the assessments. However, by reviewing the assessment reports, we evaluated how the evidence had been rated by the committees and how it had contributed to the overall decisions made. Occasionally, studies were mentioned in the assessment reports but no reference to a publication was given, or referencing was incomplete, and there was insufficient detail for readers to determine basic information such as the study design or setting. For example, for the assessment report on combined hormonal contraceptives (EMEA/H/A-31/1356), it was not clear whether some of the trials mentioned were randomised or not. More consistent and comprehensive referencing in assessment reports would increase the transparency of decision-making to the public and other stakeholders.

Judgement about how evidence was used in an assessment is to some extent subjective and is also reliant on what is recorded in each assessment report. However, close agreement was achieved between the two reviewers in this study.

Previous studies of the role of different evidence types in drug regulatory decision-making have largely focused on MA withdrawals/suspensions.3–11 25 These studies highlight how the balance of evidence types has shifted over time, from heavy reliance on spontaneous reports to a more comprehensive reliance on varied evidence types including non-interventional studies, randomised controlled trials and meta-analyses. Over a similar time period, the overall number of non-interventional studies conducted and published also appears to be increasing, with studies of UK electronic primary care data a prime example of this trend.26 With the increase in research opportunities provided by new database linkages this publication trend is likely to continue.

Unique strengths of non-interventional evidence

Non-interventional evidence was particularly useful for the assessment of product safety in situations where evidence from randomised controlled trials was limited, such as the quantification of rare events and the investigation of special populations (eg, pregnant women and children). While other types of evidence are also useful in some of these areas, our study highlighted occasions when non-interventional evidence is unique and vital for regulatory decision-making. The risk of developmental disability and birth defects in the offspring of women taking valproate in pregnancy is a key example.27 This rare outcome, occurring in a group largely excluded from randomised trials, could not have been characterised and quantified without large, well-powered non-interventional studies. Similarly, the detailed characterisation and quantification of adverse outcomes associated with nonsteroidal anti-inflammatory drugs and oral contraceptives could not have been achieved without good quality non-interventional evidence. Where media interest led to stimulated spontaneous reporting, as in the case of HPV vaccines and various adverse effects, unbiased evidence from non-interventional settings was vital in providing reassurance of safety, enabling continued use of the vaccine with no further action required. Randomised trials used to justify licensing of medicines are simply too small to detect even relatively common adverse reactions: in the EU, the median number of patients studied for a new active substance is 1708 for standard medicines and 438 for orphan medicines.28 Rare adverse reactions (such as those occurring in 1 in 500 patients) will not have been detected as caused by the medicine, yet such rare effects can dramatically alter the benefit/risk balance of the medicine.
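
A back-of-envelope calculation (ours, not the authors') illustrates why trials of this size cannot establish causation for an adverse reaction occurring in 1 in 500 patients: a few cases may well arise by chance, and if no case is seen the rate can only be bounded at roughly 3/n (the 'rule of three').

    # Illustrative calculation (not from the paper) using the premarketing trial
    # sizes cited above and an adverse reaction risk of 1 in 500.
    p = 1 / 500
    for label, n in [("standard medicine", 1708), ("orphan medicine", 438)]:
        expected = p * n                   # expected number of affected patients
        p_at_least_one = 1 - (1 - p) ** n  # chance of observing any case at all
        rule_of_three = 3 / n              # ~95% upper bound on the rate if no case is seen
        print(f"{label}: expected cases = {expected:.1f}, "
              f"P(>=1 case) = {p_at_least_one:.2f}, "
              f"rate excludable with 0 cases ~ 1 in {1 / rule_of_three:.0f}")
    # Even when a handful of cases occur, attributing them to the medicine requires
    # comparison against the background rate, which trials of this size rarely support.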

Where the EMA’s committees call for further studies to be done, they frequently require non-interventional evidence. There is increasing recognition that regulatory action to minimise risks needs to be followed up to determine how effective it has been.29 Almost all drug regulatory action involves making changes to how medicines are used in routine clinical care, and determining whether new directives are being followed requires evidence obtained in the routine clinical care setting. Patterns of drug usage and quantification or characterisation of adverse events following regulatory action are often required; non-interventional studies will be important here, and though spontaneous reports may also be useful, they are mostly unable to give quantitative information.

There are three key elements required to ensure a successful future for non-interventional evidence within the framework of drug regulatory science. First, there are legitimate concerns regarding the use of evidence from non-interventional studies in drug regulation, given the potential problems of missing data and residual confounding.30 Through high-quality study design, conduct and reporting, these issues can in many cases be resolved.31 Second, timely evidence is needed; non-interventional studies can be conducted rapidly in response to emerging issues, or to measure the effectiveness of past regulatory action. Third, the data used in non-interventional studies need to be of the highest standard. This includes both the quality of the data and their generalisability to the population from which they come. Data quality can be monitored and assured by data custodians.32 Generalisability relies on research data being drawn from a representative sample of the population. Whether data are taken from existing medical records or newly collected for a specific study, this requires the majority of patients to consent to their data being included. For such a transaction between researchers and patients to operate successfully, maintaining anonymity and confidentiality is paramount.

Conclusions

Regulatory decision-making about the safety and efficacy of medication in the EU relies on evidence obtained from a wide range of sources; most frequently from randomised trials, spontaneous reports and non-interventional studies. Non-interventional evidence can be vital for characterising and quantifying adverse drug reactions, is often needed for monitoring the effectiveness of regulatory action to minimise risks, and in certain situations will be the only available evidence.

Footnotes

Contributors: IJD conceived the study. All authors (JPB, KW, SJE, KB, LS and IJD) were involved substantially in the design and planning of the study. JPB and IJD undertook the data collection and wrote the initial draft of the manuscript. JPB conducted the data analyses. All authors interpreted the results, contributed to later drafts of the manuscript and approved the final manuscript.

Funding: This study was funded by the Association of the British Pharmaceutical Industry’s (ABPI’s) Pharmaceutical Industry Health Information Group. JPB is supported by the grant from the ABPI for the study in question. KB holds a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society. LS was supported by a Wellcome Trust senior research fellowship in clinical science (098504/Z/12/Z). IJD is supported by an unrestricted grant from GlaxoSmithKline.

Disclaimer: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: JPB had financial support from ABPI for the submitted work; IJD has received a grant from the ABPI for the study in question, financial support from GlaxoSmithKline for work unrelated to the study in question, has consulted for GlaxoSmithKline and Gilead, and holds stock in GlaxoSmithKline; SJE is an independent European Commission-appointed expert member of EMA’s Pharmacovigilance Risk Assessment Committee; LS reports personal fees from GSK outside the submitted work; there are no other relationships or activities that could appear to have influenced the submitted work. The views expressed in this article are personal views of the author and may not be understood or quoted as being made on behalf of or reflecting the position of the European Medicines Agency or one of its committees or working parties.

Patient consent for publication: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data are available in a public, open access repository. All data analysed is available on the European Medicines Agency (https://www.ema.europa.eu/) and EU Register of Post-Authorisation Studies (http://www.encepp.eu/) websites.

References

  • 1. Papoutsi C, Reed JE, Marston C, et al. Patient and public views about the security and privacy of electronic health records (EHRs) in the UK: results from a mixed methods study. BMC Med Inform Decis Mak 2015;15:86. doi:10.1186/s12911-015-0202-2
  • 2. Campos-Castillo C, Anthony DL. The double-edged sword of electronic health records: implications for patient disclosure. J Am Med Inform Assoc 2015;22:e130–40. doi:10.1136/amiajnl-2014-002804
  • 3. Arnaiz JA, Carné X, Riba N, et al. The use of evidence in pharmacovigilance. Case reports as the reference source for drug withdrawals. Eur J Clin Pharmacol 2001;57:89–91. doi:10.1007/s002280100265
  • 4. Clarke A, Deeks JJ, Shakir SAW. An assessment of the publicly disseminated evidence of safety used in decisions to withdraw medicinal products from the UK and US markets. Drug Saf 2006;29:175–81. doi:10.2165/00002018-200629020-00008
  • 5. McNaughton R, Huet G, Shakir S. An investigation into drug products withdrawn from the EU market between 2002 and 2011 for safety reasons and the evidence used to support the decision-making. BMJ Open 2014;4:e004221. doi:10.1136/bmjopen-2013-004221
  • 6. Olivier P, Montastruc J-L. The nature of the scientific evidence leading to drug withdrawals for pharmacovigilance reasons in France. Pharmacoepidemiol Drug Saf 2006;15:808–12. doi:10.1002/pds.1248
  • 7. Onakpoya IJ, Heneghan CJ, Aronson JK. Post-marketing withdrawal of anti-obesity medicinal products because of adverse drug reactions: a systematic review. BMC Med 2016;14:191. doi:10.1186/s12916-016-0735-y
  • 8. Onakpoya IJ, Heneghan CJ, Aronson JK. Post-marketing withdrawal of analgesic medications because of adverse drug reactions: a systematic review. Expert Opin Drug Saf 2018;17:63–72. doi:10.1080/14740338.2018.1398232
  • 9. Paludetto M-N, Olivier-Abbal P, Montastruc J-L. Is spontaneous reporting always the most important information supporting drug withdrawals for pharmacovigilance reasons in France? Pharmacoepidemiol Drug Saf 2012;21:1289–94. doi:10.1002/pds.3333
  • 10. Rawson NSB. Drug safety: withdrawn medications are only part of the picture. BMC Med 2016;14:28. doi:10.1186/s12916-016-0579-5
  • 11. Lane S, Lynn E, Shakir S. Investigation assessing the publicly available evidence supporting postmarketing withdrawals, revocations and suspensions of marketing authorisations in the EU since 2012. BMJ Open 2018;8:e019759. doi:10.1136/bmjopen-2017-019759
  • 12. Downing NS, Shah ND, Aminawung JA, et al. Postmarket safety events among novel therapeutics approved by the US Food and Drug Administration between 2001 and 2010. JAMA 2017;317:1854–63. doi:10.1001/jama.2017.5150
  • 13. Zeitoun J-D, Lefèvre JH, Downing NS, et al. Regulatory review time and post-market safety events for novel medicines approved by the EMA between 2001 and 2010: a cross-sectional study. Br J Clin Pharmacol 2015;80:716–26. doi:10.1111/bcp.12643
  • 14. Lester J, Neyarapally GA, Lipowski E, et al. Evaluation of FDA safety-related drug label changes in 2010. Pharmacoepidemiol Drug Saf 2013;22:302–5. doi:10.1002/pds.3395
  • 15. 21st Century Cures Act, H.R.34, 2015.
  • 16. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence - what is it and what can it tell us? N Engl J Med 2016;375:2293–7. doi:10.1056/NEJMsb1609216
  • 17. Kalkman S, van Thiel GJMW, Zuidgeest MGP, et al. Series: pragmatic trials and real world evidence: paper 4. Informed consent. J Clin Epidemiol 2017;89:181–7. doi:10.1016/j.jclinepi.2017.03.019
  • 18. Avorn J, Kesselheim AS. The 21st century cures act — will it take us back in time? N Engl J Med 2015;372:2473–5. doi:10.1056/NEJMp1506964
  • 19. European Medicines Agency. Final report on the adaptive pathways pilot, 2016.
  • 20. Regulation (EU) No 1235/2010 of the European Parliament and of the Council. Official Journal of the European Union 2010.
  • 21. European Commission. Commission Implementing Regulation (EU) No 520/2012 of 19 June 2012 on the performance of pharmacovigilance activities provided for in Regulation (EC) No 726/2004 of the European Parliament and of the Council and Directive 2001/83/EC of the European Parliament and of the Council. Official Journal of the European Union 2012.
  • 22. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. Medical Dictionary for Regulatory Activities. Available: https://bioportal.bioontology.org/ontologies/MEDDRA
  • 23. World Health Organization Collaborating Centre for Drug Statistics Methodology. Anatomical Therapeutic Chemical classification system. Available: https://www.whocc.no/atc_ddd_index/
  • 24. Knottnerus JA, Tugwell P. Research methods must find ways of accommodating clinical reality, not ignoring it: the need for pragmatic trials. J Clin Epidemiol 2017;88:1–3. doi:10.1016/j.jclinepi.2017.08.012
  • 25. Ishiguro C, Hall M, Neyarapally GA, et al. Post-market drug safety evidence sources: an analysis of FDA drug safety communications. Pharmacoepidemiol Drug Saf 2012;21:1134–6. doi:10.1002/pds.3317
  • 26. Vezyridis P, Timmons S. Evolution of primary care databases in UK: a scientometric analysis of research output. BMJ Open 2016;6:e012785. doi:10.1136/bmjopen-2016-012785
  • 27. European Medicines Agency. Valproate Article 31 assessment report, 2014.
  • 28. Duijnhoven RG, Straus SMJM, Raine JM, et al. Number of patients studied prior to approval of new medicines: a database analysis. PLoS Med 2013;10:e1001407. doi:10.1371/journal.pmed.1001407
  • 29. European Commission. Directive 2010/84/EU of the European Parliament and of the Council of 15 December 2010 amending, as regards pharmacovigilance, Directive 2001/83/EC on the Community code relating to medicinal products for human use, 2010.
  • 30. Kesselheim AS, Avorn J. New "21st Century Cures" legislation: speed and ease vs science. JAMA 2017;317:581–2. doi:10.1001/jama.2016.20640
  • 31. Goodman SN, Schneeweiss S, Baiocchi M. Using design thinking to differentiate useful from misleading evidence in observational research. JAMA 2017;317:705–7. doi:10.1001/jama.2016.19970
  • 32. Herrett E, Gallagher AM, Bhaskaran K, et al. Data resource profile: Clinical Practice Research Datalink (CPRD). Int J Epidemiol 2015;44:827–36. doi:10.1093/ije/dyv098
