Abstract
Regulators are increasingly mandating the use of pharmaceutical risk-minimization programs for a variety of medicinal products. To date, however, evaluations of these programs have shown mixed results and relatively little attention has been directed at diagnosing the specific factors contributing to program success or lack thereof. Given the growing use of these programs in many different patient populations, it is imperative to understand how best to design, deliver, disseminate, and assess them. In this paper, we argue that current approaches to designing, implementing, and evaluating risk-minimization programs could be improved by applying evidence- and theory-based ‘best practices’ from implementation science. We highlight commonly encountered challenges and gaps in the design, implementation, and evaluation of pharmaceutical risk-minimization initiatives and propose three key recommendations to address these issues: (1) risk-minimization program design should utilize models and frameworks that guide what should be done to produce successful outcomes and what questions should be addressed to evaluate program success; (2) intervention activities and tools should be theoretically grounded and evidence based; and (3) evaluation plans should incorporate a mixed-methods approach, pragmatic trial designs, and a range of outcomes. Regulators, practitioners, policy makers, and researchers are encouraged to apply these best practices in order to improve the public health impact of this important regulatory tool.
Key Points
Although regulators are increasingly mandating pharmaceutical risk-minimization programs, insufficient attention has been given to maximizing or measuring their effectiveness as public health interventions.
Application of evidence- and theory-based approaches from the field of implementation science can increase the likelihood of program effectiveness by guiding what should be done to produce successful outcomes and what types of questions should be asked to assess program success.
Introduction
Pharmaceutical risk-minimization programs are public health interventions that are legally mandated in certain countries as part of the pharmacovigilance strategy for specific drugs. In essence, such programs are intended to achieve a positive benefit-to-risk balance for these medications by ensuring that “the right prescriber provides the right drug to the right patient at the right dose and at the right time” under ‘real-world use’ conditions [1]. Risk-minimization programs are often needed to gain more insight into potentially concerning aspects of a product’s safety profile. These programs can also enable regulators to address unmet therapeutic needs by providing access to medicines which, due to the significant risk(s) they pose to patients, might not otherwise have been approved or permitted to retain their marketing authorization.
Risk-minimization programs, as a type of pharmacovigilance tool, are designed, implemented, and evaluated within the context of a heavily regulated environment. In both the USA and the European Union (EU), the ability to require risk-minimization programs is statutorily defined [2, 3]. Regulatory guidances set forth recommendations for risk-minimization program planning, implementation, and evaluation [1–5]. These guidances also specify categories of acceptable risk-minimization program intervention components or ‘tools.’ Examples of the latter include proactive communication [e.g., patient information leaflets, Dear Healthcare Provider (HCP) letters], education and training (e.g., educational programs, clinical decision aids), and restrictions on prescribing and/or drug distribution (e.g., HCP and/or patient certification, patient informed consent or ‘safe use’ contracts).
Over the past decade, numerous risk-minimization programs have been implemented and evaluated for a range of drug products across multiple therapeutic areas, and public discussion and scrutiny of risk-minimization initiatives has risen sharply [6–8]. To date, however, only a handful of these programs have been shown to be effective [9–11]. As a result, there is an urgent need to identify factors that contribute to successful risk-minimization programs. Efforts to do so, however, are stymied by the fact that programs have been designed without reference to extant theory, conceptual models and frameworks, or prior research. In addition, the process and context of program implementation are typically not systematically assessed or reported, and evaluations have been inadequate in terms of scope and methodologic rigor [9–11].
In recent years, there has been increased focus on improving the effectiveness of a variety of pharmacovigilance methods by introducing scientific approaches and techniques borrowed from other fields [12, 13]. Similarly, we argue that now is an opportune moment to advance the practice of pharmaceutical risk minimization by applying evidence- and theory-based methods from another discipline, the field of implementation science. Implementation science is the study of strategies to adopt and integrate evidence-based health interventions and change healthcare practice patterns within specific settings [14]. The field is supported by a substantial and growing body of empirical research [15, 16], a seminal textbook [17], dedicated US Government websites [18, 19], and a journal [20].
As a discipline, implementation science emphasizes three interacting factors which collectively influence program effectiveness: the attributes of the proposed intervention, the characteristics of the intended adopters, and aspects of the intervention delivery context [21, 22]. These factors are similarly critical for risk-minimization programs that aim to integrate new practices or processes within the healthcare system, target individual (e.g., patients, HCPs) and/or organizational behavior change in order to adopt these new practices, and involve implementation across numerous, heterogeneous settings (e.g., home, clinic, specialty pharmacy) and geographical areas [6, 7].
Greater attention to these three factors can improve the overall quality, use, and impact of risk-minimization programs. For example, programs are more likely to be adopted if they are perceived as adding value and integrate easily within existing healthcare delivery processes. Moreover, they are more likely to be effective if they are implemented in a targeted and coordinated manner by informed, committed stakeholders in settings that are adequately resourced and supportive [21–23].
The purpose of this paper is to share ‘best practices’ from the field of implementation science and demonstrate how they can be applied to pharmaceutical risk minimization. This effort is particularly timely given the recent call for initiatives to identify and address knowledge gaps for the prevention of adverse drug effects [24, 25]. The application of implementation science methods offers multiple advantages. First, it can provide a systematic and comprehensive approach to conceptualizing, implementing, and evaluating risk-minimization programs. Second, it can spur development of an evidence base and facilitate sharing of results across different risk-minimization initiatives. Third, it can enhance the likelihood that the implementation strategies that are used are evidence-based ones. Lastly, it can ensure that adequate and appropriate data are collected for evaluation purposes, thereby improving the evaluator’s ability to determine whether the program was truly effective or not.
Areas to Improve Pharmaceutical Risk Minimization
We present recommendations from the perspective of implementation science according to the phases of risk-minimization program design, implementation, and evaluation. This perspective is based on the collective learning of the authors in reviewing, designing, conducting, and evaluating numerous risk-minimization programs in the USA and EU and their service on several US Food and Drug Administration (FDA) advisory groups and professional working groups [26–32].
Risk-Minimization Program Design
Table 1 compares suggested ‘best practices’ from implementation science with current practices in risk-minimization program design. Further elaboration on these proposed best practices is provided below.
Table 1. Comparison of implementation science best practices with actual practice in risk-minimization program design

| Optimal program design feature | Implementation science best practices | Pharmaceutical risk-minimization programs: actual practice | Gap? |
|---|---|---|---|
| Use of models and frameworks | Theoretical models guide conceptualization of the risk-minimization intervention and hypothesis generation. Intervention models and frameworks guide program planning to increase the likelihood of effectiveness by focusing on the essential strategies for successful translation. Evaluation models and frameworks guide the types of questions that should be asked to assess the success of the risk-minimization program. | Current risk-minimization strategies are generally atheoretical and developed without the benefit of comprehensive, well-tested models and frameworks to guide intervention planning, implementation, dissemination, and evaluation [8]. | Yes |
| Evidence-based | Intervention components are selected and designed based on prior learning and empirical evidence. | Justification for intervention components and implementation design is generally absent; design elements are largely derived from regulatory precedent. | Yes |
| Patient and stakeholder centered | Formative evaluation is conducted with stakeholders as part of the design process, including patients and staff. Implementation interventions should ideally be compatible with existing patterns of care and workflows to facilitate adoption. Implementation interventions should be designed for sustainability given the context of the program. | Varies by program. Some consideration is given to compatibility with clinical and patient workflows (i.e., patient and healthcare burden). However, formative research is typically not conducted and/or presented at the time a risk-minimization program is approved. Program costs and sustainability are not addressed. | Partial |
| Multi-faceted and multi-level | Multiple, integrated intervention elements are delivered in unison for increased effectiveness. Implementation program components are integrated across patient, provider, and system levels using a social–ecological framework of healthcare delivery. | Varies by program; some are more developed than others. There is some over-reliance on a single element to achieve the desired goal. Programs are usually directed at multiple levels (e.g., patient, physician, hospital, and/or pharmacy). | Partial |
| Dissemination and communication strategies | Target audience(s) are segmented according to their level of knowledge, attitudes, and beliefs. Implementation messaging should be appropriately targeted and/or tailored to the audience. Active dissemination strategies are used, involving multiple communication channels of the appropriate scale (e.g., reach and frequency) given the target audience(s). | Communication strategies and examples of targeted messaging are typically not presented at the time a risk-minimization program is approved. Communication campaign metrics are not specified. | Yes |
| Adaptable | Core (non-mutable) program elements are identified. Implementation flexibility is allowed for non-core elements to accommodate differences in, and allow adaptation for, contextual factors across sites and areas. | Regulatory precedent is that programs must be implemented uniformly within a nation; however, programs often vary between nations under different regulatory authorities. | Partial |
Implementation science draws upon a variety of models and frameworks to address different conceptual needs that arise at different phases of program design, implementation, and evaluation [33]. Three key conceptual needs at the program planning phase pertinent to risk-minimization initiatives are how to (1) design the program to produce successful outcomes under real-world conditions; (2) identify and select program intervention components that will be most effective in changing individual and/or organizational behavior; and (3) determine the right questions to ask in order to assess program success.
Intervention models or frameworks can be utilized to address the question of how to produce successful program outcomes and to ensure that program intervention components, progress measures, and implementation processes are integrated [33–35]. Intervention models emphasize the use of specific intervention components or tools that have been empirically demonstrated to be feasible, acceptable, and effective. Evidence-based tools can be identified by reviewing the medical, public health, and social marketing literature (e.g., systematic reviews, meta-analyses, medical guidelines, and state, federal, and international information resources). When existing evidence is limited or absent, preliminary evidence regarding intervention feasibility and acceptability can be generated via human factors studies that simulate ‘real-world’ use conditions and/or piloting conducted as part of the phase III clinical program.
Intervention models emphasize the importance of being stakeholder-centered to enhance the likelihood that intervention components will be acceptable and feasible to the target audience and amenable to integration with existing processes and systems [33–35]. To this end, risk-minimization planning should involve formative research with key stakeholders to gather input on program design and strategies for engaging stakeholders in the process of program implementation and evaluation [33]. Key stakeholders include patients, informal caregivers, the public, healthcare providers, healthcare insurance purchasers, and payers [36].
Risk-minimization programs should also draw upon relevant theories to guide conceptualization and selection of individual intervention components, activities, or tools. For example, behavioral theories have been used to guide development of patient educational materials and training programs [37], and to design computer-based decision-support tools for physicians [38].
Interventions targeting multiple audiences and multiple levels (i.e., the majority of risk-minimization programs to date) are more likely to be effective and to yield clear science-based results when guided by one or more theoretical models [33–35]. A theory-based approach enables generation of testable hypotheses and the linking of program results to a relevant empirical literature, providing insight into how, when, and why a given risk-minimization program was successful by shedding light on the mechanism(s) of action responsible for program success. Relevant models include those that address individual behavior change (e.g., theory of planned behavior, theory of reasoned action) [39, 40], communication processes [41], and diffusion of innovation theories [42]. Social–ecological models are also useful for linking program components across different levels (e.g., individual patient, healthcare setting, community) and can increase the likelihood that intervention impact will be more comprehensive and sustained [43, 44].
A key, though largely under-recognized, challenge for pharmaceutical risk-minimization program planning is how to implement the program in actual practice under real-world conditions in a manner that preserves the fidelity of the intervention as originally approved yet is flexible enough to accommodate necessary local adaptation. A potential reason why this issue has been under-appreciated to date is that both regulators and industry drug safety professionals are most familiar with clinical trials and hence focus almost exclusively on issues of internal validity. Risk-minimization programs, on the other hand, need to maximize external validity in order to achieve the desired impact and hence need to be designed with dissemination in mind. Principles of ‘evaluability,’ which assess the likelihood that a program can be taken to scale, should also be incorporated during the design process [45].
To increase external validity, risk-minimization programs should be designed to incorporate active dissemination strategies. Active dissemination efforts, which feature multiple communication methods targeting multiple audiences and involving peer-to-peer human interaction, have been empirically proven to be more effective than passive strategies, such as printed pamphlets or informational websites, alone [46, 47]. ‘Designing for dissemination’ involves identifying the processes and factors that affect the adoption of a risk-minimization program so as to increase the likelihood that a program will be implemented and endorsed by local members and integrated into existing practices and procedures [46, 47]. Key adopter characteristics and contextual factors should be identified as part of the program design and assessed during the program evaluation [21].
To enhance the effectiveness of the information being provided, risk communication messages should be designed so as to address five dimensions: (1) identity (i.e., what is the harm associated with the risk); (2) probability of risk occurring; (3) permanence of risk; (4) timing (i.e., when is it likely to occur); and (5) value (i.e., how much does the consequence matter) [41]. Further information can be found in an FDA evidence-based user’s guide for best practices in communicating risks and benefits [48].
Numerous evaluation models and frameworks are useful in guiding the design of risk-minimization program implementation, dissemination, and evaluation [10, 15, 16]. A recent review conducted by Tabak and colleagues [15] identified 61 different models and frameworks and categorized them along three criteria: (1) construct flexibility (ability to adapt and apply to a wide array of contexts); (2) dissemination (spreading evidence) and implementation (integration of evidence within a setting) continuum; and (3) level of socio-ecological framework (healthcare system, community, organization, or individual). Using these criteria, the appropriate models and/or frameworks for a given risk-minimization program can be identified and integrated into the planning process. A guide to applying models and frameworks is provided by the US Department of Veterans Affairs Quality Enhancement Research Initiative (QUERI) [19].
Because risk-minimization programs typically need to be carried out in a range of countries or locales where a product is marketed, program adaptability is a critical factor for successful local implementation [49]. As a result, an important task during the design phase is to specify which program components are essential or ‘core’ and which are ‘non-core.’ Core components refer to specific elements of the intervention that are critical to its effectiveness. Non-core elements, sometimes referred to as the ‘adaptable periphery,’ are adaptable elements, structures, and systems related to the risk-minimization program and the organizational settings into which it is being implemented [16]. For example, a prescriber training curriculum could be designated a ‘core’ element, whereas the mode of training delivery (e.g., via web, printed materials, or group training) could be designated ‘non-core’: the delivery modality, as part of the adaptable periphery, would be allowed to vary to best suit the needs of each setting. In general, local variations or refinements to non-core elements should be encouraged, as greater adaptability can increase the likelihood that the program will continue to be delivered as designed [21].
Case Examples
The Exalgo® (hydromorphone HCl) risk-minimization program to address product abuse is an example of a multi-level program that was built around an alliance among physicians, patients, and pharmacists and incorporated pre-testing of programmatic elements. The design could have been further strengthened through explicit use of a behavioral change model to guide messaging [50, 51]. A second example is that of Yervoy® (ipilimumab), a product that was approved in the USA for the treatment of late-stage melanoma with a risk-minimization communication program designed to address severe autoimmune reactions [52]. The risk-minimization program was developed synergistically with market launch and commercialization planning activities, illustrating the concept of designing for diffusion [52].
Program Implementation
Recommended best practices from implementation science for use during program delivery and current practices in pharmaceutical risk-minimization program implementation are summarized in Table 2.
Table 2. Comparison of implementation science best practices with actual practice in risk-minimization program implementation

| Optimal program implementation feature | Implementation science best practices | Pharmaceutical risk-minimization programs: actual practice | Gap? |
|---|---|---|---|
| Organization and delivery | Formal collaborations and governance structures are specified between a central planning group and the local teams charged with implementing the program. Organizational readiness-to-change is assessed to inform local implementation adaptation. Champions are identified and engaged within the local organization and/or target audience (e.g., specialty or provider group) to facilitate implementation. Training and technical assistance are provided at program initiation and throughout implementation. | Risk-minimization programs are designed and approved by regulatory agencies at the national level. Programs are either implemented at the local level by individuals who have multiple competing priorities with varying levels of skills, commitment, and resources, or at the national level by individuals who do not have an understanding of local organizational challenges and barriers. | Yes |
| Process measures | Implementation is systematically evaluated for: reach (absolute number, proportion, and representativeness of participants); adoption (absolute number, proportion, and representativeness of participating settings and providers); fidelity (extent to which key program components were delivered as designed); and cost and adaptations (time and resources required, and extent to which program activities were modified). | In general, process measures are not pre-specified at the time a risk-minimization program is approved. During program implementation, process measures are generally not reported in real time; therefore, there is limited early assessment of how well the program is being implemented under real-world conditions. The exception is products with distribution restrictions (e.g., patient, provider, pharmacy registries), which have greater ability to monitor implementation progress than products without such distribution systems. | Partial |
| Sustainability | Promising practices, solutions, and results among implementing teams are shared across sites to increase the likelihood of program sustainability. Ongoing training and technical assistance are provided to sites periodically to minimize intervention drift and the impact of staff turnover. | Typically, risk-minimization programs must be delivered over the lifetime of product marketing. The need for assessing patient and healthcare system burden has been identified, but methods have not been established. Local learning on how best to adapt a program is not included in program evaluations presented to regulatory agencies. Re-training (or re-certification) has not been discussed for healthcare providers. | |
Risk-minimization programs can be challenging to implement because implementation may need to occur at multiple levels and/or be conducted by multiple parties. Typically, risk-minimization programs are designed by staff from the central office of the marketing authorization holder (MAH) in conjunction with the requesting health authority. For globally marketed products, however, it is often staff at the affiliate offices who are actually responsible for program implementation. For more complex risk-minimization programs (e.g., prescriber certification, restricted dispensing), an additional level of implementation may involve engaging with third-party vendors to build specific program infrastructure (e.g., central data collection repositories, patient service ‘hubs,’ and quality monitoring systems). A final level may involve program implementation within the actual healthcare system itself. At each of these levels, implementation is dependent on individuals who often have multiple competing priorities, and who possess varying levels of motivation, expertise, training, and access to needed resources.
Specific practices that can facilitate successful risk-minimization program implementation include assessing the implementing group’s ‘readiness-to-change,’ providing tailored training and technical assistance to implementers, identifying local ‘champions’ to initiate the program (from both within local participating organizations as well as targeted stakeholder groups), and establishing governance processes to strengthen the collaborative links across all levels of implementation [16, 53].
During the implementation process, there is a need to monitor and document factors affecting the external validity of risk-minimization programs so as to understand the generalizability of the learning for other programs. Multilevel process indicators that measure program reach, adoption, implementation fidelity, and costs/burden are useful in this regard [35, 54]. Process measures, such as reach and adoption, should be tracked in real time during program roll-out in order to identify and address problems early on.
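To make this kind of real-time tracking concrete, the sketch below shows one way reach and adoption could be monitored quarterly against pre-specified targets. It is a minimal illustration only: the data structure, field names, counts, and thresholds are hypothetical and are not drawn from any actual program.

```python
"""Minimal sketch of real-time process-measure tracking (reach and adoption).
All counts, field names, and thresholds are hypothetical and illustrative only."""

from dataclasses import dataclass


@dataclass
class ProcessSnapshot:
    period: str                  # reporting period, e.g., "2014-Q2"
    eligible_patients: int       # estimated patients indicated for the product
    enrolled_patients: int       # patients enrolled in the risk-minimization program
    eligible_prescribers: int    # prescribers expected to participate
    certified_prescribers: int   # prescribers who completed certification/training

    @property
    def reach(self) -> float:
        """Proportion of the eligible patient population reached by the program."""
        return self.enrolled_patients / self.eligible_patients

    @property
    def adoption(self) -> float:
        """Proportion of targeted prescribers who adopted the program."""
        return self.certified_prescribers / self.eligible_prescribers


def flag_low_uptake(snapshot: ProcessSnapshot, reach_target: float = 0.80,
                    adoption_target: float = 0.70) -> list[str]:
    """Return early-warning flags when roll-out falls below pre-specified targets."""
    flags = []
    if snapshot.reach < reach_target:
        flags.append(f"{snapshot.period}: reach {snapshot.reach:.0%} below target {reach_target:.0%}")
    if snapshot.adoption < adoption_target:
        flags.append(f"{snapshot.period}: adoption {snapshot.adoption:.0%} below target {adoption_target:.0%}")
    return flags


# Example: quarterly monitoring during program roll-out (hypothetical counts)
q2 = ProcessSnapshot("2014-Q2", eligible_patients=12000, enrolled_patients=8400,
                     eligible_prescribers=950, certified_prescribers=600)
print(flag_low_uptake(q2))
```

In practice, such indicators would be fed by the program's own enrollment and certification records, and the targets would be agreed with the relevant health authority in advance.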
Contextual factors should also be documented, as they can serve as important mediators and moderators of program effectiveness. Factors to assess include characteristics of the key implementers (e.g., job title and professional training, length of time in current role, prior experience with and attitudes towards risk-minimization programs); the level of awareness of, and degree of ‘buy-in’ from, the local healthcare provider community; features of the local healthcare delivery and reimbursement system; and local laws/regulations. Comprehensive process data can also provide key information on the impact of different levels of implementation, information that in turn can be valuable for generating recommendations for future program modifications [54]. Additionally, it is important to document the range of program adaptations or variations across all sites and locations where the risk-minimization program has been implemented.
Training and technical assistance should be provided on an ongoing basis to offset the effects of staff turnover and risk-minimization intervention ‘drift’ over time. Additionally, establishing ‘communities of practice’ (CoPs) can facilitate the exchange of best practices, promote engagement and identity-building, and enhance program sustainability [55]. CoPs can be established as local, regional, or national advisory boards comprised of healthcare delivery stakeholders in a manner analogous to how medical advisory boards are constituted to inform clinical development programs.
Case Examples
The use and types of implementation metrics for communication programs (e.g., reach, frequency, time on market) are illustrated by two examples: (1) programmatic efforts to reduce abuse of dextromethorphan by adolescents in the USA [56]; and (2) FDA’s ‘The Real Cost’ campaign to reduce tobacco use among adolescents [57].
The risk-minimization program for warfarin, an oral anticoagulant (marketed under the brand name of Coumadin® in the USA), is an example of a highly successful program implementation process. Key tools in the risk-minimization program included a computerized patient tracking software application and educational materials. Notably, prototypes for the risk-minimization materials were initially developed by practicing clinicians. A recognition program for clinics demonstrating excellence in improving quality of care for warfarin patients was used to incentivize clinics to adopt the new tools. Indeed, results showed that warfarin prescriptions continued to grow post-program roll-out and that, ultimately, program elements became integrated into standard of care for patients receiving this product [58].
Program Evaluation
Key evaluation features, best practices in implementation science, and current practices in risk-minimization program evaluation are summarized in Table 3.
Table 3. Comparison of implementation science best practices with actual practice in risk-minimization program evaluation

| Optimal program evaluation feature | Implementation science best practices | Pharmaceutical risk-minimization programs: actual practice | Gap? |
|---|---|---|---|
| Design | Implementation models and frameworks are used for systematic evaluation. Pragmatic trial designs are used to evaluate implementation effectiveness in order to increase the external validity of findings while maintaining strong internal validity, and to compare key subgroups in terms of program outcomes. A key feature of pragmatic designs is the recruitment of a representative range of settings, implementation personnel, and patients. Reference data from relevant sources (e.g., phase III trials, published literature) are used to interpret impact results. ‘Mixed methods’ are used to collect both qualitative and quantitative data to assess intervention contexts and impact, and to triangulate or confirm and validate findings. Information is provided on program adoption and ongoing maintenance across sites. | Standards on what constitutes adequate evaluation for risk-minimization programs have not been established, and there is no consensus regarding appropriate ‘thresholds of success’ for primary endpoints. The regulatory nature of risk-minimization programs has not permitted use of experimental trial designs. Interrupted time series and pre–post designs are often used without comparison groups [11], and studies utilize small, unrepresentative samples of patients, staff, or settings such that results lack external validity. Some qualitative and quantitative data are collected to evaluate knowledge, attitudes, and (to a limited extent) behaviors; however, reporting of data collection and analysis methods is uneven and generally under-described, particularly for qualitative methods. Triangulation of this learning with formal drug utilization or health services research studies is not generally performed. | Partial |
| Measures | Endpoints address a broad array of outcomes important to patients, practitioners, and policy makers (including regulatory authorities) and include measures of behavior (intended and observed), health outcomes, and cost effectiveness. Patient-centered outcomes are collected. Measurement is conducted from the perspective of multiple stakeholders (e.g., patients, providers, policy makers). Measures should be practical, easy to collect, feasible to measure, and sensitive to change. | There is little incentive to incorporate more measures than are minimally necessary for regulatory review. Risk-minimization program endpoints are narrow, typically limited to physician and patient knowledge, attitudes, and perceptions of clinical risk. Clinical outcomes can be rare adverse events, making it challenging to study the effects of the program on preventing these events. Program ‘burden’ and unintended consequences are typically not assessed. | Yes |
| Measurement frequency | Assessment timepoints are dictated by individual program design. Measures are collected at a frequency that minimizes burden but maximizes the ability to provide timely information for learning and quality improvement. | US Food and Drug Administration-mandated risk-minimization assessments are, in most instances, set at 18 months and 3 and 7 years, regardless of the program. Measurement frequency supports neither early program adaptation nor a learning healthcare system. | Yes |
Due to the multilevel, multi-stakeholder nature of risk-minimization programs, no single methodology is sufficient for conducting a robust evaluation of program implementation and impact. Qualitative methods are vital for assessing the context of program delivery and for characterizing the factors contributing to, or hindering, program success, while quantitative methods are instrumental for assessing intervention impact. Thus, it is preferable to utilize a combination of qualitative and quantitative methods or a ‘mixed methods’ approach [59, 60]. An example of this approach can be seen in the development of an enhanced FDA Patient Medication Guide [61]. Focus groups were conducted to elicit detailed qualitative input from patients regarding preferred features of a Medication Guide; prototypes were developed, and structured questionnaires were administered to obtain quantitative assessments of patient comprehension and information retention [61].
When using a mixed methods approach, there should be a priori specification of (1) the order in which each method will be used (e.g., sequentially or simultaneously); (2) the priority of the methods (e.g., whether the approaches will be equal or one will be the primary method); and (3) the purpose of the methodological combination (i.e., for purposes of convergence or complementarity) [59]. Multiple different mixed methods design typologies are applicable [59].
Current drug licensing requirements specify that risk-minimization programs, as a type of post-approval commitment, must be implemented fully once marketing authorization has been granted. To date, MAHs have rarely used experimental research designs for evaluating program impact and instead have typically employed less scientifically robust methodologies that do not involve randomization [62]. It is not clear whether this reflects regulatory dictates or industry preferences. Nonetheless, while randomized clinical trial designs are high on internal validity, their results have limited generalizability. In contrast, practical or ‘pragmatic’ trial designs address issues of both external and internal validity by recruiting diverse, heterogeneous samples, including multiple and representative settings and staff, using randomization, and assessing multiple program outcomes (including behavior change) at multiple program levels (e.g., organizational, patient, staff, healthcare provider) [62, 63]. Alternatively, quasi-experimental designs can be utilized (i.e., to capitalize on ‘natural experiments,’ such as when one country has implemented one version of a risk-minimization program while another country has implemented a modified version of that same program). In the absence of a comparator group, reference data from phase III trials and the published literature can also aid in interpreting program impact by providing an a priori threshold for program success [11, 64].
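As one illustration of the kind of quasi-experimental analysis mentioned above, the sketch below fits a segmented regression to a simulated interrupted time series, estimating the immediate level change and the trend change in a monitored prescribing rate after program launch. All data are simulated, and the model deliberately omits refinements (e.g., adjustment for autocorrelation and seasonality, or a control series) that a real evaluation would require.

```python
"""Minimal sketch of a segmented-regression (interrupted time series) analysis of a
risk-minimization program. All data are simulated and illustrative only."""

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# 24 months pre-intervention, 24 months post-intervention
months = np.arange(48)
post = (months >= 24).astype(int)                  # indicator: program in effect
time_since = np.where(post == 1, months - 24, 0)   # months since program launch

# Simulated outcome: monthly rate of the targeted prescribing behavior per 1,000 patients
# (e.g., dispensing of the monitored drug without the required laboratory test)
rate = 50 + 0.2 * months - 8 * post - 0.5 * time_since + rng.normal(0, 2, size=48)

df = pd.DataFrame({"rate": rate, "time": months, "post": post, "time_since": time_since})

# Segmented regression: baseline trend, level change at launch, and post-launch trend change
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params)      # 'post' estimates the immediate level change; 'time_since' the slope change
print(model.conf_int())  # interval estimates to compare against an a priori success threshold
```

The estimated level and slope changes, with their confidence intervals, could then be compared against the pre-specified threshold for program success agreed with the regulator.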
Some regulators may be receptive to using experimental or quasi-experimental designs for the purposes of risk-minimization evaluation; thus, MAHs should engage with the appropriate health authorities early on in order to obtain joint agreement as to the most feasible and scientifically rigorous evaluation design to utilize. New ‘adaptive drug licensing’ approaches also offer the promise of greater flexibility in how risk-minimization programs may be implemented and, hence, evaluated, in the future [64, 65]. Industry has an important role to play in advancing the science in this domain as well by supporting the development of new evaluation methods.
Currently, risk-minimization programs employ a limited range of evaluation outcomes (e.g., knowledge, comprehension, clinical) [1, 3, 5–7]. Given the present status of knowledge in this area, however, we argue that other types of endpoints should be measured as well. These include implementation and dissemination outcomes (e.g., the extent to which the targeted patient population was reached by the intervention and the extent to which the program was successfully replicated in different settings), behavior change (e.g., increases in the frequency of prescriber counseling or prescribing of specific screening or monitoring tests), patient quality of life, and cost effectiveness. Outcome measures should be practical, feasible, easy to collect, and sensitive to change. Frequency of measurement should be tailored to the individual attributes of each risk-minimization program. To minimize burden on both healthcare professionals and patients, existing data sources (e.g., electronic medical records, prescription dispensing records, healthcare claims databases) should be leveraged to the fullest extent possible [66–68].
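As a hedged example of leveraging existing data sources, the sketch below derives a behavior-change endpoint (the proportion of new users who received a recommended monitoring test within 30 days of first dispensing) from small, invented dispensing and laboratory extracts; the table and column names are hypothetical and would differ in any real claims database.

```python
"""Minimal sketch of deriving a behavior-change endpoint from existing data sources.
All records, table names, and column names are hypothetical."""

import pandas as pd

# Hypothetical extracts from a claims database
dispensings = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "first_dispense_date": pd.to_datetime(["2014-01-05", "2014-01-12", "2014-02-01", "2014-02-20"]),
})
lab_claims = pd.DataFrame({
    "patient_id": [1, 2, 2, 4],
    "test_date": pd.to_datetime(["2014-01-20", "2013-12-01", "2014-01-15", "2014-04-01"]),
})

# Join each new user to any monitoring test and keep tests within 0-30 days of first dispensing
merged = dispensings.merge(lab_claims, on="patient_id", how="left")
days_to_test = (merged["test_date"] - merged["first_dispense_date"]).dt.days
merged["tested_in_window"] = days_to_test.between(0, 30)

# Endpoint: proportion of new users with a timely monitoring test
endpoint = merged.groupby("patient_id")["tested_in_window"].any().mean()
print(f"Proportion with monitoring test within 30 days: {endpoint:.0%}")
```

The same logic scales to full dispensing and laboratory claims files, and the resulting proportion could be tracked over time as one of the behavior-change endpoints described above.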
Sharing of risk-minimization evaluation results is an important way to promote dissemination of successful risk-minimization interventions. Information on select risk-minimization program evaluations can be found in advisory committee documentation on the FDA’s website. The new EU Post-authorisation Study (PAS) Register will be posting the protocols and abstracts of results of risk-minimization evaluation studies. Program evaluation results should also be published in the peer-reviewed literature, similar to what is done for clinical trial results.
Case Examples
The isotretinoin iPLEDGE™ risk-minimization program to prevent birth defects demonstrates a comprehensive evaluation within the context of a closed distribution, registry-based system. It includes implementation metrics, assessment of knowledge and reported behaviors, and health outcomes (pregnancies) [69].
The Risk Mitigation Action Plan (RiskMAP) for oxycodone extended-release (original formulation of OxyContin®) illustrates a mixed-methods approach to program evaluation. Program impact was assessed using data from poison control center calls, treatment program admissions records, and law enforcement reports on drug raids/seizures [70, 71]. Interviews were conducted with drug abuse treatment experts, school and law enforcement personnel, and other local leaders [72]. These multiple information sources and outcome measures provided a richly detailed picture of the scope and nature of OxyContin abuse and were useful in guiding targeted intervention efforts moving forward [72, 73].
Conclusion
Pharmaceutical risk-minimization programs, as currently designed, implemented and evaluated, have yet to fulfill their potential as public health interventions. We offer a set of best practices from the field of implementation science to address commonly encountered challenges (or gaps) in the design, implementation, and evaluation of pharmaceutical risk-minimization initiatives. Two elements are necessary to operationalize implementation science into risk-minimization practice: access to behavioral science expertise, and synergy with drug development and commercialization activities. Industry can hire behavioral scientists to work on product safety teams, or contract with external experts. In addition, implementation science approaches, if appropriately integrated into existing drug development and post-marketing planning activities, pose no special obstacle to the timely achievement of filings and key post-marketing milestones. In sum, we encourage regulators, risk-minimization practitioners, policy makers, and researchers to apply implementation science best practices in order to improve the public health impact of this important regulatory tool.
Acknowledgments
The authors are grateful to Dr. Russell E. Glasgow at the Colorado Health Outcomes Research Program, University of Colorado School of Medicine, and to Dr. Emanuel Lohrmann, Merck KGaA, for their review and insightful comments on a draft of this manuscript.
Contributors
M.Y. Smith originated the study idea and developed the first draft of the paper.
E. Morrato re-wrote the tables, participated in re-writing sections of the text, and commented on all of the drafts. Both authors read and approved the final manuscript.
Ethical standards
This manuscript does not contain clinical studies or patient data.
Conflict of interest
Dr. Smith is a full-time employee of EMD Serono, Inc. and owns stock in Abbott Laboratories and AbbVie, Inc. Other than salary support, she received no funding for the conduct of this study or for the preparation of this manuscript. Dr. Smith is a former member of the Council on International Organizations of Medical Sciences (CIOMS) Working Group VIII “Signal Detection in Pharmacovigilance,” and Working Group IX “Practical Considerations for the Development and Application of a Toolkit for Medicinal Product Risk Management.” She has been an invited expert speaker at the US FDA Monthly Staff Forum, the FDA Risk Evaluation and Mitigation Strategy (REMS) Assessments Public Workshop (June 2012), and the Brookings Institution’s Expert Workshop “Strengthening Risk Evaluation and Mitigation Strategies” (September 2013). She is a member of the Benefit-Risk Assessment, Communication and Evaluation (BRACE) Special Interest Group (SIG), International Society for Pharmacoepidemiology (ISPE) (no financial interest). She is co-inventor of US patent 12/237,853, Title: System and methods for management of risk data and analytics (Reference #: 241957.000019; Purdue Ref.: 07-BM-0014US02).
Dr. Morrato received no financial support for the conduct of this study or for the preparation of this manuscript. She has received consulting fees from the Consumer Healthcare Products Association, Merck & Company, and Janssen Pharmaceuticals. She has also received travel support from the Consumer Healthcare Products Association and Merck & Company. She has received research grant support from Janssen Pharmaceuticals. Dr. Morrato is an advisor to the FDA and a former member of its Drug Safety and Risk Management Advisory Committee (DSARM). She has been an invited expert at the FDA REMS Assessments Public Workshop (June 2012), the Brookings Institution’s Expert Workshop “Strengthening Risk Evaluation and Mitigation Strategies” (September 2013—paid travel), the CTTI-FDA Opioid Workshop (paid travel), and the Conjoint Committee on Continuing Education, Opioid REMS (no financial interest). She is a member of the BRACE SIG, and ISPE (no financial interest).
This manuscript represents the views of the authors and does not represent the views of EMD Serono, Inc. nor FDA policy or opinion.
References
- 1.European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module V—risk management systems 2012 (Rev 1). http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2012/06/WC500129134.pdf. Accessed 19 Feb 2014.
- 2.Food and Drug Administration. Food and Drug Administration Amendment Act of 2007. http://www.fda.gov/regulatoryinformation/legislation/federalfooddrugandcosmeticactfdcact/significantamendmentstothefdcact/foodanddrugadministrationamendmentsactof2007/default.htm. Accessed 19 Feb 2014.
- 3.European Medicines Agency. Guideline on good pharmacovigilance practices (GVP). Module XVI—risk minimisation measures: selection of tools and effectiveness indicators 2013. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/06/WC500144010.pdf. Accessed 19 Feb 2014.
- 4.Food and Drug Administration. Guidance for industry: development and use of risk minimization action plans 2005. Rockville: Food and Drug Administration; 2005. http://www.fda.gov/downloads/RegulatoryInformation/Guidances/UCM126830.pdf. Accessed 19 Feb 2014.
- 5.Food and Drug Administration. Guidance for industry: format and content of proposed risk evaluation and mitigation strategies (REMS), REMS assessments, and proposed REMS modifications draft guidance 2009. Rockville: Food and Drug Administration; 2009. http://www.fda.gov/downloads/Drugs/Guidances/UCM184128.pdf. Accessed 19 Feb 2014.
- 6.European Medicines Agency. European public assessment reports. http://www.ema.europa.eu/ema/index.jsp?curl=pages/medicines/landing/epar_search.jsp&mid=WC0b01ac058001d125. Accessed 19 Feb 2014.
- 7.Food and Drug Administration. Approved risk evaluation and mitigation strategies (REMS). http://www.fda.gov/drugs/drugsafety/postmarketdrugsafetyinformationforpatientsandproviders/ucm111350.htm. Accessed 27 Mar 2014.
- 8.Morrato EH, Ling SB. The Drug Safety and Risk Management Advisory Committee: a case study of meeting frequency, content and outcomes before and after FDAAA. Med Care. 2012;50(11):970–986. doi: 10.1097/MLR.0b013e31826c872d.
- 9.Department of Health and Human Services. Office of Inspector General—FDA lacks comprehensive data to determine whether risk evaluation and mitigation strategies improve drug safety 2013. https://oig.hhs.gov/oei/reports/oei-04-11-00510.pdf. Accessed 19 Feb 2014.
- 10.Dusetzina SB, Higashi AS, Dorsey ER, et al. Impact of FDA drug risk communications on healthcare utilization and health behaviors: a systematic review. Med Care. 2012;50(6):466–478. doi: 10.1097/MLR.0b013e318245a160.
- 11.Gridchyna I, Cloutier AM, Nkeng L, et al. Methodological gaps in the assessment of risk minimisation interventions: a systematic review. Pharmacoepidemiol Drug Saf. 2014;23(6):572–579. doi: 10.1002/pds.3596.
- 12.Bahri P. Public pharmacovigilance communication: a process calling for evidence-based, objective-driven strategies. Drug Saf. 2010;33(12):1065–1079. doi: 10.2165/11539040-000000000-00000.
- 13.Harpaz R, DuMouchel W, Shah NH, et al. Novel data-mining methodologies for adverse drug event discovery and analysis. Clin Pharmacol Ther. 2012;91(6):1010–1021. doi: 10.1038/clpt.2012.50.
- 14.Glasgow RE, Vinson C, Chambers D, et al. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–1281. doi: 10.2105/AJPH.2012.300755.
- 15.Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–350. doi: 10.1016/j.amepre.2012.05.024.
- 16.Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50–61. doi: 10.1186/1748-5908-4-50.
- 17.Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and implementation research in health: translating science to practice. New York: Oxford University Press; 2012.
- 18.U.S. National Institutes of Health: Fogarty International Center. Frequently asked questions about implementation science. http://www.fic.nih.gov/News/Events/Implementation-science/Pages/faqs.aspx. Accessed 4 Jun 2014.
- 19.United States Department of Veterans Affairs. QUERI—Quality Enhancement Research Initiative. http://www.queri.research.va.gov/ciprs/about.cfm. Accessed 27 Mar 2014.
- 20.Implementation Science journal. http://www.implementationscience.com/. Accessed 11 Jun 2014.
- 21.Rabin BA, Brownson RC, Kerner JF, Glasgow RE. Methodologic challenges in disseminating evidence-based interventions to promote physical activity. Am J Prev Med. 2006;31(4S):S24–S34. doi: 10.1016/j.amepre.2006.06.009.
- 22.Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629. doi: 10.1111/j.0887-378X.2004.00325.x.
- 23.Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126–153. doi: 10.1177/0163278705284445.
- 24.Systems for improved access to pharmaceuticals and services. Preventing and minimizing risks associated with antituberculosis medicines to improve patient safety, 2013. Submitted to the U.S. Agency for International Development by the Systems for Improved Access to Pharmaceuticals and Services (SIAPS) Program. Arlington: Management Sciences for Health 3. http://siapsprogram.org/wp-content/uploads/2014/02/14-033-Min-Risk-Anti-TB-Meds-final.pdf. Accessed 26 Jun 2014.
- 25.EMA. Medication errors—follow-up actions from workshop–implementation plan 2014–2015. 15 April 2014. EMA/20791/2014. Human Medicines Research and Development Support. http://www.ema.europa.eu/docs/en_GB/document_library/Other/2014/04/WC500165496.pdf. Accessed 26 Jun 2014.
- 26.Food and Drug Administration. Drug Safety and Risk Management Advisory Committee. http://www.fda.gov/advisorycommittees/committeesmeetingmaterials/drugs/drugsafetyandriskmanagementadvisorycommittee/ucm094886.htm. Accessed 30 June 2014
- 27.Food and Drug Administration. Expert workshop: strengthening Risk Evaluation and Mitigation Strategies (REMS) through systematic analysis, standardized design, and evidence-based assessment. The Engelberg Center for Healthcare Reform. Brookings Institution; 25 Sep 2013; Washington, DC.
- 28.Food and Drug Administration. Expert workshop: the safety and long-term efficacy of extended-release and long-acting opioid analgesics. Center for Drug Evaluation and Research and the Clinical Trials Transformation Initiative. 12–13 Aug 2013; Silver Spring.
- 29.Food and Drug Administration. Public workshop: Risk Evaluation and Mitigation Strategies (REMS) assessments: social science methodologies to assess goals related to knowledge. 7 Jun 2012; Silver Spring; Docket no. FDA-2012-N-0408.
- 30.Food and Drug Administration. Public workshop: standardizing and evaluating risk evaluation and mitigation strategies (REMS). 25–26 Jul 2013; Silver Spring; Docket no. FDA-2013-N-0502.
- 31.Council on International Organizations of Medical Sciences. Practical considerations for development and application of a toolkit for medicinal product risk management: report from CIOMS Working Group IX. Geneva: CIOMS; 2014. (in press).
- 32.Council on International Organizations of Medical Sciences (CIOMS). Practical aspects of signal detection in pharmacovigilance: report of CIOMS Working Group VIII. Geneva: CIOMS; 2010.
- 33.Glasgow RE. What does it mean to be pragmatic: opportunities and challenges for pragmatic approaches. Health Educ Behav. 2013;40(3):257–265. doi: 10.1177/1090198113486805.
- 34.Glanz K, Bishop DB. Role of behavioral science theory in development and implementation of public health interventions. Annu Rev Pub Health. 2010;31:399–418. doi: 10.1146/annurev.publhealth.012809.103604.
- 35.Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–1327. doi: 10.2105/AJPH.89.9.1322.
- 36.Concannon TW, Meissner P, Grunbaum JA, McElwee N, Guise JM, Santa J, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med. 2012;27(8):985–991. doi: 10.1007/s11606-012-2037-1.
- 37.Smith MY, DuHamel KN, Egert J, Winkel G. Impact of a brief intervention on patient communication and barriers to pain management: results from a randomized controlled trial. Patient Educ Couns. 2010;81(1):79–86. doi: 10.1016/j.pec.2009.11.021.
- 38.Unrod M, Smith MY, Spring B, DePue J, Redd W, Winkel G. Randomized controlled trial of a computer-based, tailored intervention to increase smoking cessation counseling by primary care physicians. J Gen Intern Med. 2007;22(4):478–484. doi: 10.1007/s11606-006-0069-0.
- 39.Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50:179–182. doi: 10.1016/0749-5978(91)90020-T.
- 40.Ajzen I, Fishbein M. Understanding attitudes and predicting social behavior. Englewood Cliffs: Prentice-Hall; 1980.
- 41.Mebane FE. The importance of news media in pharmaceutical risk communication: proceedings of a workshop. Pharmacoepidemiol Drug Saf. 2005;14(5):297–306. doi: 10.1002/pds.993.
- 42.Rogers EM. Diffusion of innovations. New York: The Free Press; 2002.
- 43.Stokols D. Social ecology and behavioral medicine: implications for training, practice and policy. Behav Med. 2000;26(3):129–138. doi: 10.1080/08964280009595760.
- 44.Fisher EB, Brownson CA, O’Toole ML, et al. Ecological approaches to self-management: the case of diabetes. Am J Public Health. 2005;95(9):1523–1535. doi: 10.2105/AJPH.2005.066084.
- 45.Leviton LC, Khan LK, Rog D, et al. Evaluability assessment to improve public health policies, programs and practices. Annu Rev Public Health. 2010;31:213–233. doi: 10.1146/annurev.publhealth.012809.103625.
- 46.Dearing JW, Smith DK, Larson RS, Estabrooks CA. Designing for diffusion of a biomedical intervention. Am J Prev Med. 2013;44(IS2):S70–S76. doi: 10.1016/j.amepre.2012.09.038.
- 47.Kerner J, Rimer B, Emmons K. Introduction to the special section on dissemination: dissemination research and research dissemination: how can we close the gap? Health Psychol. 2005;24(5):443–446. doi: 10.1037/0278-6133.24.5.443.
- 48.Fischhoff B, Brewer NT, Downs JS, editors. Communicating risks and benefits: an evidence-based user’s guide. U.S. Department of Health and Human Services Food and Drug Administration Risk Communication Advisory Committee and consultants; 2011. http://www.fda.gov/ScienceResearch/SpecialTopics/RiskCommunication. Accessed 27 Mar 2014.
- 49.Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8:65. doi: 10.1186/1748-5908-8-65.
- 50.EXALGO™ (hydromorphone HCI) extended-release tablets (CII). NDA 21-217. United States Food and Drug Administration joint meeting of the Anesthetic and Life Support Drugs Advisory Committee with the Drug Safety and Risk Management Advisory Committee. September 23, 2009. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/AnestheticAndAnalgesicDrugProductsAdvisoryCommittee/UCM248769.pdf. Accessed 25 Jun 2014.
- 51.EXALGO™ (hydromorphone HCl) extended-release tablets CII. NDA 21-217. Joint meeting Anesthetic and Life Support Drugs Advisory Committee and Drug Safety and Risk Management Advisory Committee. 23 September 2009. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/AnestheticAndAnalgesicDrugProductsAdvisoryCommittee/UCM183035.pdf. Accessed 25 Jun 2014.
- 52.BLA 125377 YERVOY (ipilimumab) injection, for intravenous infusion. Human cytotoxic T-lymphocyte antigen-4 (CTLA-4)-blocking monoclonal antibody. Risk evaluation and mitigation strategy (REMS). Princeton: Bristol-Myers Squibb Company; 2012. http://www.fda.gov/downloads/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/UCM249435.pdf. Accessed 26 Jun 2014.
- 53.Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. AJPH. 2013;103(6):e38–e46. doi: 10.2105/AJPH.2013.301299.
- 54.Steckler A, Goodman RM, Kegler MC. Mobilizing organizations for health enhancement: theories of organizational change. In: Glanz K, Rimer BK, Lewis F, editors. Health behavior and health education: theory, research and practice. 3rd ed. San Francisco: Jossey Bass; 2002. pp. 335–366.
- 55.Li LC, Grimshaw JM, Nielsen C, et al. Use of communities of practice in business and healthcare sectors: a systematic review. Implement Sci. 2009;4:27. doi: 10.1186/1748-5908-4-27.
- 56.Dextromethorphan program. CHPA briefing book, meeting of the Drug Safety and Risk Management Advisory Committee, September 14, 2010. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/DrugSafetyandRiskManagementAdvisoryCommittee/UCM224448.pdf. Accessed 10 Jun 2014.
- 57.FDA. The Real Cost campaign. http://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/AbouttheCenterforTobaccoProducts/PublicEducationCampaigns/TheRealCostCampaign/ucm388656.htm. Accessed 10 Jun 2014.
- 58.Fetterman JE, Pines WL, Nikel WK, Slatko GH, editors. Current state of pharmaceutical risk management programs. In: A framework for pharmaceutical risk management. Washington, DC: Food and Drug Law Institute; 2003. pp. 20–1.
- 59.Albright K, Gechter K, Kempe A. Importance of mixed methods in pragmatic trials and dissemination and implementation research. Acad Pediatr. 2013;13(5):400–407. doi: 10.1016/j.acap.2013.06.010.
- 60.Creswell JW, Fetters MD, Ivankova NV. Designing a mixed methods study in primary care. Ann Fam Med. 2004;2(1):7–12. doi: 10.1370/afm.104.
- 61.Wolf MS, Wolf M, Bailey S, Serper M, Smith M, Davis T, et al. Comparative effectiveness of patient-centered strategies to improve FDA Medication Guides. Med Care (in press).
- 62.Gridchyna I, Cloutier A-M, Nkeng L, Craig C, Frise S, Moride Y. Methodological gaps in the assessment of risk minimization interventions: a systematic review. Pharmacoepidemiol Drug Saf. 2014;23(6):572–579. doi: 10.1002/pds.3596.
- 63.Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. CMAJ. 2009;180:E47–E57. doi: 10.1503/cmaj.090523.
- 64.Prieto L, Spooner A, Hidalgo-Simon, et al. Evaluation of the effectiveness of risk minimisation measures. Pharmacoepidemiol Drug Saf. 2012;21(8):896–899. doi: 10.1002/pds.3305.
- 65.European Medicines Agency. European Medicines Agency launches adaptive licensing pilot. March 19, 2014. http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2014/03/news_detail_002046.jsp&mid=WC0b01ac058004d5c1. Accessed 4 Apr 2014.
- 66.Eichler H-G, Oye K, Baird LG, et al. Adaptive licensing: taking the next step in the evolution of drug approval. Clin Pharmacol Ther. 2012;91(3):426–437. doi: 10.1038/clpt.2011.345.
- 67.Baker DW, Brown T, Buchanan DR, et al. Design of a randomized clinical trial to assess the comparative effectiveness of a multi-faceted intervention to improve adherence to colorectal cancer screening among patients cared for in a community health center. BMC Health Serv Res. 2013;13:153. http://www.biomedcentral.com/1472-6963/13/153. Accessed 4 Jun 2014.
- 68.Green BB, Anderson ML, Cook AJ, et al. e-Care for heart wellness: a feasibility trial to decrease blood pressure and cardiovascular risk. Am J Prev Med. 2014;46(4):368–377. doi: 10.1016/j.amepre.2013.11.009.
- 69.Drug Safety and Risk Management Advisory Committee; Dermatologic and Opthalmic Drugs Advisory Committee. Briefing document for iPLEDGE. Amnesteem® (isotretinoin capsules, USP) produced by Mylan Pharmaceuticals Inc.; Claravis™ (isotretinoin capsules, USP) produced by Barr Laboratories, Inc.; SOTRET® (isotretinoin capsules, USP) produced by Ranbaxy Laboratories Inc. http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/DermatologicandOphthalmicDrugsAdvisoryCommittee/UCM281376.pdf. Accessed 26 Jun 2014.
- 70.Cicero TJ, Dart RC, Inciardi JA, Woody GE, Schnoll S, Munoz A. The development of a comprehensive risk-management program for prescription opioid analgesics: Researched Abuse, Diversion and Addiction-Related Surveillance (RADARS®). Pain Med. 2007;8(2):157–170. doi: 10.1111/j.1526-4637.2006.00259.x.
- 71.Rosenblum A, Parrino M, Schnoll SH, Fong C, Maxwell C, Cleland C, Magura S, Haddox D. Prescription opioid abuse among enrollees into methadone maintenance treatment. Drug Alcohol Depend. 2007;90:64–71. doi: 10.1016/j.drugalcdep.2007.02.012.
- 72.Fitzgerald JP, Smith MY, Haddox JD, Kline AT. Characterizing opioid analgesic abuse: findings from ethnographic field research. CPDD Annual Meeting. Scottsdale: AZ; 2006
- 73.Haddox JD, Fitzgerald JP, Kline AT, Smith MY. Characterizing the nonmedical use/abuse and diversion of opioid analgesics in 2006: findings from ethnographic field research. CPDD Annual Meeting. Quebec City: Canada; 2007