American Journal of Pharmaceutical Education. 2017 Feb 25;81(1):9. doi: 10.5688/ajpe8119

Disseminating Comparative Effectiveness Research Through Community-based Experiential Learning

Richard A Hansen, Margaret Williamson, Lynn Stevenson, Brandy R Davis, R Lee Evans
PMCID: PMC5339595  PMID: 28289299

Abstract

Objectives. To launch and evaluate a comparative effectiveness research education and dissemination program as part of an introductory pharmacy practice experience (IPPE).

Methods. First- through third-year PharmD students received training on comparative effectiveness research and disseminated printed educational materials to patients in the community who they were monitoring longitudinally (n=314). Students completed an assessment and initial visit documentation form at the first visit, and a follow-up assessment and documentation form at a subsequent visit.

Results. Twenty-three diabetes patients, 29 acid-reflux patients, 30 osteoarthritis patients, and 50 hypertension patients received materials. Aside from the patient asking questions, which was the most common outcome (n=44), the program resulted in 38 additional actions, which included stopping, starting, or changing treatments or health behaviors, or having additional follow-up or diagnostic testing. Small but positive improvements in patient understanding, confidence, and self-efficacy were observed.

Conclusions. Dissemination of comparative effectiveness research materials in an IPPE program demonstrated a positive trend in markers of informed decision-making.

Keywords: introductory pharmacy practice experience, comparative effectiveness, informed decision-making

INTRODUCTION

When patients understand their disease state(s) and treatment(s), their overall health outcomes improve.1 However, several barriers prevent patients from understanding their health and being fully engaged in the management of disease. One barrier is that close to half of Americans do not fully comprehend health information, resulting in up to $73 billion in excess health care costs.2,3 This phenomenon is referred to as low health literacy. Health literacy is “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”3 When patients do not understand their disease state, they are more likely to use their medications inappropriately,4-6 which leads to increased adverse events and a higher risk of poor adherence.7

Another barrier to patients’ understanding of how to best manage their disease states is the lack of availability and awareness of generalizable, comparative evidence on the treatment options most relevant to the patient. Most of the clinical evidence base is derived from randomized controlled trials that compare a single treatment with placebo in homogeneous populations that do not represent many patients. This leaves patients and clinicians with a high degree of uncertainty when faced with multiple possible treatment options and complex health or health care circumstances. Comparative effectiveness research (CER) has grown in popularity as a means to address this problem. The Institute of Medicine defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative treatments.”8 The purpose of CER is to provide patients, providers, and policymakers with better evidence for decision-making. While public and private sector funding for CER has increased,9 the results of many comparative effectiveness studies are not used by patients and health care providers because of lack of awareness, bias, lack of incentives, ambiguity of the research, or failure of the research to meet end-user needs.10

Both government and private organizations have begun to address the need for disseminating easy-to-read health information and CER to the public. Several organizations have synthesized health care information in an easy-to-read manner for the general public and have made it available on their websites. For example, the Agency for Healthcare Research and Quality (AHRQ) launched the Effective Health Care (EHC) program, which funds the conduct and synthesis of CER tailored to patients, providers, and policymakers.11 Patient-based treatment summaries are provided in plain language that patients can read and understand easily. The website also has downloadable audio versions of the reports and versions of the reports in Spanish. Another example comes from Consumer Reports, which provides patient-friendly information on common conditions and their treatments.12 The Consumer Reports “Best Buy Drugs” page allows patients to explore how drugs that treat the same disease state compare to each other in effectiveness, safety, dosing convenience, duration of action, and cost. Both the EHC program and the Consumer Reports Best Buy Drugs websites represent examples of how to help patients overcome information and health literacy barriers. Corresponding efforts have been made to disseminate these materials via paper and other electronic media, but dissemination of unbiased materials to patients has been relatively limited.13

Efforts to overcome patient health literacy problems and limited access to reliable and up-to-date CER also have targeted health care providers. For example, federal resources have been created to help address low health literacy, including the National Action Plan to Improve Health Literacy,14 Health Literacy Online: A Guide to Writing and Designing Easy-to-Use Health Web Sites,15 and the Quick Guide to Health Literacy.16 These websites offer guidelines and solutions that health practitioners can use to address the problem of low health literacy in the United States, including interactive communication tools that can be shared with patients to increase understanding of disease states.17 Likewise, efforts have been made to help health care providers stay current with CER. However, given that there are over 5,600 biomedical journals on Medline alone, and 2,000 to 4,000 new references added each day,18 arming providers with unbiased, up-to-date CER that can be translated easily to patient-provider decision-making remains a challenge. To address this challenge, a growing number of health professions schools have expanded formal CER education and training programs (University of Washington, Johns Hopkins University, Harvard School of Public Health, University of Utah, University of Maryland, and University of Illinois at Chicago),19 although more can be done to address patient and provider needs. Given that a large focus of CER is on pharmaceuticals, schools of pharmacy are an optimal venue for training new providers and developing intervention dissemination strategies.

In the fall of 2014, the Auburn University Harrison School of Pharmacy (HSOP) piloted a CER program with the goals of educating pharmacy students about CER resources, and providing community outreach by disseminating CER materials. This program included a 50-minute classroom training around the intent and principles of CER, followed by a school-wide dissemination effort launched in an IPPE. One arm of the HSOP IPPE is a longitudinal pharmacy practice experience during which students in the first three years of the PharmD curriculum are put into teams of three or four students and, with oversight of faculty mentors, regularly meet with more than 300 patients in the local community to manage their medications and disease states.20 The average patient in this program is visited by the same team anywhere from once a week to once a month, and each of the visits is documented in an electronic medical record. We identified home blood pressure monitoring, osteoarthritis, gastroesophageal reflux disease (GERD), and diabetes as common conditions among this patient population and thus disseminated CER materials on these conditions. Here we report our experience with and evaluation of this brief student CER training and the corresponding experiential program of disseminating CER materials to patients.

METHODS

The Auburn University Harrison School of Pharmacy introduces PharmD students (approximately 150 per class) to professional practice during the first three years of the curriculum by sending teams of students and faculty members to regularly visit patients in the community (n=314 patients enrolled at the time of this study) who are referred for medication therapy management. Patients reside in local communities near the Auburn University main campus (ie, Auburn or Opelika, Alabama) or near the Auburn-Mobile campus (ie, Mobile, Alabama). The program includes weekly team meetings among students and faculty members, as well as regular patient visits that occur weekly to monthly depending on the patient’s needs. This CER dissemination initiative was piggybacked onto already scheduled visits, and only part of the time dedicated to each visit was used for reviewing the CER materials. While visiting the patients was a mandatory component of the existing IPPE, distribution of the CER materials was encouraged but not required. Students reviewed electronic patient records maintained for the IPPE program and identified the patients they routinely visited who had high blood pressure, osteoarthritis, GERD, and/or diabetes. Patients with at least one of these conditions (n=202) were eligible to receive student-led education and printed materials. Because this program was conceptualized as experiential education and community outreach, neither students nor patients provided informed consent. However, use of de-identified data from this program was approved as exempt human subjects research by the Auburn University Institutional Review Board.

Print copies of the following consumer summary documents for the four conditions were obtained from AHRQ: Measuring Your Blood Pressure at Home: A Review of the Research for Adults,21 Managing Osteoarthritis Pain with Medicines: A Review of the Research for Adults,22 Treatment Options for GERD or Acid Reflux Disease: A Review of the Research for Adults,23 and Medicines for Type 2 Diabetes: A Review of the Research for Adults.24 Students were asked to review the Web-based clinician guides corresponding to these materials before meeting with their patients, although no assessment was performed to ensure the adequacy of student review and mastery of these materials. Free blood pressure logs were provided for all patients with high blood pressure, and free automatic blood pressure monitors were provided for those patients demonstrating economic need or resistance to purchasing a home blood pressure monitor on their own.

The program was delivered during the fall semester of 2014 (August-December). In August 2014, faculty mentors were informed of program requirements during a regularly scheduled faculty meeting. Students were introduced to the fundamental elements of CER and provided background materials during a school-wide 50-minute Professional Seminar Series in early September. Then, students were asked to identify which of their IPPE patients had one or more eligible conditions, and determine whether the EHC materials were appropriate for each of these patients by reviewing the “Is This Information Right for Me” sections of the written materials. Patient eligibility forms were submitted during September 2014, and students were provided an appropriate number and type of printed materials for distribution to their eligible patients. The clinician and patient guides for all relevant materials were made easily available through the resources section of the electronic course management site. Initial materials were distributed from late September through October 2014. At the next student-patient visit following initial distribution of the materials (October through December 2014), students were asked to follow up with patients and ask if they had any additional questions related to the materials.

Outcomes of the initial and follow-up visits were documented by students using paper forms that were included with the EHC consumer guides (Appendix 1). These forms captured student and patient information, including their patients’ eligible conditions. At the beginning of the initial visit, students asked the patient five questions that were created for this study to provide a baseline assessment of how well the patient understood their condition and treatment options, and how comfortable they felt with engaging in an informed discussion with their health care provider. These questions broadly encompassed six of the seven domains of informed decision-making25,26 that we felt were most relevant to ongoing management of an existing illness, plus elements of decision self-efficacy27 and self-advocacy.28 The five questions were framed using a five-point Likert-type scale on which 1=strongly disagree and 5=strongly agree. Following each visit, students self-reported how long they discussed the materials and whether any outcomes resulted from the discussion. Responses included: no relevant outcome observed; patient asked the student questions; patient changed how they take an existing treatment; patient stopped, started, or changed treatment; patient changed a health behavior related to the condition; patient changed how they monitor their treatment or condition; discussion related to additional diagnostic tests; and patient took other relevant actions. The initial visit was concluded with an assessment of whether patients and students found the materials and/or discussion useful, again using a five-point Likert-type scale ranging from 1=strongly disagree to 5=strongly agree. The discussion, outcome, and five questions on informed decision-making were repeated at the conclusion of the follow-up visit. The follow-up visit included an additional outcome documentation category for whether the patient had asked another health care provider questions.

Descriptive statistics (means, standard deviations, and percentages) were used to summarize the number of materials distributed, length of discussions, outcomes, questions related to informed decision-making, and questions related to the value of the program (initial visit only). Likert-type questions were reported as continuous variables, as well as by the number (percent) of participants who either agreed or strongly agreed with each question. For the informed decision-making questions, we compared baseline responses with follow-up visit responses overall and by condition using paired t tests when the response was treated as a continuous variable and McNemar’s test for matched pairs when the response was treated as a dichotomous variable (agree or strongly agree as opposed to all other responses). Chi-square tests also explored the relationship between patient and student responses on whether the program was valuable, with results stratified by condition, the length of their discussion, and whether an outcome was reported. All analyses were conducted using SAS, version 9.3 (SAS Institute, Cary, NC).
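To make the paired analyses concrete, the sketch below illustrates the two treatments of a Likert-type question described above. It uses Python with hypothetical scores rather than the study's SAS code or data; the exact McNemar test for matched pairs is computed as a two-sided binomial test on the discordant pairs.

```python
# Illustrative sketch of the paired analyses (hypothetical data, not the
# study's SAS code): paired t test on the raw 1-5 scores, and an exact
# McNemar test on the dichotomized (agree/strongly agree) responses.
import numpy as np
from scipy import stats

baseline = np.array([3, 4, 4, 2, 5, 3, 4, 3, 4, 2])  # hypothetical scores
followup = np.array([4, 4, 5, 3, 5, 3, 4, 4, 4, 3])

# Response treated as continuous: paired t test, baseline vs. follow-up
t_stat, p_paired = stats.ttest_rel(baseline, followup)

# Response treated as dichotomous: agree or strongly agree (score >= 4)
# vs. all other responses; the exact McNemar test reduces to a two-sided
# binomial test on the discordant pairs.
base_agree = baseline >= 4
fup_agree = followup >= 4
n01 = int(np.sum(~base_agree & fup_agree))  # shifted toward agreement
n10 = int(np.sum(base_agree & ~fup_agree))  # shifted away from agreement
p_mcnemar = stats.binomtest(n01, n01 + n10, 0.5).pvalue if n01 + n10 else 1.0

print(f"paired t: t={t_stat:.2f}, p={p_paired:.3f}; McNemar p={p_mcnemar:.3f}")
```

With only a handful of pairs, as here, the McNemar test has little power, which mirrors the study's observation that small sample sizes limited statistical significance.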

RESULTS

Patient eligibility to participate in the study and the materials distributed are listed in Table 1. There were 202 unique patients identified as eligible, representing 405 unique patient-condition combinations (70 with diabetes, 90 with GERD, 81 with osteoarthritis, and 164 with hypertension). Of these, AHRQ materials were provided to 23 (33%) diabetes patients, 29 (32%) GERD patients, 30 (37%) osteoarthritis patients, and 50 (30%) hypertension patients (132 total patients). Eleven of the hypertension patients also qualified for and were given a free blood pressure monitor. Of the patients who received information, follow-up visits were made to 21 diabetes patients (91%), 26 GERD patients (90%), 25 osteoarthritis patients (83%), and 50 hypertension patients (100%). In total, 49 patients had valid data for both a baseline and follow-up visit. Among these 49 unique patients, there were 109 unique patient-disease combinations that had valid data for both a baseline and follow-up visit.

Table 1.

Comparative Effectiveness Research Materials Distributed by Pharmacy Students in IPPE as Part of This Study (N = 202 unique patients eligible)


Students reported that most of the initial and follow-up visit discussions took from one to five minutes per condition. Discussions surrounding diabetes tended to be longer than discussions for GERD, osteoarthritis, or hypertension. For example, at the initial visit, 39% of discussions took more than five minutes for diabetes compared with 18% of discussions for other conditions lasting more than five minutes. This trend was consistent for the follow-up visits.

Fifty-one (39%) of the initial visits did not result in any further discussion or outcomes, and 78 (59%) of the follow-up visits did not result in any further discussion or outcomes (Table 2). The most common outcome of the EHC material distribution and discussion with the student was that the patient asked additional questions (44 [39%] initial visits and 26 [20%] follow-up visits). Interestingly, six (5%) patients reported the intent to change a health behavior related to their condition at the initial visit, while three (2%) patients reported actually changing a health behavior at the follow-up visit. Similarly, 15 (11%) patients reported the intent to take “other” relevant actions at the initial visit, but only five (4%) patients reported “other” actions at the follow-up visit. At the initial visit, a combined 6% of patients reported the intent to change treatments, change how they took their existing treatment, or seek additional diagnostic tests following discussion of the AHRQ materials. This compared with a combined 3% of patients who reported taking these actions at the follow-up visit.

Table 2.

Patient Outcomes Documented by Students During Their Initial and Follow-up Visit Interactions


The length of student-patient discussion was correlated with whether one or more outcomes was observed (Figure 1), whereby longer discussions were more likely to result in an outcome such as change in a health behavior or treatment. At the initial visit, for example, among the 79 patients who had a discussion lasting one to five minutes, 49 (62%) had no outcome, 17 (22%) had one outcome, and 13 (16%) had more than one outcome (patient counts here may include an individual more than once if they had multiple eligible conditions). Of the 26 patients who had a discussion lasting six to 10 minutes, seven (27%) had no outcomes, 18 (69%) had one outcome, and one (4%) had more than one outcome. The two patients who had a discussion lasting 11 minutes or longer had more than one outcome. At the follow-up visit, among the 54 patients who had discussions lasting one to five minutes, 24 (44%) had one or more relevant outcomes. Among the 12 patients having follow-up discussions lasting longer than five minutes, 100% had one or more relevant outcomes.

Figure 1.


Length of Student-Patient Discussion by How Many Outcomes Occurred. Possible outcomes included the following: (1) patient asked the student questions, (2) patient changed how they take an existing treatment, (3) patient stopped, started, or changed treatment, (4) patient changed a health behavior related to the condition, (5) patient changed how they monitor their treatment or condition, (6) discussion related to additional diagnostic tests, (7) patient took other relevant actions, and (8) patient asked another health care provider questions (follow-up only).

During the initial visit, 85% of patients responded positively to the five questions related to informed decision-making, self-efficacy, and self-advocacy (reflected by responses of agree or strongly agree). This compares with 89% of patients responding positively to these five questions at the follow-up visit. Among the individual questions, the biggest improvements from baseline to follow-up were with the questions regarding patients’ confidence in their ability to ask questions and find answers about their health (8% improvement), as well as for questions related to whether the patient understood their health (6% improvement) and felt they could influence their health (6% improvement). We did not observe an improvement in patients’ understanding of treatment options, and in fact one patient indicated decreased confidence in their ability to make informed treatment choices. In comparing responses based on which types of materials the patient received (Figure 2), we found that patients with hypertension consistently marked a higher score for every informed decision-making statement at the follow-up visit, and patients with osteoarthritis consistently marked a lower score for every statement at the follow-up visit. Patients with GERD had the highest changes in scores for the statements, “I am confident in my ability to ask questions, find answers, and figure out how to address my health concerns,” and “I feel I consistently take steps to manage my health.” None of the overall or condition-specific changes in these questions from baseline to follow-up were significant (p>.05 for all).

Figure 2.


Changes in Patient Response to Questions of Informed Decision-making From Initial to Follow-up Visit. Elements of informed decision-making were measured with a Likert-type scale anchored at 1=strongly disagree to 5=strongly agree. The bars represent the mean baseline score for each condition, and the solid black lines represent the change in mean score from baseline to follow-up visit.

Overall, patient and student satisfaction with the program was similar, with 71% of patients and 73% of students indicating that they agreed or strongly agreed that the program was valuable. Looking at these responses based on the conditions that were discussed for each individual patient, there were no differences in the value that patients and students placed on the program across the four conditions (p>.05 for all). On the five-point Likert scale, the mean response for patients and students was higher (indicating stronger agreement that the program was valuable) for visits that resulted in at least one outcome (patients=4.0; students=3.9) compared with visits that did not result in any outcomes (patients=3.7; students=3.7). These differences, however, were not significant (p>.05).

DISCUSSION

The purpose of this study was to evaluate the effectiveness of a CER education and outreach program in an IPPE. The program was successful at familiarizing students with CER resources and disseminating CER materials to patients, and showed a trend toward helping patients understand their disease states. The materials stimulated worthwhile discussions and both patients and students generally were satisfied with the program. Patients reported some improvements in their ability to make informed decisions regarding their health. Patients also reported seeking medication changes, improving their medication adherence, and making lifestyle changes.

The AHRQ materials we used were, for the most part, appropriate for the patients. The disease state reviews were well written and were at an appropriate health literacy level for the general population. However, some of our patients seemed to have trouble understanding the materials. This was reported informally by students, and is evidenced by patients with osteoarthritis having a lower understanding of their disease state in the follow-up visit than in the initial visit. This could be related to underlying differences in health literacy, which unfortunately we did not measure directly. The usability of the materials may be another reason why we observed only minimal improvements in measures such as patient confidence. Possible solutions to this problem could be multifaceted. One approach may be to better customize materials for the health literacy level of individual patients. This has been demonstrated by groups working to incorporate CER into patient decision aids.29,30 For example, Montori and colleagues developed CER-based pictographic cards for diseases such as osteoporosis,31 diabetes,32 and depression.33 Use of these cards can be customized to individual patients to guide them in making treatment decisions, and this can improve the quality of clinical decisions and outcomes of care.

Another possible improvement to our approach might be to provide more thorough student training surrounding how these materials are generated and how to best incorporate CER materials and decision aids in practice. A workgroup composed of leadership from multiple funded National Institutes of Health Clinical and Translational Science Award institutions defined 14 core competencies relevant to CER, noting that major changes in workforce development are needed.34 Similar curricular recommendations have followed.35-38 To date, however, we are unaware of best practices in health professions programs for how to teach CER methods, and more importantly, how to apply CER data in practice. We believe core curricular models are needed for dissemination and implementation of CER in practice.

In the context of our initiative at Auburn, the current curriculum does not provide any other formal training surrounding CER. Adding this to the curriculum might help students to put this in better context for their patients. Further, while we provided a 50-minute overview to students that covered the materials and explained the pilot program, we did not conduct a formal assessment of students’ understanding of the clinician guides that accompanied the patient guides. Better ensuring that students were comfortable with the materials might have resulted in longer interactions and more significant improvement in markers of informed decision-making or a larger number of patient outcomes.

Arming patients and providers with up-to-date, useful CER can improve health care decision-making and outcomes of care. While part of the challenge is generating unbiased, up-to-date, and relevant CER for the medical community as a whole, a more pressing challenge is getting this information out to patients and providers in practice. Fischer and Avorn propose that academic detailing might play a key role in this regard, noting that key challenges to the effective use of CER findings include difficulty in interpretation, need for a trusted source, and the challenge of actually changing clinical practice.39 Addressing the interpretation and practice change barriers might be accomplished through better exposure in health professions education programs. Efforts to address this need should consider requiring curricular changes among the health professions training programs as well as development of community-based collaborations to actively disseminate educational materials, perhaps through an academic detailing model. This might be done through community pharmacies, physician offices, local departments of health, or community health centers. More work is needed to address the effectiveness of various strategies for education, dissemination, and implementation.

Different disease states and different patients may be more suited to this type of intervention, as evidenced by variability in patient-reported measures of informed decision-making. Hypertension was the only disease state that had consistently higher scores in the follow-up visit. This might be partially attributed to the inclusion of free blood pressure monitors for a subset of patients, although we were unable to assess this in our study. Osteoarthritis actually had lower scores marked for every statement in the follow-up visit. This may be because the information provided for some disease states was not appropriate for patients’ understanding or health literacy levels, thus causing some degree of confusion.40,41 This also could be a reflection of how the students discussed these materials with their patients, perhaps being more comfortable talking about some disease states compared with others. Most discussions of the material lasted no longer than one to five minutes at both initial and follow-up visits, and patients had variable outcomes. All patients who had discussions lasting 11 minutes or longer, however, had an outcome. The percentage of patients who had outcomes in the initial visit was comparable to the percentage of patients who did not have any outcomes in the follow-up visit. This may suggest that patients who did not have any outcomes in the initial visit reviewed the documents further on their own, resulting in additional outcomes reported at the follow-up visit. In the initial visit, the patients who did not discuss the material with the students at all did not have any outcomes; however, in the follow-up visit several of these patients reported outcomes even though they did not discuss the material with the students at that visit. This may speak to the desire of some patients to review health education materials on their own, and suggests a possible benefit associated with passive distribution of the materials.

One aspect of our program that might be modified in future work is to focus on newly diagnosed or newly treated patients, as opposed to patients who had long-standing disease and were experienced with their treatments. While we did not attempt to determine how long patients had their condition or quantify their level of experience with prior treatments, we believe that the impact of providing CER information would be greater among patients who do not have prior experience or knowledge. In fact, our patient population receives longitudinal medication therapy management by our student and faculty teams20 and is probably less likely than an average patient population to need additional educational materials. The fact that we still saw positive trends is reassuring that a program like ours would show similar or better results if replicated in a patient population not already receiving longitudinal pharmacy services. Future work should consider focusing on newly diagnosed and/or newly treated patients who might be identified in community pharmacies rather than through longitudinal in-home visits as we did in our program.

One limitation we encountered was the relatively small patient population we had to draw from (around 200 patients with at least one of the diseases of interest). This likely contributed to the lack of statistical significance across the various analyses we conducted. Also, while we asked students to engage in a discussion with their patients surrounding the materials, we were unable to control or assess the quality of these discussions. Further, we were unable to ensure that all eligible patients received the materials, as this program was not graded and was simply added on to existing student requirements in our IPPE program. There was, however, a potential incentive for students who completed the assignment. The team that had the highest number of patient educational interventions was awarded a “cookie cake,” with the winning team having nearly 100% completion. Another possible limitation was that the five questions used to measure informed decision-making, self-efficacy, and self-advocacy had not been previously validated. These measures were instead based on the practicality of what students could ask and rooted in constructs from the literature.25-28 Existing validated instruments that might have been relevant were too long or nonspecific for this project. Validation of our measure may be a topic of future research.

CONCLUSION

This was an effective pilot study of a student-led patient education program as part of an IPPE. Patients and students appeared to learn from the materials, and indicators suggest that patients' treatment and understanding of their treatment improved. Future work should explore the expansion of this program, including formal inclusion in the curriculum, continuing education programs, and community-based delivery of similar materials.

ACKNOWLEDGMENTS

The authors thank Destenie Ray for assistance with data collection and entry.

Appendix 1.


[Appendix 1 images: ajpe8119app1-2.jpg, ajpe8119app1-3.jpg]

REFERENCES

1. James J. Health policy brief: patient engagement. Health Aff. February 14, 2013.
2. Parker RM, Ratzan SC, Lurie N. Health literacy: a policy challenge for advancing high-quality health care. Health Aff (Millwood). 2003;22(4):147-153. doi:10.1377/hlthaff.22.4.147.
3. Nielsen-Bohlman L, Panzer AM, Kindig DA. Health Literacy: A Prescription to End Confusion. Washington, DC: The National Academies Press; 2004.
4. Al Sayah F, Majumdar SR, Williams B, Robertson S, Johnson JA. Health literacy and health outcomes in diabetes: a systematic review. J Gen Intern Med. 2013;28(3):444-452. doi:10.1007/s11606-012-2241-z.
5. Kalichman SC, Ramachandran B, Catz S. Adherence to combination antiretroviral therapies in HIV patients of low health literacy. J Gen Intern Med. 1999;14(5):267-273. doi:10.1046/j.1525-1497.1999.00334.x.
6. Williams MV, Baker DW, Honig EG, Lee TM, Nowlan A. Inadequate literacy is a barrier to asthma knowledge and self-care. Chest. 1998;114(4):1008-1015. doi:10.1378/chest.114.4.1008.
7. Dewalt DA, Berkman ND, Sheridan S, Lohr KN, Pignone MP. Literacy and health outcomes: a systematic review of the literature. J Gen Intern Med. 2004;19(12):1228-1239. doi:10.1111/j.1525-1497.2004.40153.x.
8. Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: Institute of Medicine; 2009.
9. US National Library of Medicine. Comparative effectiveness research: grants, funding, and fellowships. 2015. http://www.nlm.nih.gov/hsrinfo/cer.html#700Grants,%20Funding,%20and%20Fellowships. Accessed August 31, 2015.
10. Timbie JW, Fox DS, Van Busum K, Schneider EC. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff. 2012;31(10):2168-2175. doi:10.1377/hlthaff.2012.0150.
11. Agency for Healthcare Research and Quality. Effective health care program: helping you make better treatment choices. http://effectivehealthcare.ahrq.gov/index.cfm. Accessed September 1, 2015.
12. Consumer Reports. Consumer Reports Best Buy Drugs. http://www.consumerreports.org/cro/health/prescription-drugs/best-buy-drugs/index.htm. Accessed September 1, 2015.
13. Patient-Centered Outcomes Research Institute. Dissemination and implementation action plan: request for proposal. http://www.pcori.org/assets/2013/08/PCORI-Dissemination-Implementation-RFP-083013.pdf. Accessed November 18, 2015.
14. US Department of Health and Human Services; Office of Disease Prevention and Health Promotion. National action plan to improve health literacy. https://health.gov/communication/initiatives/health-literacy-action-plan.asp. Accessed September 1, 2015.
15. US Department of Health and Human Services; Office of Disease Prevention and Health Promotion. Health literacy online: a guide for simplifying the user experience. http://health.gov/healthliteracyonline/. Accessed September 1, 2015.
16. US Department of Health and Human Services; Office of Disease Prevention and Health Promotion. Health communication activities: quick guide to health literacy. http://health.gov/communication/literacy/quickguide/. Accessed September 1, 2015.
17. Schillinger D, Piette J, Grumbach K, et al. Closing the loop: physician communication with diabetic patients who have low health literacy. Arch Intern Med. 2003;163(1):83-90. doi:10.1001/archinte.163.1.83.
18. US National Library of Medicine. MEDLINE fact sheet. 2015. http://www.nlm.nih.gov/pubs/factsheets/medline.html. Accessed August 31, 2015.
19. Hostettler S. UIC College of Pharmacy creates MS in comparative effectiveness research. http://news.uic.edu/uic-college-of-pharmacy-to-create-m-s-in-comparative-effectiveness-research. Accessed September 1, 2015.
20. Stevenson TL, Brackett PD. A novel approach to introductory pharmacy practice experiences: an integrated, longitudinal, residence-based program. Curr Pharm Teach Learn. 2011;3(1):41-52.
21. US Department of Health and Human Services; Agency for Healthcare Research and Quality. Measuring your blood pressure at home: a review of the research for adults. 2012. http://www.effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=894&PCem=RA. Accessed September 1, 2014.
22. US Department of Health and Human Services; Agency for Healthcare Research and Quality. Managing osteoarthritis pain with medicines: a review of the research for adults. 2012. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=950. Accessed September 1, 2014.
23. US Department of Health and Human Services; Agency for Healthcare Research and Quality. Treatment options for GERD or acid reflux disease: a review of the research for adults. 2011. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=756. Accessed September 1, 2014.
24. US Department of Health and Human Services; Agency for Healthcare Research and Quality. Medicines for type 2 diabetes: a review of the research for adults. 2011. http://effectivehealthcare.ahrq.gov/index.cfm/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productID=721. Accessed September 1, 2014.
25. Braddock CH 3rd, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282(24):2313-2320. doi:10.1001/jama.282.24.2313.
26. Leader A, Daskalakis C, Braddock CH 3rd, et al. Measuring informed decision making about prostate cancer screening in primary care. Med Decis Making. 2012;32(2):327-336. doi:10.1177/0272989X11410064.
27. O'Connor AM. User manual: decision self-efficacy scale. 1995. https://decisionaid.ohri.ca/docs/develop/user_manuals/UM_decision_selfefficacy.pdf. Accessed September 14, 2014.
28. Brashers DE, Haas SM, Neidig JL. The patient self-advocacy scale: measuring patient involvement in health care decision-making interactions. Health Commun. 1999;11(2):97-121. doi:10.1207/s15327027hc1102_1.
29. Gionfriddo MR, Leppin AL, Brito JP, Leblanc A, Shah ND, Montori VM. Shared decision-making and comparative effectiveness research for patients with chronic conditions: an urgent synergy for better health. J Comp Eff Res. 2013;2(6):595-603. doi:10.2217/cer.13.69.
30. Shah ND, Mullan RJ, Breslin M, Yawn BP, Ting HH, Montori VM. Translating comparative effectiveness into practice: the case of diabetes medications. Med Care. 2010;48(6 Suppl):S153-S158. doi:10.1097/MLR.0b013e3181d5956c.
31. Montori VM, Shah ND, Pencille LJ, et al. Use of a decision aid to improve treatment decisions in osteoporosis: the osteoporosis choice randomized trial. Am J Med. 2011;124(6):549-556. doi:10.1016/j.amjmed.2011.01.013.
32. Mullan RJ, Montori VM, Shah ND, et al. The diabetes mellitus medication choice decision aid: a randomized trial. Arch Intern Med. 2009;169(17):1560-1568. doi:10.1001/archinternmed.2009.293.
33. LeBlanc A, Bodde AE, Branda ME, et al. Translating comparative effectiveness of depression medications into practice by comparing the depression medication choice decision aid to usual care: study protocol for a randomized controlled trial. Trials. 2013;14:127. doi:10.1186/1745-6215-14-127.
34. Kroenke K, Kapoor W, Helfand M, Meltzer DO, McDonald MA, Selker H. Training and career development for comparative effectiveness research workforce development: CTSA Consortium Strategic Goal Committee on comparative effectiveness research workgroup on workforce development. Clin Transl Sci. 2010;3(5):258-262. doi:10.1111/j.1752-8062.2010.00221.x.
35. Institute of Medicine. A framework for the workforce required for comparative effectiveness research. In: Learning What Works: Infrastructure Required to Learn Which Care Is Best. Washington, DC: National Academies; 2009.
36. Jonas D, Crotty K. Are we equipped to train the future comparative effectiveness research workforce? Med Decis Making. 2009;29(6):NP14-NP15. doi:10.1177/0272989X09351589.
37. Murray MD. Curricular considerations for pharmaceutical comparative effectiveness research. Pharmacoepidemiol Drug Saf. 2011;20(8):797-804. doi:10.1002/pds.2100.
38. Rich EC, Bonham AC, Kirch DG. The implications of comparative effectiveness research for academic medicine. Acad Med. 2011;86(6):684-688. doi:10.1097/ACM.0b013e318217e941.
39. Fischer MA, Avorn J. Academic detailing can play a key role in assessing and implementing comparative effectiveness research findings. Health Aff (Millwood). 2012;31(10):2206-2212. doi:10.1377/hlthaff.2012.0817.
40. Weinman J. Providing written information for patients: psychological considerations. J R Soc Med. 1990;83(5):303-305. doi:10.1177/014107689008300508.
41. Freemantle N, Harvey EL, Wolf F, Grimshaw JM, Grilli R, Bero LA. Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2000;(2):CD000172. doi:10.1002/14651858.CD000172.
