Front Public Health. 2015 Jul 16;3:179. doi: 10.3389/fpubh.2015.00179

Quantitative Evaluation of the Community Research Fellows Training Program

Lucy D’Agostino McGowan 1,*, Jewel D Stafford 1, Vetta Lynn Thompson 2, Bethany Johnson-Javois 3, Melody S Goodman 1
PMCID: PMC4504145  PMID: 26236703

Abstract

Context

The community research fellows training (CRFT) program is a community-based participatory research (CBPR) initiative for the St. Louis area. This 15-week program, based on a Master of Public Health curriculum, was implemented by the Division of Public Health Sciences at Washington University School of Medicine and the Siteman Cancer Center.

Objectives

We measure the knowledge gained by participants and evaluate participant and faculty satisfaction with the CRFT program, both in terms of meeting learning objectives and actively engaging the community in the research process.

Participants

We conducted analyses on the 44 community members who participated in the CRFT program and completed both the baseline and follow-up knowledge assessments.

Main outcome measures

Knowledge gain was measured by a baseline and follow-up assessment administered at the first and final sessions. Additionally, pre- and post-tests were administered at each of the first 12 sessions. To measure satisfaction, program evaluations were completed by both the participants and faculty after each topic. Mid-way through the program, a mid-term evaluation was administered to assess the program’s community engagement. We analyzed the results from the assessments, pre- and post-tests, and evaluations.

Results

The CRFT participants’ knowledge increased from baseline to follow-up by an average of 16.5 points (p < 0.0001). Post-test scores were higher than pre-test scores for 11 of the 12 sessions. Both participants and faculty enjoyed the training and rated all sessions well.

Conclusion

The CRFT program was successful in increasing community knowledge, participant satisfaction, and faculty satisfaction. This success has enhanced the infrastructure for CBPR as well as led to CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area.

Keywords: community-based participatory research, health disparities, community education, public health, health education

Introduction

The benefits of community-based participatory research (CBPR) have been well established; however, there is limited information on developing the infrastructure and increasing community capacity to partner in CBPR projects. Participating in public health research training can prepare community members for collaborative work with academic researchers and empower them to act as equal partners in the research process (1, 2). In an effort to alleviate mistrust, foster community-academic relationships, and educate community stakeholders in St. Louis, the Division of Public Health Sciences at Washington University School of Medicine (WUSM) and the Siteman Cancer Center began the community research fellows training (CRFT) program. Aligned with CBPR principles, this program has the potential to be mutually beneficial for the participants, training faculty, and the local health research community.

In 2010, a similar training program, Community Alliances for Research Empowering Social Change (CARES), was initiated on Long Island, New York (3, 4). This program was developed based on feedback from community members of the Suffolk County Minority Health Action Coalition at a mini-summit on Minority Health focused on CBPR (5). Participants newly introduced to this concept felt these approaches could benefit their communities but requested training in research methods, citing concerns about their limited knowledge of research and their ability to act as equal partners in an unfamiliar process.

The CARES academic-community collaboration developed a training program for community members based on the standard Master of Public Health curriculum, designed to implement culturally appropriate ways to increase research literacy among community members (4). The CRFT program built upon the model created by the CARES training program. The St. Louis-based CRFT program expanded the goals of the CARES program, adding topics to the curriculum based on CARES participant evaluations and input from the community advisory board to make it culturally tailored and region specific. After the CARES program evaluation (3), there was discussion that the success of the CARES program was specific to its suburban setting and small class size (6). The CRFT program more than doubled the program size and was conducted in urban St. Louis, as opposed to the suburban Long Island, New York location, in order to assess the generalizability of this CBPR program’s approach. This paper provides a quantitative evaluation of the CRFT program. To this end, we measure the knowledge gained and evaluate participant and faculty satisfaction with the CRFT program to determine whether learning objectives and goals were met through this CBPR approach.

Background and Rationale

Community-based participatory research emerged from research traditions of the 1980s and 1990s that focused on engaging stakeholders affected by the public health concerns at hand (7). Community engagement is a powerful instrument in bringing about positive social and community health change (8). Active community member engagement in the research process improves health outcomes, health promotion and prevention, and institution-community relationships (3, 4, 9–12). Community engagement requires a long-term process that builds trust, values contributions of all stakeholders, and generates a collaborative framework (13). CBPR is an effective vehicle to speed up the elimination of the mortality and morbidity disparities consistently seen among minorities, low-income, and other vulnerable populations (14). The success of CBPR is dependent on the strong formation of community-researcher relationships; in order to engage communities to collaborate with researchers to address identified health concerns, researchers must build trust and rapport with community members by maintaining a consistent presence (4, 12, 15, 16).

Several studies suggest that medical mistrust and negative encounters with health care personnel are closely linked to racial disparities in health (17, 18). Attempting to reduce racial disparities is complicated by medical mistrust among other barriers that reduce participants’ willingness to actively engage in medical research (19). CBPR has been shown to be effective in ameliorating or abating some of these issues by engaging underserved communities (9, 20) and has emerged as an evidence-based approach to address the complex issues that affect the health of marginalized populations using innovative and effective community–academic partnerships to address health disparities (21–23). Over the years, CBPR has become valued as an effective research strategy for improving community health and reducing health disparities (24–32). Community engagement can contribute to a more nuanced understanding of health problems, increasing the relevance of problems examined (33–35), improving the quality and outcomes of health promotion activities, disease prevention initiatives, and research studies (10, 31, 36).

Methods

Community research fellows training program

The CRFT program was a pilot project of the Program to Eliminate Cancer Disparities at the Siteman Cancer Center (SCC), Barnes Jewish Hospital, and WUSM. The goal of CRFT was to train community members to serve as the bidirectional conduit between WUSM/SCC and communities in St. Louis. A community advisory board (CAB) was formed to help guide all aspects of the CRFT program including recruitment and acceptance decisions, program implementation, selection of pilot projects, and evaluation of the program. The objectives of this training were to

  1. Enhance community knowledge and understanding of research

  2. Create a pool of trained community members who can serve on Institutional Review Boards and community research advisory boards

  3. Develop CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area

  4. Enhance the infrastructure for CBPR

  5. Provide community members with skills to engage as equal partners in every phase of the research process.

The CRFT program sought to involve community members in research methods training designed to help them become good consumers of research, understand the utility of research in improving health outcomes in their communities, increase their understanding of how to work with academic researchers, and develop skills that increase the capacity of organizations and communities to engage in research. To this end, a 15-week training course, adapted from the CARES program, was designed and implemented (4). It is important to note that a community group was convened to consider the faculty rationale and strategy for implementing the program. Both the faculty and the community members were focused on ways for academic researchers and community members to work as co-equals. The advisory group reviewed the materials from the CARES program and believed that a similar program could benefit the community. The course comprises 25 topics, divided across 12 didactic training sessions and 3 experiential workshops, held weekly (Thursday evenings 6–9 p.m.) April–August 2013. Each session is a condensed, lay-friendly, 3-hour version of Master of Public Health (MPH) curriculum topics, including health literacy, ethics, cultural competency, epidemiology, quantitative and qualitative research methods, chronic disease prevention, clinical trials, study design, program evaluation, and grant writing. Each session was led by one or two faculty members recruited to teach in the program by the Principal Investigator; Table 1 lists the session topics and learning objectives from the course syllabus given to participants at orientation.

Table 1.

Session topics and learning objectives.

Topics Learning objectives
Session 1 Community health
  • Define Public Health

  • Define Community Health

  • Identify contributing factors that impact the health of a community

  • Describe community health activities

  • Describe the Healthy People 2020 initiative

  • Assess the need for a program


Session 2 Research methods/data
  • Define research

  • Describe the steps of the research process

  • Identify and explain research methodology

  • Identify appropriate research methods and techniques

  • Define data

  • Compare and contrast quantitative and qualitative data

  • Compare and contrast primary data and secondary data


Session 3 Public health research/health disparities
  • Define public health research

  • Identify and explain types of research

  • Identify and explain types of research methods

  • Explain why research is important

  • Define health disparities

  • Identify major health disparities in St. Louis, including those by gender, race/ethnicity, geographic location, and socioeconomic status

  • Understand and provide examples of causes of health disparities with respect to prevention, incidence, and mortality

  • Discuss the social determinants of health

  • Describe public health strategies and interventions for reducing health disparities


Session 4 Public health library resources/health literacy
  • Understand public resources available at Becker Medical Library

  • Describe library sources useful for public health research

  • Define health literacy

  • Understand the limited literacy perspective

  • Describe the association between literacy and health

  • Describe health literacy on a national scale

  • Discuss current WUSTL research on health literacy


Session 5 Cultural competency
  • Define cultural competency

  • Describe the need for culturally competent research and practice based on a historical perspective

  • Identify contributing risk factors for health disparities

  • Identify skills associated with culturally competent practices

  • Conduct cultural competency self-assessment

  • Develop SMART goals for programs and projects

  • Identify culturally competent evaluation approaches

  • Understand the importance of evaluation


Session 6 Introduction to epidemiology/evidence-based public health/community-based prevention
  • Define epidemiology

  • Identify major contributions of epidemiology

  • Identify frameworks for understanding disease processes

  • Compare and contrast observational studies vs. clinical trials

  • Define evidence-based public health

  • Discuss methods for community-based prevention


Session 7 Quantitative methods
  • Identify strengths and weaknesses of quantitative methods

  • Describe strengths of mixed-methods approaches

  • Describe stages of questionnaire design

  • Identify sampling methods

  • Understand usefulness of statistics in health research

  • Understand p-values and odds ratios


Session 8 Community-based participatory research
  • Describe history and principles of CBPR

  • Critically evaluate their own position within their community(ies) and their potential roles within CBPR projects

  • Describe methods to ensure that CBPR research benefits all partners

  • Describe lessons learned from CBPR projects

  • Describe CBPR efforts in St. Louis


Session 9 Research ethics I and II
  • Define research ethics and bioethics

  • Compare and contrast clinical ethics vs. research ethics

  • Identify examples of unethical practices in research

  • Understand ethical theories and professional ethical duties

  • Identify historical milestones in ethics

  • Understand the Belmont Report

  • Understand NIH – IRB Protocol Review Standards


Session 10 Qualitative methods
  • Define basic principles of qualitative research methods

  • Describe the characteristics of qualitative research

  • Describe the advantages and disadvantages of qualitative methods

  • Understand and distinguish between different types of qualitative approaches

  • Facilitate qualitative interviews and focus groups

  • Understand the relationship between qualitative and quantitative research methods

  • Discern when a qualitative research design is desirable


Session 11 Clinical trials and bio-banking
  • Understand clinical trials research

  • Describe the role of clinical trials research in advancing medical practice

  • Discuss the impact of minority participation in clinical trials research

  • Define bio-repository

  • Describe the type of research conducted from bio-repository data

  • Discuss the risks and benefits of minority participation in bio-repositories


Session 12 Health policy research/human subjects certification I
  • Define health policy and health services research

  • Identify and develop relevant well framed health policy research questions

  • Describe public use and other common data sources for health policy research

  • Participants will be certified in the conduct of human subjects research

  • Conduct an informed consent process to recruit a participant in a research study

  • Develop a human subjects- and HIPAA-compliant research proposal


Workshop 1 Research synthesis/research evaluation
  • Describe the research process

  • Identify the components of research design

  • Develop a conceptual model

  • Develop a research hypothesis

  • Conduct a literature review

  • Develop an evaluation plan for a research project or proposal

  • Determine appropriate evaluation metrics and measures


Workshop 2 Human subjects certification II/history of healthcare in St. Louis
  • Participants will be certified in the conduct of human subjects research

  • Conduct an informed consent process to recruit a participant in a research study

  • Develop a human subjects- and HIPAA-compliant research proposal

  • Respond to the request for proposal (RFP)

  • Develop pilot project ideas


Workshop 3 Family health history/grant writing
  • Understand importance of collecting and maintaining a family health history

  • Understand the role of family health history in healthcare

  • Complete a family health history chart

  • Understand grant guidelines and requirements

  • Understand the power of collaboration for grant writing

  • Develop SMART goals and specific aims

  • Define a project and develop a research plan

  • Develop a collaborative grant proposal including background and significance, specific aims, preliminary studies, research design and methods

The information found in this table was previously provided in the Supplemental Section of Coats et al. (37) and can be found here: http://jre.sagepub.com/content/suppl/2014/12/10/1556264614561959.DC1/DS_10.1177_1556264614561959_T1.pdf

Twelve CAB members, 17 faculty members, and 10 research assistants were involved in the conception, planning, and implementation of the CRFT program (37). While participants were not compensated, they received free training and resources; to further empower the participants and engage them in the academic process, throughout the CRFT program we referred to our community members as “fellows,” similar to those on fellowship at an academic institution (from this point forward, we will do the same here). Fellows were recruited through The St. Louis American (a local newspaper; 32%), e-mail (20%), radio (14%), community websites (2%), community newsletters (6%), flyers (8%), personal referrals (6%), and word of mouth (12%) during January–March 2013. Of the 62 applicants, 50 (81%) were accepted into the program, double the program target of 25. The inclusion criteria for the CRFT program required participants to be at least 18 years old and live or work in the St. Louis greater metropolitan area. Applications were reviewed by the CRFT CAB such that the participants selected were purposefully diverse in work, education, and life experiences.

The training uses multiple teaching approaches (large didactic interactive lectures, small group activities, group exercises, and small and large group discussions) to explain topics in ways that reach a variety of learning styles (37). Consistent with CBPR principles and the needs of adult learners, participants provided feedback on the most feasible day and time for the course. Pedagogically, each session is formatted to support active class participation that encourages adults to draw upon their experiences and group activities that require problem solving and direct application of the lecture material, supplemented by homework assignments that foster independence and self-direction through application of the material to fellows’ communities (38). Using CBPR approaches, this training program recognizes the contribution participants can make to the learning process, and the diversity of the cohort fosters example-based learning using culturally competent, region-specific scenarios. Further information about the program development, recruitment strategies, training structure, program implementation, pilot projects, and best practices can be found elsewhere (37).

The Human Research Protection Office at WUSM classified this study as program evaluation and non-human subjects research. Analysis was completed using SAS/STAT 9.4 (SAS Institute, Cary, NC, USA). Of the 50 fellows enrolled in the CRFT program, 45 (90%) completed the 15-week training program and 44 (88%) completed both the baseline and follow-up assessments. The majority of the fellows who completed both assessments were female (84%) and African-American/Black (86%). The fellows ranged in age from 28 to 72, with a mean age of 51. The cohort of fellows consisted of community members (30%), those affiliated with community-based organizations (23%), health care workers (21%), government workers (11%), those affiliated with faith-based organizations (9%), and those in academia (7%). Educational attainment ranged from junior high school to a graduate degree. Over half (57%) of fellows self-reported previously taking a research course (Table 2). The six fellows who did not complete the training and/or final assessment were similar to the 44 who did; all were African-American/Black females, ranging in educational attainment from a high school diploma to a graduate degree. Their affiliations varied across faith-based organizations, community-based organizations, academia, and community members with no organizational affiliation. Their mean age was slightly lower, 48 (range 36–57). For consistency, from this point forward, the sample will be the 44 fellows who completed both the baseline and follow-up assessments.

Table 2.

Demographic characteristics of CRFT participants in evaluation sample (N = 44).

Characteristic n %
Gender
Female 37 84.1
Male 7 15.9
Race
African-American/Black 38 86.4
White 6 13.6
Educational attainment
Graduate degree 23 52.3
Bachelor’s degree 7 15.9
Some college/associate’s degree 12 27.3
High school diploma 1 2.3
Junior high or some high school 1 2.3
Affiliation
Faith-based organization 4 9.1
Healthcare worker 9 20.5
Community-based organization 10 22.7
Academic 3 6.8
Government 5 11.4
Community member 13 29.6
Previously taken a research course
Yes 25 56.8
No 19 43.2
Age (years)
Mean 51.4
SD 10.6

Assessment of participant knowledge

The fellows’ baseline and follow-up assessments were linked using their CRFT ID numbers. Each assessment consisted of 29 identical open-ended questions created to measure fellows’ knowledge of the CRFT training topics (Table 1). The baseline questionnaire also included the Patient Trust in Medical Researchers scale (39), an adaptation of the Community Engaged Research Index, multiple levels of empowerment indices (40, 41), and two health literacy assessments, the Newest Vital Sign and the Rapid Estimate of Adult Literacy in Medicine (37, 42–45). This analysis focuses only on the 29 open-ended assessment questions, administered both at baseline and after the completion of CRFT, in order to assess the participants’ knowledge gain. For the full baseline and final questions with sample answers, see Coats et al. (37). Each question was scored 0 (incorrect), 1 (partially correct), or 2 (essentially correct), and a baseline and final sum score was computed for each individual by summing the question scores. Baseline and follow-up assessments were graded by two graders using a rubric to ensure consistency (37). To assess the difference between the baseline and follow-up assessments, we performed a paired t-test after finding no evidence against normality using the Shapiro–Wilk test. We then looked at fellows’ responses to see if each individual question’s score increased, decreased, or remained the same from baseline to follow-up.
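As an illustration of this scoring and testing procedure, the minimal Python sketch below builds the 0/1/2 item scores into sum scores, checks the paired differences with the Shapiro–Wilk test, and applies a paired t-test. This is an assumed re-implementation for exposition only; the published analysis was run in SAS/STAT 9.4, and all data here are randomly generated stand-ins.

```python
# Hypothetical re-implementation of the sum-score comparison (the published
# analysis used SAS/STAT 9.4); all data below are randomly generated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_fellows, n_items = 44, 29

# Item scores: 0 = incorrect, 1 = partially correct, 2 = essentially correct
baseline = rng.integers(0, 3, size=(n_fellows, n_items))
followup = rng.integers(0, 3, size=(n_fellows, n_items))

# Sum score per fellow (possible range 0-58)
base_sum = baseline.sum(axis=1)
follow_sum = followup.sum(axis=1)
diff = follow_sum - base_sum

# Shapiro-Wilk on the paired differences, then the paired t-test
print("Shapiro-Wilk p =", round(stats.shapiro(diff).pvalue, 3))
t_stat, p = stats.ttest_rel(follow_sum, base_sum)
print(f"paired t = {t_stat:.2f}, p = {p:.4f}")
```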

For each question, we examined the percent of fellows who scored “essentially correct” on the baseline assessment as compared to the percent who scored “essentially correct” on the follow-up assessment. We analyzed the data in a two-way contingency table, assessing the change from baseline to follow-up. To assess whether the contingency table was symmetric, we used McNemar’s test, under the null hypothesis that questions were answered correctly (or incorrectly) at the same rate at baseline and follow-up.
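A minimal sketch of this per-question analysis is shown below, assuming each fellow’s answer is dichotomized as essentially correct (1) or not (0) at both time points; the toy responses are hypothetical, and the published analysis was done in SAS. The uncorrected chi-square form of McNemar’s statistic appears to correspond to the S statistic reported in Table 3, and an exact binomial p-value is shown as an alternative.

```python
# Per-question McNemar analysis on hypothetical dichotomized responses
# (1 = essentially correct, 0 = not); the published analysis used SAS.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

baseline_correct = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
followup_correct = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0])

# 2x2 table indexed by (baseline, follow-up) correctness
table = np.zeros((2, 2))
for b, f in zip(baseline_correct, followup_correct):
    table[b, f] += 1

# Uncorrected chi-square form (matches the S statistic) and an exact p-value
asymp = mcnemar(table, exact=False, correction=False)
exact = mcnemar(table, exact=True)
print(f"S = {asymp.statistic:.2f}, asymptotic p = {asymp.pvalue:.4f}, "
      f"exact p = {exact.pvalue:.4f}")
```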

Fellows took pre- and post-tests at each of the 12 didactic training sessions. These pre- and post-tests were developed by the CRFT team and approved by the teaching faculty member each week. The pre- and post-tests for the first eight sessions consisted of five multiple-choice questions; those for the final four sessions consisted of 10 multiple-choice questions. Questions assessed the learning objectives the faculty member intended to cover during the weekly session; the pre- and post-tests assess the same content but use different items. Pre- and post-tests were scored by a team of CRFT research assistants using a SAS macro developed for this purpose (46). Due to the violation of normality assumptions, we used a non-parametric test, the Wilcoxon signed-rank test, to evaluate the score differences between pre-test and post-test for each session.
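The sketch below shows the analogous non-parametric comparison in Python with hypothetical percent-correct scores; the actual scoring and testing were done in SAS (46).

```python
# Wilcoxon signed-rank comparison of paired pre/post session scores;
# the percent-correct values here are hypothetical.
import numpy as np
from scipy import stats

pre = np.array([60, 80, 70, 90, 50, 80, 70, 60, 80, 90])
post = np.array([80, 90, 70, 100, 70, 80, 90, 70, 90, 100])

# Zero differences are dropped by default, matching the classic form of the test
stat, p = stats.wilcoxon(post, pre)
print(f"W = {stat}, p = {p:.4f}")
```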

Evaluation of participant satisfaction

At the end of each of the 12 didactic training sessions, fellows completed evaluations. Evaluations consisted of six statements with Likert-scale response options and four free-response questions; eight of the sessions had two parts and, therefore, two evaluations, making for a total of 20 evaluations. We focus on the quantitative Likert-scale response questions for this analysis. Fellows were asked to rate the following statements from 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree” for evaluation questions 1–5, and 1 being “Poor” and 5 being “Excellent” for the sixth evaluation question:

  1. The exercise learning objectives were met.

  2. The information learned in this session was helpful.

  3. I understood the concepts presented in this session.

  4. The facilitator(s) were well organized.

  5. The facilitator(s) seemed knowledgeable about the subject.

  6. Overall, how would you rate this session?

The mean for each statement was computed, as well as the range of within-session means.
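A minimal pandas sketch of these summaries, assuming a hypothetical long-format layout with one row per fellow, session, and item (the layout and values are illustrative, not the study data):

```python
# Summary of Likert-scale session evaluations on a hypothetical long-format
# layout (one row per fellow x session x item); values are illustrative.
import pandas as pd

ratings = pd.DataFrame({
    "session": [1, 1, 1, 1, 2, 2, 2, 2],
    "item":    [1, 2, 1, 2, 1, 2, 1, 2],
    "rating":  [5, 4, 4, 5, 4, 4, 5, 3],
})

# Mean rating per evaluation item, pooled across sessions
item_means = ratings.groupby("item")["rating"].mean()

# Within-session mean per item, then its range (min, max) across sessions
within = ratings.groupby(["item", "session"])["rating"].mean()
item_ranges = within.groupby("item").agg(["min", "max"])

print(item_means, item_ranges, sep="\n")
```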

The CRFT CAB felt it was important to also assess satisfaction with teaching in the program. A main goal of the CRFT program is to create a bridge between the St. Louis community and researchers; to foster this connection, faculty satisfaction is imperative. Fifteen faculty members taught the 12 didactic training sessions and completed evaluations. The CRFT faculty evaluation consisted of seven Likert response items:

  1. Community Research Fellows seemed well prepared for today’s training session

  2. Community Research Fellows frequently took notes in the training session

  3. Community Research Fellows contributed to discussions in the training session

  4. Community Research Fellows provided comments that were insightful and constructive

  5. Community Research Fellows asked insightful and constructive questions

  6. Community Research Fellows listened attentively and seemed interested when I presented materials and information

  7. Overall, how would you rate your experience teaching this session?

Items 1–6 have Likert-scale responses 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree” and item 7 has Likert-scale responses with 1 being “Poor” and 5 being “Excellent.” In addition, there were three yes/no items:

  1. Would you be willing to teach again for the CRFT program in the future?

  2. Did you learn anything from the Fellows during the training session?

  3. Are you willing to collaborate with Fellows on a CBPR pilot project?

This analysis examines only the 10 close-ended questions on the faculty evaluation, which also included three to four open-ended response questions depending on answers to the close-ended questions. The mean, SD, minimum, and maximum for each of the seven Likert response items were computed. The three dichotomous response questions were summarized by the percent reporting yes to each question.

Results

Assessment of participant knowledge

Overall, there is evidence of knowledge gained, with the average score increasing from 20.6 on the baseline assessment to 37.1 on the final assessment (mean change of 16.5; range −7 to 41). Relative to the maximum possible score of 58 (29 questions worth up to 2 points each), this corresponds to an absolute percent increase of 28.4% (range −12.1 to 70.7%). Only three fellows (out of 44, 6.8%) decreased their scores from baseline to follow-up. The paired t-test for this knowledge change was highly significant (p < 0.001). Of the 29 questions, on average fellows answered about 7 (23.5%) questions essentially correctly at baseline (mean, 6.8; SD, 7.0; median, 4.0) and 16 (54.8%) questions essentially correctly at follow-up (mean, 15.9; SD, 6.7; median, 15.0). We then looked at fellows’ responses to see if each individual question’s score increased, decreased, or remained the same from baseline to follow-up (Table 3). At least 50% of the fellows increased their score on 12 (41.4%) of the 29 questions. The three questions with the greatest percent increase in score were defining bio-repository or biobank [36 (81.8%) increased their score from baseline to follow-up], defining the Belmont Report [28 (63.6%)], and explaining the difference between primary and secondary data [26 (59.1%)]. The three smallest increases were for naming two of the most common data sets used for health policy research [5 (11.4%)], defining the term ethnography [11 (25.0%)], and describing the health promotion planning model best suited to prevent and reduce substance abuse in African-American communities [12 (27.3%)].

Table 3.

Baseline to follow-up question score changes (N = 44).

Question | Score decreased n (%) | Score remained the same n (%) | Score increased n (%) | % essentially correct at baseline | % essentially correct at follow-up | McNemar’s S statistic | McNemar’s p-Value
1. What is informed consent? 7 (15.9) 17 (38.6) 20 (45.5) 31.8 61.4 7.35 0.012*
2. What is the Belmont report? 2 (4.6) 14 (31.8) 28 (63.6) 9.1 47.7 17.00 <0.001*
3. What is Tuskegee experiment? 4 (9.1) 26 (59.1) 14 (31.8) 50.0 72.7 6.25 0.021*
4. Define health literacy 5 (11.4) 21 (47.7) 18 (40.9) 29.6 59.1 9.94 0.002*
5. Define evidence-based public health 2 (4.5) 21 (47.7) 21 (47.7) 27.3 56.8 13.00 <0.001*
6. Define cultural competency 3 (6.8) 20 (45.5) 21 (47.7) 31.8 70.5 15.21 <0.001*
7. What role does the Institutional Review Board play in research? 3 (6.8) 17 (38.6) 24 (54.6) 36.4 77.3 14.73 <0.001*
8. What is HIPAA? 7 (15.9) 14 (31.8) 23 (52.3) 29.6 70.5 13.50 <0.001*
9. Explain the difference between quantitative and qualitative research methods 2 (4.6) 16 (36.4) 26 (59.1) 27.3 63.6 12.80 <0.001*
10. What is the difference between primary and secondary data? 1 (2.3) 17 (38.6) 26 (59.1) 20.5 54.6 13.24 <0.001*
11. Explain the difference between community-based participatory research and traditional research 5 (11.4) 14 (31.8) 25 (56.8) 27.3 65.9 12.57 0.001*
12. What is epidemiology? 6 (13.6) 16 (36.4) 22 (50.0) 20.5 56.8 11.64 0.001*
13. What is a bio-repository or biobank? 2 (4.6) 6 (13.6) 36 (81.8) 6.9 79.5 30.12 <0.001*
14. What is a clinical trial? 4 (9.1) 18 (40.9) 22 (50.0) 13.6 43.2 11.27 0.001*
15. What is the mixed method approach? 1 (2.3) 19 (43.2) 24 (54.6) 20.5 68.2 21.00 <0.001*
16. Define the term ethnography 2 (4.6) 31 (70.5) 11 (25.0) 6.8 27.3 9.00 0.004*
17. What is the purpose of focus groups? 5 (11.4) 19 (43.2) 20 (45.5) 29.6 63.6 13.24 <0.001*
18. What is the overarching goal of healthy people 2020? 3 (6.8) 16 (36.4) 25 (56.8) 22.7 72.7 18.62 <0.001*
19. What type of information should you expect to get from a community health assessment? 7 (15.9) 23 (52.3) 14 (31.8) 27.3 47.7 6.23 0.023*
20. Describe the health promotion planning model that you believe is best to prevent and reduce substance abuse in an African-American community 8 (18.2) 24 (54.6) 12 (27.3) 11.4 22.7 2.27 0.227
21. What are the social determinants of health? 11 (25.0) 16 (36.4) 17 (38.6) 27.3 40.9 2.25 0.210
22. List three social determinants of health 11 (25.0) 19 (43.2) 14 (31.8) 54.6 61.4 0.47 0.648
23. What is research? 10 (22.7) 15 (34.1) 19 (43.2) 31.8 61.4 7.35 0.012*
24. Define racial health disparities? 4 (9.1) 17 (38.6) 23 (52.3) 25.0 72.7 21.00 <0.001*
25. What are the components of a SMART goal? 5 (11.4) 21 (47.7) 18 (40.9) 13.6 40.9 10.29 0.002*
26. What is an odds ratio? 5 (11.4) 25 (56.8) 14 (31.8) 2.3 27.3 9.31 0.003*
27. What is a p-value? 3 (6.8) 26 (59.1) 15 (34.1) 4.6 31.8 10.29 0.002*
28. List an effective method to advocate for a specific health issue in your community 9 (20.5) 20 (45.5) 15 (34.1) 43.2 56.8 2.00 0.238
29. Name two of the most commonly used data sets for health policy research 6 (13.6) 33 (75.0) 5 (11.4) 11.4 13.6 0.14 1.000

*p < 0.05.

For each of the 29 questions, we examined which fellows scored “essentially correct” on the baseline assessment as compared to scoring “essentially correct” on the follow-up assessment. Using McNemar’s test, we found that 24 of the 29 questions had statistically significant results, suggesting that there was evidence of improvement from baseline to follow-up (Table 3). This further suggests that there was not only overall score improvement but also question level improvement for the majority (82.8%) of the questions.

We then examined the sample stratified by whether or not the fellow had taken a previous research course. Twenty-five of the 44 fellows (56.8%) self-reported previously taking a research course. This group had a mean difference between the baseline and follow-up assessment scores of 13.4, a 23.1% absolute increase. The remaining 19 fellows (43.2%) had a mean difference between baseline and final assessment scores of 20.6, a 35.5% absolute increase. The scores in both groups showed no evidence against normality (Shapiro–Wilk p-values of 0.778 and 0.783, respectively). A two-sample t-test found the difference between groups statistically significant (p = 0.038), indicating that although both groups had a significant overall increase in knowledge from the baseline to the follow-up assessment, fellows who had not previously taken a research course had a significantly larger increase. This suggests that fellows gained knowledge from the CRFT program whether or not they had previously taken a research course, and that the knowledge gained was more substantial for those who had not. It further suggests that the course is well designed both for those with some previous knowledge about research and for those without it.
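The stratified comparison can be sketched as follows, again with hypothetical score differences standing in for the real data (which were analyzed in SAS):

```python
# Two-sample t-test comparing baseline-to-follow-up score differences between
# fellows with and without a prior research course; data are hypothetical.
import numpy as np
from scipy import stats

diff_prior = np.array([10, 15, 12, 18, 9, 14, 20, 11])      # prior course
diff_no_prior = np.array([22, 19, 25, 17, 24, 20, 18, 26])  # no prior course

# Normality check within each group, then the two-sample t-test
for name, g in [("prior", diff_prior), ("no prior", diff_no_prior)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(g).pvalue, 3))
t_stat, p = stats.ttest_ind(diff_prior, diff_no_prior)
print(f"t = {t_stat:.2f}, p = {p:.3f}")
```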

Comparisons of the mean percent correct on pre- and post-tests at each session showed that in 11 of the 12 sessions, post-test scores were higher than pre-test scores; one session, Public Health Research, had an average post-test score lower than the pre-test score. Based on the Wilcoxon signed-rank tests, post-test scores for sessions 6 (Epidemiology), 8 (CBPR), 9 (Research Ethics), 10 (Qualitative Methods), and 11 (Clinical Trials) were significantly higher than pre-test scores (p = 0.001, <0.001, <0.001, <0.001, and 0.028, respectively); the post-test score for session 3, Public Health Research, was significantly lower than the pre-test score (p = 0.003) (Table 4).

Table 4.

CRFT pre-/post-tests scores and score difference (post-test minus pre-test) by session.

Session | n | Pre-test percent (mean, SD) | Post-test percent (mean, SD) | Score difference (mean, SD) | Wilcoxon signed-rank test p
1. Community health 42 76.67 18.70 81.90 13.11 5.24 24.22 0.189
2. Research methods 43 80.00 21.82 85.12 16.96 5.12 28.32 0.229
3. Public health research 39 88.21 12.75 75.90 17.88 −12.31 23.67 0.003*
4. Health literacy 41 79.51 18.16 80.49 16.42 0.98 26.44 0.747
5. Cultural competency 36 85.56 16.29 90.00 13.09 4.44 21.97 0.269
6. Epidemiology 41 74.63 22.37 88.29 17.87 13.66 25.47 0.001*
7. Quantitative methods 40 55.50 21.95 64.50 23.75 9.00 31.36 0.067
8. CBPR 39 71.28 16.41 89.23 15.11 17.95 22.85 <0.001*
9. Research ethics 39 74.87 14.85 87.69 11.80 12.82 15.55 <0.001*
10. Qualitative methods 42 61.90 19.54 72.14 12.79 10.24 14.73 <0.001*
11. Clinical trials 38 77.37 13.69 81.32 11.19 3.95 10.54 0.028*
12. Health policy research 41 61.71 15.64 65.12 16.90 3.41 17.26 0.278

*p < 0.05.

Evaluation of participant satisfaction

Overall, session evaluations were high, with the overall mean evaluation score for every session between 4 and 5 on a five-point Likert scale, where 4 indicates “Good” or “Agree” and 5 indicates “Excellent” or “Strongly Agree,” depending on the question (Figure 1). The average response to the first evaluation statement, assessing whether the learning objectives were met, was 4.4, with individual session averages ranging from 4.2 to 4.6; the session with the highest mean on this statement was session two, part two, Data. The average response to the second statement, whether information learned during the session was helpful, was 4.5, with session means ranging from 4.1 to 4.7; the highest mean belonged to session three, Public Health Research. The average response to the third statement, whether the fellow understood the concepts presented in the session, was 4.3, with session averages ranging from 3.7 to 4.6; the highest average belonged to session 8, CBPR. The average response to the fourth statement, whether the facilitator was well organized, was 4.6, with session averages ranging from 4.1 to 4.8; the highest average belonged to session 7, Quantitative Research. The average response to the fifth statement, whether the facilitator seemed knowledgeable, was 4.7, with session averages ranging from 4.3 to 4.9; the highest average belonged to the second half of session four, Health Literacy. The sixth item, the overall session evaluation, had a mean of 4.5 with a range from 3.7 to 4.8. The session with the lowest overall evaluation was session two, part two, Data. This is intriguing, as this session also ranked highest on the first item, whether the exercise learning objectives were met, suggesting that fellows rated sessions overall on more than whether the learning objectives were met. The session with the highest overall mean evaluation was the fourth session, part two, Health Literacy; as noted above, this session also ranked as having the most knowledgeable facilitator. This information is useful for evaluating the importance of faculty involvement in the program. Averaging all six items for each session, the mean within-session evaluation was 4.5, with a range from 4.1 to 4.7, indicating that, on average, responses to all items for every session fell between “Agree” and “Strongly Agree” or “Good” and “Excellent.”

Figure 1.


Fellows’ evaluations. ***Overall mean evaluation score, averaging all six Likert-scale evaluation responses from each fellow.

Faculty evaluations are an integral part of the CRFT program’s evaluation. Notably, all 15 (100%) faculty members responded that they would be willing to teach again for the CRFT program, 13 (87%) responded that they learned something from the fellows during the training session, and 15 (100%) responded that they are willing to collaborate with fellows on a CBPR pilot project. The mean overall faculty rating of the teaching experience was 4.8 (Table 5), indicating that the majority of faculty members rated the experience as “Excellent” on a Likert scale from 1 (“Poor”) to 5 (“Excellent”). This further suggests the success of the CBPR approach.

Table 5.

CRFT faculty evaluations. Likert-scale responses ranged from 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree,” or 1 being “Poor” and 5 being “Excellent.”

Question N Mean SD Minimum Maximum
Community Research Fellows seemed well prepared for today’s training session 15 4.67 0.49 4 5
Community Research Fellows frequently took notes in the training session 15 4.40 0.63 3 5
Community Research Fellows contributed to discussions in the training session 15 4.93 0.26 4 5
Community Research Fellows provided comments that were insightful and constructive 15 4.87 0.35 4 5
Community Research Fellows asked insightful and constructive questions 15 4.93 0.26 4 5
Community Research Fellows listened attentively and seemed interested when I presented materials and information 15 4.87 0.35 4 5
Overall, how would you rate your experience teaching this session? 15 4.8 0.56 3 5

Discussion

We assessed participant knowledge gain and evaluated participant and faculty satisfaction. The CRFT program evaluation results build upon and enhance the positive results demonstrated in the CARES program. The completion rate in the CRFT program was much higher than that in the CARES program: 90% of the fellows completed the 15-week training, as opposed to 68% in the CARES program, and 88% completed the baseline and follow-up assessments, as compared to 58% in the CARES program (3). Additionally, attendance rates were higher in the CRFT program, and the number completing both the baseline and follow-up assessments increased from 11 in the CARES program to 44 in CRFT. Our findings suggest that the participants did gain knowledge through this program, as evidenced by the significant increase in score from baseline to follow-up assessment and the average increase in all but one of the pre-/post-tests. We also found that, while both groups, fellows who had previously taken a research course and those who had not, experienced significant increases in assessment scores from baseline to follow-up, those who had never taken a research course experienced larger increases on average. This suggests that the CRFT program is well designed both for those with some previous knowledge about research and for those without it, and, as would logically follow, those with no previous research knowledge learn substantially more. We believe it was important for the CRFT program to have a group of fellows diverse in previous knowledge, as some of the learning occurs in small groups with other classmates. Though we have a modest sample size (n = 44), we chose robust tests to evaluate all aspects of the CRFT program.

This quantitative evaluation is just one component of the comprehensive (formative and summative) mixed-methods (quantitative/qualitative) evaluation of the CRFT program. The use of multiple items to assess participant knowledge and satisfaction allows for triangulation of results. For example, session 3, Public Health Research, was the only session with a significant decrease in score on the post-test compared to the pre-test. However, in the session evaluation, participants reported that the information learned in the session was helpful, more so than for any other session (highest mean). This suggests a need to refine items on the assessment instruments, specifically the pre- and post-tests. Session pre- and post-tests consisted solely of multiple-choice items to help reduce participant burden and administration time. However, we needed to increase the number of questions (from 5 to 10) to increase the variability of question types (easy, medium, hard) and thus the variability in scores across participants; a major limitation is that some participants felt like they were taking a test. Inconsistency between pre-test and post-test scores could be due to differences in the items used on the two assessments. In future work, we will examine the use of the same items (in a different order) on the pre- and post-tests.

The participant and faculty evaluations were similarly positive, with all averages above 4 on a five-point Likert scale, between “Good” and “Excellent” or “Agree” and “Strongly Agree” depending on the question. Overall satisfaction, from both the participant and the faculty perspective, is crucial in evaluating a CBPR program. Our assessment of the knowledge gained by fellows through the CRFT program confirms that the first, second, and fifth CRFT program goals were met. Our evaluation of participant and faculty satisfaction suggests the fourth program goal was met. In terms of the third program goal, the CRFT program has funded two pilot projects, led by participants in the program, and two more projects have been funded by other sources; we believe that CRFT has been instrumental in enhancing the CBPR infrastructure in St. Louis, opening the doors to future collaboration. CRFT pilot projects extend fellows’ learning beyond the classroom to implementing CBPR projects in real community-based settings, demonstrating the ability of the course to create community–academic partnerships that address community-driven research questions (37).

Conclusion

The CRFT program was successful in increasing participant knowledge of public health topics, such as epidemiology, CBPR, research ethics, and clinical trials, as evidenced by the increased scores from the baseline to the final assessment and the positive pre-/post-test score differences in all but one session. At the initiation of this program, we set out to achieve five goals.

The first goal was to enhance community knowledge and understanding of research. From a quantitative perspective, there is evidence of knowledge gained, with an improvement from 20.6 on the baseline assessment to 37.1 on the final assessment (mean change of 16.5) and a mean increase in post-test scores compared to pre-test scores. This demonstrates that the course content (Table 1) was at an appropriate level for the incoming fellows, despite their diverse educational backgrounds. Additionally, the evaluations completed after each session overwhelmingly suggest that the fellows felt the learning objectives were met during each session, again demonstrating that the community’s knowledge and understanding of research was enhanced by this program.

The second goal was to create a pool of trained community members who can serve on Institutional Review Boards and community advisory boards. While the quantitative data here suggest that the fellows, on average, would have the knowledge base to serve on these boards, the actual outcomes support this objective more directly. CRFT alumni have formed their own Patient Research Advisory Board and are currently serving on at least five advisory boards and community partnerships across the WUSM medical center; five CRFT alumni currently serve on the CRFT community advisory board.

The third goal was to develop CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area. The knowledge gained, as demonstrated here, was a springboard for nine groups, comprising 30 fellows in total, to submit brief proposals for pilot projects. Four of these were funded and are currently underway in the St. Louis Greater Metropolitan Area (37).

The fourth goal was to enhance the infrastructure for CBPR. The gathering of community members and faculty for bidirectional learning was successful in developing CBPR infrastructure, and the positive participant and faculty evaluations suggest both groups enjoyed this approach to partnership development. Additionally, the four pilot projects demonstrate that the foundation laid by this course created a conducive environment for community members to become involved in the research process.

Finally, our fifth goal was to provide community members with skills to engage as equal partners in every phase of the research process. Our course covers the spectrum of the research process, from research methods, to quantitative and qualitative analysis, to research ethics, synthesis, and evaluation. We demonstrate here, through the increase in scores, that knowledge was gained in these areas, giving participants the information they need, while the pilot projects provide an opportunity for this goal to be realized. Overall, the CRFT program was successful, as evidenced by the knowledge gained, the positive participant and faculty evaluations, and the movement toward initiating fellow-inspired, community-centric pilot projects.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The CRFT program was sponsored by Program to Eliminate Cancer Disparities (National Institutes of Health, National Cancer Institute grant U54CA153460) at the Siteman Cancer Center, Barnes Jewish Hospital, and Washington University School of Medicine. The authors thank the CRFT fellows for their participation and contribution, and the faculty and staff that volunteered their time and talents to the design, instruction, and implementation of the CRFT program.

References

  • 1. Robson C. Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Oxford: Wiley-Blackwell (2002).
  • 2. Tandon SD, Phillips K, Bordeaux BC, Bone L, Brown PB, Cagney KA, et al. A vision for progress in community health partnerships. Prog Community Health Partnersh (2007) 1(1):11–30. doi: 10.1353/cpr.0.0007
  • 3. Goodman MS, Si X, Stafford JD, Obasohan A, Mchunguzi C. Quantitative assessment of participant knowledge and evaluation of participant satisfaction in the CARES training program. Prog Community Health Partnersh (2012) 6(3):361–8. doi: 10.1353/cpr.2012.0051
  • 4. Goodman MS, Dias JJ, Stafford JD. Increasing research literacy in minority communities: CARES fellows training program. J Empir Res Hum Res Ethics (2010) 5(4):33–41. doi: 10.1525/jer.2010.5.4.33
  • 5. Goodman M, Stafford J. Suffolk County Minority Health Action Coalition. Mini-summit health proceedings. Center for Public Health and Health Policy Research Mini-Summits on Minority Health, 2008. Community Engaged Scholarship for Health (2011). Available from: www.ces4health.info
  • 6. Kwon S, Rideout C, Tseng W, Islam N, Cook WK, Ro M, et al. Developing the community empowered research training program: building research capacity for community-initiated and community-driven research. Prog Community Health Partnersh (2012) 6(1):43. doi: 10.1353/cpr.2012.0010
  • 7. Freudenberg N, Tsui E. Evidence, power, and policy change in community-based participatory research. Am J Public Health (2014) 104(1):11–4. doi: 10.2105/AJPH.2013.301471
  • 8. Fawcett SB, Paine-Andrews A, Francisco VT, Schultz JA, Richter KP, Lewis RK, et al. Using empowerment theory in collaborative partnerships for community health and development. Am J Community Psychol (1995) 23(5):677–97. doi: 10.1007/BF02506987
  • 9. Viswanathan M, Ammerman A, Eng E, Garlehner G, Lohr KN, Griffith D, et al. Community-Based Participatory Research: Assessing the Evidence. Summary, Evidence Report/Technology Assessment: 99. Rockville, MD: Agency for Healthcare Research and Quality (AHRQ) (2004).
  • 10. Minkler M, Wallerstein N, editors. Community-Based Participatory Research for Health: From Process to Outcomes. San Francisco, CA: John Wiley & Sons (2011).
  • 11. Israel BA, Eng E, Schulz AJ, Parker EA, editors. Introduction to methods in community-based participatory research for health. Methods in Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass (2005). p. 3–26.
  • 12. D’Alonzo KT. Getting started in CBPR: lessons in building community partnerships for new researchers. Nurs Inq (2010) 17(4):282–8. doi: 10.1111/j.1440-1800.2010.00510.x
  • 13. Butterfoss FD, Francisco VT. Evaluating community partnerships and coalitions with practitioners in mind. Health Promot Pract (2004) 5(2):108–14. doi: 10.1177/1524839903260844
  • 14. US Department of Health and Human Services. Healthy People 2020 (2011). Available from: http://www.healthypeople.gov/2020/about/disparitiesAbout.aspx
  • 15. Israel BA, Krieger J, Vlahov D, Ciske S, Foley M, Fortin P, et al. Challenges and facilitating factors in sustaining community-based participatory research partnerships: lessons learned from the Detroit, New York City and Seattle Urban Research Centers. J Urban Health (2006) 83(6):1022–40. doi: 10.1007/s11524-006-9110-1
  • 16. Baker EL, White LE, Lichtveld MY. Reducing health disparities through community-based research. Public Health Rep (2001) 116(6):517. doi: 10.1016/S0033-3549(04)50083-3
  • 17. Boulware LE, Cooper LA, Ratner LE, LaVeist TA, Powe NR. Race and trust in the health care system. Public Health Rep (2003) 118(4):358. doi: 10.1016/S0033-3549(04)50262-5
  • 18. Lichtenberg PA, Brown DR, Jackson JS, Washington O. Normative health research experiences among African American elders. J Aging Health (2004) 16(5 Suppl):78S–92S. doi: 10.1177/0898264304268150
  • 19. Braun KL, Nguyen TT, Tanjasiri SP, Campbell J, Heiney SP, Brandt HM, et al. Operationalization of community-based participatory research principles: assessment of the National Cancer Institute’s Community Network Programs. Am J Public Health (2012) 102(6):1195–203. doi: 10.2105/AJPH.2011.300304
  • 20. Pinto RM. Community perspectives on factors that influence collaboration in public health research. Health Educ Behav (2009) 36(5):930–47. doi: 10.1177/1090198108328328
  • 21. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q (2012) 90(2):311–46. doi: 10.1111/j.1468-0009.2012.00665.x
  • 22. Israel BA, Coombe CM, Cheezum RR, Schulz AJ, McGranaghan RJ, Lichtenstein R, et al. Community-based participatory research: a capacity-building approach for policy advocacy aimed at eliminating health disparities. Am J Public Health (2010) 100(11):2094–102. doi: 10.2105/AJPH.2009.170506
  • 23. Jagosh J, Pluye P, Macaulay AC, Salsberg J, Henderson J, Sirett E, et al. Assessing the outcomes of participatory research: protocol for identifying, selecting, appraising and synthesizing the literature for realist review. Implement Sci (2011) 6(1):24. doi: 10.1186/1748-5908-6-24
  • 24. Salimi Y, Shahandeh K, Malekafzali H, Loori N, Kheiltash A, Jamshidi E, et al. Is community-based participatory research (CBPR) useful? A systematic review on papers in a decade. Int J Prev Med (2012) 3(6):386.
  • 25. Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health (2010) 100(S1):S40–6. doi: 10.2105/AJPH.2009.184036
  • 26. Nguyen G, Hsu L, Kue KN, Nguyen T, Yuen EJ. Partnering to collect health services and public health data in hard-to-reach communities: a community-based participatory research approach for collecting community health data. Prog Community Health Partnersh (2010) 4(2):115–9. doi: 10.1353/cpr.0.0120
  • 27. Trinh-Shevrin C, Islam N, Tandon SD, Abesamis N, Hoe-Asjoe H, Rey M. Using community-based participatory research as a guiding framework for health disparities research centers. Prog Community Health Partnersh (2007) 1(2):195. doi: 10.1353/cpr.2007.0007
  • 28. Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract (2006) 7(3):312–23. doi: 10.1177/1524839906289376
  • 29. Shalowitz MU, Isacco A, Barquin N, Clark-Kauffman E, Delger P, Nelson D, et al. Community-based participatory research: a review of the literature with strategies for community engagement. J Dev Behav Pediatr (2009) 30(4):350–61. doi: 10.1097/DBP.0b013e3181b0ef14
  • 30. Bordeaux BC, Wiley C, Tandon SD, Horowitz CR, Brown PB, Bass EB. Guidelines for writing manuscripts about community-based participatory research for peer-reviewed journals. Prog Community Health Partnersh (2007) 1(3):281. doi: 10.1353/cpr.2007.0018
  • 31. Balazs CL, Morello-Frosch R. The three Rs: how community-based participatory research strengthens the rigor, relevance, and reach of science. Environ Justice (2013) 6(1):9–16. doi: 10.1089/env.2012.0017
  • 32. Eder M, Tobin JN, Proser M, Shin P. Special issue introduction: building a stronger science of community-engaged research. Prog Community Health Partnersh (2012) 6(3):227–30. doi: 10.1353/cpr.2012.0040
  • 33. Cargo M, Mercer SL. The value and challenges of participatory research: strengthening its practice. Annu Rev Public Health (2008) 29:325–50. doi: 10.1146/annurev.publhealth.29.091307.083824
  • 34. Minkler M. Community-based research partnerships: challenges and opportunities. J Urban Health (2005) 82:ii3–12. doi: 10.1093/jurban/jti034
  • 35. Davis SW, Cassel K, Moseley MA, Mesia R, De Herrera PA, Kornfeld J, et al. The cancer information service: using CBPR in building community capacity. J Cancer Educ (2011) 26(1):51–7. doi: 10.1007/s13187-010-0159-x
  • 36. Minkler M. Ethical challenges for the “outside” researcher in community-based participatory research. Health Educ Behav (2004) 31(6):684–97. doi: 10.1177/1090198104269566
  • 37. Coats JV, Stafford JD, Thompson VS, Javois BJ, Goodman MS. Increasing research literacy: the community research fellows training program. J Empir Res Hum Res Ethics (2015) 10(1):3–12. doi: 10.1177/1556264614561959
  • 38. Merriam SB. Andragogy and self-directed learning: pillars of adult learning theory. New Dir Adult Cont Educ (2001) 2001(89):3–14. doi: 10.1002/ace.3
  • 39. Mainous AG, Smith DW, Geesey ME, Tilley BC. Development of a measure to assess patient trust in medical researchers. Ann Fam Med (2006) 4(3):247–52. doi: 10.1370/afm.541
  • 40. Minkler M, Blackwell AG, Thompson M, Tamir H. Community-based participatory research: implications for public health funding. Am J Public Health (2003) 93(8):1210–3. doi: 10.2105/AJPH.93.8.1210
  • 41. Khodyakov D, Stockdale S, Jones A, Mango J, Jones F, Lizaola E. On measuring community participation in research. Health Educ Behav (2013) 40(3):346–54. doi: 10.1177/1090198112459050
  • 42. Davis TC, Crouch MA, Long SW, Jackson RH, Bates P, George RB, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med (1991) 23(6):433–5.
  • 43. Davis TC, Michielutte R, Askov EN, Williams MV, Weiss BD. Practical assessment of adult literacy in health care. Health Educ Behav (1998) 25(5):613–24. doi: 10.1177/109019819802500508
  • 44. Murphy PW, Davis TC, Long SW, Jackson RH, Decker BC. Rapid estimate of adult literacy in medicine (REALM): a quick reading test for patients. J Read (1993) 37:124–30.
  • 45. Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med (2005) 3(6):514–22. doi: 10.1370/afm.405
  • 46. Lyons SA, Goodman MS. Community Research Fellows Training Program Evaluation: Using SAS® to Analyze Pre- and Post-Test Data. Burlington, VT: New England SAS Users Group (2013). p. 1–6.
