Journal of Urban Health: Bulletin of the New York Academy of Medicine. 2019 Jul 26;96(6):912–922. doi: 10.1007/s11524-019-00374-0

Assessing Research Activity and Capacity of Community-Based Organizations: Refinement of the CREAT Instrument Using the Delphi Method

Debbie Humphries 1,2, Maria Ma 1, Nicole Collins 1, Natasha Ray 2, Eric Wat 3, Jill Bazelon 4, Jim Pettinelli 5, David A Fiellin 1,5,6
PMCID: PMC6904697  PMID: 31350725

Abstract

Community-based organizations (CBOs) are essential partners in community-engaged research, yet little is known about their research capacity. Community experts and organizations bring unique knowledge of the community to research partnerships, but standard validated measures of CBO research capacity do not yet exist. We report here on the refinement, through a structured Delphi panel, of a previously developed and piloted framework of CBO research capacity and an accompanying instrument, the Community REsearch Activity Assessment Tool (CREAT). A Delphi panel of twenty-three experts, community researchers (52%) and academic researchers (48%) recruited from around the USA, participated in five rounds of review to establish consensus regarding framework domains, operational definitions, and tool items. Panelists rated the importance of items on a 5-point Likert scale and assessed the inclusion and language of items. Initial rounds reviewed the framework and definitions, subsequent rounds reviewed the full instrument, and concluding rounds brought back items that had not yet reached consensus for additional review. Median response values (MRV) and interquartile ranges (IQR) were calculated for each Likert item. Items with an MRV > 3.5 were deemed to have reached consensus and were retained. Language changes were made for items with an MRV > 2.0 and < 3.5 and an IQR > 1.5. Items with an MRV < 2.0 were excluded from the final tool. The panelist response rate was high (> 75%). Consensus was achieved for the inclusion of all domains, subdomains, and operational definitions except “evidence-based practices.” Extensive changes to the CREAT instrument were made for clarification, to provide additional detail, and to ensure applicability for CBOs. The CREAT framework and tool were refined through input from community and academic researchers. The availability of a validated tool to assess the research capacity of CBOs will support targeted research capacity building for community organizations and partners, thus strengthening collaborations.

Keywords: Capacity building/organization and administration, Community-based participatory research/organization and administration, Cooperative behavior, Data collection/methods


Community-based organizations (CBOs) encompass a range of public or nonprofit entities that are engaged in meeting human, educational, environmental, or public safety needs at the community level [1]. Community-engaged research (CEnR), in which community stakeholders and researchers work together on a research agenda, plays a vital role in addressing health disparities and the social determinants of health in communities [2–4]. Awareness of the importance of CEnR in translation (T3 and T4) and implementation research is growing: the June 2013 Institute of Medicine report that reviewed the progress of the Clinical and Translational Science Awards (CTSAs) identified a continued focus on community engagement as a key priority for CTSAs [5]. The report’s recommendation for CTSAs on CEnR is to “Ensure Community Engagement in All Phases of Research,” highlighting the importance of community involvement and describing a range of potential benefits [5]. Community engagement and participation involve a variety of stakeholders and are an important element of creating research systems that address health disparities.

Throughout the health system there is a recognized need to strengthen CBO capacity to engage in CEnR, to evaluate and implement evidence-based programs, and to advocate effectively for the clients and communities they serve [6–10]. The field of “implementation science” is working to bridge the domains of scientific research and program development [11, 12] in order to strengthen the development of effective community-level interventions. Despite the recognized need to involve communities, and the CBOs that serve them, in implementation and translational research, the evidence base and associated tools to measure effective and successful CBO-researcher partnerships, research capacity, and research infrastructure are limited.

Building CBO research capacity has received increased attention in recent years [9, 13], both from community-based participatory research (CBPR) practitioners [14, 15], who anchor the most participatory end of the engagement continuum, and from the broader scientific community [16]. Strengthening the research capacity of CBOs will allow community partners to contribute their expertise more fully to the development, implementation, and evaluation of interventions. A recent forum for CBOs engaged in research identified several areas where capacity building could enhance their ability to participate as equal partners in research partnerships [17]. Although awareness of the importance of CBO research capacity has grown, a uniform framework of the domains of research capacity within CBOs has not yet been developed. Domains such as grant writing, developing research questions, establishing research priorities, collecting primary and secondary data, disseminating research knowledge, and informing policies and practices with research findings [7, 18–23] have been identified as important in research capacity assessments for other sectors. However, no list of research capacity domains has been developed in conjunction with, or targeted to, CBOs. Thus, despite the acknowledged importance of CEnR and the key role of CBOs in it, the overall evidence base for the science of CEnR is limited by the absence of valid and reliable tools to measure CBO capacity to engage in research.

Varying perspectives on which skills best define research capacity pose a key challenge for research capacity building efforts targeting CBOs [6, 24]. For example, Wilson and colleagues’ framework of important CBO capacities includes use of research evidence as the only research skill [9, 13]; Indyk and Indyk argue for relational database capacity to support operations research in the form of continuous quality improvement [16]; and Carroll-Scott and colleagues argue for understanding of the entire research process, from stating research questions to analyzing and reporting data [24]. In the absence of consensus regarding the assessment of organizational research capacity, academic researchers use ad hoc measures of research activity and outcomes, such as peer-reviewed publications, successful applications for research grants, and presentations at academic conferences [25–27]. However, such end-product measures of research activity and capacity may align more with the prioritized outcomes of academic researchers and may not address the priority outcome of programmatic impact from a CBO perspective. Potential determinants of research capacity include research-training activities (e.g., staff trained, training courses, workshops) [6, 28–31], the capacity to use research findings to improve programming and health outcomes [6, 13], attitudes towards conducting and collaborating in research [32], and the number of research projects [4]. In addition to identification of specific domains of research activity and capacity for CBOs, an instrument is needed that can assess those domains and capture changes that occur as a result of research capacity building efforts.

As previously reported [33], a literature review and in-depth interviews with CBOs were used to develop a framework of four domains reflective of CBO organizational research capacity:

  1. Organizational support for research (subdomains include interest, motivations, attitudes; infrastructure (space, time, technology); interest in strengthening research capacity; relationships with external researchers);

  2. Generalizable experiences (subdomains include monitoring and evaluation; identifying, using, and disseminating secondary data; implementing research into practice; finding/adopting model programs or best practices);

  3. Research specific experiences (subdomains include knowledge and use of basic research terminology; conducting literature reviews; asking research questions and using research questions to collect primary data; disseminating research results and advocacy; conducting ethical research); and

  4. Funding (subdomains include writing/receiving grants for programs and services; writing/receiving grants for research; being a “lead” organization on a grant).

The tool was piloted with 25 community-based AIDS service organizations, 2 federally qualified health centers, and 3 local health departments to refine the content and operational definitions for the essential domains of research capacity. The framework and results of the piloting have been previously published [33].
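For readers who want a structural overview, the pre-Delphi framework above can be summarized as a simple nested data structure. The sketch below is purely illustrative; the Python representation and variable name are ours, not part of the published instrument:

```python
# Illustrative summary of the pre-Delphi CREAT framework: four domains,
# each mapping to its list of subdomains (representation is ours).
creat_framework = {
    "Organizational support for research": [
        "interest, motivations, attitudes",
        "infrastructure (space, time, technology)",
        "interest in strengthening research capacity",
        "relationships with external researchers",
    ],
    "Generalizable experiences": [
        "monitoring and evaluation",
        "identifying, using, and disseminating secondary data",
        "implementing research into practice",
        "finding/adopting model programs or best practices",
    ],
    "Research specific experiences": [
        "knowledge and use of basic research terminology",
        "conducting literature reviews",
        "asking research questions and using research questions to collect primary data",
        "disseminating research results and advocacy",
        "conducting ethical research",
    ],
    "Funding": [
        "writing/receiving grants for programs and services",
        "writing/receiving grants for research",
        "being a 'lead' organization on a grant",
    ],
}
```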

The lack of a uniform measure of the capacity of CBOs may partly relate to the complexity of the concepts involved, the range of viewpoints, and the absence of a criterion or gold standard. Structured consensus building processes that take input from a variety of stakeholders and experts in a given subject matter, such as the Delphi method, can help articulate and codify the parameters of challenging concepts such as research capacity [34]. In the Delphi method, experts from a variety of backgrounds participate in a series of rounds to provide information and vote on proposed content. Through an iterative multistage process designed to transform opinion into group consensus, rounds of voting are held until group consensus is reached [35]. The Delphi method preserves anonymity, collects input from diverse individuals, and avoids the dominant influence of a small number of individuals [36]. It has been used extensively in health care and research to create consensus criteria for a variety of domains of health care research and quality [34].
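To make the iterative structure of the method concrete, the following Python sketch outlines a generic Delphi loop: items that reach consensus are settled, and the rest are revised and carried into the next round. This is a simplified skeleton under our own naming (collect_ratings and reached_consensus are hypothetical placeholders), not the authors' actual protocol:

```python
# Generic Delphi loop (illustrative): vote on unresolved items each round,
# settle those that reach consensus, and carry the rest forward.
def run_delphi(items, collect_ratings, reached_consensus, max_rounds=5):
    unresolved = list(items)
    settled = []
    for round_number in range(1, max_rounds + 1):
        if not unresolved:
            break  # every item has reached consensus
        # Panelists rate the remaining items anonymously.
        ratings = collect_ratings(unresolved, round_number)
        still_open = []
        for item in unresolved:
            if reached_consensus(ratings[item]):
                settled.append(item)     # consensus reached; decision retained
            else:
                still_open.append(item)  # revise wording and re-vote next round
        unresolved = still_open
    return settled, unresolved
```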

The purpose of the current research was to refine the framework and instrument with a Delphi panel composed of experts from around the USA representing CBPR organizations, CTSAs, and CEnR practitioners, in order to strengthen the content validity of the CREAT. By highlighting the existing expertise and research priorities of community organizations, together with their capacity building needs, we hope to strengthen the capacity for high-quality community-engaged research. A valid instrument to assess the research capacity of CBOs may be of use in building that capacity by identifying specific needs and enabling targeted training to build knowledge generation capacity. We hope to significantly advance the science of CEnR by providing an innovative instrument with national input and “buy in” from both community and academic partners.

Methods

The initial development of the CREAT framework and instrument was previously described [33].

Participants

To incorporate a wide range of expertise on community research capacity, twenty-three academic and/or community researchers from across the USA were invited to join the panel, and all twenty-three accepted (Table 1). We recruited potential panelists through multiple routes (panelists could fit in multiple categories): national experts (n = 11), CTSA affiliates with CEnR expertise (n = 6), referrals from colleagues (n = 5), and a posting to the CBPR listserv (n = 7) of Community-Campus Partnerships for Health (CCPH) (Table 1). We received > 170 responses from the CCPH listserv and selected seven panelists whose geography and experience filled gaps in the existing panel. The final panel included 11 academic researchers, 11 community researchers from CBOs, and 1 panelist with both CBO and academic affiliations, drawn from academic institutions or CBOs in eleven different states (Table 1). All Delphi panel participants received $750 as compensation for their time.

Table 1.

Delphi panelist recruiting and key characteristics

Panelist recruiting process
  Sources used to identify potential panelists (number of panelists):
    Community-Campus Partnerships for Health community-based participatory research listserv: 7
    Known to research team: 11
    Referred by colleagues: 5
  Criteria used to select potential panelists:
    Reputation, geography, recommendations, years of experience, CTSA affiliation, balance of academic and community perspectives
  Response rate: 23 invited, all accepted (23/23)
Panelist key characteristics
  Geographical distribution (number of panelists):
    Arkansas (1), California (5), Colorado (1), Connecticut (3), Georgia (2), Hawaii (1), Missouri (1), North Carolina (3), New York (3), Oklahoma (1), Pennsylvania (2)
  Stakeholder affiliations (number of panelists):
    CBOs: 11
    Academic: 11
    Both: 1
    CTSA affiliates: 6
  Years of experience (median):
    CBO: 13
    Academic: 14

Delphi Process

We applied a standard consensus-based Delphi methodology to refine the framework, operational definitions, and instrument [35, 36]. The Delphi approach was selected because it is designed to transform expert opinion into group consensus through rounds of individual voting [35]. During each Delphi round, panelists were asked to rate the importance and appropriateness of item language on a 3- or 5-point Likert scale (see Table 2). Table 2 summarizes the focus and results of each round of the Delphi panel. Panelists were given opportunities to explain their quantitative ratings, to suggest additional wording changes, and to propose new domains, subdomains, key terms, definitions, and questions. Rounds 1, 2, 4, and 5 were disseminated using the online Qualtrics platform, and items were added and modified after feedback from each round. For the third Delphi round, panelists were presented with the full CREAT instrument and asked to rate whether each question stem and its responses should be excluded, included with revisions, or included as is (306 different questions were included in round 3). Panelists were also encouraged to propose changes and to indicate whether the question instructions were sufficiently clear. Because of the length of the CREAT instrument, the round 3 questionnaire was e-mailed to panelists as PDF and Word files. In round 2, one panelist’s responses were excluded because quantitative and qualitative responses to the same questions were inconsistent, raising concerns of reverse scoring by that respondent; repeated attempts to clarify the inconsistencies were unsuccessful.

Table 2.

Summary of Delphi rounds

Round 1: Framework and definitions. Scale: Likert, 1 = “strongly disagree” to 5 = “strongly agree”. Items reviewed: 82. Accepted¹: 76 (93%). Requiring revision²: 6. Participation: 20/23 (87%). Community researcher participation³: 10/12 (83.3%).

Round 2: Framework and definitions still needing consensus. Scale: same as round 1. Items reviewed: 26. Accepted: 24 (92%). Requiring revision: 2. Participation: 21/23 (91%). Community researcher participation: 12/12 (100%).

Round 3: Instrument items. Scale: 1 = drop item, 3 = revise item, 5 = keep item. Items reviewed: 306. Accepted: 274 (90%). Requiring revision: 32. Participation: 18/23 (79%). Community researcher participation: 9/12 (75%).

Round 4: Instrument items still needing consensus. Scale: same as round 3. Items reviewed: 50. Accepted: 41 (82%). Requiring revision: 8. Participation: 18/20 (90%)⁴. Community researcher participation: 9/10 (90%).

Round 5: Remaining items from rounds 2 and 4 still needing consensus. Scale: same as rounds 2 and 4. Items reviewed: 9. Accepted: 5 (56%). Requiring revision: 4. Participation: 18/20 (90%). Community researcher participation: 10/10 (100%).

¹Accepted: IQR < 1.5 and median > 3.5.

²Requiring revision: median < 3.5 or IQR ≥ 1.5.

³Because participation varied across rounds, this column reports the proportion of community researchers completing each round.

⁴Three panelists withdrew between rounds 3 and 4.

Participation

Panelist participation was over 75% for each round, and slightly more community researchers than academic researchers maintained full participation throughout (Table 2).

Quantitative Analysis Methods

After each round, the median response value (MRV) on the 5-point Likert scale and the interquartile range (IQR) of the responses were calculated for each item. Items with an MRV > 3.5 and an IQR < 1.5 were considered to have achieved a high consensus rating and were therefore incorporated (accepted) into the final instrument. Items with an MRV < 2 were considered to have a low consensus rating and were excluded from the final instrument. Items with an MRV between 2 and 3.5 were considered to have an intermediate rating and were presented to the panelists for a second review in the next round. Round 3 used a 3-point scale to preliminarily determine the inclusion and exclusion of CREAT instrument questions. While the CREAT tool has 51 items, round 3 asked multiple questions about each tool item (wording, response options, and scale categories) and multiple overarching questions about each domain and subdomain, including the acceptability of introductory text and the inclusion of additional constructs.
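As a worked illustration of these rules, the following Python sketch classifies a single Likert item from one round of ratings using the thresholds stated above (MRV > 3.5 with IQR < 1.5 to accept, MRV < 2 to exclude, intermediate otherwise). The function and label names are ours, not from the study:

```python
import numpy as np

def classify_item(ratings):
    """Classify one Likert item (ratings on a 1-5 scale) per the stated rules."""
    mrv = np.median(ratings)                      # median response value (MRV)
    q75, q25 = np.percentile(ratings, [75, 25])
    iqr = q75 - q25                               # interquartile range (IQR)
    if mrv > 3.5 and iqr < 1.5:
        return "accept"   # high consensus: incorporated into the final instrument
    if mrv < 2.0:
        return "exclude"  # low consensus: dropped from the final instrument
    return "revise"       # intermediate: re-presented in the next round

# Example: ratings [2, 4, 5, 4, 3] give MRV = 4.0 and IQR = 1.0 -> "accept"
print(classify_item([2, 4, 5, 4, 3]))
```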

Qualitative Analysis Methods

Panelists entered their comments into the online Qualtrics system (rounds 1–2 and 4–5) or into the Word or PDF documents used in round 3. The five-member research team assessed all qualitative comments to inform modifications of items that were presented back to panelists in the next round. Comments on items that did not reach consensus were reviewed by the research team using an iterative process, and themes in those comments were identified inductively in the tradition of grounded theory [37].

Results

In the first four rounds of the Delphi panel, between 82% and 93% of the survey items reviewed achieved an MRV > 3.5 and an IQR < 1.5 (Table 2). In the final round, only 56% of the 9 items were accepted. Comments on the four remaining items were discussed by the review team; three were determined to involve wording and clarification issues, while the outstanding question concerned the term “evidence-based” and its accompanying definition.

Table 3.

Overview of changes to domains, key terms and definitions from the Delphi process (R1-R2, R5)

Domains and subdomains (original → revised)

Organizational support for research:
  “Interest in strengthening research capacity” → “Commitment to research capacity (interests, motivations, attitudes)”
  “Infrastructure (space, time, technology)” → “Infrastructure (space, time, staff, technology)”
Practical research experiences:
  “Implementing research into practice” → “Using research in practice”
  “Finding/adopting model programs or best practices” → to be determined (no consensus reached)
  New subdomain added: “Conducting community assessments”
Research specific experiences:
  “Asking research questions and using research” → “Asking research questions and using research questions to collect primary data”
Funding (FUN):
  New subdomain added: “Negotiating budgets”

Key terms (original → revised definitions)

Researchers:
  Original: Any individual person or organization who worked with your organization on a specific research project, such as a program evaluation, a community needs assessment, or another research project. Researchers can be affiliated with a university, a private organization, or a non-profit.
  Revised: Individuals trained through formal or informal means in designing studies; collecting, analyzing, and interpreting data; and applying results. Researchers can be affiliated with an academic institution, a private organization, or a non-profit, or work as independent consultants.
Research into practice:
  Original: Using primary and secondary data to inform your work.
  Revised: Research findings used to identify promising or evidence-based policies, programs, or services to improve the health and well-being of the community.
Dissemination:
  Original: Sharing research findings from primary or secondary data analyses.
  Revised: Sharing research findings with stakeholders such as funders, partners, policy-makers, publications, and community members in order to drive responsive changes such as behavioral, policy, and funding changes.
Research ethics:
  Original: Protecting the confidentiality and rights of your community members or clients while using their information for research purposes.
  Revised: Conducting research that protects the confidentiality and rights of participants and is respectful of participants’ and their communities’ cultural beliefs and practices.

New terms

Research partnerships: An equitable relationship formed for the purpose of conducting research between a community-based organization and/or community members and another agency such as an academic institution, a private organization, or a non-profit.
Institutional Review Board (IRB): A committee that reviews research involving human beings to ensure that the rights and welfare of human subjects are protected. IRBs are in place to ensure that human research follows federal, institutional, and ethical guidelines.
Community-based participatory research (CBPR): A definition proposed by Minkler and Wallerstein (2003): “CBPR is a collaborative approach to research [that] equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community with the aim of combining knowledge and action for social change to improve community health and eliminate health disparities.”
Community-defined evidence: Adapted from Martínez KJ, Callejas L, and Hernandez M (2010): “A set of practices that communities have used and determined by community consensus over time to yield positive results and…have reached a level of acceptance by the community.”

Changes to Domains, Subdomains, and Definitions (Rounds 1–2, 5)

Based on the Delphi process, extensive changes were made to the CREAT framework for clarification, to provide additional detail, and to ensure applicability for CBOs (Table 3). Panelists provided wording recommendations for domain and subdomain modifications. Wording changes generally involved making the language more universally acceptable to both academic and community researchers. Three of the four domains achieved high consensus among panelists, but the domain “Generalizable Experiences” was renamed “Practical Research Experiences” in round 2 because panelists believed the previous name was too vague. The two subdomains “Implementing research into practice” and “Identifying, using, and disseminating secondary data” were changed to “Using research in practice” and “Identifying, using, and sharing secondary data”, respectively, to make the language more universal. The revised framework is shown in Fig. 1.

Fig. 1. Revised CREAT framework. The framework includes four domains, with between 8 and 16 questions in each domain. The Practical Research Experiences domain includes the following subdomains: monitoring and evaluation; identifying, using, and sharing secondary data; conducting community assessments; using research in practice; finding/adapting evidence-based practices. The Research Specific Experiences domain includes the following subdomains: knowledge and use of basic research terminology; conducting literature reviews; asking research questions and using research questions to collect primary data; disseminating research results and advocacy; conducting ethical research. The Funding domain includes the following subdomains: being a “lead” organization on a grant; writing/receiving grants for research; writing/receiving grants for programs and services; negotiating budgets. The Organizational Support for Research domain includes the following subdomains: infrastructure (space, time, staff, technology); commitment to research capacity (interests, motivations, attitudes); relationships with external researchers. Consensus was not reached on the best language for the subdomain of what had previously been called evidence-based practices.

Changes to Instrument (Rounds 3–5)

No items were dropped from the instrument. During rounds 3–5, panelists recommended numerous language changes, predominantly related to clarity, conciseness, and better applicability to CBO activities (Fig. 2). The final instrument consists of 51 questions, 33 of which have four or more components in a check-all-that-apply format.

Fig. 2. Overview of Delphi panel changes to the instrument (R3–R5). Examples are given of changes classified as substantive, clarification, definition, and scale changes, together with the number of each type of change from the pre-Delphi tool to the post-Delphi tool.

Non-Consensus

A consensus for the subdomain of “evidence-based practices” or “finding/adopting model programs or best practices” was not reached. Academic panelists favored changing the original subdomain name from “finding/adopting model programs or best practices” to “evidence-based practices” (MRV = 5; IQR = 1), whereas CBO panelists had divergent opinions, with some preferring the original language and others favoring a language change (MRV = 3; IQR = 3.5). The non-consensus regarding evidence-based practice and model programs was also reflected in the key term definitions, where academic panelists preferred changing the key term to “evidence-based practice” (MRV = 5; IQR = 3), and CBO panelists were divided, with some preferring the term “model programs” as more inclusive (MRV = 3; IQR = 2).

Using a grounded theory approach, we identified three key themes in the comments about the language of “evidence-based”: (1) relevance to CBOs, (2) the scientific hierarchy of types of evidence and practice, and (3) use of inclusive terminology. Both academic and CBO respondents raised each of these concerns.

Relevance to CBOs

Both academic and CBO panelists recognized the importance of keeping the language of the entire instrument relevant to CBOs, although their perspectives on what is relevant varied.

I have found ‘model programs’ to be a more easily embraced term among the CBOs I work with. ‘[E]vidence-based’ is academic jargon and is perceived as arrogant and the common pushback is ‘Evidence for who? No one talked to us.’ We also should not ignore the model programs and wise practices that get shared and implemented through networking and the informal sharing between CBOs with common interests. CBO5

I like the new wording [evidence-based]; it [sic] think it reflects a process that works for CBO’s and their stakeholders. ACAD4

Hierarchy of Evidence and Practice

Some respondents noted that the terms fell in different places in the scientific hierarchy of evidence, and supported the more rigorous terminology.

[M]odel programs are not evidence based programs. Models are one offs that may not have been rigorously examined. Evidence based should have been well evaluated by a neutral party and have a rubber stamp with an endorsement. ACAD3

As we know Evidenced based means it’s been tested which exceeds a model best practice. CBO8

Use of Inclusive Terminology

While some participants noted the relative rigor of the different terms, other respondents argued for a more inclusive approach, highlighting the desirability of using terms that covered a broader spectrum of the experience of CBOs.

There are areas where evidence based practices do not exist. I prefer the term model programs. This allows for model or ‘promising practices’ to be adapted/used by CBOs. CBO/ACAD1

I caution against focusing solely on ‘evidence-based’ practices. ‘Model Programs’ is more encompassing. Unless we create two categories “evidence-based” only is limiting. There are model programs/ or Model practices [that] have yet to be evaluated. Other terms as used in previous page: ‘Promising practices, models’ CBO11

Best practices includes evidence-based and more. It includes expert opinion which is sometimes the best there is. ACAD9

Discussion

The CREAT instrument and framework cover core elements of the knowledge generation needs and priorities of CBOs. Delphi panelists came to consensus on the framework, definitions, and questions, with the exception of those noted below. Panelists also came to consensus on adding two subdomains: one in the funding domain focused on negotiating budgets, and one in the practical research domain focused on community assessments. Panelists did not drop any of the domains or subdomains in the tool.

The participation rate exceeded 75% in each round, and participants provided extensive feedback. Following the in-depth review and extensive revisions based on expert input, the instrument has content validity, in that experts consider it to capture the substance, complete definition, and “range of meanings” of the subdomains and domains [38, 39].

The panelists were not able to reach consensus on the language of “evidence-based.” During round 2, the language of evidence-based practices and model programs was widely discussed, and no immediate consensus was reached. Some community panelists considered the phrase “evidence-based practices” an academic term and believed “model programs” would be more relevant for CBOs; most academic panelists disagreed. When we presented revised terms in round 5, panelists again did not reach consensus. The disagreement may reflect CBO experiences in which evidence-based practices accepted by the scientific community did not seem relevant to the needs of the communities they serve [40], or the larger movement for broader terminologies such as practice-based evidence [41].

Importance of CBO knowledge generation

In addition to its utility for CEnR partnerships and translational research activities, the CREAT instrument will also be useful for CBOs that want to systematically strengthen their own knowledge generation activities. “Knowledge generation” refers to a broad spectrum of information-gathering activities, from program evaluation to rigorous experimental trials. CBOs have extensive knowledge generation needs implicit in their daily activities, and increasing their ability to apply appropriate levels of rigor across all of these activities is important. Along the spectrum of CBO knowledge generation needs, continuous quality improvement (CQI) [42] is one example of a rigorous approach to generating and assessing information for improving the functioning of an organization. Organizational evaluation capacity is another area where knowledge generation methods are being explored and tools for assessing capacity are emerging [43, 44]. The CBPR community has worked with CBOs to embody co-creation of knowledge since its inception [45].

The CREAT framework and instrument are an important addition to translational research, providing an instrument with demonstrated content validity [38, 39] for assessing changes in CBO research capacity and activity. The instrument will strengthen the ability to look systematically at the impacts of CBO capacity building activities and to identify important CBO organizational determinants of effective CEnR partnerships. By strengthening CBO research capacity, we also increase the ability of CEnR partnerships to engage in co-creation of knowledge [46], or collaborative generation of knowledge, with academic researchers working as equal partners alongside stakeholders from other sectors. Following the Delphi process, the refined CREAT is ready for field testing and national reliability and construct validation, and will be able to support targeted and more effective CBO research capacity building efforts.

Acknowledgments

This study was supported by a grant from the Yale Center for Clinical Investigation, as a CTSA Consortium multi-institutional pilot grant titled “Strengthening the Science of Community Engaged Research: a Tool to Assess Research Activity and Capacity.” This publication was made possible by CTSA grant UL1 TR000142 from the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH), and the NIH Roadmap for Medical Research.

Compliance with Ethical Standards

The study proposal was submitted to the Yale University Institutional Review Board (#1510016671), and was determined not to be Human Subjects Research under 45 CFR 46.102 (f).

Disclaimer

The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.


References

1. Carroll-Scott A, Perez M, Toy P. Performing a community assessment curriculum. Los Angeles: UCLA Center for Health Policy Research, Health DATA Train-the-Trainer Project; 2004.
2. Cheadle A, Sullivan M, Krieger J, Ciske S, Shaw M, Schier JK, Eisinger A. Using a participatory approach to provide assistance to community-based organizations: the Seattle Partners Community Research Center. Health Educ Behav. 2002;29(3):383–394. doi: 10.1177/109019810202900308.
3. Marmot M, Wilkinson RG. The social determinants of health. Oxford: Oxford University Press; 1999.
4. Centers for Disease Control. CDC HIV prevention strategic plan: extended through 2010. 2007 [cited 2011 Nov 4].
5. Institute of Medicine. The CTSA program at NIH: opportunities for advancing clinical and translational research. Washington, DC: National Academies Press; 2013.
6. Gadsby EW. Research capacity strengthening: donor approaches to improving and assessing its impact in low- and middle-income countries. Int J Health Plann Manag. 2010.
7. Sadana R, Pang T. Health research systems: a framework for the future. Bull World Health Organ. 2003;81(3):159.
8. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ. 2006;84(8):620–628. doi: 10.2471/blt.06.030312.
9. Wilson MG, Rourke SB, Lavis JN, Bacon J, Travers R. Community capacity to acquire, assess, adapt, and apply research evidence: a survey of Ontario’s HIV/AIDS sector. Implement Sci. 2011;6(1):54. doi: 10.1186/1748-5908-6-54.
10. Napoles AM, Stewart AL. Transcreation: an implementation science framework for community-engaged behavioral interventions to reduce health disparities. BMC Health Serv Res. 2018;18(1):710. doi: 10.1186/s12913-018-3521-z.
11. Madon T, Hofman KJ, Kupfer L, Glass RI. Public health: implementation science. Science. 2007;318(5857):1728–1729. doi: 10.1126/science.1150009.
12. Shekar M. Delivery sciences in nutrition. Lancet. 2008;371(9626):1751. doi: 10.1016/S0140-6736(08)60757-6.
13. Wilson MG, Lavis JN, Travers R, Rourke SB. Community-based knowledge transfer and exchange: helping community-based organizations link research to action. Implement Sci. 2010;5:33. doi: 10.1186/1748-5908-5-33.
14. Israel BA, Schulz AJ, Parker E, Becker AB, Allen AJI, Guzman JR. Critical issues in developing and following community participatory research principles. In: Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco: Jossey-Bass; 2003. p. 53–76.
15. Hacker K, Tendulkar SA, Rideout C, Bhuiya N, Trinh-Shevrin C, Savage CP, Grullon M, Strelnick H, Leung C, DiGirolamo A. Community capacity building and sustainability: outcomes of community-based participatory research. Prog Community Health Partnersh. 2012;6(3):349–360. doi: 10.1353/cpr.2012.0048.
16. Indyk L, Indyk D. Collecting data along the continuum of prevention and care: a continuous quality improvement approach. Soc Work Health Care. 2006;42(3–4):47–60. doi: 10.1300/J010v42n03_04.
17. Community-Campus Partnerships for Health. Community leaders from across the U.S. call for health research equity and impact. 2nd National Community Partner Forum, Washington, DC; 2012.
18. Bautista M, Velho L, Kaplan D. Comparative study of the impacts of donor-initiated programmes on research capacity in the South. In: Report to the Directorate-General for Development Cooperation. The Netherlands: Division for Research and Communication, Ministry of Foreign Affairs; 2001.
19. Cooke J. A framework to evaluate research capacity building in health care. BMC Fam Pract. 2005;6:44. doi: 10.1186/1471-2296-6-44.
20. Alliance for Health Policy and Systems Research. Strengthening health systems: the role and promise of policy and systems research. Geneva: WHO; 2004.
21. Vega MY. The CHANGE approach to capacity-building assistance. AIDS Educ Prev. 2009;21(5 Suppl):137–151. doi: 10.1521/aeap.2009.21.5_supp.137.
22. Downey LH, Castellanos DC, Yadrick K, Threadgill P, Kennedy B, Strickland E, Prewitt TE, Bogle M. Capacity building for health through community-based participatory nutrition intervention research in rural communities. Fam Community Health. 2010;33(3):175–185. doi: 10.1097/FCH.0b013e3181e4bb58.
23. Hunter J, Lounsbury D, Rapkin B, Remien R. A practical framework for navigating ethical challenges in collaborative community research. Global J Commun Psychol Pract. 2011;1(3):12–22. doi: 10.7728/0103201102.
24. Carroll-Scott A, Toy P, Wyn R, Zane JI, Wallace SP. Results from the Data & Democracy initiative to enhance community-based organization data and research capacity. Am J Public Health. 2012;102(7):1384–1391. doi: 10.2105/AJPH.2011.300457.
25. Campbell SM, Roland MO, Bentley E, Dowell J, Hassall K, Pooley JE, Price H. Research capacity in UK primary care. Br J Gen Pract. 1999;49(449):967–970.
26. Griffiths F, Wild A, Harvey J, Fenton E. The productivity of primary care research networks. Br J Gen Pract. 2000;50(460):913–915.
27. Shaw S, Macfarlane F, Greaves C, Carter YH. Developing research management and governance capacity in primary care organizations: transferable learning from a qualitative evaluation of UK pilot sites. Fam Pract. 2004;21(1):92–98. doi: 10.1093/fampra/cmh120.
28. Cooke J, Owen J, Wilson A. Research and development at the health and social care interface in primary care: a scoping exercise in one National Health Service region. Health Soc Care Community. 2002;10(6):435–444. doi: 10.1046/j.1365-2524.2002.00395.x.
29. Lester HE, Carter YH, Dassu D, Hobbs FD. Survey of research activity, training needs, departmental support, and career intentions of junior academic general practitioners. Br J Gen Pract. 1998;48(431):1322–1326.
30. Owen J, Cooke J. Developing research capacity and collaboration in primary care and social care: is there enough common ground? Qual Soc Work. 2004;3:398–410.
31. Raghunath AS, Innes A. The case of multidisciplinary research in primary care. Primary Health Care Res Dev. 2004;5(3):264–273.
32. Hurst K. Building a research conscious workforce. J Health Organ Manag. 2003;17(5):373–384. doi: 10.1108/14777260310505147.
33. Humphries DL, Carroll-Scott A, Mitchell L, Tian T, Choudhury S, Fiellin DA. Assessing research activity and capacity of community-based organizations: development and pilot testing of an instrument. Prog Community Health Partnersh. 2014;8(4):421–432. doi: 10.1353/cpr.2014.0067.
34. Beattie E, Mackway-Jones K. A Delphi study to identify performance indicators for emergency medicine. Emerg Med J. 2004;21(1):47–50. doi: 10.1136/emj.2003.001123.
35. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–1015.
36. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One. 2011;6(6):e20476. doi: 10.1371/journal.pone.0020476.
37. Suddaby R. From the editors: what grounded theory is not. Acad Manag J. 2006;49(4):633–642.
38. Babbie E. Survey research methods. 2nd ed. Belmont, CA: Wadsworth Publishing Company; 1990.
39. DeVellis RF. Scale development: theory and applications. 3rd ed. Thousand Oaks, CA: Sage Publications; 2012.
40. Echo-Hawk H. Indigenous communities and evidence building. J Psychoactive Drugs. 2011;43(4):269–275. doi: 10.1080/02791072.2011.628920.
41. Abe J, Grills C, Ghavami N, Xiong G, Davis C, Johnson C. Making the invisible visible: identifying and articulating culture in practice-based evidence. Am J Community Psychol. 2018;62(1–2):121–134. doi: 10.1002/ajcp.12266.
42. Kahan B, Goodstadt M. Continuous quality improvement and health promotion: can CQI lead to better outcomes? Health Promot Int. 1999;14(1):83–91.
43. Bourgeois I, Toews E, Whynot J, Lamarche MK. Measuring organizational evaluation capacity in the Canadian federal government. Can J Program Eval. 2013;28(2):1–19.
44. Bourgeois I, Whynot J, Theriault E. Application of an organizational evaluation capacity self-assessment instrument to different organizations: similarities and lessons learned. Eval Program Plann. 2015;50:47–55. doi: 10.1016/j.evalprogplan.2015.01.004.
45. Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco: Jossey-Bass; 2003.
46. Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429. doi: 10.1111/1468-0009.12197.
