Abstract
Aim:
Using a stakeholder-engaged approach, this study conducted content validation and item reduction of a quantitative measure of research engagement.
Methods:
A five-round modified Delphi process was used to reach consensus on items. Rounds 1–3 and 5 were conducted using web-based surveys. Round 4 consisted of a 2-day, in-person meeting. Delphi panelists received individualized reports outlining individual and aggregate group responses after rounds 1–3.
Results:
Over the five-round process, items were added, dropped, modified, and moved from one engagement principle to another. The number of items was reduced from 48 to 32, with three to five items for each of eight engagement principles.
Conclusions:
Research that develops standardized, reliable, and accurate measures of stakeholder engagement is essential to understanding the impact of engagement on scientific discovery and the scientific process. Valid quantitative measures of stakeholder engagement in research are also necessary to assess associations between engagement and research outcomes.
Keywords: content validation, Delphi process, stakeholder engagement, survey measure
1 |. INTRODUCTION
Stakeholder-engaged research ensures that patients and/or health stakeholders can provide their unique perspective on research questions, study designs, and important outcomes, forming the foundation for patient- and stakeholder-centered research (Fleurence et al., 2013; Patient-Centered Outcomes Research Institute, 2012). This approach increases the relevance of research to stakeholders (e.g., patients, providers, and policy makers). Studies in this domain require planning for sustainability after funding ceases, building trust among collaborating stakeholders, and ensuring the research capacity of all partners to leverage resources and develop reciprocal relationships between researchers and community health stakeholders (Fleurence et al., 2013; Patient-Centered Outcomes Research Institute, 2012).
The level of community health stakeholder engagement in research can vary greatly, from minimal engagement to fully collaborative partnerships, over time within a single study or across studies. A need exists to rigorously evaluate the impact of stakeholder engagement on the development, implementation, and outcomes of research studies in order to move from lessons learned to evidence-based practices for stakeholder engagement (Goodman & Sanders Thompson, 2017). This requires the development and validation of measurement tools that can be used to assess stakeholder engagement in research projects and/or teams. However, the extent to which stakeholders in research partnerships feel engaged has not received sufficient attention in the literature. It is important to understand how the level of engagement in a partnership develops over time and to what extent that level predicts outcomes in the larger study.
A systematic review of measures of stakeholder engagement in research found that this area of research was methodologically weak (Bowen et al., 2017). Many investigators had measured aspects of stakeholder engagement, but most of these measures lacked any validation, and none had been comprehensively validated. The measures were categorized into two groups: counting measures and theoretically based proxy measures (Bowen et al., 2017). Examples of counting measures include the number of people who attended a board meeting, counts of attendees at community meetings, and frequency of attendance at events (Bowen et al., 2017). Counting measures assume attendance is engagement, but it is unclear whether attendance is a good proxy for engagement or whether some other cluster of factors motivates people to attend events and activities. Examples of theoretically based proxy measures include the degree to which participants felt they were part of a positive community, the degree to which participants felt comfortable sharing their thoughts and opinions, and participants’ level of confidence in their neighbors’ willingness to participate in a neighborhood problem-solving process (Bowen et al., 2017). Proxy measures assess some construct that is possibly related to engagement, but most do so without corroboration. In the Bowen et al. (2017) review, neither type of measure was tracked over time, and most measures were not related to outcomes or progress.
Comprehensive validation of quantitative measures to assess stakeholder engagement in research would make a major contribution to community-engaged science (Goodman & Sanders Thompson, 2017). It would allow for the systematic measurement and collection of data on the level of stakeholder engagement—repeatedly over time, within and across projects. Standardized, reliable, and accurate measures to assess stakeholder engagement are essential to understanding the impact of stakeholder engagement on scientific discovery and the scientific process. Such measures are necessary to assess associations between the level and type of engagement and research outcomes, including the mechanisms through which nonacademic stakeholder engagement influences the development and quality of, and patient response to, health interventions, new technologies, and behavioral recommendations.
Previous work developed and pilot tested a measure to assess community engagement in research (Goodman et al., 2017). Although this work contributed to the science of community engagement by helping to fill the gap in existing measures, to our knowledge there are no comprehensively validated measures of stakeholder engagement in research. Here, we discuss a stakeholder-engaged approach to content validation of a quantitative measure of stakeholder engagement. Content validation examines the appropriateness of the content and the structure of the instrument (Abell, Springer, & Kamata, 2009; Glässel, Kirchberger, Kollerits, Amann, & Cieza, 2011; Schwab, 1999).
2 |. METHODS
2.1 |. Delphi panelists
The Delphi panel consisted of a broad range of stakeholders (e.g., patients, caregivers, advocacy groups, clinicians, researchers) who have experience with and knowledge about community-engaged research. We also included one community stakeholder panelist who had no prior research experience to provide the perspective of someone who is new to this type of work. A total of 19 individuals were recruited to serve on the Delphi panel. Panelists were recruited by email using a convenience snowball sampling approach based on the networks of the project team members. The initial panel composition was weighted toward academics, which led to the recruitment of additional community partner panelists recommended by academic panelists. Demographic characteristics of Delphi panelists are presented in Table 1. Panelists were majority female (90%), non-Hispanic/Latino(a) (95%), and African-American or Black (63%), with some college or higher education (100%). The median age of panelists was 55 years (range: 26–76 years), with a median of 10 years (range: 0–35 years) of research experience and 10 years (range: 0–30 years) of community-based participatory research (CBPR) experience.
TABLE 1. Demographic characteristics of Delphi panelists (N = 19)
Categorical Variables | Categories | N (%) |
---|---|---|
Gender | Male | 2 (10.5%) |
Female | 17 (89.5%) | |
Race/Ethnicity | Hispanic/Latino(a) | 1 (5.3%) |
Non-Hispanic/Latino(a) Black | 11 (57.9%) | |
Non-Hispanic/Latino(a) White | 6 (31.6%) | |
Non-Hispanic/Latino(a) Multiracial | 1 (5.3%) | |
Education level | Some college or associate degree | 4 (21.1%) |
College degree | 1 (5.3%) | |
Graduate degree | 14 (73.7%) | |
Stakeholder type | Academic stakeholder | 8 (42.1%) |
Community stakeholder | 11 (57.9%) | |
Ever direct service provider | Yes | 9 (47.4%) |
No | 10 (52.6%) | |
Currently a direct service provider (n = 9) | Yes | 4 (44.4%) |
No | 5 (55.6%) | |
Region | Midwest | 8 (42.1%) |
Northeast | 2 (10.5%) | |
South | 6 (31.6%) | |
West | 3 (15.8%) | |
Continuous variables | Median (range) | |
Age (years) | 55 (26–76) | |
Years of research experience | 10 (0–35) | |
Years of CBPR experience | 10 (0–30) | |
Years of direct provider experience (n = 9) | 18 (2–30) |
Abbreviation: CBPR, community-based participatory research.
2.2 |. Development of a quantitative tool to evaluate stakeholder engagement in research
The original version of the measure of community engagement in research was developed and used in 2013 by the Program for the Elimination of Cancer Disparities (PECaD) at Siteman Cancer Center (Arroyo-Johnson et al., 2015; Thompson et al., 2014). This measure was pilot tested using a survey tool that contained 96 items on community engagement pertaining to 11 engagement principles (EPs; Goodman et al., 2017). Each EP was quantitatively measured on two scales—one to assess quality (how well), as measured by three to five quality items, and one to assess quantity (how often), with three to five corresponding quantity items (Goodman et al., 2017). All items used 5-point Likert scale response options, with 48 items measuring quality and 48 measuring quantity. Further details on the original measure are published elsewhere (Goodman et al., 2017). Content validation for the present study began with the previously developed survey tool based on the 11 EPs frequently cited in the literature (Ahmed & Palermo, 2010; Burke et al., 2013; Butterfoss & Francisco, 2004; Butterfoss, Goodman, & Wandersman, 1996; Clinical & Translational Science Awards Consortium Community Engagement Key Function Committee Task Force on the Principles of Community Engagement, 2011; De las Nueces et al., 2012; Goodman et al., 2017; Israel et al., 2008; Israel, Schulz, Parker, & Becker, 1998; Khodyakov et al., 2013, 2011; McCloskey et al., 2012; Wallerstein & Duran, 2006).
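To make the paired-scale structure concrete, the sketch below represents one EP's quality and quantity items as 5-point Likert responses and scores each scale. The item counts mirror the original tool, but the response values, the scoring-by-mean choice, and all names are illustrative assumptions, not the authors' scoring procedure.

```python
# A minimal sketch (not the authors' scoring code) of the paired-scale
# structure: each engagement principle (EP) has matching "quality" (how well)
# and "quantity" (how often) items rated on 5-point Likert scales.
from statistics import mean

# Hypothetical responses from one respondent for one EP (1 = lowest, 5 = highest).
ep1_responses = {
    "quality":  [4, 5, 3, 4],   # "how well" items for EP1
    "quantity": [3, 4, 4, 2],   # corresponding "how often" items
}

def score_ep(responses: dict[str, list[int]]) -> dict[str, float]:
    """Score one EP as the mean of its quality and quantity item ratings."""
    return {scale: mean(ratings) for scale, ratings in responses.items()}

print(score_ep(ep1_responses))  # e.g., {'quality': 4.0, 'quantity': 3.25}
```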
2.3 |. Procedures
A five-round modified Delphi process was used to reach consensus on items used in the measure of community engagement in research (Gupta & Clarke, 1996; Hasson & Keeney, 2011; Helmer, 1967; Hsu & Sandford, 2007; Linstone & Turoff, 2002, 2011; Sackman, 1975). Rounds 1 through 3 and 5 consisted of virtual feedback through online web surveys, whereas round 4 was a 2-day, in-person meeting. Rounds 1 through 3 were also preceded by a webinar, and round 4 was preceded by an in-person presentation summarizing the previous rounds. One panelist dropped after the first round; the remaining 18 panelists remained engaged in the process through round 5. The five-round process took approximately 1 year—round 1, July 2017 (n = 19); round 2, October through November 2017 (n = 18); round 3, February through March 2018 (n = 18); round 4, April 2018 (n = 16); and round 5, July through August 2018 (n = 18).
2.4 |. Reaching consensus on items of engagement principles
The items from the original measure of community engagement in research were presented to panelists through an online survey using the Qualtrics platform. On the first survey (round 1), panelists were shown each item and asked whether they would keep, modify, or remove the item. If a panelist chose modify or remove, they were asked a follow-up question to explain why they would remove the item or what they would modify about it. For round 1 only, after each EP and its corresponding items, panelists were also asked whether there was anything related to the EP that was not asked but should be, with a follow-up open-ended question if they responded yes. The research team reviewed the survey results and identified items lacking consensus (more than 20% of panelists choosing “modify” or “remove”) for discussion; more weight was given to feedback from community stakeholders, and some areas of global concern prompted discussion of items that did not lack consensus. Items for the measure of community engagement in research were edited (as appropriate) based on panelist feedback: items were dropped, modified, added, or not changed. Results were presented to Delphi panelists through a webinar and through personalized reports comparing each panelist’s own responses to the summarized results of the group as a whole. The revised items of the measure were then presented to panelists again for a second round and then a third round using the same process as in round 1.
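The flagging rule described above lends itself to a short illustration. The following sketch applies the more-than-20% "modify"/"remove" threshold to one item's round 1 responses; the response data and the function name are hypothetical, not the study's code.

```python
# A minimal sketch of the rounds 1-3 flagging rule: an item lacks consensus
# when more than 20% of panelists choose "modify" or "remove".
def lacks_consensus(responses: list[str], threshold: float = 0.20) -> bool:
    """Return True if the share of 'modify'/'remove' responses exceeds threshold."""
    flagged = sum(r in ("modify", "remove") for r in responses)
    return flagged / len(responses) > threshold

# Hypothetical round 1 responses from the 19 panelists for a single item.
item_responses = ["keep"] * 14 + ["modify"] * 3 + ["remove"] * 2
print(lacks_consensus(item_responses))  # True: 5/19 ≈ 26% chose modify/remove
```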
Round 4 differed from rounds 1 through 3 and consisted of a 2-day, in-person meeting of the panelists, the research team, and a professional editor. The goal of the in-person meeting was to facilitate understanding and discussion of EP items that would permit development of group consensus, defined as greater than 80% agreement on each item. The in-person meeting was held in New York City. Panelists who could not attend the in-person meeting were given the option to attend virtually through webinar and phone conference and/or take a pre-meeting survey that reviewed the items to be discussed on the first day of the in-person meeting. Instead of the “keep,” “modify,” or “remove” response options, in round 4 panelists were asked, on Day 1, whether they agreed or disagreed with each item that had been modified in the previous round. On Day 2, panelists were presented all items, regardless of modifications, and asked whether they agreed or disagreed with each item. On each day, discussions were held about items with disagreement, and revisions were made to the measure items. Round 4 results from Days 1 and 2 were combined and presented as one Delphi round.
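As an illustration of the round 4 decision rule, the sketch below checks whether an item exceeds the 80% agreement threshold; the responses and function name are hypothetical.

```python
# A minimal sketch of the round 4 consensus rule: an item reaches group
# consensus when more than 80% of responding panelists agree with it.
def reaches_consensus(responses: list[str], threshold: float = 0.80) -> bool:
    """Return True if the proportion of 'agree' responses exceeds threshold."""
    return responses.count("agree") / len(responses) > threshold

# Hypothetical Day 1 responses from the 16 round 4 participants for one item.
day1_item = ["agree"] * 15 + ["disagree"] * 1
print(reaches_consensus(day1_item))  # True: 15/16 ≈ 94% agreement
```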
After the in-person meeting, the professional editor reviewed the items again and made additional edits to a few of the items for clarity and consistency of terminology across items. Because not all panelists were able to attend the in-person meeting and to make sure all panelists saw the final revised version of the items, an additional round was added to the Delphi process (round 5) that presented all items to the panelists again, regardless of previous modifications, asking whether panelists agreed or disagreed with the item. A follow-up open-ended question was asked after each EP about whether a panelist disagreed with any item in the EP, as opposed to after each item as had been done in previous rounds. Round 5 concluded the modified Delphi process. The Institutional Review Board (IRB)/University Committee on Activities Involving Human Subjects, Office of Research Compliance at New York University and the Human Research Protections Office at Washington University in St. Louis approved this study.
3 |. RESULTS
The entire five-round modified Delphi process was conducted over approximately a 1-year period—with the kickoff webinar and round 1 survey released in June 2017. One panelist dropped after the first round, leaving 18 panelists engaged throughout the five-round process (8 academic and 10 community stakeholders). All 18 remaining panelists were able to participate in online surveys for rounds 2, 3, and 5. Ten panelists were able to attend the in-person meeting, and six panelists who could not attend the meeting were able to complete the pre-meeting survey. Three of the six panelists who completed the pre-meeting survey were also able to attend the in-person meeting virtually at varied times throughout the 2-day meeting. Two panelists did not participate in round 4 in any format.
3.1 |. Delphi round 1 (initial)
Table 2 shows the results for item modifications based on round 1 Delphi panel feedback. The research team’s discussion and revisions were guided by panelists’ “keep,” “modify,” or “remove” responses on the survey, in addition to panelists’ open-ended comments on what to modify or why to remove. Several general themes from this round prompted terminology changes throughout the measure. Panelists described the original measure as appearing as if the academics were leading the process instead of the process being led jointly by academic and community stakeholders, which prompted a change in terminology throughout from “community members” to “all partners.” Panelists also noted that the measure was very long and that the number of items needed to be reduced; part of the charge of the Delphi panel was to reduce the number of items in the measure. The original measure consisted of 48 items; after Delphi round 1, the measure was reduced to 43 items.
TABLE 2. Item modifications based on round 1 Delphi panel feedback
Original engagement principles | Total items | Not changed | Dropped | Added | Modified |
---|---|---|---|---|---|
1. Focus on local relevance and social determinants of health | 4 | 2 | 0 | 0 | 2 |
2. Acknowledge the community | 4 | 0 | 4 | - | - |
3. Disseminate findings and knowledge gained to all partners | 5 | 0 | 3 | - | 2a |
4. Seek and use the input of community partners | 5 | 0 | 1 | 1 | 4 |
5. Involve a cyclical and iterative process in pursuit of objectives | 5 | 1 | 0 | 1 | 4 |
6. Foster co-learning, capacity building, and co-benefit for all partners | 5 | 0 | 1 | 1 | 4 |
7. Build on strengths and resources within the community | 4 | 0 | 0 | 1 | 4 |
8. Facilitate collaborative, equitable partnerships | 5 | 0 | 2 | 3 | 3 |
9. Integrate and achieve a balance of all partners | 4 | 1 | 1 | - | 2 |
10. Involve all partners in the dissemination process | 4 | 0 | 1 | 2a | 3 |
11. Plan for a long-term process and commitment | 3 | 0 | 3 | - | - |
Newly Added EP – Build Trustb | - | - | - | 4 | - |
Total | 48 | 4 | 16 | 13 | 28 |
a Two items were moved to EP10 (new EP7 in the subsequent versions) and modified from the previous round.
b Four items were added to a new EP that did not exist in the original measure, “Build Trust.”
After round 1 revisions, only four items were not changed in any way. Sixteen items were dropped, 13 were added, and 28 were modified. Most item modifications involved content changes to make items read as more bidirectional as opposed to researcher-led. For example, an item from EP1 (i.e., “focus on health problems that the community thinks are important”) was changed to “examine data together to determine the health problems that most people in the community think are important,” and an item from EP2 (i.e., “use the ideas and input of community members”) was changed to “include the ideas and input of all partners.”
The main reason for dropping items and EPs was redundancy and overlap with other items and EPs. Two EPs (“Acknowledge the community” and “Plan for a long-term process and commitment”) and all of their corresponding items were dropped. Another EP (“Disseminate findings and knowledge gained to all partners”) was also dropped; however, only three of its items were dropped, whereas two were modified and moved to another EP, “Involve all partners in the dissemination process.” Items were added during this round in response to comments about missing themes that panelists said were important to community engagement. The wording for several added items was pulled from or based on an existing measure of CBPR (Arora, Krumholz, Guerra, & Leff, 2015). A new EP (i.e., “Build trust”) was added after round 1, with all four corresponding items added in response to comments that a trust principle was missing from the measure.
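The round 1 arithmetic can be reconciled against Table 2 as follows; the subtraction of the two moved items reflects our reading of footnote a (they are counted both as modified in EP3 and as added in EP10).

```python
# A worked check of the round 1 bookkeeping in Table 2. The two items moved
# from EP3 to EP10 appear in both the "modified" and "added" counts, so they
# are subtracted once to avoid double-counting (our reading of footnote a).
original, dropped, added, moved_duplicates = 48, 16, 13, 2

after_round_1 = original - dropped + added - moved_duplicates
assert after_round_1 == 43  # matches the 43 items presented in round 2
print(after_round_1)
```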
3.2 |. Delphi rounds 2 through 4
Table 3 presents results for item modification from round 2. The revised EPs and items presented in round 2 consisted of eight EPs, each with four to nine items for a total of 43 items. The research team’s discussion and revisions were guided by panelists’ “keep,” “modify,” or “remove” responses on the survey, in addition to panelists’ open-ended comments on what to modify or why to remove. Common themes that panelists noted as issues during this round were redundancy and overlapping items, using wording that was too prescriptive, and lack of consistency and appropriateness with language and phrasing, which led to seven items being dropped and 23 items being modified. For example, an item from EP4 (i.e., “I share resources [e.g., materials, space, contacts, etc.] with my partner[s] when appropriate”) from an existing CBPR measure (Arora et al., 2015) was changed to “partners share resources to build capacity.” In round 2, no items were added, and 13 items were not changed.
TABLE 3. Item modifications based on round 2 Delphi panel feedback
Revised engagement principles | Total items | Not changed | Dropped | Modified |
---|---|---|---|---|
1. Focus on local relevance and social determinants of health | 4 | 3 | 0 | 1 |
2. (was #4) Seek and use the input of all partners | 5 | 0 | 0 | 5 |
3. (was #5) Involve a cyclical and iterative process in pursuit of objectives | 6 | 2 | 2 | 2 |
4. (was #6) Foster co-learning, capacity building, and co-benefit for all partners | 5 | 1 | 1 | 3 |
5. (was #7) Build on strengths and resources within the community/target population | 5 | 1 | 1 | 3 |
6. (was #8, 9) Facilitate collaborative, equitable partnerships | 9 | 4 | 2 | 3 |
7. (was #10) Involve all partners in the dissemination process | 5 | 0 | 1 | 4 |
8. (new) Build trust | 4 | 2 | 0 | 2 |
Total | 43 | 13 | 7 | 23 |
Note: No items were added after round 2.
Table 4 presents results for item modifications from rounds 3 and 4. The same eight EPs were presented in rounds 3 and 4, with only slight changes to the wording of the EP titles. The measure presented in round 3 consisted of 36 total items (4–7 items per EP). The research team’s discussion and revisions were guided by panelists’ “keep,” “modify,” or “remove” responses on the survey, in addition to panelists’ open-ended comments on what to modify or why to remove items. A total of 20 items were not changed, five items were dropped, two were added, and 11 were modified. Modifications to items included wording changes to make items broader (e.g., adding “as appropriate” to some items) and continuing to make items more bidirectional. For example, an item from EP4 (i.e., “work together to insure all partners get what they need from the project”) was modified to “the partnership provides benefits to all partners” (round 2) and changed to “the partnership adds value to the work of all partners” (round 3). Based on round 3 feedback, an item from EP6 (i.e., “the partners mutually agree on ownership of data and intellectual property”) was changed to “partners agree on ownership and governance of data and intellectual property”; this item was modified again, to “partners agree on ownership and management responsibility of data and intellectual property” in round 4.
TABLE 4. Item modifications after the round 3 survey and during round 4, and final (round 5) item counts

Final engagement principles | Round 3 total | Not changed | Dropped | Added | Modified | Round 4 total | Not changed | Dropped | Added | Modified | Final (round 5) total items |
---|---|---|---|---|---|---|---|---|---|---|---|
1. Focus on community perspectives and determinants of health | 4 | 3 | 0 | 0 | 1 | 4 | 3 | 0 | 0 | 1 | 4 |
2. Partner input is vital | 5 | 1 | 1a | 0 | 3 | 4 | 1 | 1 | 1b | 2 | 4 |
3. Partnership sustainability to meet goals and objectives | 4 | 2 | 0 | 0 | 2 | 4 | 4 | 0 | 1 | 0 | 5 |
4. Foster co-learning, capacity building, and co-benefit for all partners | 4 | 2 | 0 | 0 | 2 | 4 | 4 | 0 | 0 | 0 | 4 |
5. Build on strengths and resources within the community or patient population | 4 | 3 | 1 | 0 | 0 | 3 | 1 | 0 | 0 | 2 | 3 |
6. Facilitate collaborative, equitable partnerships | 7 | 4 | 1 | 0 | 2 | 6 | 3 | 2b | 0 | 1 | 4 |
7. Involve all partners in the dissemination process | 4 | 1 | 2 | 1 | 1 | 3 | 2 | 0 | 0 | 1 | 3 |
8. Build and maintain trust in the partnership | 4 | 4 | 0 | 1a | 0 | 5 | 5 | 0 | 0 | 0 | 5 |
Total | 36 | 20 | 5 | 2 | 11 | 33 | 23 | 3 | 2 | 7 | 32 |
a One item from EP2 moved to EP8.
b One item from EP6 moved to EP2.
Items were dropped mainly in response to comments that an item was redundant or too similar to another item (overlap). One item was moved from the EP “Partner input is vital” to the EP “Build and maintain trust in the partnership.” The measure presented in round 4 consisted of 33 total items (3–6 per EP). Panelists’ “agree” or “disagree” responses to items, in-person meeting discussions regarding problem areas of the items, and pre-meeting survey comments guided revisions to the items during this round. A total of 23 items were not changed, three were dropped, two were added, and seven were modified. One item was moved from the EP “Facilitate collaborative, equitable partnerships” to the EP “Partner input is vital.”
3.3 |. Delphi round 5 (final)
The measure presented in round 5 consisted of the same eight EPs presented in rounds 2 through 4, with slight wording modifications. The measure contained 32 total items (3–5 per EP): EPs 1, 2, 4, and 6 are measured with four items each; EPs 3 and 8 with five items each; and EPs 5 and 7 with three items each. Table 5 presents the final wording of items and the consensus of panelists, both overall and stratified by stakeholder type (academic or community). All items received 88.9% consensus or higher. There was total (100%) consensus on a majority (n = 20; 63%) of the items. When stratified by stakeholder type, all items received 90% or higher community stakeholder consensus and 75% or higher academic stakeholder consensus. There were eight items (25%) where one academic panelist disagreed, three items (9%) where two academic panelists disagreed, and one item (3%) where one academic and one community panelist disagreed. The items and EPs presented here are the final version of the measure resulting from the five-round modified Delphi process.
TABLE 5. Final engagement principle items and panelist consensus, overall and stratified by stakeholder type
Engagement principle | Item | Overall consensus (N = 18) (%) | Community stakeholder consensus (N = 10) (%) | Academic stakeholder consensus (N = 8) (%) |
---|---|---|---|---|
EP1.1 | Focus on issues important to the community | 100 | 100 | 100 |
EP1.2 | Examine data together to determine the health problems that most people in the community think are important | 88.9 | 100 | 75 |
EP1.3 | Incorporate factors (for example—housing, transportation, food access, education, employment) that influence health status, as appropriate | 100 | 100 | 100 |
EP1.4 | Focus on cultural factors that influence health behaviors | 88.9 | 100 | 75 |
EP2.1 | All partners assist in establishing roles and responsibilities for the collaboration | 100 | 100 | 100 |
EP2.2 | All partners have the opportunity to share ideas, input, leadership responsibilities, and governance (for example—memorandum of understanding, bylaws, organizational structure) as appropriate for the project | 100 | 100 | 100 |
EP2.3 | Plans are developed and adjusted to meet the needs and concerns of the community or patient population | 100 | 100 | 100 |
EP2.4 | Through mutual agreement, partners take on specific tasks according to their comfort, capacity, and expertise | 100 | 100 | 100 |
EP3.1 | Continue community-engaged activities until mutually agreed-upon goals are achieved | 88.9 | 90 | 87.5 |
EP3.2 | Partners continue community-engaged activities beyond an initial project, activity, or study | 100 | 100 | 100 |
EP3.3 | All partners share updates, progress, strategies, and new ideas regularly | 100 | 100 | 100 |
EP3.4 | Plan for ongoing problem solving | 94.4 | 100 | 87.5 |
EP3.5 | Involve all partners in determining next steps | 94.4 | 100 | 87.5 |
EP4.1 | All partners have a variety of opportunities to gain new skills or knowledge from their involvement | 100 | 100 | 100 |
EP4.2 | Encourage all partners to learn from each other | 94.4 | 100 | 87.5 |
EP4.3 | The partnership adds value to the work of all partners | 100 | 100 | 100 |
EP4.4 | Partners share resources to build capacity | 100 | 100 | 100 |
EP5.1 | Build on strengths and resources within the community or patient population | 94.4 | 100 | 87.5 |
EP5.2 | Work with existing community coalitions and organizations | 88.9 | 90 | 87.5 |
EP5.3 | Team includes representation from the local community or patient population | 100 | 100 | 100 |
EP6.1 | Establish fair and equitable processes to manage conflict or disagreements | 94.4 | 100 | 87.5 |
EP6.2 | All partners are comfortable with the agreed-upon timeline to make collaborative decisions about the project | 100 | 100 | 100 |
EP6.3 | Partners agree on ownership and management responsibility of data and intellectual property | 100 | 100 | 100 |
EP6.4 | Treat all partners’ ideas with openness and respect | 94.4 | 100 | 87.5 |
EP7.1 | All partners have the opportunity to be coauthors when the work is published | 100 | 100 | 100 |
EP7.2 | The partners can use knowledge generated from the partnership | 100 | 100 | 100 |
EP7.3 | Involve interested partners in dissemination activities | 94.4 | 100 | 87.5 |
EP8.1 | The environment fosters trust among partners | 100 | 100 | 100 |
EP8.2 | Partners are confident that they will receive credit for their contributions to the partnership | 100 | 100 | 100 |
EP8.3 | Mutual respect exists among all partners | 100 | 100 | 100 |
EP8.4 | All partners respect the population being served | 100 | 100 | 100 |
EP8.5 | Partners understand the culture of the organizations and community(ies) involved in the partnership | 88.9 | 100 | 75 |
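The consensus columns in Table 5 follow directly from the panel counts (18 panelists overall: 10 community and 8 academic). As a cross-check, the sketch below reproduces one reported pattern (overall 88.9%, community 100%, academic 75%) from hypothetical agree/disagree responses.

```python
# A minimal sketch of how the Table 5 consensus columns can be computed from
# agree/disagree responses; panelist data here are hypothetical, shaped to
# match one reported row (overall 88.9%, community 100%, academic 75%).
def consensus(agreements: list[bool]) -> float:
    """Percent of panelists agreeing with an item."""
    return 100 * sum(agreements) / len(agreements)

community = [True] * 10                  # all 10 community panelists agree
academic = [True] * 6 + [False] * 2      # 6 of 8 academic panelists agree

print(round(consensus(community + academic), 1))  # 88.9 (16/18 overall)
print(consensus(community))                       # 100.0
print(consensus(academic))                        # 75.0
```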
The final question of the round 5 survey asked panelists whether they had any final comments that they wanted to add. Many of the panelists noted the hard work that went into the Delphi process, expressed appreciation for being included in the Delphi panel, and stated feeling that their voices were heard. Several also expressed excitement about the potential for the measure to be used in practice. One panelist stated, “Thanks for including me and for all of the hard work that has gone into this project. I think that your findings will make significant contributions to the field.” Another panelist stated, “This was a great experience. I feel the voices and input of the experts were used throughout the process.” In addition, another panelist concurred with the previous sentiments, stating, “This was an incredibly rigorous undertaking that I’m honored to be associated with. I hope that all of the leaders of this project can really make this available and accessible to a diverse array of researchers.”
4 |. DISCUSSION
This content validation builds on the limited measurement literature relevant to the evaluation of stakeholder engagement in decision-making in health research (Arora et al., 2015; Boivin et al., 2018; Goodman et al., 2017; Israel, 2005; Khodyakov et al., 2013; Schulz, Israel, & Lantz, 2003; Weir, D’Entremont, Stalker, Kurji, & Robinson, 2009). Validated, self-reported quantitative measures to assess stakeholder engagement are scant, and no gold-standard measure exists, leaving a large methodological gap in the community-engaged research literature. Importantly, content validation of this measure of stakeholder-engaged research itself incorporated stakeholder engagement. The in-person meeting was an essential component of the content validation process, as it enabled panelists to discuss and reach consensus on items that in previous rounds lacked consensus and clarity in wording. Panelists in attendance were highly engaged throughout the 2-day meeting, contributing to lively discussions and providing insights, concrete examples of where the wording of items would not work, and language that conveys bidirectional partnership.
Recent work developed and validated a measure to assess patient engagement in research (Hamilton et al., 2018). The Patient Engagement In Research Scale (PEIRS) has 37 self-report items organized across seven themes and has demonstrated preliminary content and face validity in pilot testing (Hamilton et al., 2018). However, assessment using this tool is limited to the patient stakeholder group. There is also a new measure that assesses the level of patient-centeredness of a research study on the basis of the study abstract (Wilkins, Villalta-Gil, & Houston, 2018). This measure gives an overall assessment of the study but does not assess perceptions of engagement from the individual stakeholders involved in the study.
The institutions in the Clinical and Translational Science Awards (CTSA) consortium develop and implement community-engaged research approaches and evaluate the role of community–academic partnerships in clinical and translational science (Eder et al., 2018). Institutional members suggest that unique institutional priorities create barriers to the development and implementation of shared metrics for stakeholder engagement across projects and institutions. Across institutions, there was a lack of attention to the development and implementation of metrics to assess community engagement in, and contributions to, research (Eder et al., 2018). Although the newer measures noted above represent progress (Hamilton et al., 2018; Wilkins et al., 2018), they are unlikely to address the needs identified by CTSA consortium members.
Psychometrically sound measures are essential for quantitative research at any stage. Thus, methodological research that develops standardized, reliable, and accurate measures of stakeholder engagement is essential to evaluate which approaches or programs contribute most to turning observations in the laboratory and clinic into interventions that improve the health of, and are acceptable to, individuals and the public. Beyond their ability to improve health, interventions are effective when they are accepted by and culturally appropriate for the target population; these are key strengths of patient-centered research and CBPR.
The results of the content validation presented in this study should be considered in light of the study limitations. The sample of Delphi panelists was recruited using a convenience snowball sampling approach based on the networks of the project team members. The resulting sample was majority female (90%), non-Hispanic (95%), and African-American or Black (63%), with some college or higher education (100%), and resided in the Midwest or Southern regions of the United States (72%). The views of other ethnic groups or gender identities, particularly those with no representation in the sample (e.g., Asian, Native American, and transgender persons), might be inadequately reflected in the content validation process. In addition, other relevant identities (e.g., health professions and disciplines not included; limited English proficiency, nationality, sexual orientation, and health status) were not queried, and the impact of their presence or absence is unknown. Despite these limitations, we recruited a diverse national sample of Delphi panelists with experience in community engagement and research. Given the high education level of the panelists, attention was given to the reading level and terminology used in the items. In the next phase of this work, interviews were conducted using cognitive response testing to ensure readability of the measure. In addition, eight (44%) panel members were not able to attend the round 4 in-person meeting. Six of these panel members completed a web-based survey that provided feedback in advance of the meeting, and three participated via webinar or phone during part of the meeting. To address this issue and to reach final consensus, an additional web-based round was added to the Delphi process in which all 18 panelists participated.
Without reliable and valid quantitative measures of stakeholder engagement, the field will remain a niche, single-site endeavor in which interventions rely on the unique qualities of the engagers for success. Instead, the content validation of this quantitative measure of stakeholder engagement, as part of a comprehensive validation, allows for common standardized evaluation metrics that can be used across and within projects over time to track progress on engagement and to ensure that engagement works in all aspects of a project across the translational continuum. Standardized metrics can be used to develop evidence-based practices that move the field beyond lessons learned from individual studies and project teams.
In future work, the authors intend to conduct construct validation of the measure and to examine its psychometric properties. Given the lack of existing measures for stakeholder engagement in research, we will examine correlative validity with measures of CBPR (Arora et al., 2015), community-academic partnerships (Bell-Elkins, 2002; Kagan et al., 2012), community participation in research (Khodyakov et al., 2013), collaboration (Derose, Beatty, & Jackson, 2004; Mattessich, Murray-Close, Monsey, & Amherst H. Wilder Foundation, 2001), coalition engagement (Peterson et al., 2006), and partnership assessment (Center for the Advancement of Collaborative Strategies in Health, 2002; National Collaborating Center for Methods & Tools, 2008). After the measure has been comprehensively evaluated and implemented in English, there is the potential for translation to other languages.
ACKNOWLEDGMENTS
The authors thank Dr. Goldie Komaie for her project management throughout the Delphi process and Dr. Sharese Willis for her participation in the round 4 in-person meeting and help editing the manuscript. The authors are incredibly grateful for the insightful contributions provided by the members of the Delphi panel.
Funding information
Patient-Centered Outcomes Research Institute, Grant/Award Number: ME-1511-33027
This work was supported through a Patient Centered Outcomes Research Institute (PCORI) Award (ME-1511-33027). All statements in this report, including its findings and conclusions, are solely the authors’ and do not necessarily represent the views of PCORI, its Board of Governors, or its Methodology Committee.
REFERENCES
- Abell N, Springer DW, & Kamata A (2009). Developing and validating rapid assessment instruments. New York: Oxford University Press.
- Ahmed SM, & Palermo AGS (2010). Community engagement in research: Frameworks for education and peer review. American Journal of Public Health, 100(8), 1380–1387. 10.2105/AJPH.2009.178137
- Arora PG, Krumholz LS, Guerra T, & Leff SS (2015). Measuring community-based participatory research partnerships: The initial development of an assessment instrument. Progress in Community Health Partnerships: Research, Education, and Action, 9(4), 549–560. 10.1353/cpr.2015.0077
- Arroyo-Johnson C, Allen ML, Colditz GA, Hurtado GA, Davey CS, Thompson VLS, … Goodman MS (2015). A tale of two community networks program centers: Operationalizing and assessing CBPR principles and evaluating partnership outcomes. Progress in Community Health Partnerships: Research, Education, and Action, 9, 61–69. 10.1353/cpr.2015.0026
- Bell-Elkins J (2002). Assessing the CCPH principles of partnership in a community-campus partnership. Retrieved from http://depts.washington.edu/ccph/pdf_files/friendlyprinciples2.pdf
- Boivin A, L’Espérance A, Gauvin FP, Dumez V, Macaulay AC, Lehoux P, & Abelson J (2018). Patient and public engagement in research and health system decision making: A systematic review of evaluation tools. Health Expectations, 21, 1075–1084. 10.1111/hex.12804
- Bowen D, Hyams T, Goodman M, West K, Harris-Wai J, & Yu JH (2017). Systematic review of quantitative measures of stakeholder engagement. Clinical and Translational Science, 10(5), 314–336. 10.1111/cts.12474
- Burke JG, Hess S, Hoffmann K, Guizzetti L, Loy E, Gielen A, … Yonas M (2013). Translating community-based participatory research principles into practice. Progress in Community Health Partnerships: Research, Education, and Action, 7(2), 109–109. 10.1353/cpr.2013.0020
- Butterfoss FD, & Francisco VT (2004). Evaluating community partnerships and coalitions with practitioners in mind. Health Promotion Practice, 5(2), 108–114. 10.1177/1524839903260844
- Butterfoss FD, Goodman RM, & Wandersman A (1996). Community coalitions for prevention and health promotion: Factors predicting satisfaction, participation, and planning. Health Education Quarterly, 23(1), 65–79.
- Center for the Advancement of Collaborative Strategies in Health. (2002). Partnership self-assessment tool—Questionnaire. Retrieved from https://atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/3129/Partnership_Self-Assessment_Tool-Questionnaire_complete.pdf?sequence=1&isAllowed=y
- Clinical and Translational Science Awards Consortium Community Engagement Key Function Committee Task Force on the Principles of Community Engagement. (2011). Principles of community engagement (NIH Publication No. 11-7782). Retrieved from http://www.atsdr.cdc.gov/communityengagement/
- De las Nueces D, Hacker K, DiGirolamo A, & Hicks LS (2012). A systematic review of community-based participatory research to enhance clinical trials in racial and ethnic minority groups. Health Services Research, 47(3 Pt 2), 1363–1386. 10.1111/j.1475-6773.2012.01386.x
- Derose K, Beatty A, & Jackson C (2004). Evaluation of Community Voices Miami: Affecting health policy for the uninsured. RAND Corporation. 10.7249/TR177
- Eder MM, Evans E, Funes M, Hong H, Reuter K, Ahmed S, … Wallerstein N (2018). Defining and measuring community engagement and community-engaged research: Clinical and translational science institutional practices. Progress in Community Health Partnerships: Research, Education, and Action, 12(2), 145–156.
- Fleurence R, Selby JV, Odom-Walker K, Hunt G, Meltzer D, Slutsky JR, & Yancy C (2013). How the Patient-Centered Outcomes Research Institute is engaging patients and others in shaping its research agenda. Health Affairs, 32(2), 393–400. 10.1377/hlthaff.2012.1176
- Glässel A, Kirchberger I, Kollerits B, Amann E, & Cieza A (2011). Content validity of the Extended ICF Core Set for Stroke: An international Delphi survey of physical therapists. Physical Therapy, 91(8), 1211–1222. 10.2522/ptj.20100262
- Goodman MS, & Sanders Thompson VL (2017). The science of stakeholder engagement in research: Classification, implementation, and evaluation. Translational Behavioral Medicine, 7, 486–491. 10.1007/s13142-017-0495-z
- Goodman MS, Thompson VLS, Arroyo Johnson C, Gennarelli R, Drake BF, Bajwa P, … Bowen D (2017). Evaluating community engagement in research: Quantitative measure development. Journal of Community Psychology, 45(1), 17–32. 10.1002/jcop.21828
- Gupta UG, & Clarke RE (1996). Theory and applications of the Delphi technique: A bibliography (1975–1994). Technological Forecasting and Social Change, 53(2), 185–211. 10.1016/S0040-1625(96)00094-7
- Hamilton CB, Hoens AM, McQuitty S, McKinnon AM, English K, Backman CL, … Li LC (2018). Development and pre-testing of the Patient Engagement In Research Scale (PEIRS) to assess the quality of engagement from a patient perspective. PLoS One, 13(11), e0206588. 10.1371/journal.pone.0206588
- Hasson F, & Keeney S (2011). Enhancing rigour in the Delphi technique research. Technological Forecasting and Social Change, 78(9), 1695–1704. 10.1016/j.techfore.2011.04.005
- Helmer O (1967). The Delphi method. Retrieved from www.rand.org/topics/delphi-method.html
- Hsu C, & Sandford B (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.
- Israel BA (2005). Methods in community-based participatory research for health. Jossey-Bass.
- Israel BA, Schulz AJ, Parker EA, & Becker AB (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health, 19, 173–202. 10.1146/annurev.publhealth.19.1.173
- Israel BA, Schulz AJ, Parker EA, Becker AB, Allen AJ, & Guzman JR (2008). Critical issues in developing and following CBPR principles. In Community-based participatory research for health: From process to outcomes (pp. 47–66). Jossey-Bass.
- Kagan JM, Rosas SR, Siskind RL, Campbell RD, Gondwe D, Munroe D, … Schouten JT (2012). Community-researcher partnerships at NIAID HIV/AIDS clinical trials sites: Insights for evaluation and enhancement. Progress in Community Health Partnerships: Research, Education, and Action, 6(3), 311–320. 10.1353/cpr.2012.0034
- Khodyakov D, Stockdale S, Jones A, Mango J, Jones F, & Lizaola E (2013). On measuring community participation in research. Health Education & Behavior, 40(3), 346–354. 10.1177/1090198112459050
- Khodyakov D, Stockdale S, Jones F, Ohito E, Jones A, Lizaola E, & Mango J (2011). An exploration of the effect of community engagement in research on perceived outcomes of partnered mental health services projects. Society and Mental Health, 1(3), 185–199. 10.1177/2156869311431613
- Linstone HA, & Turoff M (2002). The Delphi method: Techniques and applications. Addison-Wesley.
- Linstone HA, & Turoff M (2011). Delphi: A brief look backward and forward. Technological Forecasting and Social Change, 78(9), 1712–1719. 10.1016/j.techfore.2010.09.011
- Mattessich PW, Murray-Close M, Monsey BR, & Amherst H. Wilder Foundation (2001). Collaboration: What makes it work. Amherst H. Wilder Foundation.
- McCloskey DJ, McDonald MA, Cook J, Heurtin-Roberts S, Updegrove S, Sampson D, & Eder M (2012). Community engagement: Definitions and organizing concepts from the literature. In Principles of community engagement (2nd ed., p. 41). Centers for Disease Control and Prevention.
- National Collaborating Center for Methods and Tools. (2008). Partnership evaluation: The Partnership Self-Assessment Tool. Retrieved January 8, 2019, from https://www.nccmt.ca/knowledge-repositories/search/10
- Patient-Centered Outcomes Research Institute. (2012). Patient-centered outcomes research. Retrieved January 1, 2019, from https://www.pcori.org/research-results/patient-centered-outcomes-research
- Peterson JW, Lachance LL, Butterfoss FD, Houle CR, Nicholas EA, Gilmore LA, … Friedman AR (2006). Engaging the community in coalition efforts to address childhood asthma. Health Promotion Practice, 7(2 Suppl), 56S–65S. 10.1177/1524839906287067
- Sackman H (1975). Summary evaluation of Delphi. Policy Analysis, 1(4), 693–718.
- Schulz AJ, Israel BA, & Lantz P (2003). Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Evaluation and Program Planning, 26(3), 249–262. 10.1016/S0149-7189(03)00029-6
- Schwab DP (1999). Measurement foundations: Validity and validation. In Research methods for organizational studies (pp. 31–48). Lawrence Erlbaum Associates.
- Thompson VLS, Drake B, James AS, Norfolk M, Goodman M, Ashford L, & Colditz G (2014). A community coalition to address cancer disparities: Transitions, successes and challenges. Journal of Cancer Education. 10.1007/s13187-014-0746-3
- Wallerstein NB, & Duran B (2006). Using community-based participatory research to address health disparities. Health Promotion Practice, 7(3), 312–323.
- Weir E, D’Entremont N, Stalker S, Kurji K, & Robinson V (2009). Applying the balanced scorecard to local public health performance measurement: Deliberations and decisions. BMC Public Health, 9, 127. 10.1186/1471-2458-9-127
- Wilkins CH, Villalta-Gil V, Houston MM, Joosten Y, Richmond A, Vaughn YC, … Wallston KA (2018). Development and validation of the Person-Centeredness of Research Scale. Journal of Comparative Effectiveness Research, 7(12), 1153–1159.